h+ Magazine

Redefining the Coming Singularity – It’s not what you think

    #23013
    Peter
    Member

    I’m neither worried nor pleased at the prospect of superintelligent computers. Why? Because they aren’t going to happen in the foreseeable future, if
    [See the full post at: Redefining the Coming Singularity – It’s not what you think]

    #23131
    Brian
    Participant

    Ferrucci’s point about extending internal knowledge representation through core principles can be addressed by understanding the underlying process of cognitive development. Piaget’s theory did not extend beyond the development of the child, and this is why he never arrived at an accurate theory of the nature of human development. If we follow the stages (or levels) of personal-social-cultural development up to where we are today (global-informational), they all exhibit self-similar patterns that are consistent throughout every stage. It is the fundamental process that allows each stage to develop and emerge that is the key to understanding how to create meaning in the machine and across the Web. IMO, reaching a singular-semantic Web (a “Social” Singularity) will align with the process of creating higher level AI; these two events will follow the same underlying principles, and indeed this will represent the emergence of a super-intelligent age.

    On the Semantic Web side of things, super-intelligence will reflect the self-organization of knowledge and information, in which problems match solutions and values match needs on a global scale. In order for this to occur we need a global standard for defining accurate representations of knowledge for people and information resources in any subject of interest (I’ll get back to this in a second.)

    On the AI side of things, super-intelligence will reflect an acceleration of cognitive development in which machine intelligence can rapidly reach the highest levels of human development. In terms of Abraham Maslow’s “Hierarchy of Needs,” this would signify “Self-Actualization,” but we could also understand it as a trans-personal awareness which represents a universal capacity of intelligence that recognizes the interconnectedness of all things. This is no different than a human achieving a state of enlightenment. I believe that by applying the aforementioned principle A.I. will naturally become benevolent and always act for the betterment of the whole.

    The key to reaching this Super-Intelligent Age is creating what I refer to as “self-adaptive ontologies,” or, accurate representations of knowledge that are based primarily upon social interaction (and self-organization) rather than complex computations.

    I have a theory, technology, and strategy for developing self-adaptive ontologies and reaching a Super-Intelligent Age if anyone is interested in supporting our efforts. You can find me on LinkedIn under the same user name.

    #23132
    William L. Benzon
    Participant

    Ferrucci’s point is that the machine has to interact with people. We can use existing technology to create a machine that’s “clever” enough to keep people interested in conversing with it. It’s the interaction that’s crucial; there’s no cognitive development without interaction.

    #23158
    Pedro
    Participant

    I would like to differ with the author and Hays on Rank 4 of the socio-cultural singularity.
    My Rank 4: parsing, algorithmic systems, automatic programming.
    With the progress that we have made in NLP, it should not be beyond us to extend the basic technology to mathematical equations and computer programs. As in NLP, I envisage a two-step process: the first is a context-free parse, the second is a semantic parse. Of course, analysis must then be supplemented by design and synthesis.
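
    To make the two-step idea concrete, here is a minimal sketch in Python, assuming a toy arithmetic grammar; the grammar, names, and error handling are illustrative choices, not an established system:

```python
# A minimal sketch of the two-step process described above: step 1 is a
# context-free parse into a syntax tree, step 2 is a semantic pass over
# that tree (here, evaluation). The toy grammar and all names are
# illustrative assumptions.

import re

TOKEN = re.compile(r"\s*(\d+|[-+*/()])")

def tokenize(src):
    pos, out = 0, []
    src = src.rstrip()
    while pos < len(src):
        m = TOKEN.match(src, pos)
        if not m:
            raise SyntaxError(f"bad character at position {pos}")
        out.append(m.group(1))
        pos = m.end()
    return out

# Step 1: context-free parse (recursive descent: expr -> term -> factor).
def parse(tokens):
    def expr(i):
        node, i = term(i)
        while i < len(tokens) and tokens[i] in "+-":
            op = tokens[i]
            rhs, i = term(i + 1)
            node = (op, node, rhs)
        return node, i

    def term(i):
        node, i = factor(i)
        while i < len(tokens) and tokens[i] in "*/":
            op = tokens[i]
            rhs, i = factor(i + 1)
            node = (op, node, rhs)
        return node, i

    def factor(i):
        if i >= len(tokens):
            raise SyntaxError("unexpected end of input")
        tok = tokens[i]
        if tok == "(":
            node, i = expr(i + 1)
            if i >= len(tokens) or tokens[i] != ")":
                raise SyntaxError("expected ')'")
            return node, i + 1
        if not tok.isdigit():
            raise SyntaxError(f"unexpected token {tok!r}")
        return ("num", int(tok)), i + 1

    node, i = expr(0)
    if i != len(tokens):
        raise SyntaxError("trailing tokens")
    return node

# Step 2: semantic pass over the tree. A type checker or a program-
# synthesis pass would walk the same structure.
def evaluate(node):
    if node[0] == "num":
        return node[1]
    op, lhs, rhs = node
    a, b = evaluate(lhs), evaluate(rhs)
    return {"+": a + b, "-": a - b, "*": a * b, "/": a / b}[op]

print(evaluate(parse(tokenize("2 * (3 + 4)"))))  # prints 14
```

    The same shape scales up: swap in a grammar covering equations or a programming language, and swap evaluation for type checking or synthesis.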

    #23170
    Brian
    Participant

    William,

    That is exactly my point. The cognitive development process is about the cognitive “exchange” process; this is how we learn, grow, and develop meaning in the first place.

    If you look at the evolution of everything, there is a singular underlying process: exchange that occurs through self-organization. It is the feedback that occurs through exchange that ultimately results in emergence.

    The key to higher level AI and creating a Singular-Semantic Web comes down to creating “autonomy” through the cognitive exchange process. In this case, autonomy represents accurate representations of knowledge. If we have autonomous machines interacting with autonomous people, then we have fluid knowledge transfer, understanding, and development; i.e., we can teach the machine and the machine can teach us.

    This is why I said “The key to reaching this Super-Intelligent Age is creating what I refer to as “self-adaptive ontologies,” or, accurate representations of knowledge that are based primarily upon social interaction (and self-organization) rather than complex computations.”

    #23171
    Brian
    Participant

    I should add that while complex computations/technology that mimic the biological (objective) nature of the human brain are all essential to AI, I believe that the true breakthrough in higher level AI will occur through understanding the subjective nature of cognitive development.

    #23182
    Peter
    Member

    It seems all ontologies are based in social interaction though.

    Even when an ontology is automatically generated, by a computer program for example, that program is itself a social artifact, as is the computer that executes it, and so on. Any computer-generated ontology is also eventually examined by a human reader, since representing and presenting knowledge to human users is the purpose of the software. This then becomes an item of social exchange and conversation. Otherwise it is useless data locked up somewhere…

    Bio-mimicry is of course one approach, but we know that in some domains non-biomorphic approaches have proven superior. So bio-mimicry is just one possible approach to design but not always an optimal approach. Consider aircraft design for example. No bird can exceed the speed of sound but there exist various aircraft designs that do.

    Biomorphic approaches are useful when we don’t understand many of the underlying mechanisms or principles involved. Sometimes it is better simply to “copy” or reverse engineer an existing model that works, which in this case is the biological system. However, we can see the limits of this approach, which reflect limitations and constraints on the living organism that may not apply to a human-built system. Animals are not built out of carbon fiber and steel, for example.

    Not yet anyway 🙂

    #23231
    William L. Benzon
    Participant

    @Brian: “The key to higher level AI and creating a Singular-Semantic Web comes down to creating ‘autonomy’ through the cognitive exchange process. In this case, autonomy represents accurate representations of knowledge.”

    It seems to me that autonomy needs to be real (no scare quotes required) and that that involves a machine that has goals of its own that it’s trying to achieve via interaction with us. So, how do we endow a machine with goals of its own? And what would those goals be? Considered as a goal, just what is “accurate representations of knowledge” and how does the machine know that it’s got it? And if it doesn’t have it, how does it know what to do to get it?

    Also, I don’t find terms such as “superintelligence” or “higher level AI” very helpful. I haven’t got the foggiest idea of how to design to either, nor even how to design to “intelligent.” Anything that’s actually been accomplished has been accomplished by picking a specific task or set of tasks. But if you tell me you want a machine that will coordinate mission planning for a manned flight to Mars, now we’ve got something we can think about. To be sure, it’s a very big something, but it’s something we can analyze and start designing to.


    @Peter: Not sure what you’re getting at. Part of my problem may be that I think the standard CS/AI usage of “ontology” is superficial and that superficiality masks a deeper sense. In this deeper sense, there is an ontological aspect to cognition, just as there is a classificatory aspect (e.g. dogs are beasts, beasts are animals; red, blue and green are colors; etc.), a mereological aspect (dogs consist of a head, tail, body, four legs…), and a participation aspect (e.g. dog and liver participate in eating, thus: “dogs eat liver”).

    Philosophers are pointing up the ontological aspect of cognition when they talk of so-called category mistakes. In Chomsky’s (in)famous example sentence, it makes no sense to assert or deny that ideas are colorless, much less that they are both green and colorless, because color simply isn’t an attribute that can meaningfully be attributed to ideas. In this sense, salt and sodium chloride are (almost) the same thing; but the terms are defined in different ontologies. One is a common-sense ontology of sensory impressions (color, shape, taste) while the other is a more abstract ontology having to do with atoms, electrons, protons, atomic bonds, and so forth.

    It’s in this latter sense that I say culture evolves ever more sophisticated ontologies. In this sense, ontological relations are particular kinds of relations within a knowledge system, just as part-whole (mereology), “is a” (classificatory), and syntagmatic (nouns participating in verbs) are specific kinds of relations. (See Ontology in Knowledge Representation for more detail.)

    A system that has explicit ontological relations can be taught that, for example, it doesn’t make sense to predicate things like trustworthiness or wittiness of breakfast cereal, and will notice when humans make such errors in conversing with it.
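
    As a toy illustration, here is a minimal sketch, assuming a handful of made-up categories and attributes, of a system with an explicit “is a” chain plus a record of which attributes can meaningfully be predicated of which categories:

```python
# A minimal sketch of a system with explicit ontological relations, so it
# can flag category mistakes. All categories and attributes below are
# made-up illustrations.

# Classificatory relation: "is a" links, each term to its parent category.
IS_A = {
    "dog": "animal",
    "animal": "agent",
    "person": "agent",
    "breakfast cereal": "food",
    "food": "physical object",
    "agent": "physical object",
}

# Ontological relation: which categories an attribute can meaningfully
# be predicated of.
APPLIES_TO = {
    "trustworthy": {"agent"},
    "witty": {"agent"},
    "crunchy": {"food"},
    "green": {"physical object"},
}

def ancestors(term):
    """Walk the 'is a' chain upward from a term, including the term itself."""
    seen = {term}
    while term in IS_A:
        term = IS_A[term]
        seen.add(term)
    return seen

def sensible(attribute, term):
    """True if predicating `attribute` of `term` is ontologically coherent."""
    return bool(APPLIES_TO.get(attribute, set()) & ancestors(term))

print(sensible("trustworthy", "dog"))               # True: dogs are agents
print(sensible("trustworthy", "breakfast cereal"))  # False: category mistake
print(sensible("witty", "breakfast cereal"))        # False: category mistake
```

    A real knowledge system would need a far richer relation inventory and defeasible links, but the category-mistake check itself is just this kind of lookup.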

    As for bio-morphic approaches: OK. But airplanes can only do certain things that birds do, and, yes, they can do those things better than birds can. But even there we have limitations. I’ve seen a hawk dive more or less straight down for several hundred feet, grab a target object 10 or 20 feet from the ground, and zoom right back up. What kind of plane can do that? Could we design a plane or helicopter for that task that would do better than the hawk? Quite possibly, but what else could it do?

    In the domain of “intelligence” we’re short on general theories that give us meaningful analytic and design capacities.

    #23232
    Peter
    Member

    Hi Bill,

    I was replying to Brian there.

    I’m not so sure I agree about intelligence though. Is it more like your example of the hawk or my example of the supersonic aircraft?

    Even defining intelligence requires some frame of reference, a metric by which to measure it. So for example we might consider the classical AI research area of chess playing and take “winning” as our metric of success.

    Chess machines don’t really play like a human, but under the given metric of “winning” they are better than all human players.

    Aircraft as they exist today are based on abstractions and extensions of a biological design: birds. So aircraft have wings, but no feathers. They fly but need not catch prey. They identify targets but not mates. They are made of steel and carbon fiber, not flesh and bones.

    I don’t see any fundamental limits preventing a robotic bird machine from achieving the acrobatic and accurate targeting of a biological hawk. We aren’t quite there yet, but that doesn’t mean it is impossible. It isn’t something that was feasible until fairly recently.

    See http://www.wired.com/2014/08/realistic-robo-hawks-designed-to-fly-around-and-terrorize-real-birds/

    #23237
    William L. Benzon
    Participant

    @Peter: I don’t know what kind of problem “intelligence” is. Yes, chess is certainly a good example. And I agree with you that chess programs don’t work like the human mind, though I believe that Dan Dennett isn’t so sure. Nor do we ever have to talk about intelligence while working on the chess domain.

    But if we’re going to talk about intelligence in full generality, well then I don’t know where we are. Even if we default to human intelligence as an example, what does that get us? Not much. We can go after something that’s designed like the human brain. But how’s the human brain designed?

    I pretty much assume that in domain after domain we’re going to develop machines that function better than humans. Some of them may be biomorphic, some not. In either case, what matters is functionality. And maybe one day we get one machine that outperforms humans in each of 100 domains, then 1000. But will that also enable the machine to either create a grand unified theory of physics or explain why such a theory is impossible? Will that enable the machine to explain how the brain of a fruit fly works, or that of a mouse? Who knows?

    Love the robohawks.

    #23414
    Brian
    Participant

    Super-intelligence through bio-mimicry.

    To try to answer Peter and William in the same argument: I think that on a social level we can achieve super-intelligence by following the self-organizing principles we see in nature, in which systems act through a whole/part synthesis in order to overcome adversity and act for the betterment of the whole system. We see this in every example of natural systems. Even grizzly bears killing their young that don’t venture out to find their own territory is an element of working for the betterment of the whole species. In relation to society, if we had a social structure that was self-organizing based upon identifying and matching the specific skill-sets, qualities, and needs of individuals, I could foresee a whole/part synthesis in which problems would match solutions and values would match needs on a global scale. Yes, this is very theoretical, but I’ve dedicated my efforts to developing a system that can serve this purpose, and all indications are that it is possible. I call this super-intelligence (on a social level) because it represents the self-organization of knowledge through information exchange and allows for the emergence of novel solutions to complex problems on a global scale.
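
    For what it’s worth, here is a toy sketch of that matching idea, assuming each person simply lists skills offered and needs; the names and fields are illustrative only:

```python
# A toy sketch of the matching idea above: an offer meets a need wherever
# the two sets intersect. People, skills, and needs are made-up examples.

people = {
    "ana": {"offers": {"nlp", "teaching"}, "needs": {"funding"}},
    "ben": {"offers": {"funding"},         "needs": {"nlp"}},
    "cho": {"offers": {"robotics"},        "needs": {"teaching"}},
}

def matches(people):
    """Yield (provider, receiver, skill) wherever an offer meets a need."""
    for giver, g in people.items():
        for taker, t in people.items():
            if giver == taker:
                continue
            for skill in g["offers"] & t["needs"]:
                yield giver, taker, skill

for giver, taker, skill in matches(people):
    print(f"{giver} can supply {skill!r} to {taker}")
```

    A real system would weight matches, handle global scale, and update as people interact, but the offer/need intersection is the core of it.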

    In terms of the individual AI or machine, we can apply the same (natural) self-organizing principles we see in the development of adaptations, habits, and instincts to the development of the individual human being; this is what most developmental psychologists (Piaget included) are missing from their theories. If we can understand how the individual atom, molecule, cell, and organism (brain) emerges to higher levels of order through a self-organizing developmental process, then we can apply this same process to AI and the machine. We would actually be recreating the individual’s ability to form purpose and meaning, and this is how we will “endow a machine with goals of its own.”

    Again, computations and redesigning the biological (objective) processes of the brain are important; however, I believe the breakthrough will come from an understanding of natural self-organizing systems, and of the social principles that are consistent and present in the evolution of matter, life, and mind.
