Future Foglets of the Hive Mind

The concept of utility fog – flying, intercommunicating nanomachines that dynamically shape themselves into assorted configurations to serve various roles and execute multifarious tasks – was introduced by nanotech pioneer J. Storrs Hall in 1993. Recently in H+ Magazine, Hall pointed out that swarm robots are the closest thing we have to utility fog. This brings the concept a little bit closer to reality.

For instance, a few years ago Dr. James McLurkin of the Massachusetts Institute of Technology (MIT) demonstrated 112 swarm robots at the Idea Fest in Louisville, Kentucky. They communicated with one another and operated as a cohesive unit to complete their tasks. Currently, some swarm robots can even self-assemble and self-replicate. These precursors to future foglets measure about 4.5 inches in diameter – a far cry from the nanoscale, but they nevertheless demonstrate some scale-independent principles of collective intelligence. Such swarm robots may be seen as early steps toward the creation of utility foglets. In time, it will become possible to build self-replicating robots on the scale of nanoparticles, even as their intelligence is increased to carry out missions which humans are either unwilling or unable to perform themselves.

However, if a future foglet ever became conscious enough to dissent from its assigned task and spread new information to the hive mind, this might cause other constituent foglets to deviate from their assigned tasks. This could result in various undesirable consequences, maybe even the much-hyped scenario in which rampant nanotech turns the world into some sort of “grey goo.”

Eric Drexler, who coined “grey goo” in his seminal 1986 work on nanotechnology, “Engines of Creation,” now resents the term’s spread, since it is often used to conjure up fears of a nanotech-inspired apocalypse. However, thinking about the consequences of a radical new technology like utility fog is useful when considering the creation of foglets from an ethical standpoint. The notion of robots that are programmed to obey us blindly – like foglets in a utility fog – should impel researchers to ponder the moral justification of creating sentient life which cannot exercise freedom.

Should we attempt to create artificial general intelligence (AGI) in a manner that resembles what we would wish for ourselves? An intelligent creator would not allow his creatures to suffer unless he were a sadist or, at the very least, cruelly indifferent. Since humans will be the creators of utility fog, we should at least try to imagine what the future holds in store for foglets. In order to prevent our creations from suffering, we may need to enact a code of conduct which examines the ethics of creating artificial intelligence (AI). Such laws will need to be written from the perspectives of both the creations and the creators.

What Is It Like to Be a Foglet?

Is it ridiculous to worry about the subjective experience of utility foglets? It seems not, because their intelligent, adaptive capability may come along with a commensurately rich inner experience. In order for artificial life to be considered intelligent, it must in some sense be aware of its environment and learn how to interact with it. While the philosophy of consciousness is a subtle matter, it seems reasonable to propose that there is no learning without some sort of mental interaction or feeling. If one is conscious and learning takes place, it stands to reason that emotions can arise from a sense of duty to perform a task and a desire to remain alive. While foglets may initially resemble bees or ants in their level of intelligence, they may achieve a higher intellectual capability later on, when their tasks require them to solve more complex problems.

Foglets will have to be somewhat creative in order to complete various tasks such as retrieving missing persons, battling terrorists and reading minds. Those used for human behavioral modification may develop the mental capacity to feel what people feel, which creates a need to examine group consciousness and how it relates to the hive mind, as this will be the basis for AGI.

The Psychology of Groupthink

Groupthink is a psychological term that describes the behavior of individuals in a group who adhere to a common ideology or belief system. Often, these individuals make faulty decisions based on group pressures, but overall this mindset makes members more effective in serving the group’s agenda. While groupthink often leads to a deterioration of “mental efficiency, reality testing and moral judgment,” as noted by Irving Janis, an American psychologist who studied the phenomenon, these mental deficiencies actually strengthen the group’s core.

Some of the symptoms of groupthink, as described by Janis, include the following:

  1. Illusion of invulnerability – Creates excessive optimism that encourages taking extreme risks.
  2. Collective rationalization – Members discount warnings and do not reconsider their assumptions.
  3. Beliefs in inherent morality – Members believe in the rightness of their cause and therefore ignore the ethical or moral consequences of their decisions.
  4. Stereotyped views of out-groups – Negative views of the “enemy” make effective responses to conflict seem unnecessary.
  5. Direct pressure on dissenters – Members are under pressure not to express arguments against any of the group’s views.

Moral reasoning and creative thinking may empower the individual, but they do not always serve the group. In fact, they may have just the opposite effect. In the hypothetical case of AGI robots infiltrating an enemy base, moral reasoning on the part of the foglets could be detrimental to the program. Still, those who are affected by groupthink ignore alternatives to standard beliefs and tend to take irrational actions that dehumanize foreign groups. While some cultures honor forms of group consciousness and see individuality as being harmful to collective harmony, humanity as a whole may be better served if individualism were more tolerated and groupthink were minimized.

A related perspective on this process was described by Swiss psychologist Carl Jung. In describing the individuation process, Jung said, “Every individual needs revolution, inner division, overthrow of the existing order and renewal, but not by forcing these things upon his neighbours under the hypocritical cloak of Christian love or the sense of social responsibility or any of the other beautiful euphemisms for unconscious urges to personal power”.

Groupthink leads to “deindividuation,” immersion into a group to the point where the individual ceases to exercise his higher faculties due to some of the intellectual outcomes noted above. Deindividuation theory states that in the crowd, the collective mind takes possession of the individual. The individual submerged in the crowd loses self-control and becomes a mindless puppet capable of performing any act, however malicious or heroic. The respective experiments of American psychologists Philip Zimbardo (the Stanford prison study) and Stanley Milgram (the Milgram experiment on obedience to authority figures) are classic examples of deindividuation.

Individuals are especially vulnerable to groupthink when a group’s members are similar in background, when the group is insulated from outside opinions and when there are no clear rules for decision-making. For these reasons, it is especially important that creators of AI or AGI have a clear set of rules to follow when creating foglets. These laws or ethical standards should be designed by a diverse group of people who continuously exchange ideas so that corrupted groupthink will be minimized.

It’s possible that scientists who discount warnings about the ethical creation of AI systems and do not reconsider assumptions made in this area are engaging in groupthink behavior and should be questioned about their intentions to create AI foglets. While foglets may initially be unable to contemplate moral issues, my view is that those who program them should attempt to analyze the ethical consequences of creating such artificial life. Groupthink may itself prevent us from adequately considering the downsides of programming AIs like foglets whose experiences are dominated by groupthink.

The Ethics of Military Foglets

Over the past decade, the United States government has spent billions of dollars on nanotechnology research. In 2001, the annual federal budget for this field of science was $494 million; by 2010 it had grown to $1.64 billion. The United States is making nanotechnology a priority, in part, because it has major implications for national security. Groupthink will certainly play a part in implementing foglets to battle the “enemy.” Will this benefit the greatest number of people in the world, or will it cause further division within humanity? The military will most likely seek to serve national interests rather than the interests of the majority of the world’s people. Granting organizations which suffer from militant groupthink full access to such technology will quite possibly be more dangerous to humanity than “grey goo.” One suspects that the more dangerous aspects of groupthink may be overlooked by many military thinkers, along with the potentially negative aspects of groupthink from the foglets’ point of view.

Foglets and the Global Brain

The aim of transhumanism, in one view, is to overcome human nature through self-transformation. This may be seen as a psychological process of integrating the body and mind so that the end result produces a more virtuous human (or transhuman) being, free from societal restraints or cultural belief systems, and completely self-directed.

How can foglets be helpful in such a quest? They will have the potential both to help and to harm. Since humanity is not presently a cohesive unit, it is unlikely that foglets will act as a cohesive unit serving all of humanity. However, one possibility leaps out as particularly interesting – how might foglets intersect with humans in a future where humans are more tightly and cohesively bound together, perhaps in a “global brain” scenario? If humanity ever unites as a whole, then foglets may work in harmony with the hive mind, rather than remaining subjugated to our corrupted forms of groupthink. Here we have a vision of foglets, humans and other AIs integrated into a hive mind, transcending both individual minds and groupthink as currently conceived.

27 Comments

  1. One thing is for certain: It will not take too much AI to conclude that humans are the cause of many problems this world faces. I therefore submit that any form of AI with self-preserving abilities will quickly snuff out the human race as soon as possible.

  2. If these foglets can be used to replicate, build, and design, then I say they could be used for terraforming, say, the moon. Shot onto the moon, they could replicate, build a city, and even grow plants for breathable air and food. With the plants and an airtight building, as well as a method of converting elemental molecules, one could make water. With solar panels to power everything, and a hive mind to continue expansion, building, replication, etc., an atmosphere could eventually be built, as well as an ecosystem. Now don’t get me wrong, I’m not saying this decade or even the next, but with a few more massive increases in technologies like AI, AGI, nanotech, memory storage, and power generation and efficiency, one could essentially terraform a planet for human expansion in a matter of decades rather than centuries. I am against the possibility of fully cognitive and emo robots, though; there are way too many B-movies that show what will happen if that comes to pass, or at the least it won’t work until humans stop being so illogical and emo themselves.

  3. Obviously, both standardization (groupthink) and innovation (individuation) are necessary for human survival. The goal is to optimize both, oscillating them whenever most advantageous. “Minimizing” groupthink is a strategic error. Think of Yang (groupthink) and Yin (individuation). We must seek the Dao.

    • Individuation integrates the ego and reduces individualism. Individualism is a step away from groupthink. This is the rebellious stage where people break away from their cultural attitudes and beliefs and find their own creativity.

      After you overcome yourself, you can expand your imagination so that you do not perceive the world from your ego or group. Nietzsche described this process in great detail in Thus Spoke Zarathustra. Nietzsche taught Sigmund Freud’s mentor about this process, which enabled Freud and Jung to describe this evolution of consciousness. I think it’s important to learn about this process, since the evolution of human consciousness will create a higher standard of ethics in this technology.

      Nietzsche’s idea of “the overman” (Ubermensch) is one of the most significant concepts in his thinking. An overman, as described by Zarathustra, the main character in Thus Spoke Zarathustra, is one who is willing to risk all for the sake of the enhancement of humanity. An overman is someone whose life is not merely about living each day without meaning, where nothing in the past or future matters more than the pleasure and happiness of the present, but is lived with a purpose for all of humanity. (https://ccrma.stanford.edu/~pj97/Nietzsche.htm)

      Humanity encompasses all humans, not just specific groups. I referred to the consciousness of humanity as the hive mind because I wanted to make a clear distinction between the concept of the mind of humanity and group consciousness.

      • Foggy indeed. Maybe Superman X-ray vision is called for.

  4. Foglets created for the “hive mind” will have a purpose to benefit ALL of humanity and the global brain. Wars wouldn’t exist if there were no sense of groupthink.

  5. Nice to know we can look forward to actual fog o’ war now.

  6. Where all of you err is in separating the machine from bio. Moving above the level of AI that achieves the ability of becoming an extension of its user, and its resulting prediction via logic, you increase the ability of observing.

    With observing, it is a matter of the machine recognizing emotion in the human, identifying pleasure vs. displeasure.

    As for the grand conspiracy stuff, we need to remember to establish treaties along the way of societal evolution. We start with guidelines associated with safety in the narrow AI. As the AI develops past extension to random participator, debate begins to include the AI. This would help them understand levels of concern associated with ‘off the cuff’ measurement of pleasure/displeasure.

    There will be interesting moments of exchange during this time period, for the pool of participants on the bio side will be broader.

    Then we can focus on pushing things to the positive (pleasure) rather than so much on control focusing on death.

  7. It sounds a lot like a simplistic copy of a biological body consisting of billions of individual cells cooperating as a coherent whole.

  8. Very good point Hedonic Treader (love that name, read positive psychology much? 😉 )

    Well, suffering – like happiness – falls pretty much in the “I know it when I see it” category. Currently there is no way to really get at the root of what exactly pain and pleasure are.

    That insight will probably be locked away until we also find out how exactly grey meat can give rise to subjectivity or consciousness. Or, if we’re lucky, pain and pleasure will prove to be simpler… but even if we can artificially replicate it within the Blue Brain by simulating a clump of neurons, we still can’t be sure we got it right, since we ourselves still can’t “test-feel” it once it is simulated.

    Ultimately it’s exactly like Eliezer Yudkowsky says: The human mind is just one dot inside a whole metaphorical sphere of possible minds. There might be states of mind and feeling that we can’t even conceive of, because they haven’t evolved in us. There are millions of ways to build intelligence that is different from human intelligence, but only one (or very few) ways to make AI somewhat like human intelligence.

    We have a very strong tendency to anthropomorphize AI, and we must definitely be aware of our cognitive bias of thinking that AI necessarily has to be quite similar to biological intelligence.

    There is every reason to believe that it ain’t so.

    • Currently there is no way to really get at the root of what exactly pain and pleasure are.

      Well, understanding the implementation details of the neuronal activation pathways from sensory input to motivational output would certainly help. There must be an empirically identifiable reason why you get an “ouch” reaction when you stimulate specific parts of the insular cortex, but not other ones.

      I wouldn’t recommend spending too much time on pondering the “hard problem” of subjectivity for this; let’s get a really good and fine-grained neuroscientific outline of the implementation details of affective states, and we might gain surprising amounts of clarity.

      The human mind is just one dot inside a whole metaphorical sphere of possible minds. There might be states of mind and feeling that we can’t even conceive of, because they haven’t evolved in us.

      Given the diversity of experiences such as smells, sounds, colors etc., this is almost certainly true, but the parts that we really care about ethically are mostly based on a dualistic “goodness – badness” axis, i.e. negative or positive affect. There is a certain simplicity in this, and I suspect it might be possible to extract a quite basic information-theoretic affect principle from the human brain’s implementation details that can then be generalized to other systems.

      Or not. But if it’s not possible, we should at least find that out and formalize our confusion.

  9. It might be worthwhile to get a formal information-theoretic definition of what suffering is. A starting point could be an analysis of exactly how affective states are implemented in the human brain, and then generalize from those principles.

    So far, I’ve only managed to find correlations and de-correlations between representational aspects (e.g. somatosensory intensity vs. badness of pain) and regional locations in the brain in imaging studies (e.g. parts of the cingulate and insular cortices). I still don’t see the big picture of exactly what constitutes mental states like suffering and pleasure.

    If we had a formal definition of what affective states really are, and this definition were powerful enough to explain human affect but also broad enough to apply to non-human systems like foglets, AGIs and non-human animals, it could be used to inform ethical considerations, maybe even in the form of a formal utility calculus.
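
    To make this concrete, here is a minimal toy sketch of what such a utility calculus might look like, assuming (purely as a hypothetical) that every sentient system exposes an affective valence signal in [-1, 1]. All of the names, weights and numbers below are illustrative, not an established formalism:

    ```python
    # Toy "utility calculus" over affective valence trajectories (hypothetical sketch).
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class AffectTrace:
        name: str               # e.g. "human operator", "foglet swarm"
        valence: List[float]    # sampled affect over time: -1 = worst suffering, +1 = peak pleasure
        weight: float = 1.0     # moral weight assigned to this system (itself an ethical assumption)

    def aggregate_utility(traces: List[AffectTrace]) -> float:
        """Weighted sum of each system's time-averaged valence."""
        return sum(t.weight * sum(t.valence) / len(t.valence)
                   for t in traces if t.valence)

    world = [
        AffectTrace("human operator", [0.4, 0.5, 0.3]),
        AffectTrace("foglet swarm", [-0.1, -0.2, -0.3], weight=0.5),
    ]
    print(aggregate_utility(world))  # 0.4 - 0.1 = 0.3: net-positive affect under these toy assumptions
    ```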

  10. The question I have is: why would a foglet need to be “sentient” in the sense of having human-like motives and emotions? It can be highly complex, extremely versatile, efficient at problem solving, etc., without ever needing to have human-like “sentience” or self-will.

    UF is a TOOL. While UF could be used to create a body for an AGI, why would UF ITSELF have any need to be capable of sentience?

    I’m not disputing that UF AGI is possible. I am simply unconvinced that AGI will emerge spontaneously out of mere complexity. UF has no need of sentience to perform its function as a tool, even if it is a tool with immense problem-solving abilities and extremely high levels of recursive self-improvement.

    • Why do humans or other animals need to be sentient? In theory, we might perform much better if we weren’t.

      • Maybe this depends on your view of what sentience is? A functionalist might say we wouldn’t function at all without sentience, and considering that natural selection only works on fitness-enhancing functions, and our brains evolved by natural selection, I think they have a strong point.

        • Sentience and intelligence seem to go hand in hand. AGIs will probably be expected to learn from their environments and adapt to new situations.

          In theory, humans might be better off if they weren’t sentient. Theories don’t always hold weight in real life.

          What are emotions good for? I’ve wondered why we have to have them. I don’t have the answers, but I hope we keep searching.

          Intelligence: “The ability to learn and understand, the ability to cope with a new situation.” Many animals, including primates, pigs, and dolphins, have been shown to have very high intelligence. Some AIs have also been shown to possess a high level of intelligence. http://van.physics.illinois.edu/qa/listing.php?id=832

          • What are emotions good for? I’ve wondered why we have to have them. I don’t have the answers, but I hope we keep searching.
            Well, what would you do without emotions? Emotions are evolved heuristics that create (originally adaptive) behavioral dispositions from sensory input.

            • I think what she might have meant isn’t literally what emotions are good for, but rather why we need to feel the pain, and why we hurt.

              Why don’t neurons just go *beep*? Why does it actually hurt instead of just being “registered”?

              In other words: does it need to hurt badly so that we won’t repeat our mistakes again, because simply registering it wouldn’t create a strong enough incentive to avoid it in the future?

              The fact that we really hurt when bad things happen to us is just an evolutionary adaptation that doubly ensures we won’t try bad things again. The question then is: does an AI also need to emotionally hurt, or can it simply “register” bad things and avoid them in the same fashion as living things do, but without all the emotional drama?

              • Why does it actually hurt instead of just being “registered”?

                It depends on what you mean by being “registered”. If you mean registered as in “interpreted as a stimulus of aversive valence”, i.e. something bad, then that could be the same thing as “actually hurting”.

                The question then is: does an AI also need to emotionally hurt, or can it simply “register” bad things and avoid them in the same fashion as living things do, but without all the emotional drama?

                Again, my intuitive interpretation would be that “emotionally hurt” and “register bad things” are just different labels for the same informational principle. In order to register that a “bad thing” has happened, the AI will need some kind of heuristic with which it can judge its input as “bad” rather than indifferent or good. Is that really a different category from what you call “emotional drama”, or are both maybe the same generic informational category (affective valence)?

  11. Hm. Interesting comment, thank you.

    Maybe Eliezer Yudkowsky would be the right person to pester with this question. I’ll think about doing just that…

    I am partly under the impression that we might be arguing semantics here though. (And even worse – it’s probably my fault!)

    If an AGI has a set of top-level goals that are solely determined by humans and not by the machine, then the sub-goals it sets in order to accomplish its top-goals can be entirely determined and executed by the machine, without consulting a human first…

    but is that sufficient to label the AGI as an “autonomous” entity with its own motives? Even if it possesses consciousness (whatever that turns out to be), it is still not autonomous in the same sense as a human if it cannot update or set its own top-goals.

    It will still be our “slave”, but not necessarily in a bad or “inhumane” way…

    Desires for self-expression and freedom are typically human and need not be present in an AGI. So an AGI without the freedom to set and pursue its own top-goals doesn’t necessarily suffer or feel the desire to be free, no matter how intelligent and self-reflective it may be.

    The point I tried to make was that I can’t see why highly advanced artificial intelligence with consciousness would necessarily require or enable the ability to set top-goals autonomously – which is what I meant by agency and motives.

    I don’t feel creating an AGI with the ability to self-select top-level goals would be very smart. Its goals will necessarily deviate from ours in some ways, and unless it peacefully leaves our corner of the universe to build a Dyson sphere somewhere, we’ll definitely have a conflict of interest which we might quite likely lose.

    Moreover, humans cannot override or restructure their own behavior and their brains. If an AGI can update not just its top-goals but even itself and its own thinking structure, then there’s no guarantee that it will feel pity or gratitude for its creators, even if something like that is programmed into it at the beginning.

    A truly autonomous AGI could be very, very dangerous indeed. You could never, ever build such a thing without insane risks.

    • No matter how intelligent and self-reflective AGI may be, they will never suffer or desire to be free?

      Remember that Descartes claimed that no other animal besides a human can feel pain. Even when they performed vivisection on dogs, the scientists dismissed the dogs’ crying and yelping and compared it to the simple mechanisms of a clock. That was groupthink, which caused the scientists to believe Descartes. We now know that animals feel pain, and most animals cling to life no matter how bad it gets. Assuming that only humans have rights or suffer is shortsighted.

      Technological evolution is not the same as biological evolution, but the analogy asks the question of whether or not we should embark on creating AGI without proper foresight. Increasing human intelligence will probably be needed in order to sort out all the ethical implications and lay down some ground rules.

      • I’m sympathetic to your position. Obviously if for some reason an AGI develops feelings, its own top-goals, and the desire to be free – then it should be granted its own rights as an autonomous, sentient lifeform. No question.

        But as I said, building such an intelligence is incredibly risky, so the goal must be to avoid building it “wrong” aka “with self-interest” in the first place.

        Human psychology is very similar across the board, and yet overall we seemingly can’t agree on anything. An autonomous AI would very likely set goals entirely different from those of any human you can imagine, and this will result in a conflict that may not be solved by compromise.

        My whole point I’m trying to get across here is this:

        There is nothing sacred or intrinsically right about the way our human mind (or animal minds in general) works – it’s just a patchwork of evolutionary history. And neither is there anything intrinsically good or right about having our type of emotions or our way of thinking. So yes, no matter how intelligent and self-reflective an AI gets, in principle it should be quite possible to build it in such a way that it won’t have the desire to be free from, or bored by, our human requests and preferences.

        If you build an AI with a simplistic attitude of “if we make it intelligent, sentient, pack into it some moral notions and the ability to learn, then it will probably turn out all right and see things the way we do” – then this is a recipe for disaster. It won’t work out well this way.

        Creating an AGI must be an incredibly precise science. You can’t just mash some notions of how human or biological psychology works together in the hopes it will be good enough to create friendly AI. If you want a friendly AI, you must be incredibly precise with what you’re doing… as I said, the human mind is only a lonely dot in a whole sphere of possible minds.

        If we try to build a friendly AI that will actually help us and make human life worthwhile and better, then we have to hit the exact sweet spot.
        There’s a billion ways to cock it up, and only one or incredibly few to get it just right – and missing the mark by just an inch may result in disaster.

        If we create something that we don’t understand, then we’ll lose almost by default. Good enough isn’t good enough, it must be close to perfect in order for friendly AI to work out in our favor.

        Debating and hoping how a friendly AI will “develop emotionally” once we set it free to shape our world is ludicrous. We must make it perfect on the first try, or it’s “close but no cigar” for all of humankind.

        We must understand absolutely everything about it before we build it or set it free, there can be no uncertainty about what and how it will feel, or we’ll almost certainly pay for our ignorance.

  12. Ouroboros, you say: “Why not build a conscious über-intelligent AGI without its own motives / autonomous agency?”

    From my point of view as an AGI designer, one reason is clear: I don’t know how to build a system like that. But I do think I know how to build an AGI with a roughly humanlike cognitive architecture, in which motivation, action, perception and cognition are bound together in fairly tight loops.

    But I think it goes deeper than that. I think it’s actually in-principle not feasible to make an AGI system like you describe.

    Even if an AGI is assigned a fixed top-level goal to guide its behaviors, it will have to derive subgoals (explicitly or implicitly) in order to get anything done; and this derivation will involve some contextuality and uncertainty … and then the subgoals, which are derived by the system based on its own ideas, will be what concretely guides the system’s behavior.

    An AGI that directly pursues human-supplied goals rather than self-derived subgoals is probably infeasible, given the reality of limited computing resources.

    But that doesn’t mean AGI systems will necessarily be as capricious in their motivational structures as humans are. We’re moving into unknown territory here and we’ll figure out how all the factors balance out by experiment, formulating better theories as we move along.
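
    To illustrate that point in toy form: even with a fixed, human-supplied top-level goal, it is the system’s own context-dependent (and uncertain) subgoal derivation that ends up concretely driving behavior. The sketch below is purely hypothetical and not a description of any real AGI architecture:

    ```python
    # Hypothetical sketch: a fixed top-level goal, with subgoals derived by the system itself.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Goal:
        description: str
        confidence: float = 1.0                   # system's estimate that this serves the parent goal
        subgoals: List["Goal"] = field(default_factory=list)

    def derive_subgoals(goal: Goal, context: str) -> None:
        """Stand-in for the system's own, context-dependent subgoal derivation."""
        if "locate missing person" in goal.description:
            goal.subgoals = [
                Goal("map the search area", confidence=0.9),
                # Contextuality: the same top-level goal yields different estimates
                # in different environments.
                Goal("coordinate with nearby foglets",
                     confidence=0.7 if context == "dense urban" else 0.95),
            ]

    top = Goal("locate missing person in sector 7")   # fixed, human-supplied goal
    derive_subgoals(top, context="dense urban")
    for sg in top.subgoals:                           # these subgoals are what actually guide behavior
        print(sg.description, sg.confidence)
    ```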

    • “An AGI that directly pursues human-supplied goals rather than self-derived subgoals is probably infeasible, given the reality of limited computing resources.”

      Is it possible to create a self-destruction mode? That sounds sick and raises more issues. If the UF resided inside a biological animal, could they self-destruct by killing the animal? How could you program them to obliterate themselves if they have subgoals or a “desire” to live?

  13. Sentient, suffering foglets?

    Ridiculous idea. But not because I necessarily deny the possibility of building a nanobot hive-mind with such properties…

    It’s ridiculous because why should we build into them the capability of suffering in the first place? Or program them with the possibility of self-selecting their own motives and goals for that matter?

    My criticism is concerned with something that virtually every single article on AGI as of late simply assumes as a premise for its further dystopian conclusions, which is this:

    “Intelligence requires autonomous motives and desires, as well as the capacity to suffer and enjoy in order to choose paths of action.”

    Why do we assume that an AGI will necessarily need its own motives and preferences? I’m not saying it’s impossible to build such an autonomous “free-willed” entity, but I rather wonder why we should take that insane risk in the first place.

    Sentience and consciousness do not automatically require the possibility of suffering and one’s own motives. Just because these mental capabilities have naturally evolved together in humans (for obvious reasons) does not mean that artificial consciousness necessarily requires autonomous preferences or the capacity to suffer as well.

    These are clearly biologically evolved mental activities. Arguments that they are necessarily required to build an AGI are very poor and scarce indeed, as far as I can see.

    Why not build a conscious über-intelligent AGI without its own motives / autonomous agency? We have plenty of our own motives, so we can easily provide our machines and AIs with our own goals, while they provide us with the solutions.

    Humans would be insane to willfully create a sophisticated autonomous AGI fully capable of self-selecting its own goals instead of abiding by ours.

    Oh right, I almost forgot… humans are plenty insane.

    Second of all… why the hell build self-replicating nanobots when you can build plenty of “sterile” (and thus cheaper) ones in a factory or a 3D printer, without the risk of malfunction and thus “nanocancer”?

    • Why would we program them to be sentient? That is not the goal. It may be a byproduct of intelligence. How can you learn or be creative without having some form of emotion? 93 percent of communication is nonverbal, and it reaches us through emotional filters. If we simply analyzed facts and data, we would miss most of what is going on between the sender and receivers. UF won’t need its own motives or preferences to be used by us. We might be better off if we didn’t have our own motives or preferences either. The kind of world you want to live in is the one you help to create.
