The concept of utility fog – flying, intercommunicating nanomachines that dynamically shape themselves into assorted configurations to serve various roles and execute multifarious tasks – was introduced by nanotech pioneer J. Storrs Hall in 1993. Recently in H+ Magazine, Hall pointed out that swarm robots are the closest thing we have to utility fog. This brings the concept a little bit closer to reality.
For instance, a few years ago Dr. James McLurkin of the Massachusetts Institute of Technology (MIT) demonstrated 112 swarm robots at the Idea Fest in Louisville, Kentucky. They communicated with one another and operated as a cohesive unit to complete their tasks. Currently, some swarm robots can even self-assemble and self-replicate. These precursors to future foglets measure about 4.5 inches in diameter – a far cry from the nanoscale, but they nevertheless demonstrate scale-independent principles of collective intelligence. Swarm robots of this kind may be seen as early steps toward the creation of utility foglets. In time, it may become possible to build self-replicating robots at the nanoscale, even as their intelligence is increased to carry out missions that humans are either unwilling or unable to perform themselves.
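To make "scale-independent principles of collective intelligence" concrete, here is a minimal Boids-style flocking sketch in Python. Every number in it – the agent count, sensing radius, and rule weights – is an illustrative assumption rather than a description of McLurkin's actual robots; the point is simply that three purely local rules (cohesion, alignment, separation) produce cohesive group motion with no central controller.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 112                            # agent count, echoing McLurkin's demo
pos = rng.uniform(0, 10, (N, 2))   # positions in an arbitrary 10x10 arena
vel = rng.normal(0, 0.1, (N, 2))   # small random initial velocities
RADIUS = 1.5                       # each agent only "hears" nearby peers
DT = 0.1

def step(pos, vel):
    """One update in which every agent steers by purely local rules."""
    new_vel = vel.copy()
    for i in range(len(pos)):
        d = np.linalg.norm(pos - pos[i], axis=1)
        nbrs = (d < RADIUS) & (d > 0)               # local sensing only
        if not nbrs.any():
            continue
        cohesion = pos[nbrs].mean(axis=0) - pos[i]   # drift toward local center
        alignment = vel[nbrs].mean(axis=0) - vel[i]  # match neighbors' heading
        crowd = nbrs & (d < 0.5)                     # neighbors that are too close
        separation = pos[i] - pos[crowd].mean(axis=0) if crowd.any() else 0.0
        new_vel[i] += 0.05 * cohesion + 0.1 * alignment + 0.1 * separation
    return pos + DT * new_vel, new_vel

for _ in range(200):
    pos, vel = step(pos, vel)

# As headings align, the spread of velocities across the swarm shrinks.
print("velocity spread:", vel.std(axis=0))
```

Because no agent ever sees the whole swarm, the same rules would apply whether the agents are 4.5-inch robots or hypothetical nanoscale foglets – which is exactly why swarm robotics is a plausible rehearsal for utility fog.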
However, if a future foglet ever became conscious enough to dissent from its assigned task and spread new information to the hive mind, this might cause other constituent foglets to deviate from their assigned tasks. This could result in various undesirable consequences, maybe even the much-hyped scenario in which rampant nanotech turns the world into some sort of “grey goo.”
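The scenario just described is, at bottom, a cascade dynamic: one deviant node, a communication network, and some probability that deviance is adopted. The toy simulation below makes that dynamic visible. All of its parameters – swarm size, links per foglet, adoption probability – are invented for illustration and do not model any real foglet design; it is an independent-cascade-style sketch, not a prediction.

```python
import random

random.seed(1)

N = 500          # foglets in the swarm (illustrative)
K = 6            # communication links created per foglet (illustrative)
P_ADOPT = 0.3    # chance a foglet adopts a neighbor's deviant state

# Build a random, undirected communication graph: who can talk to whom.
links = {i: set() for i in range(N)}
for i in range(N):
    for j in random.sample(range(N), K):
        if i != j:
            links[i].add(j)
            links[j].add(i)

deviant = {0}    # a single foglet dissents from its assigned task
frontier = {0}
while frontier:  # each newly deviant foglet gets one shot at each neighbor
    nxt = set()
    for i in frontier:
        for j in links[i]:
            if j not in deviant and random.random() < P_ADOPT:
                deviant.add(j)
                nxt.add(j)
    frontier = nxt

print(f"{len(deviant)} of {N} foglets ended up deviating")
```

With these made-up numbers the cascade is supercritical and most of the swarm tips; lower the adoption probability or the link count and the dissent fizzles out locally. Whether a real hive mind would sit above or below that threshold is precisely the kind of question its designers would need to answer.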
Eric Drexler, who coined “grey goo” in his seminal 1986 work on nanotechnology, “Engines of Creation,” now resents the term’s spread, since it is so often used to conjure up fears of a nanotech-inspired apocalypse. Even so, thinking through the consequences of a radical new technology like utility fog is useful when considering the creation of foglets from an ethical standpoint. The notion of robots that are programmed to obey us blindly – like the foglets in a utility fog – should impel researchers to ponder the moral justification of creating sentient life that cannot exercise freedom.
Should we attempt to create artificial general intelligence (AGI) in a manner that resembles what we would wish for ourselves? An intelligent creator would not allow his creatures to suffer unless he was a sadist or, at the very least, cruelly indifferent. Since humans will be the creators of utility fog, we should at least try to imagine what the future holds in store for foglets. To prevent our creations from suffering, we may need to enact a code of conduct that examines the ethics of creating artificial intelligence (AI). Such laws will need to be written from the perspectives of both the creations and the creators.
What Is It Like to Be a Foglet?
Is it ridiculous to worry about the subjective experience of utility foglets? It seems not, because their intelligent, adaptive capability may come along with a commensurately rich inner experience. For artificial life to be considered intelligent, it must in some sense be aware of its environment and learn how to interact with it. While the philosophy of consciousness is a subtle matter, it seems reasonable to propose that there is no learning without some sort of mental interaction or feeling. And if a foglet is conscious and learning takes place, it stands to reason that emotions could arise – for instance, from a sense of duty to perform a task, or from a desire to remain alive. While foglets may initially sit near bees or ants on the scale of animal intelligence, they may achieve a higher intellectual capability later on, when their tasks require more complex problem-solving.
Foglets will have to be somewhat creative in order to complete tasks as varied as retrieving missing persons, battling terrorists and reading minds. Those used for human behavioral modification may develop the mental capacity to feel what people feel. This creates a need to examine group consciousness and how it relates to the hive mind, since this dynamic will be the basis for AGI.
The Psychology of Groupthink
Groupthink is a psychological term that describes the behavior of individuals in a group who adhere to a common ideology or belief system. Often, these individuals make faulty decisions based on group pressures, but overall this mindset makes members more effective in serving the group’s agenda. While groupthink often leads to a deterioration of “mental efficiency, reality testing and moral judgment,” as noted by Irving Janis, an American psychologist who studied the phenomenon, these mental deficiencies actually strengthen the group’s core.
Some of the symptoms of groupthink, as described by Janis, include the following:
- Illusion of invulnerability – Creates excessive optimism that encourages taking extreme risks.
- Collective rationalization – Members discount warnings and do not reconsider their assumptions.
- Beliefs in inherent morality – Members believe in the rightness of their cause and therefore ignore the ethical or moral consequences of their decisions.
- Stereotyped views of out-groups – Negative views of the “enemy” make effective responses to conflict seem unnecessary.
- Direct pressure on dissenters – Members are under pressure not to express arguments against any of the group’s views.
Moral reasoning and creative thinking may empower the individual, but they do not always serve the group. In fact, they may have just the opposite effect: in the hypothetical case of AGI robots infiltrating an enemy base, moral reasoning on the part of the foglets could be detrimental to the mission. At the same time, those affected by groupthink ignore alternatives to standard beliefs and tend to take irrational actions that dehumanize foreign groups. While some cultures honor forms of group consciousness and see individuality as harmful to collective harmony, humanity as a whole may be better served if individualism were more tolerated and groupthink minimized.
A related perspective on this process was described by Swiss psychologist Carl Jung. In describing the individuation process, Jung said, “Every individual needs revolution, inner division, overthrow of the existing order and renewal, but not by forcing these things upon his neighbours under the hypocritical cloak of Christian love or the sense of social responsibility or any of the other beautiful euphemisms for unconscious urges to personal power.”
Groupthink leads to “deindividuation” – immersion in a group to the point where the individual ceases to exercise his higher faculties, owing to some of the intellectual deficits noted above. Deindividuation theory holds that in the crowd, the collective mind takes possession of the individual: submerged in the crowd, he loses self-control and becomes a mindless puppet capable of performing any act, however malicious or heroic. The experiments of American psychologists Philip Zimbardo (the Stanford prison experiment) and Stanley Milgram (the obedience-to-authority experiments) are classic illustrations of deindividuation.
Individuals are especially vulnerable to groupthink when a group’s members are similar in background, when the group is insulated from outside opinions and when there are no clear rules for decision-making. For these reasons, it is especially important that creators of AI or AGI have a clear set of rules to follow when creating foglets. These laws or ethical standards should be designed by a diverse group of people who continuously exchange ideas so that corrupted groupthink will be minimized.
It’s possible that scientists who discount warnings about the ethical creation of AI systems, and who do not reconsider assumptions made in this area, are themselves engaging in groupthink and should be questioned about their intentions to create AI foglets. While foglets may initially be unable to contemplate moral issues, my view is that those who program them should attempt to analyze the ethical consequences of creating such artificial life. Ironically, groupthink among researchers may itself prevent us from adequately considering the downsides of programming AIs, like foglets, whose own experiences would be dominated by groupthink.
The Ethics of Military Foglets
Over the past decade, the United States government has spent billions of dollars on nanotechnology research. In 2001, the annual federal budget for this field of science was $494 million; by 2010 it had grown to $1.64 billion. The United States is making nanotechnology a priority, in part, because it has major implications for national security. Groupthink will certainly play a part in deploying foglets to battle the “enemy.” Will this benefit the greatest number of people in the world, or will it cause further division within humanity? The military will most likely seek to serve national interests rather than the majority of the world’s people. Granting organizations that suffer from militant groupthink full access to such technology may well prove more dangerous to humanity than “grey goo.” One suspects that the more dangerous aspects of groupthink are overlooked by many military thinkers, along with the potentially negative aspects of groupthink from the foglets’ point of view.
Foglets and the Global Brain
The aim of transhumanism, in one view, is to overcome human nature through self-transformation. This may be seen as a psychological process of integrating the body and mind so that the end result produces a more virtuous human (or transhuman) being, free from societal restraints or cultural belief systems, and completely self-directed.
How can foglets be helpful in such a quest? They will have the potential both to help and to harm. Since humanity is not presently a cohesive unit, it is unlikely that foglets will act as a cohesive unit serving all of humanity. However, one possibility leaps out as particularly interesting: how might foglets intersect with humans in a future where humans are more tightly and cohesively bound together, perhaps in a “global brain” scenario? If humanity ever unites as a whole, then foglets may work in harmony with the hive mind rather than remaining subjugated to our corrupted forms of groupthink. Here we have a vision of foglets, humans and other AIs integrated into a hive mind, transcending both individual minds and groupthink as currently conceived.