The Space of Mind Designs and the Human Mental Model

[Editor’s note: This is an excerpt from Roman’s excellent book, Artificial Superintelligence: A Futuristic Approach, republished with permission.]

 

 


 

INTRODUCTION

In 1984, Aaron Sloman published “The Structure of the Space of Possible Minds,” in which he described the task of providing an interdisciplinary description of that structure. He observed that “behaving systems” clearly comprise more than one sort of mind and suggested that virtual machines may be a good theoretical tool for analyzing mind designs. Sloman indicated that there are many discontinuities within the space of minds, meaning it is neither a continuum nor a simple dichotomy between things with minds and things without (Sloman 1984). Sloman wanted to see two levels of exploration: descriptive, surveying the things different minds can do, and exploratory, looking at how different virtual machines and their properties may explain the results of the descriptive study (Sloman 1984). Instead of trying to divide the universe into minds and nonminds, he hoped to see examination of similarities and differences between systems. In this chapter, I take another step toward this important goal.

What is a mind? No universal definition exists. Solipsism notwithstanding, humans are said to have a mind. Higher-order animals are believed to have one as well, and maybe lower-level animals and plants, or even all life-forms. I believe that an artificially intelligent agent, such as a robot or a program running on a computer, will constitute a mind. Based on analysis of those examples, I can conclude that a mind is an instantiated intelligence with a knowledge base about its environment; and although intelligence itself is not an easy term to define, the work of Shane Legg provides a definition satisfactory for my purposes (Legg and Hutter 2007). In addition, some hold a point of view known as panpsychism, attributing mind-like properties to all matter. Without debating this possibility, I limit my analysis to those minds that can actively interact with their environment and other minds. Consequently, I do not devote any time to understanding what a rock is thinking.

If we accept materialism, we also have to accept that accurate software simulations of animal and human minds are possible. Those are known as uploads (Hanson 1994), and they belong to the same class of computer programs as designed or artificially evolved intelligent software agents. Consequently, we can treat the space of all minds as the space of programs with the specific property of exhibiting intelligence if properly embodied. All programs can be represented as strings of binary digits, implying that each mind can be represented by a unique number. Interestingly, Nick Bostrom, via some thought experiments, speculates that it may be possible to instantiate a fractional number of minds, such as 0.3 of a mind, as opposed to only whole minds (Bostrom 2006). The embodiment requirement is necessary because a string is not a mind, but it can easily be satisfied by assuming that a universal Turing machine (UTM) is available to run any program we are contemplating for inclusion in the space of mind designs. An embodiment does not need to be physical, as a mind could be embodied in a virtual environment represented by an avatar (Yampolskiy and Gavrilova 2012; Yampolskiy, Klare, and Jain 2012) and react to a simulated sensory environment like a “brain-in-a-vat” or a “boxed” artificial intelligence (AI) (Yampolskiy 2012b).

 

INFINITUDE OF MINDS

Two minds identical in terms of the initial design are typically considered to be different if they possess different information. For example, it is generally accepted that identical twins have distinct minds despite exactly the same blueprints for their construction. What makes them different is their individual experiences and knowledge obtained since inception. This implies that minds cannot be cloned because different copies would immediately after instantiation start accumulating different experiences and would be as different as twins.

If we accept that knowledge of a single unique fact distinguishes one mind from another, we can prove that the space of minds is infinite. Suppose we have a mind M, and it has a favorite number N. A new mind could be created by copying M and replacing its favorite number with N + 1. This process could be repeated indefinitely, giving us an infinite set of unique minds. Given that every binary string represents an integer, we can deduce that the set of mind designs is infinite but countable, because it is an infinite subset of the integers. It is not the same as the set of integers, because not all integers encode a mind.
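As a toy illustration of this counting argument (the encoding scheme and field names below are purely illustrative, not from the text), the following sketch serializes a mind description to a binary string, interprets that string as an integer, and shows that changing a single stored fact yields a different integer, hence a different point in the space of mind designs:

```python
# Toy illustration: a mind design serialized to a binary string corresponds
# to a unique integer; changing one stored fact changes the integer.

import json


def mind_to_int(design: dict) -> int:
    """Serialize a mind description and interpret its bytes as an integer."""
    blob = json.dumps(design, sort_keys=True).encode("utf-8")
    return int.from_bytes(blob, byteorder="big")


def int_to_mind(number: int) -> dict:
    """Recover the mind description from its integer encoding."""
    blob = number.to_bytes((number.bit_length() + 7) // 8, byteorder="big")
    return json.loads(blob.decode("utf-8"))


base = {"architecture": "toy-agent-v1", "favorite_number": 42}

for n in range(3):
    variant = dict(base, favorite_number=base["favorite_number"] + n)
    code = mind_to_int(variant)
    assert int_to_mind(code) == variant   # the round trip is lossless
    print(code)                           # three distinct integers, three minds
```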

Alternatively, instead of relying on an infinitude of knowledge bases to prove the infinitude of minds, we can rely on the infinitude of designs or embodiments. The infinitude of designs can be proven via inclusion of a time delay after every computational step. First, the mind would have a delay of 1 nanosecond, then a delay of 2 nanoseconds, and so on to infinity. This would result in an infinite set of different mind designs. Some will be very slow, others superfast, even if the underlying problem-solving abilities are comparable. In the same environment, faster minds would dominate slower minds proportionately to the difference in their speed. A similar proof with respect to the different embodiments could be presented by relying on an ever-increasing number of sensors or manipulators under control of a particular mind design.

Also, the same mind design in the same embodiment and with the same knowledge base may in fact effectively correspond to a number of different minds, depending on the operating conditions. For example, the same person will act differently if under the influence of an intoxicating substance, severe stress, pain, or sleep or food deprivation, or when experiencing a temporary psychological disorder. Such factors effectively change certain mind design attributes, temporarily producing a different mind.

 

SIZE, COMPLEXITY, AND PROPERTIES OF MINDS

Given that minds are countable, they could be arranged in an ordered list, for example, in order of numerical value of the representing string. This means that some mind will have the interesting property of being the smallest. If we accept that a UTM is a type of mind and denote by (m, n) the class of UTMs with m states and n symbols, the following UTMs have been discovered: (9, 3), (4, 6), (5, 5), and (2, 18). The (4, 6)-UTM uses only 22 instructions, and no less-complex standard machine has been found (“Universal Turing Machine” 2011). Alternatively, we may ask about the largest mind. Given that we have already shown that the set of minds is infinite, such an entity does not exist. However, if we take into account our embodiment requirement, the largest mind may in fact correspond to the design at the physical limits of computation (Lloyd 2000).

Another interesting property of minds is that they can all be generated by a simple deterministic algorithm, a variant of a Levin search (Levin 1973): Start with an integer (e.g., 42) and check whether the number encodes a mind; if not, discard the number; otherwise, add it to the set of mind designs and proceed to examine the next integer. Every mind will eventually appear on our list after a finite number of steps. However, checking whether something is in fact a mind is not a trivial procedure. Rice’s theorem (Rice 1953) explicitly forbids determination of nontrivial properties of arbitrary programs. One way to overcome this limitation is to introduce an arbitrary time limit on the mind-or-not-mind determination function, effectively avoiding the underlying halting problem.
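A minimal sketch of this enumeration procedure follows. The is_mind predicate is a hypothetical placeholder: as just noted, Rice’s theorem rules out an exact general test, which is precisely why the sketch carries an explicit step limit.

```python
# Sketch of the enumeration procedure: walk the integers in increasing order,
# apply a time-limited "is this a mind?" predicate, and collect the numbers
# that pass. The predicate is a stand-in; any real check would be heuristic.

from typing import Iterator


def is_mind(candidate: int, step_limit: int) -> bool:
    """Placeholder: run the program encoded by `candidate` for at most
    `step_limit` steps and report whether it behaved like a mind.
    This stand-in simply pretends every seventh encoding qualifies."""
    return candidate % 7 == 0


def enumerate_minds(start: int = 0, step_limit: int = 10_000) -> Iterator[int]:
    """Yield, in increasing numerical order, every integer the
    time-limited predicate accepts as a mind design."""
    candidate = start
    while True:
        if is_mind(candidate, step_limit):
            yield candidate
        candidate += 1


minds = enumerate_minds(start=42)
print([next(minds) for _ in range(5)])   # first five accepted encodings
```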

Analyzing our mind design generation algorithm, we may raise the question of a complexity measure for mind designs, not in terms of the abilities of the mind, but in terms of the complexity of the design representation. Our algorithm outputs minds in order of their increasing numerical value, but this is not representative of the design complexity of the respective minds. Some minds may be represented by highly compressible numbers with a short representation, such as 10^13, and others may comprise 10,000 completely random digits, for example, 735834895565117216037753562914 … (Yampolskiy 2013b). I suggest that a Kolmogorov complexity (KC) (Kolmogorov 1965) measure could be applied to strings representing mind designs. Consequently, some minds will be rated as “elegant” (i.e., having a compressed representation much shorter than the original string); others will be “efficient,” already being the most efficient representation of that particular mind. Interestingly, elegant minds might be easier to discover than efficient minds, but unfortunately, KC is not generally computable.
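Since KC itself is uncomputable, a common workaround (shown here only as an illustrative approximation, not as the measure proposed above) is to use an off-the-shelf compressor as an upper-bound proxy: the more a design string shrinks, the more “elegant” it is.

```python
# Compression as a crude upper-bound proxy for Kolmogorov complexity:
# a highly regular design string compresses far better than a patternless one.

import random
import zlib


def compression_ratio(design: str) -> float:
    """Return compressed_size / original_size for a design string."""
    raw = design.encode("utf-8")
    return len(zlib.compress(raw, level=9)) / len(raw)


elegant = "10" * 5_000                                          # highly regular
patternless = "".join(random.choices("0123456789", k=10_000))   # random digits

print(f"elegant     ratio: {compression_ratio(elegant):.3f}")      # compresses well
print(f"patternless ratio: {compression_ratio(patternless):.3f}")  # compresses far less
```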

In the context of complexity analysis of mind designs, we can ask a few interesting philosophical questions. For example, could two minds be added together (Sotala and Valpola 2012)? In other words, is it possible to combine two uploads or two artificially intelligent programs into a single, unified mind design? Could this process be reversed? Could a single mind be separated into multiple nonidentical entities, each in itself a mind? In addition, could one mind design be changed into another via a gradual process without destroying it? For example, could a computer virus (or even a real virus loaded with the DNA of another person) be a sufficient cause to alter a mind into a predictable type of other mind? Could specific properties be introduced into a mind given this virus-based approach? For example, could friendliness (Yudkowsky 2001) be added post-factum to an existing mind design?

Each mind design corresponds to an integer and so is finite, but because the number of minds is infinite, some have vastly more states than others; indeed, for any given mind there exist minds with a far greater number of states. Consequently, because a human mind has only a finite number of possible states, there are minds that can never be fully understood by a human mind: such designs have far more states than a human mind can represent, so, by the pigeonhole principle, any human model of them must conflate distinct states.
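One way to make the pigeonhole argument explicit, in notation introduced here only for illustration:

```latex
% S_h = set of possible states of a human mind,  |S_h| = N_h (finite)
% S_M = set of possible states of a larger mind, |S_M| = N_M > N_h
% Any model f of the larger mind held inside the human mind is a map
%   f : S_M -> S_h.
\[
  |S_M| > |S_h| \;\Longrightarrow\; \exists\, s_1 \neq s_2 \in S_M
  \text{ such that } f(s_1) = f(s_2),
\]
% so at least two distinct states of the larger mind become
% indistinguishable in the human model, and full understanding fails.
```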

 

SPACE OF MIND DESIGNS

Overall, the set of human minds (about 7 billion of them currently available and about 100 billion that ever existed) is homogeneous in terms of both hardware (embodiment in a human body) and software (brain design and knowledge). In fact, the small differences between human minds are trivial in the context of the full infinite spectrum of possible mind designs. Human minds represent only a small constant-size subset of the great mind landscape. The same could be said about the sets of other earthly minds, such as dog minds, bug minds, male minds, or in general the set of all animal minds.

Given our algorithm for sequentially generating minds, one can see that a mind could never be completely destroyed, making minds theoretically immortal. A particular mind may not be embodied at a given time, but the idea of it is always present. In fact, it was present even before the material universe came into existence. So, given sufficient computational resources, any mind design could be regenerated, an idea commonly associated with the concept of reincarnation (Fredkin 1982). Also, the most powerful and most knowledgeable mind has always been associated with the idea of Deity or the Universal Mind.

Given my definition of mind, we can classify minds with respect to their design, knowledge base, or embodiment. First, the designs could be classified with respect to their origins: copied from an existing mind like an upload, evolved via artificial or natural evolution, or explicitly designed with a set of particular desirable properties. Another alternative is what is known as a Boltzmann brain—a complete mind embedded in a system that arises due to statistically rare random fluctuations in the particles comprising the universe, but whose occurrence is made likely by the sheer vastness of the cosmos (De Simone et al. 2010).

Last, a possibility remains that some minds are physically or informationally recursively nested within other minds. With respect to the physical nesting, we can consider a type of mind suggested by Kelly (2007b), who talks about “a very slow invisible mind over large physical distances.” It is possible that the physical universe as a whole or a significant part of it comprises such a megamind. This theory has been around for millennia and has recently received some indirect experimental support (Krioukov et al. 2012). In this case, all the other minds we can consider are nested within such a larger mind. With respect to the informational nesting, a powerful mind can generate a less-powerful mind as an idea. This obviously would take some precise thinking but should be possible for a sufficiently powerful artificially intelligent mind. Some scenarios describing informationally nested minds are analyzed in work on the AI confinement problem (Yampolskiy 2012b). Bostrom, using statistical reasoning, suggests that all observed minds, and the whole universe, are nested within a mind of a powerful computer (Bostrom 2003). Similarly, Lanza, using a completely different and somewhat controversial approach (biocentrism), argues that the universe is created by biological minds (Lanza 2007). It remains to be seen if given a particular mind, its origins can be deduced from some detailed analysis of the mind’s design or actions.

Although minds designed by human engineers comprise only a tiny region in the map of mind designs, they probably occupy the best-explored part of the map. Numerous surveys of artificial minds, created by AI researchers in the last 50 years, have been produced (Miller 2012; Cattell and Parker 2012; de Garis et al. 2010; Goertzel et al. 2010; Vernon, Metta, and Sandini 2007). Such surveys typically attempt to analyze the state of the art in artificial cognitive systems and provide some internal classification of dozens of the reviewed systems with regard to their components and overall design. The main subcategories into which artificial minds designed by human engineers can be placed include neuron-level brain emulators (de Garis et al. 2010), biologically inspired cognitive architectures (Goertzel et al. 2010), physical symbol systems, emergent systems, and dynamical and enactive systems (Vernon, Metta, and Sandini 2007). Rehashing information about specific architectures presented in such surveys is beyond the scope of this book, but one can notice the incredible richness and diversity of designs even in this tiny area of the overall map we are trying to envision. For those particularly interested in an overview of superintelligent minds, animal minds, and possible minds in addition to the surveys mentioned, “Artificial General Intelligence and the Human Mental Model” is highly recommended (Yampolskiy and Fox 2012).

For each mind subtype, there are numerous architectures, which to a certain degree depend on the computational resources available via a particular embodiment. For example, theoretically a mind working with infinite computational resources could trivially use brute force for any problem, always arriving at the optimal solution, regardless of its size. In practice, limitations of the physical world place constraints on available computational resources regardless of the embodiment type, making the brute force approach an infeasible solution for most real-world problems (Lloyd 2000). Minds working with limited computational resources have to rely on heuristic simplifications to arrive at “good enough” solutions (Yampolskiy, Ashby, and Hassan 2012; Ashby and Yampolskiy 2011; Hughes and Yampolskiy 2013; Port and Yampolskiy 2012).
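To make this contrast concrete, here is a toy illustration (the five-city traveling-salesman instance is my own choice, not from the text): a mind with unbounded resources can afford exhaustive search over every tour, while a resource-bounded mind settles for a cheap nearest-neighbor heuristic that is merely “good enough.”

```python
# Unlimited resources: exhaustive (brute-force) search, always optimal.
# Bounded resources: greedy nearest-neighbor heuristic, usually suboptimal.

from itertools import permutations

# Symmetric distance matrix for five cities (made-up numbers).
D = [
    [0, 2, 9, 10, 7],
    [2, 0, 6, 4, 3],
    [9, 6, 0, 8, 5],
    [10, 4, 8, 0, 6],
    [7, 3, 5, 6, 0],
]
N = len(D)


def tour_length(tour):
    """Total length of a closed tour visiting the cities in the given order."""
    return sum(D[tour[i]][tour[(i + 1) % N]] for i in range(N))


def brute_force():
    """Exhaustive search: optimal, but cost grows factorially with N."""
    return min(permutations(range(N)), key=tour_length)


def greedy():
    """Nearest-neighbor heuristic: cheap, 'good enough', usually suboptimal."""
    tour, remaining = [0], set(range(1, N))
    while remaining:
        nxt = min(remaining, key=lambda city: D[tour[-1]][city])
        tour.append(nxt)
        remaining.remove(nxt)
    return tuple(tour)


best, quick = brute_force(), greedy()
print("optimal tour:", best, "length", tour_length(best))
print("greedy tour :", quick, "length", tour_length(quick))
```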

Another subset of architectures consists of self-improving minds. Such minds are capable of examining their own design and finding improvements in their embodiment, algorithms, or knowledge bases that will allow the mind to more efficiently perform desired operations (Hall 2007b). It is likely that possible improvements would form a Bell curve with many initial opportunities for optimization toward higher efficiency and fewer such options remaining after every generation. Depending on the definitions used, one can argue that a recursively self-improving mind actually changes itself into a different mind, rather than remaining itself, which is particularly obvious after a sequence of such improvements. Taken to the extreme, this idea implies that a simple act of learning new information transforms you into a different mind, raising millennia-old questions about the nature of personal identity.

With respect to their knowledge bases, minds could be separated into those without an initial knowledge base, which are expected to acquire their knowledge from the environment; minds that are given a large set of universal knowledge from inception; and minds given specialized knowledge in only one or a few domains. Whether the knowledge is stored in an efficient manner, compressed, classified, or censored depends on the architecture and is a potential subject of improvement by self-modifying minds.

One can also classify minds in terms of their abilities or intelligence. Of course, the problem of measuring intelligence is that no universal tests exist. Measures such as IQ tests and performance on specific tasks are not universally accepted and are always highly biased against nonhuman intelligences. Recently, some work has been done on streamlining intelligence measurements across different types of machine intelligence (Legg and Hutter 2007; Yonck 2012) and other “types” of intelligence (Herzing 2014), but the applicability of the results is still debated. In general, the notion of intelligence only makes sense in the context of problems to which said intelligence can be applied. In fact, this is exactly how IQ tests work—by presenting the subject with a number of problems and seeing how many the subject is able to solve in a given amount of time (computational resource).
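A sketch of that measurement scheme follows (the agent interface and the toy problem battery are hypothetical): present the subject with problems and count how many it answers correctly, each within a fixed per-problem budget of time.

```python
# Count how many problems an agent solves correctly within a time budget,
# the "computational resource" mentioned in the text.

import time
from typing import Callable


def score(agent: Callable[[str], str],
          problems: list[tuple[str, str]],
          budget_seconds: float) -> int:
    """Number of problems answered correctly within the per-problem budget."""
    solved = 0
    for question, correct_answer in problems:
        start = time.perf_counter()
        answer = agent(question)
        elapsed = time.perf_counter() - start
        if elapsed <= budget_seconds and answer == correct_answer:
            solved += 1
    return solved


# Toy battery of arithmetic questions and a trivial stand-in "mind".
battery = [("2+2", "4"), ("3*7", "21"), ("10-4", "6")]
naive_agent = lambda expression: str(eval(expression))  # safe on this fixed battery

print(score(naive_agent, battery, budget_seconds=1.0))  # -> 3
```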

A subfield of computer science known as computational complexity theory is devoted to studying and classifying various problems with respect to their difficulty and the computational resources necessary to solve them. For every class of problems, complexity theory defines a class of machines capable of solving such problems. We can apply similar ideas to classifying minds; for example, we can group together all minds capable of efficiently (Yampolskiy 2013b) solving problems in the class P (polynomial time), or those capable of solving the more difficult class of NP-complete problems (NP, nondeterministic polynomial time; Yampolskiy 2011b). Similarly, we can talk about minds with general intelligence belonging to the class of AI-Complete (Yampolskiy 2011a, 2012a, 2013c) minds, such as humans.

We can also look at the goals of different minds. It is possible to create a system that has no terminal goals, in which case such a mind is not motivated to accomplish anything. Many minds are designed or trained to pursue a particular high-level goal or a set of goals. We can envision a mind whose goal or set of goals changes randomly, as well as a mind that has many goals of different priority. Steve Omohundro used microeconomic theory to speculate about the driving forces in the behavior of superintelligent machines. He argues that intelligent machines will want to self-improve, be rational, preserve their utility functions, prevent counterfeit utility (Yampolskiy 2014), acquire and use resources efficiently, and protect themselves. He believes that the actions of machines will be governed by rational economic behavior (Omohundro 2007, 2008). Mark Waser suggests an additional “drive” to be included in the list of behaviors predicted to be exhibited by machines (Waser 2010). Namely, he suggests that evolved desires for cooperation and being social are part of human ethics and are a great way of accomplishing goals, an idea also analyzed by Joshua Fox and Carl Shulman, but with contrary conclusions (Fox and Shulman 2010). Although it is commonly assumed that minds with high intelligence will converge on a common goal, Nick Bostrom, via his orthogonality thesis, has argued that a system can have any combination of intelligence and goals (Bostrom 2012).

Regardless of design, embodiment, or any other properties, all minds can be classified with respect to two fundamental but scientifically poorly defined properties: free will and consciousness. Both descriptors suffer from an ongoing debate regarding their actual existence or explanatory usefulness. This is primarily a result of the impossibility of designing a definitive test to measure or even detect said properties, despite numerous attempts (Hales 2009; Aleksander and Dunmall 2003; Arrabales, Ledezma, and Sanchis 2008), or of showing that theories associated with them are somehow falsifiable. Intuitively, we can speculate that consciousness, and maybe free will, are not binary properties but rather continuous and emergent abilities commensurate with the degree of general intelligence possessed by the system or some other property we shall term mindness. Free will can be said to correlate with the degree to which the behavior of the system cannot be predicted (Aaronson 2013). This is particularly important in the design of artificially intelligent systems, for which inability to predict their future behavior is a highly undesirable property from the safety point of view (Yampolskiy 2013a, 2013d). Consciousness, on the other hand, seems to have no important impact on the behavior of the system, as can be seen from thought experiments supposing the existence of “consciousless” intelligent agents (Chalmers 1996). This may change if we succeed in designing a test, perhaps based on observer impact on quantum systems (Gao 2002), to detect and measure consciousness.

To be social, two minds need to be able to communicate, which might be difficult if the two minds do not share a common communication protocol, common culture, or even a common environment. In other words, if they have no common grounding, they do not understand each other. We can say that two minds understand each other if, given the same set of inputs, they produce similar outputs. For example, in sequence prediction tasks (Legg 2006), two minds have an understanding if, given the same observed subsequence, they make the same predictions about the future numbers of the sequence. We can say that one mind understands another mind’s function if it can predict the other’s output with high accuracy. Interestingly, a perfect ability by two minds to predict each other would imply that they are identical and that they have no free will as defined previously.
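This agreement criterion can be operationalized as a small sketch (both predictors below are toy stand-ins, not anything proposed in the text): each “mind” predicts the continuation of an observed subsequence, and mutual understanding is scored as the fraction of tasks on which their predictions coincide.

```python
# Two toy predictors and a score measuring how often they agree.

from typing import Callable, Sequence

Predictor = Callable[[Sequence[int]], int]


def arithmetic_predictor(seq: Sequence[int]) -> int:
    """Assumes a constant difference between consecutive terms."""
    return seq[-1] + (seq[-1] - seq[-2])


def repeat_last_predictor(seq: Sequence[int]) -> int:
    """Simply repeats the most recent observation."""
    return seq[-1]


def mutual_understanding(a: Predictor, b: Predictor,
                         tasks: Sequence[Sequence[int]]) -> float:
    """Fraction of sequence-prediction tasks on which the two minds agree."""
    agreements = sum(a(seq) == b(seq) for seq in tasks)
    return agreements / len(tasks)


tasks = [(1, 2, 3, 4), (5, 5, 5), (2, 4, 6, 8)]
print(mutual_understanding(arithmetic_predictor, repeat_last_predictor, tasks))
# -> 0.333..., since the two minds agree only on the constant sequence
```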

 

A SURVEY OF TAXONOMIES

Yudkowsky describes the map of mind design space as follows: “In one corner, a tiny little circle contains all humans; within a larger tiny circle containing all biological life; and all the rest of the huge map is the space of minds-in-general. The entire map floats in a still vaster space, the space of optimization processes” (Yudkowsky 2008, 311). Figure 2.1 illustrates one possible mapping inspired by this description. Similarly, Ivan Havel writes:

All conceivable cases of intelligence (of people, machines, whatever) are represented by points in a certain abstract multidimensional “super space” that I will call the intelligence space (shortly IS). Imagine that a specific coordinate axis in IS is assigned to any conceivable particular ability, whether human, machine, shared, or unknown (all axes having one common origin). If the ability is measurable the assigned axis is endowed with a corresponding scale. Hypothetically, we can also assign scalar axes to abilities, for which only relations like “weaker-stronger,” “better-worse,” “less-more” etc. are meaningful; finally, abilities that may be only present or absent may be assigned with “axes” of two (logical) values (yes-no). Let us assume that all coordinate axes are oriented in such a way that greater distance from the common origin always corresponds to larger extent, higher grade, or at least to the presence of the corresponding ability. The idea is that for each individual intelligence (i.e. the intelligence of a particular person, machine, network, etc.), as well as for each generic intelligence (of some group) there exists just one representing point in IS, whose coordinates determine the extent of involvement of particular abilities. (Havel 2013, 13)


If the universe (or multiverse) is infinite, as our current physics theories indicate, then all possible minds in all possible states are instantiated somewhere (Bostrom 2006). Ben Goertzel proposes the following classification of kinds of minds, mostly centered on the concept of embodiment (Goertzel 2006):

  • Singly embodied: controls a single physical or simulated system
  • Multiply embodied: controls a number of disconnected physical or simulated systems
  • Flexibly embodied: controls a changing number of physical or simulated systems
  • Nonembodied: resides in a physical substrate but does not utilize the body in a traditional way
  • Body centered: consists of patterns emergent between the physical system and the environment
  • Mindplex: consists of a set of collaborating units, each of which is itself a mind (Goertzel 2003)
  • Quantum: is an embodiment based on properties of quantum physics
  • Classical: is an embodiment based on properties of classical physics

 

J. Storrs Hall (2007a), in his “Kinds of Minds,” suggests that different stages to which a developing AI might belong can be classified relative to its humanlike abilities. His classification encompasses the following:

  • Hypohuman: infrahuman, less-than-human capacity
  • Diahuman: human-level capacities in some areas but still no general intelligence
  • Parahuman: similar but not identical to humans, as for example, augmented humans
  • Allohuman: as capable as humans, but in different areas
  • Epihuman: slightly beyond the human level
  • Hyperhuman: much more powerful than human; superintelligent (Hall 2007a; Yampolskiy and Fox 2012)

 

Patrick Roberts, in his book Mind Making, presents his ideas for a “taxonomy of minds”; I leave it to the reader to judge the usefulness of his classification (Roberts 2009):

  • Choose means: Does it have redundant means to the same ends? How well does it move between them?
  • Mutate: Can a mind naturally gain and lose new ideas in its lifetime?
  • Doubt: Is it eventually free to lose some or all beliefs? Or, is it wired to obey the implications of every sensation?
  • Sense itself: Does a mind have the senses to see the physical conditions of that mind?
  • Preserve itself: Does a mind also have the means to preserve or reproduce itself?
  • Sense minds: Does a mind understand a mind, at least of lower classes, and how well does it apply that to itself, to others?
  • Sense kin: Can it recognize the redundant minds, or at least the bodies of minds, with which it was designed to cooperate?
  • Learn: Does the mind’s behavior change from experience? Does it learn associations?
  • Feel: We imagine that an equally intelligent machine would lack our conscious experience.
  • Communicate: Can it share beliefs with other minds?

 

Kevin Kelly has also proposed a taxonomy of minds that in his implementation is really just a list of different minds, some of which have not appeared in other taxonomies (Kelly 2007b): mind with operational access to its source code; general intelligence without self-awareness; self-awareness without general intelligence; superlogic machine without emotion; mind capable of imagining greater mind; self-aware mind incapable of creating a greater mind; mind capable of creating a greater mind, which creates a greater mind; very slow, distributed mind over large physical distance; mind capable of cloning itself and remaining in unity with clones; global mind, which is a large supercritical mind of subcritical brains; and anticipator, mind specializing in scenario and prediction making (Kelly 2007b).

Elsewhere, Kelly provides much relevant analysis of the landscape of minds and writes about “Inevitable Minds” (Kelly 2009), “The Landscape of Possible Intelligences” (Kelly 2008a), “What Comes After Minds?” (Kelly 2008b), and “The Evolutionary Mind of God” (Kelly 2007a).

Aaron Sloman, in “The Structure of the Space of Possible Minds,” using his virtual machine model, proposes a division of the space of possible minds with respect to the following properties (Sloman 1984):

  • Quantitative versus structural
  • Continuous versus discrete
  • Complexity of stored instructions
  • Serial versus parallel
  • Distributed versus fundamentally parallel
  • Connected to external environment versus not connected
  • Moving versus stationary
  • Capable of modeling others versus not capable
  • Capable of logical inference versus not capable
  • Fixed versus reprogrammable
  • Goal consistency versus goal selection
  • Metamotives versus motives
  • Able to delay goals versus immediate goal following
  • Static plan versus dynamic plan
  • Self-aware versus not self-aware

 

MIND CLONING AND EQUIVALENCE TESTING ACROSS SUBSTRATES

The possibility of uploads rests on the ideas of computationalism (Putnam 1980), specifically substrate independence and equivalence, meaning that the same mind can be instantiated in different substrates and move freely between them. If your mind is cloned and a copy is instantiated in a substrate different from the original one (or on the same substrate), how can it be verified that the copy is indeed an identical mind? Can it be done at least immediately after cloning, before the mind-clone learns any new information? For that purpose, I propose a variant of a Turing test, which also relies on interactive text-only communication to ascertain the quality of the copied mind. The text-only interface is important so as not to prejudice the examiner against any unusual substrates on which the copied mind might be running. The test proceeds by having the examiner (original mind) ask the copy (cloned mind) questions that supposedly only the original mind would know the answers to (testing should be done in a way that preserves privacy). Good questions would relate to personal preferences, secrets (passwords, etc.), as well as recent dreams. Such a test could also indirectly test for consciousness via similarity of subjective qualia. Only a perfect copy should be able to answer all such questions in the same way as the original mind. Another variant of the same test may have a third party test the original and the cloned mind by seeing if they always provide the same answer to any question. One needs to be careful in such questioning not to give undue weight to questions related to the mind’s substrate, as those may lead to different answers. For example, asking a human if he or she is hungry may produce an answer different from the one that would be given by a nonbiological robot.
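A minimal sketch of the third-party variant of this test appears below. The Mind interface, the probe questions, and the stored answers are all hypothetical placeholders; the point is only the shape of the protocol: pose each probe to both parties over a text-only channel and check whether their answers ever diverge.

```python
# Third-party mind-clone equivalence test: identical answers to every probe.

from typing import Callable

Mind = Callable[[str], str]   # text in, text out; the substrate stays hidden


def equivalence_test(original: Mind, copy: Mind, probes: list[str]) -> bool:
    """True only if both minds answer every probe identically."""
    return all(original(q) == copy(q) for q in probes)


# Probes should target knowledge only the original mind would possess
# (preferences, secrets, recent dreams) and avoid substrate-dependent
# questions such as "are you hungry?".
probes = [
    "What is the password you chose last week?",
    "Describe the dream you had two nights ago.",
    "What is your favorite number?",
]

memory = {
    probes[0]: "hunter2",
    probes[1]: "flying over the ocean",
    probes[2]: "42",
}
original_mind: Mind = lambda question: memory.get(question, "I do not recall.")
perfect_copy: Mind = original_mind        # identical answers by construction

print(equivalence_test(original_mind, perfect_copy, probes))   # -> True
```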

 

CONCLUSIONS

Science periodically experiences a discovery of a whole new area of investigation. For example, observations made by Galileo Galilei led to the birth of observational astronomy (Galilei 1953), also known as the study of our universe; Watson and Crick’s discovery of the structure of DNA led to the birth of the field of genetics (Watson and Crick 1953), which studies the universe of blueprints for organisms; and Stephen Wolfram’s work with cellular automata has resulted in “a new kind of science” (Wolfram 2002) that investigates the universe of computational processes. I believe that we are about to discover yet another universe: the universe of minds.

As our understanding of the human brain improves, thanks to numerous projects aimed at simulating or reverse engineering a human brain, we will no doubt realize that human intelligence is just a single point in the vast universe of potential intelligent agents comprising a new area of study. The new field, which I would like to term intellectology, will study and classify the design space of intelligent agents, work on establishing limits to intelligence (the minimum sufficient for general intelligence and the maximum subject to physical limits), contribute to consistent measurement of intelligence across intelligent agents, look at recursive self-improving systems, design new intelligences (making AI a subfield of intellectology), and evaluate the capacity of lower-level intelligences to understand higher-level ones. At the more theoretical level, the field will look at the distribution of minds on the number line and in the mind design space, as well as at attractors in the mind design space. It will consider how evolution, drives, and design choices affect the density of minds in the space of possibilities. The field will not be subject to the current limitations brought on by the human-centric view of intelligence and will open our understanding to seeing intelligence as a fundamental computational resource, such as space or time. Finally, I believe intellectology will highlight the inhumanity of most possible minds and the dangers associated with such minds being placed in charge of humanity.

 

REFERENCES

Aaronson, Scott. 2013. The Ghost in the Quantum Turing Machine. arXiv preprint arXiv:1306.0159.

Aleksander, Igor and Barry Dunmall. 2003. Axioms and tests for the presence of minimal consciousness in agents I: preamble. Journal of Consciousness Studies 10(4–5):4–5.

Arrabales, Raúl, Agapito Ledezma, and Araceli Sanchis. 2008. ConsScale: a plausible test for machine consciousness? Proceedings of the Nokia Workshop on Machine Consciousness—13th Finnish Artificial Intelligence Conference (STeP 2008), Helsinki, Finland, pp. 49–57.

Ashby, Leif H. and Roman V. Yampolskiy. July 27–30, 2011. Genetic algorithm and wisdom of artificial crowds algorithm applied to Light up. Paper presented at the 2011 16th International Conference on Computer Games (CGAMES), Louisville, KY.

Bostrom, Nick. 2003. Are you living in a computer simulation? Philosophical Quarterly 53(211):243–255.

Bostrom, Nick. 2006. Quantity of experience: brain-duplication and degrees of consciousness. Minds and Machines 16(2):185–200.

Bostrom, Nick. 2012. The superintelligent will: motivation and instrumental rationality in advanced artificial agents. Minds and Machines 22(2):71–85.

Cattell, Rick and Alice Parker. 2012. Challenges for brain emulation: why is it so difficult? Natural Intelligence 1(3):17–31.

Chalmers, David J. 1996. The Conscious Mind: In Search of a Fundamental Theory. Oxford, UK: Oxford University Press.

de Garis, Hugo, Chen Shuo, Ben Goertzel, and Lian Ruiting. 2010. A world survey of artificial brain projects. Part I: large-scale brain simulations. Neurocomputing 74(1–3):3–29. doi:http://dx.doi.org/10.1016/j.neucom.2010.08.004

De Simone, Andrea, Alan H. Guth, Andrei Linde, Mahdiyar Noorbala, Michael P. Salem, and Alexander Vilenkin. 2010. Boltzmann brains and the scalefactor cutoff measure of the multiverse. Physical Review D 82(6):063520.

Fox, Joshua and Carl Shulman. October 4–6, 2010. Superintelligence Does Not Imply Benevolence. Paper presented at the Eighth European Conference on Computing and Philosophy, Munich, Germany.

Fredkin, Edward. 1982. On the Soul. Unpublished manuscript.

Galilei, Galileo. 1953. Dialogue Concerning the Two Chief World Systems: Ptolemaic and Copernican. Oakland: University of California Press.

Gao, Shan. 2002. A quantum method to test the existence of consciousness. The Noetic Journal 3(3):27–31.

Goertzel, Ben. 2003. Mindplexes: the potential emergence of multiple levels of focused consciousness in communities of AI’s and humans. Dynamical Psychology. http://www.goertzel.org/dynapsyc/2003/mindplex.htm

Goertzel, Ben. 2006. Kinds of minds. In The Hidden Pattern: A Patternist Philosophy of Mind, 17–25. Boca Raton, FL: BrownWalker Press.

Goertzel, Ben, Ruiting Lian, Itamar Arel, Hugo de Garis, and Shuo Chen. 2010. A world survey of artificial brain projects. Part II: biologically inspired cognitive architectures. Neurocomputing 74(1–3):30–49. doi:10.1016/j.neucom.2010.08.012

Hales, Colin. 2009. An empirical framework for objective testing for P-consciousness in an artificial agent. Open Artificial Intelligence Journal 3:1–15.

Hall, J. Storrs. 2007a. Kinds of minds. In Beyond AI: Creating the Conscience of the Machine, 241–248. Amherst, NY: Prometheus Books.

Hall, J. Storrs. October 2007b. Self-improving AI: an analysis. Minds and Machines 17(3):249–259.

Hanson, Robin. 1994. If uploads come first. Extropy 6(2): 10–15.

Havel, Ivan M. 2013. On the way to intelligence singularity. In Beyond Artificial Intelligence, edited by Jozef Kelemen, Jan Romportl, and Eva Zackova, 3–26. Berlin: Springer.

Herzing, Denise L. 2014. Profiling nonhuman intelligence: an exercise in developing unbiased tools for describing other “types” of intelligence on earth. Acta Astronautica 94(2):676–680. doi:http://dx.doi.org/10.1016/j.actaastro.2013.08.007

Hughes, Ryan, and Roman V. Yampolskiy. 2013. Solving sudoku puzzles with wisdom of artificial crowds. International Journal of Intelligent Games and Simulation 7(1):6.

Kelly, Kevin. 2007a. The Evolutionary Mind of God. http://kk.org/thetechnium/archives/2007/02/the_evolutionar.php

Kelly, Kevin. 2007b. A Taxonomy of Minds. http://kk.org/thetechnium/archives/2007/02/a_taxonomy_of_m.php

Kelly, Kevin. 2008a. The Landscape of Possible Intelligences. http://kk.org/thetechnium/archives/2008/09/the_landscape_o.php

Kelly, Kevin. 2008b. What Comes After Minds? http://kk.org/thetechnium/archives/2008/12/what_comes_afte.php

Kelly, Kevin. 2009. Inevitable Minds. http://kk.org/thetechnium/archives/2009/04/inevitable_mind.php

Kolmogorov, A. N. 1965. Three approaches to the quantitative definition of information. Problems of Information Transmission 1(1):1–7.

Krioukov, Dmitri, Maksim Kitsak, Robert S. Sinkovits, David Rideout, David Meyer, and Marián Boguñá. 2012. Network cosmology. Scientific Reports 2. doi:http://www.nature.com/srep/2012/121113/srep00793/abs/srep00793.html#supplementary-information

Lanza, Robert. 2007. A new theory of the universe. American Scholar 76(2):18.

Legg, Shane. 2006. Is There an Elegant Universal Theory of Prediction? Paper presented at the 17th International Conference on Algorithmic Learning Theory, Barcelona, Spain, October 7–10, 2006.

Legg, Shane and Marcus Hutter. December 2007. Universal intelligence: a definition of machine intelligence. Minds and Machines 17(4):391–444.

Levin, Leonid. 1973. Universal search problems. Problems of Information Transmission 9(3):265–266.

Lloyd, Seth. 2000. Ultimate physical limits to computation. Nature 406:1047–1054.

Miller, M. S. P. July 4–6, 2012. Patterns for Cognitive Systems. Paper presented at the 2012 Sixth International Conference on Complex, Intelligent and Software Intensive Systems (CISIS), Palermo, Italy.

Omohundro, Stephen M. September 8–9, 2007. The Nature of Self-Improving Artificial Intelligence. Paper presented at the Singularity Summit, San Francisco.

Omohundro, Stephen M. February 2008. The Basic AI Drives. In Proceedings of the First AGI Conference, Volume 171, Frontiers in Artificial Intelligence and Applications, edited by P. Wang, B. Goertzel, and S. Franklin, 483–492. Amsterdam: IOS Press.

Port, Aaron C. and Roman V. Yampolskiy. July 30–August 1, 2012. Using a GA and wisdom of artificial crowds to solve solitaire battleship puzzles. Paper presented at the 2012 17th International Conference on Computer Games (CGAMES), Louisville, KY.

Putnam, Hilary. 1980. Brains and behavior. Readings in Philosophy of Psychology 1:24–36.

Rice, Henry Gordon. 1953. Classes of recursively enumerable sets and their decision problems. Transactions of the American Mathematical Society 74(2):358–366.

Roberts, Patrick. 2009. Mind Making: The Shared Laws of Natural and Artificial. North Charleston, SC: CreateSpace.

Sloman, Aaron. 1984. The structure of the space of possible minds. In The Mind and the Machine: Philosophical Aspects of Artificial Intelligence, 35–42. Chichester, UK: Ellis Horwood.

Sotala, Kaj and Harri Valpola. 2012. Coalescing minds: brain uploading-related group mind scenarios. International Journal of Machine Consciousness 4(1):293–312. doi:10.1142/S1793843012400173

Universal Turing Machine. 2011. Accessed April 14. http://en.wikipedia.org/wiki/Universal_Turing_machine

Vernon, D., G. Metta, and G. Sandini. 2007. A survey of artificial cognitive systems: implications for the autonomous development of mental capabilities in computational agents. IEEE Transactions on Evolutionary Computation 11(2):151–180. doi:10.1109/tevc.2006.890274

Waser, Mark R. March 5–8, 2010. Designing a Safe Motivational System for Intelligent Machines. Paper presented at the Third Conference on Artificial General Intelligence, Lugano, Switzerland.

Watson, James D. and Francis H. C. Crick. 1953. Molecular structure of nucleic acids. Nature 171(4356):737–738.

Wolfram, Stephen. May 14, 2002. A New Kind of Science. Oxfordshire, UK: Wolfram Media.

Yampolskiy, R. V. 2011a. AI-Complete CAPTCHAs as zero knowledge proofs of access to an artificially intelligent system. ISRN Artificial Intelligence no. 271878.

Yampolskiy, Roman V. 2011b. Construction of an NP problem with an exponential lower bound. Arxiv preprint arXiv:1111.0305

Yampolskiy, Roman V. April 21–22, 2012a. AI-Complete, AI-Hard, or AI-Easy— Classification of Problems in AI. Paper presented at the 23rd Midwest Artificial Intelligence and Cognitive Science Conference, Cincinnati, OH.

Yampolskiy, Roman V. 2012b. Leakproofing singularity—artificial intelligence confinement problem. Journal of Consciousness Studies (JCS) 19(1–2):194–214.

Yampolskiy, Roman V. 2013a. Artificial intelligence safety engineering: why machine ethics is a wrong approach. In Philosophy and Theory of Artificial Intelligence, 389–396. Berlin: Springer.

Yampolskiy, Roman V. 2013b. Efficiency theory: a unifying theory for information, computation and intelligence. Journal of Discrete Mathematical Sciences and Cryptography 16(4–5):259–277.

Yampolskiy, Roman V. 2013c. Turing test as a defining feature of AI-Completeness. In Artificial Intelligence, Evolutionary Computation and Metaheuristics—In the Footsteps of Alan Turing, edited by Xin-She Yang, 3–17. Berlin: Springer.

Yampolskiy, Roman V. 2013d. What to do with the singularity paradox? In Philosophy and Theory of Artificial Intelligence, 397–413. Berlin: Springer.

Yampolskiy, Roman V. 2014. Utility function security in artificially intelligent agents. Journal of Experimental and Theoretical Artificial Intelligence (JETAI) 1–17.

Yampolskiy, Roman V., Leif Ashby, and Lucas Hassan. 2012. Wisdom of artificial crowds—a metaheuristic algorithm for optimization. Journal of Intelligent Learning Systems and Applications 4(2):98–107.

Yampolskiy, Roman V., and Joshua Fox. 2012. Artificial general intelligence and the human mental model. In Singularity Hypotheses, 129–145. Berlin: Springer.

Yampolskiy, Roman and Marina Gavrilova. 2012. Artimetrics: biometrics for artificial entities. IEEE Robotics and Automation Magazine (RAM) 19(4):48–58.

Yampolskiy, Roman V., Brendan Klare, and Anil K. Jain. December 12–15, 2012. Face Recognition in the Virtual World: Recognizing Avatar Faces. Paper presented at the 2012 11th International Conference on Machine Learning and Applications (ICMLA), Boca Raton, FL.

Yonck, Richard. 2012. Toward a standard metric of machine intelligence. World Future Review 4(2):61–70.

Yudkowsky, Eliezer S. 2001. Creating Friendly AI—The Analysis and Design of Benevolent Goal Architectures. http://singinst.org/upload/CFAI.html

Yudkowsky, Eliezer. May 13, 2006. The Human Importance of the Intelligence Explosion. Paper presented at the Singularity Summit at Stanford, Palo Alto, CA.

Yudkowsky, Eliezer. 2008. Artificial intelligence as a positive and negative factor in global risk. In Global Catastrophic Risks, edited by N. Bostrom and M. M. Cirkovic, 308–345. Oxford, UK: Oxford University Press.

###


 

Dr. Roman V. Yampolskiy is a Tenured Associate Professor in the Department of Computer Engineering and Computer Science at the Speed School of Engineering, University of Louisville. He is the founding and current director of the Cyber Security Lab and an author of many books, including Artificial Superintelligence: A Futuristic Approach. During his tenure at UofL, Dr. Yampolskiy has been recognized as Distinguished Teaching Professor, Professor of the Year, Faculty Favorite, Top 4 Faculty, Leader in Engineering Education, Top 10 Online College Professor of the Year, and Outstanding Early Career in Education award winner, among many other honors and distinctions. Yampolskiy is a Senior Member of IEEE and AGI, a member of the Kentucky Academy of Science, a Research Advisor for MIRI, and an Associate of GCRI.

Roman Yampolskiy holds a PhD from the Department of Computer Science and Engineering at the University at Buffalo. He was a recipient of a four-year NSF (National Science Foundation) IGERT (Integrative Graduate Education and Research Traineeship) fellowship. Before beginning his doctoral studies, Dr. Yampolskiy received a BS/MS (High Honors) combined degree in Computer Science from the Rochester Institute of Technology, NY, USA. After completing his PhD dissertation, Dr. Yampolskiy held the position of Affiliate Academic at the Center for Advanced Spatial Analysis, University College London. He had previously conducted research at the Laboratory for Applied Computing (currently known as the Center for Advancing the Study of Infrastructure) at the Rochester Institute of Technology and at the Center for Unified Biometrics and Sensors at the University at Buffalo. Dr. Yampolskiy is an alumnus of Singularity University (GSP2012) and a Visiting Fellow of the Singularity Institute (Machine Intelligence Research Institute).

Dr. Yampolskiy’s main areas of interest are AI Safety, Artificial Intelligence, Behavioral Biometrics, Cybersecurity, Digital Forensics, Games, Genetic Algorithms, and Pattern Recognition. Dr. Yampolskiy is an author of over 100 publications including multiple journal articles and books. His research has been cited by 1000+ scientists and profiled in popular magazines both American and foreign (New Scientist, Poker Magazine, Science World Magazine), dozens of websites (BBC, MSNBC, Yahoo! News), on radio (German National Radio, Swedish National Radio, Alex Jones Show) and TV. Dr. Yampolskiy’s research has been featured 250+ times in numerous media reports in 22 languages.

Roman’s website:  cecs.louisville.edu/ry
