Ape Brain Narcissism Misses the Singularity: An Artificial Life View

Since 1996, I have developed a simulation called Noble Ape. It creates a rich biological ecosystem and simulates the cognitive behavior of the ape-like creatures that inhabit it. The Noble Ape cognitive simulation is used by both Apple and Intel as a processor metric. The unit BCPS (Noble Ape Brain Cycles Per Second) is demonstrated both internally and to third-party developers. In this regard, I have become intimately acquainted with contemporary computing and the immense power we seem to ignore.

 

Since 2005, I have been the editor of Biota.org. The site brings together a number of simulation authors, academics and users broadly described as the artificial life community. Artificial life offers an applied philosophy through computation. There are a number of fascinating problems ideally suited to the kinds of applied philosophy artificial life explores. One of my favorites relates to the intelligence of vastly complex systems.

For more than a decade the artificial life community has developed metrics for computer power, complex-systems intelligence and a broader metaphysics that is distinctly different from what we hear from most advocates of “The Singularity movement.” When contrasted with theoretical Singularity works like Nick Bostrom’s “Are You Living in a Computer Simulation?”, artificial life challenges Singularity thinking in a number of ways. I would like to offer two particular challenges.

Tom Barbalet - Creator of Noble Ape

1: Survival is a far better metric of intelligence than replicating human intelligence, and…

2: There are a number of examples of vastly more intelligent systems (in terms of survival) than human intelligence.

These challenges have not come through a priori philosophical posturing but are the result of years of simulation and the iterative understanding and discourse that has come through the artificial life community.

The primacy of human intelligence is one of the last and greatest myths of the anthropomorphic divide — the division between humans and all other (living) things. Like most fallacies, it provides careers and countless treatises regarding paradoxes that can be explored at great length, leading to the warm and fuzzy conclusion that the human is still on top. If only it were so.

For starters, let’s look at the Precambrian period, when the floating entities that were our distant ancestors began to optimize their paths between feeding grounds. The simple rule in life is that survival (to reproduction) is the only meaningful metric. In fact, as these Precambrian critters began to make well-floated paths, they were starting the long road that would lead to us — and to our aforementioned countless treatises of paradoxes on the primacy of human intelligence.

Noble Ape Meters - Photo credit: Noble Ape

First Insight: Survival is intelligence.
When choosing a metric for survival intelligence, I was drawn to Teddy Roosevelt’s analysis of hunting big game in the 1900s. Roosevelt’s analysis related to the size (or caliber) of bullet required to stop a large animal. I was interested in a measure of the number of humans required to stop a vastly complex system. If there was to be a similar caliber of intelligence based on stopping a vastly complex system, why not make it a human-centric metric? To paraphrase Roosevelt:

It took but ten humans to slay this system.

Due to the rough nature of the approximation, I employed a base-10 logarithmic approach. If it took a human to slay the system, the survival intelligence value would be zero. If it took ten, the survival intelligence value would be one. If it took a hundred humans, the survival intelligence value would be two.
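
As a minimal sketch of this arithmetic (illustrative Python only, not code from the Noble Ape simulation; the function name is my own invention):

import math

# Illustrative sketch only; not from the Noble Ape codebase.
def survival_intelligence(humans_to_stop):
    # Base-10 logarithm of the number of humans needed to stop ("slay") a system:
    # 1 human -> 0, 10 humans -> 1, 100 humans -> 2, and so on.
    if humans_to_stop < 1:
        raise ValueError("the metric assumes at least one human")
    return math.log10(humans_to_stop)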

My second insight comes from the need to normalize the definition of simulation. When the physicist, the biologist, the lawyer or the accountant goes to work, they don’t have a bright glaring light shining down on them, constantly reminding them that what they are doing is not, in fact, reality but is based on the broad constraints that have historically and intellectually been applied to them. Through my editorial duties with Biota.org, I raised the idea that simulation authors should stop drawing a marked division between what they do and reality. In fact, what was needed was a pluralistic view of simulation. The definition I offered was simple:

Second Insight: A simulation is any environment with applied constraints.

This definition showed that nearly everything was fair game for simulation analysis. The legal system, the road system, the health care system, the financial system, even the internet could be analyzed and parametrized with the insights from studying simulations.

Combining the metric of survival intelligence with the idea that nearly anything is fair game for this metric, let’s explore a couple of examples.

The road system of a given city may require between ten and a hundred human obstructions to shut it down. This is what it takes to cut off major arterial roadways and possibly other minor linking streets. It may actually take significantly more human obstructions to fully shut the system down, but these numbers give a good baseline: a survival intelligence of between one and two.

Recall the recent court cases involving the polygamist group in San Angelo, Texas. There, a few hundred cases shut down the legal system in that county. This gives a survival intelligence of between two and three.
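
Plugging these two examples into the same base-10 logarithm reproduces the stated ranges (a self-contained sketch; the counts are the rough estimates above, not measured data):

import math

# Road system: roughly ten to a hundred human obstructions to shut it down.
print(math.log10(10), math.log10(100))   # 1.0 2.0 -> survival intelligence between one and two
# County legal system: a few hundred cases brought it to a halt.
print(math.log10(300))                   # ~2.48 -> between two and three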

This survival intelligence, with its zero point set at a single human, highlights the sublime nature of these systems and — in contrast — our stature compared to them. These systems are created and maintained by humans, yet their complexity and necessity merit their survival over any individual human. By any meaningful description of intelligence, this should show that these systems are incomparably more intelligent than individual humans. Conversely, the value of individual humans is substantially diminished in the maintenance of these systems.

There is a lack of scholarship in this area. This is, in large part, because most ideas about intelligence are deeply and fallaciously interconnected with an assumed understanding of human intelligence. Since scholarship with regard to these vastly complex systems is lacking, scholarship associated with our place amongst these systems is also lacking. As passive maintaining agents of these systems, we must wonder if we need to redefine our ideas about human purpose in this context.

I offer this as a basic sketch of ideas that I have evolved as the editor of Biota.org and the creator of the Noble Ape Simulation. The time for new scholarship relating to these issues is overdue.

60 Responses

  1. Anonymous says:

    or you could think of a system’s stupidity as the number of humans required to maintain it

  2. W. says:

    In redefining ‘intelligence’ as ‘survival’ (“Survival is intelligence”) you are merely assigning to the term ‘intelligence’ a definition that is not commonly, if ever, used.

    “The simple rule in life is that survival (to reproduction) is the only meaningful metric”.

    This statement makes it clear that you want to redefine ‘intelligence’ such that it is synonymous with ‘Darwinian fitness’. That’s fine in and of itself, but the claim that

    “These challenges [Challenges 1 and 2, which we will address shortly] have not come through a priori philosophical posturing but are the result of years of simulation and the iterative understanding and discourse that has come through the artificial life community”,

    is problematic. You cannot arrive at a new definition of a term a posteriori – by somehow ‘discovering’ a new meaning for it. You can clarify the concept behind a word, but this remains tightly within the realm of that which is a priori, by definition. And if you wish to assign a new definition to an existent term, then you must stipulate that definition, you cannot ‘discover’ it by empirical means or empirically prove that such a definition must somehow obtain by logical necessity. For this reason, your insight as to the needed redefinition of ‘intelligence’ cannot have been the

    “result of years of simulation and the iterative understanding and discourse that has come through the artificial life community”.

    You will be in danger of begging the question if you redefine a term and then claim to have arrived at your redefinition through empirical means. The meaning of a term is generally fixed by its usage throughout history. If you try to assign a new meaning to a term by ‘expanding upon’ its definition, what you are actually usually trying to do is draw a comparison between the normal usage of the term and the new meaning you would like to assign to it. In this case, you are trying to draw a comparison between ‘Darwinian fitness’ and ‘intelligence’, as the latter term is either commonly or technically used. The greatest danger in doing this is the risk of making empty or erroneous statements, of which you make both in this article.

    The practice of clarifying the concept of a term by comparing it to another term and showing how the two are related can be useful, but, as mentioned, this practice remains within the domain of the a priori, and if you try to prove a thesis that is based upon the redefinition of a term by empirical means, you are going to make conceptual errors and you are going to fail to accomplish what you intended to accomplish.

    Your first major thesis (“challenge”) is that

    “1: Survival is a far better metric of intelligence than replicating human intelligence”

    In this statement you are implicitly admitting that ‘survival’ and ‘intelligence’ are in fact distinct concepts, and hence distinct terms. It’s one thing to say that “survival is intelligence”, as you do in the heading “First Insight: Survival is intelligence”, and it’s quite another thing to say that survival is “a metric of intelligence”, which you say in ‘Challenge 1’. Obviously ‘intelligence’ is not itself a metric of ‘intelligence’ (i.e. intelligence is not a metric of itself), but if you hold that 1) survival (i.e. one’s/something’s capacity for survival) is intelligence and that 2) survival is a metric of intelligence, then you must also hold that 3) intelligence is a metric of intelligence, which is clearly not something you intended to imply.

    Your second major thesis (challenge) is that

    “2: There are a number of examples of vastly more intelligent systems (in terms of survival) than human intelligence”.

    One of your objectives in this article is apparently to expose the ‘fact’ that

    “The primacy of human intelligence is one of the last and greatest myths of the anthropomorphic divide — the division between humans and all other (living) things. Like most fallacies, it provides careers and countless treatises regarding paradoxes that can be explored at great length, leading to the warm and fuzzy conclusion that the human is still on top.”

    Here you explicitly assert that the claim that human intelligence stands in a position of primacy relative to the intelligence of other (living) things is a fallacy. But the claim that human intelligence stands in a position of primacy relative to other (living) things is only made to be fallacious if you use the term ‘intelligence’ in two different ways; in other words, this claim is only fallacious if you equivocate in your usage of the term ‘intelligence’ when making this claim. But those who ‘make careers’ based upon this claim are not necessarily equivocating in their usage of the term ‘intelligence’, but it is clear that you are equivocating in your usage and interpretation of the term ‘intelligence’, and thus it is clear that your claims (rather than theirs) are fallacious.

    As I stated previously, the greatest danger of redefining a term and then claiming to have arrived at your redefinition through empirical means is that you will make empty or erroneous statements. I have already pointed out two reasoning errors made in this article. But your two major theses (challenges):

    “1: Survival is a far better metric of intelligence than replicating human intelligence, and…

    2: There are a number of examples of vastly more intelligent systems (in terms of survival) than human intelligence”

    are largely empty statements, although these statements are not without their errors either.

    Challenge 1 says that determining the relative capacity of any non-human thing to survive is a far better metric of that thing’s intelligence than is comparing its intelligence to human intelligence. We have already gone over the erroneous conclusion that ‘intelligence is a metric of intelligence’ that is necessarily implied by this statement in combination with the statement that “survival is intelligence”. But Challenge 1 is empty in the sense that you are merely begging the question: your reasoning is circular and thus yields no new knowledge. First (logically first) you define ‘intelligence’ as ‘survival’ (loosely, as the relative ability of something or someone to survive) and then you assert that “survival is a far better metric of intelligence than replicating human intelligence”. This proves nothing. Challenge 1 is essentially a restatement of your pre-stipulated definition of intelligence. Further, you cannot supply any amount of evidence that will be sufficient to prove this thesis to be conceptually sound. Whether or not the ability of something to survive is in fact a far better metric of intelligence than the extent to which something’s intelligence matches up to human intelligence depends upon your definition of ‘survival’ and ‘intelligence’. This is a conceptual and not an empirical matter. To show this more clearly, consider the fact that I can restate Challenge 1 in the following way with relatively little loss in overall meaning: “ ‘Fitness’ is a far better synonym of intelligence (as this term is applied to non-human things) than is ‘human-intelligence equivalence’ ”.

    Challenge 2 also begs the question and thus yields no new knowledge. You would be hard-pressed to find any researcher of intelligence who would categorically refute the assertion that there are a number of entities that have better survival capabilities than humans. But this is all that you are essentially saying in Challenge 2, while at the same time you are implicitly claiming that this statement helps to expose the ‘fact’ that

    “the primacy of human intelligence is one of the last and greatest myths of the anthropomorphic divide”.

    Again, we are dealing with a conceptual matter rather than an empirical matter here. Certainly you can attempt to demonstrate empirically that there exist a number of systems that are vastly superior to humans in terms of their survival capabilities, and you may even succeed in doing so. But you cannot prove by means of empirical data that the fact that there exist a number of systems that are vastly superior to humans in terms of their survival capabilities in any way logically implies that the term ‘intelligence’ ought to therefore be expanded to include ‘survival capability’. But if your thesis is merely the claim that there exist a number of systems that have superior survival capabilities in comparison to humans, then this bears no relation to

    “the primacy of human intelligence”

    as

    “one of the last and greatest myths of the anthropomorphic divide”,

    since your ability to effectively attack this ‘great myth’ via your thesis depends upon your ability to demonstrate, via your thesis, the fallaciousness of the claim that human intelligence stands in a position of primacy relative to the intelligence of other (living) things. And, to reiterate, the assertion that human intelligence stands in a position of primacy relative to the intelligence of other (living) things only becomes fallacious when you equivocate in your usage of the term ‘intelligence’, which is something that you do in this article, while it is not necessarily something that advocates of this claim do.

    Since you characterize your two main theses as “challenges” to the claim of the “primacy of human intelligence”, all of the aforementioned points are critical to the overall success of your paper.

  3. Alan R. Light says:

    Well, damn. Others have made the exact same point I was going to make before I could, and Andrew gave exactly the same examples I was going to give: that there are rocks around that are two billion years old, and that the sun will likely outlast all of us.

    I certainly understand the value of survivability, and I consider survivability the single most important component of ethics, at least when applied to a group. But to link survivability to intelligence like this requires a complete redefinition of the word “intelligence”.

    Words don’t just mean what one person wants them to.

    • This short article is part of a much greater discussion. I decided to focus on two points in particular in this article. For more in a condensed form:

      http://www.nobleape.com/transcript/srr126.html

      Although this is a little dated and incomplete too.

      I haven’t addressed rocks in any of the prior comments because rocks aren’t vastly complex systems. The sun is an interesting one, as is the galaxy. I like that challenge.

      The second point you make, Alan, is exactly right. Defining survival as the quantum of intelligence appears to push the word intelligence in a different direction. I would argue, however, that from the quantum of survival there is a far greater potential spread of meanings of intelligence, and it doesn’t exclude any of the popular meanings discussed in the comments. This definition doesn’t destroy any aspect of human intelligence as intelligence; it just provides a framework to move from the road system to Obama to, potentially, the galaxy.

  4. Ben says:

    Certain aspects of human intelligence can be combined with these systems to improve them; exploration, not exclusivity.

  5. Anonymous says:

    Even more disturbing… to “stop” the President of the United States (i.e. remove him/her from office) requires a majority of both the Senate and House of Representatives, or 285 people, voting in favor. I think most people would agree that we’ve had at least some presidents who aren’t 2 orders of magnitude smarter than the average human…

    ~~

    • Anonymous says:

      Actually, to stop the president, it isn’t the 285 people, but the influence of the individuals behind those 285 people. I would also stipulate that you would be removing “the President”, not the individual behind that title.

      Moral transgressions in an economic boom – Clinton was almost removed.
      Trillion dollar debt, Recession, Unemployment – Bush is still being defended.

      (I am not stipulating political sides, just keeping in constraints of the original analogy)

      To truly mark the order of magnitude, you would need to adjust for all influences in the process, not simply the individuals, both for and against the process.

  6. mobius says:

    It is interesting that this word “intelligence” causes so much contention. I wrote about a debate about plant intelligence in an article to be published in a volume of scholarship someday. Here it is, with a Creative Commons 2.5 license:
    http://www.box.net/shared/h0nqb1c6o0

    I think the survivability metric is a good one for building alife systems, and certainly not a bad one for growing other living systems. Of course Darwinian evolution includes sexual selection, so we must do more than survive – we must seduce, intrigue and leave progeny. But that takes nothing away from survivability as working metric for evaluating intelligent systems – it merely contextualizes it. Metrics are tools for helping us achieve results, not definitions of our ontology.

    • xrr says:

      “as working metric for evaluating intelligent system”
      It’s funny how you want to stick to the word “intelligent” to describe something that is not 🙂 Nobody disputes that the survivability metric is an interesting one and of some value for building resilient systems, but again, these systems are not “intelligent” in any way because of that. The road system cited by Tom is obviously resilient as a system, but the only intelligence behind the system is the human one! As far as I know, the system hasn’t evolved by itself from Roman roads to five-lane highways 🙂
      And for plants (I haven’t read your article yet), the faculty to adapt and survive is not intelligence!

  7. Anonymous says:

    So how does this redefinition help you actually accomplish the goal of creating a system that is intelligent in the way most of us define it, as in, it can understand and solve new problems?

    And let’s be real here – I could probably kill Stephen Hawking with my bare hands, but there’s no way anyone in the world would say I’m more intelligent than he is. Your new definition seems kind of misplaced.

  8. Steve says:

    System survivability certainly has been studied. Soviet Red Army military science revolved around the concept of correlation of forces. Derived from Harvard Business School teachings, the idea was to assign a value to every facet of a military unit. The morale of the individual soldiers, the destructive force of a round of ammunition, and the capacity of a supply truck all contributed a number that gauged survivability at theater-wide scales.

    The notion failed in the face of asymmetrical warfare. The ability of an outclassed, but thinking enemy to concoct novel, destructive solutions could not be measured with calipers and a straight edge.

  9. xrr says:

    “1: Survival is a far better metric of intelligence than replicating human intelligence, and…”

    Unfortunately, that runs into a basic principle in science: put a bad assumption first and all the rest of the reasoning is biased/wrong.

    Of course, intelligence is not easy to define, especially when it applies to human intelligence and the way to measure it. But the word itself comes from the Latin “intellegentia”: the ability to understand/comprehend…

    Unfortunately for your theories, yes, we, as the human race, are the most intelligent species/entity (known to us, at least), if only because we are the only ones in a position to try to define what “intelligence” is and to discuss it.

    “These challenges have not come through a priori philosophical posturing but are the result of years of simulation and the iterative understanding and discourse that has come through the artificial life community.”

    This is a very funny statement. The artificial life/intelligence community, as you name it, has struggled (so far) to even get close to the smallest sign of intelligence. It sounds like some are tempted by something like: “Hey, why not redefine intelligence as something less challenging we can probably achieve: let’s make it the ability of systems to survive, yeah, great…”

    “The primacy of human intelligence is one of the last and greatest myths of the anthropomorphic divide — the division between humans and all other (living) things. Like most fallacies, it provides careers and countless treatises regarding paradoxes that can be explored at great length, leading to the warm and fuzzy conclusion that the human is still on top. If only it were so.”

    Contrary to what you’re saying, yes, this is pseudo-philosophical posturing, in line with some “concerns” of the moment, like all the problems raised by uncontrolled human development. BUT the human species is the first ever in the history of Earth to actually realize, thanks to its intelligence, that its own development could lead to mass extinctions (even its own), and the first to actually care about other species.

    So even using your own criteria, we might be one of the most resilient species ever, because we can ACT on our destiny more than any other.
    And if we fail, it will make space for others! 😉

    Sorry for the approximate English and all the best from France :.

    I’ll finish with a quote from Carl Sagan: “We are a way for the universe to know itself.”

  10. Anonymous says:

    Let’s say entity A exists within another entity B. (The enclosing entity B could be considered the enclosed entity A’s “environment”.)

    Let’s say the respective entropies are Ea and Eb.

    Then I’d venture that A is more “intelligent” than B if Ea > Eb, or vice versa. In other words, entropy (which is basically a measure of the amount of information sequestered within an entity, and hence a measure of its complexity) should be the sole metric of intelligence.
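
    A toy sketch of this comparison in Python, using Shannon entropy over discrete state sequences (the sequences and the helper function are invented for illustration, not drawn from any real system):

    import math
    from collections import Counter

    # Toy illustration of the entropy comparison suggested above; the state
    # sequences are invented, not drawn from any real system.
    def shannon_entropy(states):
        # Entropy in bits of the empirical distribution over observed states.
        counts = Counter(states)
        total = sum(counts.values())
        return -sum((c / total) * math.log2(c / total) for c in counts.values())

    entity_a = "abcabdacbdbe"   # enclosed entity A
    entity_b = "aaaaabaaaaab"   # enclosing environment B
    print(shannon_entropy(entity_a) > shannon_entropy(entity_b))  # True: A counts as "more intelligent" by this rule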

  11. Anonymous says:

    Interesting premise but faulty logic. In the examples you gave you indicated that a road system could be brought down by a number of human obstructions, and a court system could be blocked by a number of human cases. In each of these cases the relevant quantity is obstructions and cases, not humans. It is fallacious to then substitute humans in for either of these terms in order to continue the argument.

  12. James Clements says:

    I have some questions:
    Should we instead define intelligence based on the ability to wonder about how to define intelligence? Can a non-sentient entity be intelligent? Is a complex system that sentient entities use and maintain intelligent, or just a tool that could be scrapped when something better comes along? The word ‘intelligence’ has specific meanings related to mental processes, so why use it instead of words like ‘durability’, ‘enduring’, ‘toughness’? A measure of a system’s survivability is a simple metric, not a complex philosophical debate, so wouldn’t that be more useful? Since the ‘intelligence’ of complex systems is based on the principles of evolution, can the concept of evolution itself be considered intelligent? Which is more intelligent: a complex system or a person who thinks about the intelligence of complex systems?

  13. Rob says:

    Very interesting. I’m not so sure the property you ascribe to the system should be labeled intelligence though. It sounds like the modeling you’re doing shows the resiliency of the system. Calling this property intelligence is tantamount to claiming that these systems arose through some sort of design. However, if you study the development of these systems, you see that rather than just arising out of design, there are unintended consequences that play themselves out as these systems evolve.

    These models would be great to use to determine how much stress a particular system can handle, especially in regards to natural disasters and other mass casualty situations. These tools would also be greatly helpful in determining weaknesses of systems like power, communications, oil delivery, transportation systems, etc. Countermeasures could then be put in place to better meet those stresses and strengthen the system.

    I also think you nailed it when you state that survival is the defining trait of successful evolution. I’m not so sure that should be defined as intelligence though. Intelligence accelerated as a response to changing climatic conditions early in the development of Homo sapiens. This allowed early humans to adapt to their environment more successfully than other organisms. Some of the consequences of that development have been the rise of towns, cities, civilization, etc. Yet, initially, the development of intelligence allowed the humans who developed those traits to prosper and survive. In that respect intelligence is an effect, not a cause.

  14. Henrik says:

    Can’t the road system be shut down by a single person pressing some buttons at the traffic control center? Or would the crashed cars be regarded as the ones obstructing the road systems in that case? Ah yes, I think I answered my own question there.

  15. Jarrod says:

    “There is a lack of scholarship in this area.”

    Really? A lack of scholarship in defining and philosophizing about intelligence? Please don’t forget our ENTIRE WESTERN PHILOSOPHICAL HISTORY, starting with the Presocratics, through Plato and Aristotle all the way to Foucault, Derrida and Zizek and still coming. There has been so much theorizing about intelligence (and systems – see Rousseau and Spinoza, for a start) in our history that it would be a “fallacy” to ignore all of it! I’m a computer scientist but way too often become annoyed with the way CS scholars try to reinvent everything, ahistorically, here and now, using their cool new tools (toys?).

  16. lhb says:

    At first you tell us that human intelligence is not really a metric of survival, and thus should not be called intelligence anymore, because we should call the ability to survive intelligence.

    Then what do we call the intelligence we are interested in, the intelligence a salamander may have in small doses and humans have in somewhat higher amount?

    Why not leave the word intelligence where it belongs, and simply call ability to survive just that – ability to survive?

  17. Reow says:

    Nice article, but you got it completely wrong. “Survival” alone isn’t a measure of intelligence; adaptability is the true measure. That is “survival in the face of change”. Better luck next time. Perhaps try an easier field?

  18. Kristian Lund says:

    Several things are too vague, at least in this article, for the metric to make sense.

    System, for example. It is pretty darn hard to stop the processes in a black hole, which hardly makes it “intelligent”, by any stretch.

    Survival is another. When is the traffic system of a city alive and when is it dead – when traffic stops? When it becomes impossible for one car to get anywhere? When the traffic system takes as much to get started as it did to build in the first place?

    Moreover, human interaction seems a very weird and arbitrary stopping force to measure by. For seeing when traffic breaks down, counting the number of humans obstructing might make sense – but the intelligence of an ant-hill is not best measured by comparing it to a single human’s ability to dig the queen up!

    To escape anthropocentrism, I get the idea of looking at survivability. But you also have to look at how many different challenges (at a meaningful power-level) the system can handle. Our solar system can handle quite a few humans standing in the way of various planets – but not a change in gravity…
    We call humans more intelligent than ant hives, not because we are better at surviving our respective ecological niches or because we can eradicate ants on an individual-to-community basis, but because we can handle a changing environment and come up with new solutions!
    The traffic system handles traffic congestion fine. It does nothing to prevent a nuclear bomb detonating, a fuel shortage or a “human dictator” taking over its servers. Intelligence is the ability to survive new, unknown challenges.

  19. Blindweb says:

    Interesting article. I’m constantly arguing with random U.S. college-educated people about what it is to be intelligent. To me, a person like Rudy Giuliani (maybe he’s a bad example; I haven’t looked at his past), who doesn’t seem very intelligent but has been very successful, has a deeper type of intelligence (survival intelligence). The word intelligent is corrupted by the wordsmiths in our society. The writers, who generally come from academia, give it a definition that obviously prioritizes academic intelligence.

    First-time visitor, linked here from the Global Guerrillas blog. The comments on here remind me of a Buddhist or Taoist sage who said “words and nonsense”.
    You all argue definitions as though your definition is the one true definition. The map is not the terrain. I can’t stand the limited reasoning ability of technical specialists. For example: “It’s not obvious at all that intelligence has its origin in survival, sexual selection seems more likely” – because, you know, sexual selection has nothing to do with survival.

    • Nemus says:

      Sexual selection has everything to do with survival. It’s an intrinsic measure of an organism’s fitness, as measured by members of the same species!

  20. Anonymous says:

    So, the solar system can’t be destroyed even by a hundred billion humans. So clearly it’s more intelligent than the entire human race.

    Oh, wait a second.

  21. Robert Hitchcock says:

    Flawed idea. Horribly flawed idea.
    Survival does not equal intelligence. Intelligence can lead to a greater likelihood of survival in the face of a threat, but does not guarantee survival either.
    Let’s take the assumption that survival equals intelligence, without taking anything else into consideration, for a moment.
    The planet… the whole universe is more intelligent than us simply because it’s too large for us to remove it from existence.
    Algae, because it is so small and reproduces heavily, is also fairly intelligent?
    How about a destructive botnet? The more zombies in the botnet, the more intelligent it is, because it can do more damage than a single person on a single computer can do?
    How about some of the frauds out there? If you can con 100 people into giving you loads of money, you are smarter than the people you are ripping off… but how smart?
    How about CPUs? Just like roadways, they have multiple paths for data to be processed…
    How about AIDS? Ignore that the virus cannot successfully attach itself to some people’s white blood cells, or that its infectiousness is relatively low; we haven’t found a way to destroy it. Even if we did, how many people will it have taken to research and develop the cure?
    I am sure there are plenty of other examples someone would want to question, but the real question is: if something is so basic that it could be applied to anything, wouldn’t that call its own legitimacy into question?
    In short (too late): how well someone or something can survive a given situation/human may give insight into how smart that person is or how smartly designed it is, but it cannot be used alone to determine intelligence.

    • speonjosh says:

      Yeah, I think about a large rock. No one person is strong enough to destroy a rock. But a rock, should it be large enough and fall on your head, could destroy a person. Does that mean a rock is more “intelligent” than a person? Apparently, according to this definition.

      And then you get to the question of what qualifies as “destroying?” Or whatever word you want to use to indicate the opposite of “survival.”

      The article seems to use various definitions for the different examples. In the first example, the large animal is killed. You can say the animal is no longer alive. A road network and a legal system can be “shut down.” But in what way are these systems no longer alive? The pavement still is in place, the lane markers are still there. The traffic lights, presumably, are still operating. Remove the blockage and the thing resumes operation.

      I find it nonsensical.

      However, I will allow as how it’s possible that the article simply doesn’t explain it very well…..

  22. oh4real says:

    FIRST:
    IMHO, it is not appropriate to conflate malicious intent to destroy a system with unintended destruction. Well-designed systems are meant to act, without supervision, in a state of inertia, consistently performing without much need for intervention to keep them running.

    Example: After you design and build a road system – assuming population/usage stays level – occasional maintenance is all that’s needed to keep it operating.

    Example: My laptop’s operating system can handle lots of simultaneous programs pretty well – until I yank out the battery.

    I think this is what is being measured: occasional maintenance vs. occasional abuse/misuse.

    SECOND:
    Also, agreed that the term ‘intelligence’ vs. robustness or non-vulnerability is a bit inappropriate – unless you are referring to the intelligence of the system designers.

    I’ll take your observations and posits a step further and refine them to represent, in the context of human endeavors, ‘institutionality’.

    When speaking of road systems, legal systems, healthcare systems, etc. – all man made and utilized – you could rephrase the questions as:

    ‘How many incidental humans did it take to bring this Institution to its knees?’

    Think of this as a way to measure the robustness of an institution:

    How stable is a government? (think Pakistan)
    How facile is a coup d’etat? (think Pakistan)
    How good is a Constitution if it breaks at every Constitutional crisis? (think Florida)
    How sound was a company and their overnight trades/leverage scheme? (think Lehman Bros)
    How did the internet handle the traffic/news during some peak event? (think Michael Jackson – or not)
    How ready is a country for its ‘peacekeepers’ to pull back during conflict? (think Iraq)

    Considering how often people talk about “Institutions” of society, thanks for providing me with a new framework for thinking about the world’s “institutions”.

  23. Anonymous says:

    I think that the survival intelligence (‘robustness’?) index of 0 should probably be given to a naked, tool/technology-less human/entity.

    With tools, the survival intelligence index needs modification. However, this would require a system for assigning tools some other sort of intelligence index. It could be as simple as a re-application of the metric, i.e. ‘how many naked, tool-less entities would be needed to take out an entity with said tool/technology?’. The destructive capacity of a lone, tool-less human is quite different to that of a human with the launch codes to a nuclear arsenal.

  24. Anonymous says:

    Sounds like a bunch of BS to me. Move along now. Nothing to see here.

    • SpaceWeepul says:

      The reason I’m interested in software/hardware systems that are capable of human-like intelligence is that such intelligence would have human-like consciousness. Consciousness is one of the remaining mysteries of the world. We know a lot of stuff, including what’s at the bottom of the ocean, how to make the fire that is in stars, and how to make non-stick cookware. One of the things we don’t know is how to replicate consciousness. It’s also important to note that things like legal systems and transportation systems are often simply what the users make of them. In the U.S. there used to be a strong legal basis for not torturing people. When that became inconvenient for the users of the system — poof! — the limits on torture evaporated overnight. Studies suggest that many roadways would last a scant 80 years without users maintaining them. People used to be polite and have good grammar, lol, what happened to those systems????!!?!?

  25. Scot says:

    As a software guy, I am curious about the outliers here. Like a single human that is capable of shutting down a system through the use of non-specialized behavior. Multi-faceted behavior can compromise a system due to the system’s reliance on specialization from the components. So in this discussion it seems that the components (humans) are acting within the confines of the system to collectively kill it. Isn’t this a normalized view of the system and the components’ activity within it? Any deviation from this structure should result in an undefined system response, or at least a non-measurable effect on the system.

    So if any components of a system collectively deviate from standardized behavior at a large scale, can we equate this to survival? I would argue that within nature, the Human is isolated in the behavior deviation department. The ape’s behavior tends to be part of the normalized aspects of the natural environment in which it exists. Simulation of this tends to be “simpler” than the currently impossible Human component.

  26. Anonymous says:

    All of the systems you mentioned could also be brought down by a total number of 0 people. People need to simply stop maintaining these systems and they will quickly deteriorate.
