Top Five Reasons ‘The Singularity’ Is A Misnomer

 

I’m sometimes asked my view on the singularity. As the author of More Than Human: Embracing the Promise of Biological Enhancement, and a recipient of the H.G. Wells Award for Contributions to Transhumanism, people assume that I believe in this thing called The Singularity and can’t wait for it to occur.

Actually, I don’t really buy into the concept. I do believe that the coming decades and century are going to see tremendous progress in science and technology. I do think that eventually (if we don’t destroy ourselves first) we’ll be able to upload minds, to create AIs smarter than we are, and to halt, reverse, or at least dramatically reduce human aging.

None of those deserve the term ‘Singularity’, though. And none of them is likely to come as quickly as we hope, or to have impacts quite as broad as the most optimistic assessments project. Indeed, we’re living with the consequences of precursors to all of those phenomena today, and they seem, well, normal.

In mathematics, a singularity is a point where a function blows up toward infinity – a vertical asymptote on the graph, the place where one has divided by zero. In physics, a singularity is a space where the density of matter appears to go to infinity, where we doubt that our understanding of the laws of physics can tell us anything about the internal environment of that space. A physical singularity is surrounded by an event horizon, the surface defined by light’s inability to escape from the volume enclosed in it. We can’t see inside an event horizon.
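
To make the mathematical usage concrete, the simplest example is a function that blows up as x approaches some point a:

```latex
f(x) = \frac{1}{x - a}, \qquad \lim_{x \to a} \lvert f(x) \rvert = \infty
```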

Note the word ‘infinity’ above. And the physical impossibility of gleaning data about the inside of a black hole. No amount of progress in a normal sense, not even the creation of intelligences an umpteen quadrillion times smarter than all of humanity, qualifies.

Enough introduction – here are my top reasons why the term “The Singularity” is a misnomer.

  1. We Are Better Able to Understand the Future Than at Any Time in the Past

Part of the justification for the term ‘The Singularity’ is the idea of an event horizon in the future, a point beyond which we are simply not able to predict anything about the world. We are, in theory, approaching this point rapidly. It may be just 30 or 40 years away, a looming phase transition that will lead to a world we simply cannot understand.

Well, bollocks to that, I say.

The reality is that our ability to understand the future – the distance our “future headlights” can reach, if you will – is at an all-time high. The event horizon, if there is one, is receding all the time.

How can I say this? Wouldn’t the creation of AI, or the uploading of humans into computers, or the end of aging, or the creation of self-replicating molecular nanotechnology, or some other advance change the world so fundamentally that we couldn’t understand it?

Nope.

All of those phenomena are still governed by the laws of physics. We can describe and model them through the tools of economics, game theory, evolutionary theory, and information theory. It may be that at some point humans or our descendants will have transformed the entire solar system into a living information processing entity – a Matrioshka Brain. We may have even done the same with the other hundred billion stars in our galaxy, or perhaps even spread to other galaxies.

Surely that is a scale beyond our ability to understand? Not particularly. I can use math to describe to you the limits on such an object, how much computing it would be able to do for the lifetime of the star it surrounded. I can describe the limit on the computing done by networks of multiple Matrioshka Brains by coming back to physics, and pointing out that there is a guaranteed latency in communication between stars, determined by the speed of light. I can turn to game theory and evolutionary theory to tell you that there will most likely be competition between different information patterns within such a computing entity, as its resources (however vast) are finite, and I can describe to you some of the dynamics of that competition and the existence of evolution, co-evolution, parasites, symbiotes, and other patterns we know exist.
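
To give a flavor of what those back-of-the-envelope calculations look like, here’s a minimal sketch (in Python; the stellar power output, the 300 K operating temperature, and the four-light-year separation between stars are rough assumptions of mine, chosen only for illustration):

```python
# Back-of-the-envelope limits on a Matrioshka Brain, in the spirit of the text.
# All constants below are rough, illustrative assumptions.
import math

BOLTZMANN = 1.380649e-23       # J/K
STAR_POWER_WATTS = 3.8e26      # approximate output of a Sun-like star
OPERATING_TEMP_K = 300.0       # assumed temperature of the computing shell
LIGHT_YEAR_METERS = 9.461e15
SPEED_OF_LIGHT = 2.998e8       # m/s

# Landauer limit: minimum energy to erase one bit at a given temperature.
landauer_joules_per_bit = BOLTZMANN * OPERATING_TEMP_K * math.log(2)

# Upper bound on irreversible bit operations per second if the brain captured
# the star's entire power output.
max_bit_ops_per_sec = STAR_POWER_WATTS / landauer_joules_per_bit
print(f"Landauer-limited bit operations per second: {max_bit_ops_per_sec:.2e}")

# Unavoidable one-way communication latency to a neighboring Matrioshka Brain,
# assuming a separation of about 4 light-years (roughly Alpha Centauri's distance).
separation_ly = 4.0
latency_seconds = separation_ly * LIGHT_YEAR_METERS / SPEED_OF_LIGHT
print(f"One-way latency to a neighbor: {separation_ly:.0f} years "
      f"({latency_seconds:.2e} seconds)")
```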

Well, perhaps I’m stretching the word ‘understand’ here. Surely this abstract jotting down of numbers with lots of zeroes doesn’t compare with real understanding? I can’t subjectively imagine the kinds of thoughts that go on in this sort of system, can I?

No, I can’t. (Though perhaps Greg Egan, Charles Stross, or other science fiction authors who’ve written novels featuring uploads and digital beings can.) But I can describe that world to you far more accurately than, say, someone living before Einstein, who could in turn describe it more accurately than someone living before Gutenberg, who could in turn describe it more accurately than someone living before Aristotle, and so on…

Indeed, prior to about 75,000 years ago, our ancestors quite possibly had no abstract reasoning ability, no language beyond the simplest utterances, and certainly no mathematics. If there was ever a singularity in human history, it occurred when humans evolved complex symbolic reasoning, which enabled language and eventually mathematics and science. Homo sapiens before this point would have been totally incapable of understanding our lives today. We have a far greater ability to understand what might happen 10 million years in the future than they would have had to understand what would happen a few tens of thousands of years in the future.

The limits of our vision are expanding, rather than contracting. The event horizon is receding and becoming less opaque all the time.

  2. We’ve Already Created Smarter-Than-Human Intelligences

Another common assertion is that the advent of greater-than-human intelligence will herald The Singularity. These super intelligences will be able to advance science and technology faster than unaugmented humans can. They’ll be able to understand things that baseline humans can’t. And perhaps most importantly, they’ll be able to use their superior intellectual powers to improve on themselves, leading to an upward spiral of self improvement with faster and faster cycles each time.

In reality, we already have greater-than-human intelligences. They’re all around us. And indeed, they drive forward the frontiers of science and technology in ways that unaugmented individual humans can’t.

These super-human intelligences are the distributed intelligences formed of humans, collaborating with one another, often via electronic means, and almost invariably with support from software systems and vast online repositories of knowledge.

Consider the Human Genome Project. The human genome has 3.3 billion base pairs and roughly 3 million places where it varies from person to person, spread across perhaps 20,000 genes. It’s essentially impossible for any one person to understand the full complexity of the human genome, or even to have done the work of sequencing it. Instead, the work was spread across more than 200 principal investigators, each assisted by a number of postdoctoral fellows, graduate students, and lab techs. Perhaps a thousand people worked directly on the sequencing of the genome. Today tens of thousands work on understanding what those genes mean, how they interact with one another and with the environment to create us. That effort – to understand the construction of humans from the base instructions to the whole being – is orders of magnitude beyond the capabilities of any individual human mind. What is the worldwide genomics community, then, but a distributed mind, with intellectual capabilities far in excess of those of a single human mind?

The same can be said for Google, where hundreds if not thousands of engineers work together to improve the search algorithms that provide relevant results to user searches. Others have pointed out that the Google search engine is itself a kind of limited superintelligence – a highly narrow system that is relatively unsophisticated, but incredibly fast and broad in its ability to parse data. I would go one step further and say that the engineering team that built and continues to improve the Google search engine is itself a superintelligence, and a far more flexible one than google.com. No single human could have built google.com – there is too much complexity and sophistication in the algorithms that match and rank results based on thousands of signals of relevance, that try to pull the user’s intent out of a few words in a query, that screen out spam or malicious results, and so on.

Anywhere we look in science and engineering today, from the engineering of technological artifacts like automobiles, airplanes, mobile phones, and computers, to the advancement of science through genomics, bio-informatics, proteomics, astronomy, and high energy physics, we see the boundaries being pushed not by baseline human intelligences, but by group minds. These comprise tens, hundreds, or thousands of individuals, augmented by powerful computers that store, manipulate, search, organize, and transmit petabytes or exabytes of information.

Our science and engineering is increasingly dominated by superhuman intelligences, of which individual humans are just components.

So, have we hit the singularity, with these godlike intelligences roaming around, pushing the envelope of what we can know and do? Well, maybe. But it’s not exactly what you thought, is it?

  3. Those Intelligences Are Hard at Work Improving Themselves

I’m sure some of you reading this are taking me to task right now. The advent of the singularity, you might be thinking, isn’t brought on by superhuman intelligence alone. It’s brought on by superhuman intelligence that can improve on itself, boosting its own intelligence, then using that new, greater intelligence to boost itself yet further, which makes it smarter still, which allows it to boost its own intelligence yet again, ad infinitum, in an ever-accelerating spiral of self-transcendence that eventually reaches godlike intelligence.

Sounds like fun.

Unfortunately, the evidence suggests that it’s not that easy. After all, we have at least one example of a superintelligence producing output that feeds back into improving itself: Intel.

What? Intel?

Yes, Intel, the company co-founded by Gordon Moore, the man whose observation became known as Moore’s Law. Intel employs giant teams of humans and computers to design the next generation of its microprocessors. Faster chips mean that the computers it uses in the design process become more powerful. More powerful computers mean that Intel can do more sophisticated simulations, that its CAD (computer-aided design) software can take more of the burden off of the many hundreds of humans working on each chip design, and so on. There’s a direct feedback loop between Intel’s output and its own capabilities.

Intel isn’t alone in that. Corporations and teams in general have the ability to expand their brainpower in a way that humans don’t. A profitable corporation can use its earnings to hire more staff. And in many cases those staff are doing intellectual work, contributing to the collective intelligence of the entity. Google, with its billions in revenue, has gone from being 2 people in a garage to a distributed mind comprising more than 20,000 employees. Every dollar it earns can directly go to adding to its own intelligence through new employees, new hardware, or both.

Self-improving superintelligences have changed our lives tremendously, of course. But they don’t seem to have spiraled into a hard takeoff towards “singularity”. On a percentage basis, Google’s rates of growth in revenue, in employees, and in servers have all slowed over time. It’s still a rapidly growing company, but that growth rate is slowly decelerating, not accelerating. The same is true of Intel and of the bulk of tech companies that have achieved a reasonable size. Larger typically means slower growing.

My point here is that neither superintelligence nor the ability to improve or augment oneself always leads to runaway growth. Positive feedback loops are a tremendously powerful force, but in nature (and here I’m liberally including corporate structures and the worldwide market economy in general as part of ‘nature’) negative feedback loops come into play as well, and tend to put brakes on growth.
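
Here’s a toy illustration of those competing feedback loops (a minimal Python sketch; every number in it is an assumption I’ve invented for illustration, not data about Google or Intel):

```python
# Toy model of a self-improving entity: a positive feedback loop (growth in
# proportion to current size) damped by a negative feedback loop (a finite
# carrying capacity). Every number here is invented purely for illustration.

def grow_one_year(size, reinvestment_rate, capacity):
    """Logistic growth: fast while small, decelerating as limits are approached."""
    return size + reinvestment_rate * size * (1.0 - size / capacity)

size = 2.0              # start as two people in a garage
reinvestment_rate = 0.9 # 90% annual growth when far from any limit (assumed)
capacity = 50_000.0     # assumed ceiling set by market, talent pool, resources

for year in range(25):
    if year % 4 == 0:
        print(f"year {year:2d}: size {size:8.0f}")
    size = grow_one_year(size, reinvestment_rate, capacity)
```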

  4. The Problems We Care About Are Much Harder than Linear Computation

But won’t the exponential increase in computing power, a factor of 100 increase in price/performance per decade, ultimately blow past whatever short-term issues exist? If all those people working on the design team at Intel were actually AIs, faster chips would more directly boost their performance, leading to that runaway growth in computing power, right?

Perhaps, perhaps not. There are four major issues impeding that sort of runaway growth that I know of.

The first is computational complexity that scales much faster than linearly.

The second is situations where interactions with the physical world are required.

The third is situations where computation is not the limiting factor.

The fourth is situations where natural limits exist.

Let’s look at each of those in turn.

First, most problems we care about don’t scale linearly. What I mean by that is that for most problems that interest us, doubling the computing power doesn’t produce twice the results. How can that be? Because real-world systems have multiple pieces, and those pieces affect one another. We may be able to simulate the pieces in isolation, but that won’t give us useful answers. To understand the system we have to simulate the interactions, which can often be much, much more expensive than simulating each individual piece.

How much more expensive? If we want to address any problem at the molecular scale – problems like protein folding, simulated drug design, designing a nano-assembler, or creating and simulating new super-materials – we’re in the realms of computational biology and computational chemistry. The underlying equation that governs phenomena at this scale is the Schrödinger Equation, and it’s so computationally difficult that supercomputers today can barely solve it exactly for anything more complex than a single hydrogen atom. What’s more, it scales exponentially. Exponential scaling is hard. Perhaps you’ve heard of the recent P vs. NP work in theoretical computer science? P describes the set of problems that can be solved in polynomial time. That means you take the size of your system (say, the number of particles) and raise it to some fixed power to figure out how long the computation will take. A problem that scales as N^2 in the number of particles, for instance, means that if you grow the number of particles by some factor, the computational cost grows by that factor squared. So if you increase the number of particles by a factor of 10, it takes 10*10 = 100 times as much computation. If the problem were N^3 difficult, it would take 10*10*10 = 1,000 times as much.

That’s polynomial time. Exponential time is harder. For exponential time problems, you raise some constant to a power determined by the size of your system. That scales far worse. To come back to the Schrödinger Equation, the base equation we want to solve if we want to model the behavior of physical systems at the molecular scale, consider this. If you want to solve the equation for a system the size of a small protein, perhaps a few hundred amino acids long, you’ll have to wait literally millennia for Moore’s law to get you a computer fast enough.
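
To see how brutal the gap is, here’s a tiny sketch comparing polynomial and exponential cost growth (Python; the “cost” units are arbitrary, and the problem sizes are just illustrative):

```python
# How computational cost grows with problem size for polynomial vs. exponential
# algorithms. Cost units are arbitrary; only the shapes of the curves matter.

def poly_cost(n, power):
    return n ** power

def exp_cost(n, base=2):
    return base ** n

for n in (10, 20, 40, 80):
    print(f"n = {n:3d}   N^2: {poly_cost(n, 2):>8,}   "
          f"N^3: {poly_cost(n, 3):>10,}   2^N: {exp_cost(n):,}")
```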

Of course, no one wants to wait that long, so computational chemists and computational biologists have devised approximations that sacrifice accuracy but can get results far faster. The gold standard of these techniques, a method called CCSD(T), scales with the number of electrons in the system to the 7th power. That means, for instance, that if you double the number of electrons in the system you want to study, you increase the amount of computing you need by a mere factor of 128. Want to simulate helium? No sweat! Just grab 128 times as many cores as you used for hydrogen. How about a single carbon atom, with 6 electrons? Oh, you only need 279,936 times as much computation for that as you did for hydrogen. No problem, that’s just 28 or so years of Moore’s Law.

How about a small molecule drug with a couple dozen carbon atoms and 200 electrons in total? Mmm, just add 12.8 quadrillion times as much computing as you had for hydrogen, and you’ll get there. Just 80ish years of Moore’s Law.
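
For the curious, here’s the arithmetic behind those numbers as a short sketch (Python; CCSD(T)’s N^7 scaling is real, but the one-electron hydrogen baseline and the assumed 18-month doubling time for price/performance are my simplifications):

```python
# The arithmetic behind the CCSD(T) examples above: cost scales roughly as the
# 7th power of the number of electrons. The one-electron hydrogen baseline and
# the 18-month doubling time for price/performance are simplifying assumptions.
import math

DOUBLING_TIME_YEARS = 1.5  # assumed Moore's Law doubling period

def cost_factor_vs_hydrogen(electrons):
    return electrons ** 7  # CCSD(T) scaling, relative to a 1-electron system

def years_of_moores_law(factor):
    return math.log2(factor) * DOUBLING_TIME_YEARS

for name, electrons in [("helium", 2), ("carbon", 6), ("small drug, ~200 e-", 200)]:
    factor = cost_factor_vs_hydrogen(electrons)
    print(f"{name:20s} cost factor {factor:>20,}   "
          f"~{years_of_moores_law(factor):.0f} years of Moore's Law")
```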

Of course, to some extent I’m exaggerating for effect. There are approaches that work in N^4 time, or N^3, or even N^2 (though they get progressively less accurate as their scaling improves, and start to have limitations like, say, not being able to simulate molecules in water, or in the presence of an electric field, or other little things like that).

The point is that when we think about Moore’s Law giving us exponentially increasing computational power, that does not mean exponentially increasing results. In some domains the results will fly along. In others they’ll creep. And some of the most important domains – those that relate to the design of new materials to create new computing substrates, for instance, or those that relate to the internal structure of our cells and ways to modify them to, say, reverse aging – are in the camp of computationally hard problems that are more likely to creep along than fly.

Second, the material world exerts drag on exponential processes. Let’s look at Moore’s Law itself as an example of this.

Moore’s Law doesn’t just happen by magic. It happens because companies like Intel and AMD spend billions on R&D researching new chip fabrication methods, and then billions more constructing facilities where these methods can be used to actually make chips.

For example, Intel just announced major investments to upgrade their fabs (the facilities where chips are made) to a new 22 nanometer process. That means they’re installing new equipment that etches lines a mere 22 nanometers wide on silicon wafers. That’s fewer than a hundred silicon atoms across. It’s incredibly impressive, but more on that later.

In the announcement, Intel says that this will create 6,000-8,000 construction jobs, and that the facilities will be online with this new process in 2013. That means 6,000-8,000 workers, presumably augmented by powerful construction equipment, working for 2-3 years to bring this online.

Imagine that you are a superintelligent AI running on some sort of microprocessor (or perhaps millions of such microprocessors). In an instant, you come up with a design for an even faster, more powerful microprocessor you can run on. Now…drat! You have to actually manufacture those microprocessors. And those fabs take tremendous energy, they take the input of materials imported from all around the world, they take highly controlled internal environments which require airlocks, filters, and all sorts of specialized equipment to maintain, and so on. All of this takes time and energy to acquire, transport, integrate, build housing for, build power plants for, test, and manufacture. The real world has gotten in the way of your upward spiral of self-transcendence.

Perhaps AIs would be better at this sort of thing? Presumably with more intelligence, one could be more efficient in managing these sorts of complex projects. Software-controlled worker robots could run 24×7. Streamlined logistics could make sure work is never halted by parts shortages or poorly sequenced tasks. Even so, there are limits. The material world, where productivity and efficiency improve by something like 1% per year rather than the 40% per year of the digital world, puts bounds on how fast you can do things. Computation requires matter and energy.
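
Here’s a toy sketch of why the physical stage comes to dominate each improvement cycle (Python; every duration and rate below is invented for illustration, not data about any real fab):

```python
# Toy illustration of why the physical world drags on a digital feedback loop.
# Each improvement cycle has a digital design phase (which speeds up sharply with
# better hardware) and a physical build phase (fabs, logistics) that improves
# only slowly. All durations and rates are invented for illustration.

design_years = 2.0                 # assumed initial design/simulation time
build_years = 2.5                  # assumed initial fab construction time
digital_speedup_per_cycle = 2.0    # assumed: design time halves each generation
physical_speedup_per_cycle = 1.01  # assumed: ~1% improvement per generation

for cycle in range(1, 9):
    total = design_years + build_years
    print(f"cycle {cycle}: design {design_years:4.2f} y + build {build_years:4.2f} y"
          f" = {total:4.2f} y")
    design_years /= digital_speedup_per_cycle
    build_years /= physical_speedup_per_cycle
```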

Third, in many areas computing power is not the limiting factor in progress. One of those areas is Artificial Intelligence. AI labs are not the hungriest consumers of computing power. Supercomputers are used much more for protein folding, nuclear weapons simulations, weather simulations, earthquake simulations, and other assessments of the material world than they are for artificial intelligence purposes.

Why? Because, in the words of at least one prominent AI researcher, CPU isn’t the bounding factor. It’s software.

I have no doubt that we’ll someday have human-level artificial intelligences. That will require advances in both hardware and software. To pin our estimates of when we’ll have such AIs on the moment in time when our hardware will give us as much computing power as we believe the brain has (a number much in dispute) is to ignore the more difficult side of the problem.

Of course, uploading our brains into computers could short-circuit the software problem. The signal propagation in our brains seems to work quite well to produce intelligence. So let’s just upload the structure of our brains. I think we will. It’s only a matter of time. But currently the only mechanism we have which could read the information from your brain in order to upload it is to fill your brain with plastic, slice it into pieces about as thick as the lines in Intel’s new chip fabrication process, and then use a scanning electron microscope (or more likely, an array of thousands or tens of thousands of such devices) to turn that into a digital representation of your brain.

That, I’m sad to say, is likely to be fatal to the original meat body. Nor is there any non-invasive technology on the horizon that promises the resolution needed to upload without slicing up your brain. I personally have few philosophical qualms with such an operation, but I want to be quite sure the technology works before I volunteer for it. Progress in this area is going to be plagued with ethical issues (what if you try to upload someone and they end up deranged, or cognitively impaired?), and whatever you may think of ethics review boards, their involvement will almost certainly slow down any efforts to upload a human being.

Fourth, biology and physics impose some fundamental limits on certain types of progress. Most relevant to singularity theorists is the potential limit of Moore’s Law. Gordon Moore never actually said that computers would double in computing power every 18 months. What he observed was that the number of transistors that could be placed on an integrated circuit would double roughly every two years (and in its early years Intel primarily made memory chips, not processors). That increase in circuit density has been driven by increasingly fine chip fabrication processes. As the lines we draw have gotten smaller and smaller, we’ve been able to draw more of them on a chip. The etching of smaller and smaller lines will ultimately run out, though. A silicon atom is roughly a quarter of a nanometer across. It’s impossible to etch lines on silicon wafers narrower than that without the lines bleeding into one another. Long before that line width, circuits run into the problem of quantum tunneling, where electrons tunnel from one circuit to a nearby one. Intel believes this will happen sometime shortly after 16 nanometer lines, not far beyond the 22 nanometer lines they’ll be etching in 2013.
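
As a rough sketch of the headroom left, here’s the back-of-the-envelope arithmetic (Python; the quarter-nanometer atom size is from above, the two-year doubling cadence is my assumption, and tunneling problems arrive well before the atomic limit):

```python
# Headroom between a 22 nm process and the size of a silicon atom (~0.25 nm,
# as noted above). The two-year doubling cadence is an assumption, and quantum
# tunneling causes trouble well before the atomic limit is reached.
import math

line_width_nm = 22.0
silicon_atom_nm = 0.25
years_per_density_doubling = 2.0   # assumed cadence

halvings_of_line_width = math.log2(line_width_nm / silicon_atom_nm)
# Halving the line width quadruples density (area scales with width squared),
# so each halving is worth two density doublings.
density_doublings_left = 2 * halvings_of_line_width
years_left = density_doublings_left * years_per_density_doubling

print(f"Line-width halvings until atomic scale: {halvings_of_line_width:.1f}")
print(f"Density doublings remaining:            {density_doublings_left:.1f}")
print(f"Very rough upper bound in years:        {years_left:.0f}")
```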

Of course, the end of gains from etching smaller lines doesn’t necessarily spell the end of price/performance gains in computing. As Ray Kurzweil has pointed out, before integrated circuits there were transistors and vacuum tubes, and computing power rose exponentially through both of those technologies. It’s possible that optical computing, or quantum computing, or stacked or layered integrated circuits will provide future exponential gains in computing per dollar.

It’s possible, but it’s not guaranteed. It is virtually guaranteed that consumers and industry will demand more computing, and so larger chips with more cores or stacked 3D chips will come out to satisfy that demand. But they may not be substantially cheaper per unit of computation, as the process that manufactures them will have topped out.

As for optical or quantum computing, both are in their infancy. Neither shows signs of being ready to pick up where integrated circuits are likely to leave off by 2020 or so, suggesting at least a temporary plateau. And quantum computing shows large advantages only for certain subsets of problems, not for general purpose computing. Nothing says that optical or quantum computing will not eventually resume and continue the exponential increase in computing, but nothing guarantees that they will, either.

We’ve become so accustomed to Moore’s Law that it’s worth remembering that the computing domain is an outlier from all other domains. We don’t see fast exponential increases in material strength or strength-to-weight, in engine performance, in aircraft speed or efficiency, in the cost to build buildings or sew clothes. Computing has had one for the last few decades because we’ve been able to process more information by drawing smaller and smaller lines. Once we’ve gone as small as we can go, we are no longer assured of the exponential gains we’re so used to.

  5. Neither Present Nor Future Is Distributed Evenly

The last reason I’ll present for why ‘The Singularity’ is a misnomer is a line borrowed from William Gibson: “The future is already here – it’s just not evenly distributed.”

Indeed, neither future nor present is evenly distributed, nor should we expect them to become so.

I’ve heard a number of friends and acquaintances over the years talk about when the singularity occurs and ‘we’ upload. We? Who’s this we? A subset of singularity believers hold that there’s a future socialist utopia on the other side of transcendence, where all humanity will be able to reap the benefits of this exponential flowering of technology.

Well, maybe. But if so, it will be because humans (or our descendants) intentionally restructure the world economy. It won’t happen as a result of advances in science and technology on their own. The existence of a technology does not guarantee that you, or I, or any particular person has access to that technology. I would personally love to have an IBM Blue Gene/P supercomputer. (Perhaps I could upload my friend’s cat.) Unfortunately, I don’t. The cost is a bit out of my budget. The first upload is unlikely to be cheap. If computing price/performance continues to improve exponentially, it will eventually become cheap, but that’s not a given. Either way, you will somehow need to find a way to pay for that hardware, the power it uses, the cooling, the bandwidth, the backup service (really, who wants their only upload to get wiped?), maintenance and wear and tear on the parts, etc.

How will you do this? Well, perhaps your upload will take a job working at Intel, on one of those teams working to create faster chips…

Of course, if you’re not already one of the world’s best chip designers, you may have a problem there, because the world’s best chip designer, obviously in higher demand than you, might just make a copy of himself. Or two copies. Or ten copies. Or a hundred copies. After all, they’re all better at that job than you, so they can all earn a better wage than you can.

Well, maybe you’ll do software instead. Same problem. The very best, the ones who can charge the highest wages, can make more and more copies of themselves, each of which is in more demand than you.

In fact, this problem exists in every kind of knowledge work. It was first pointed out by economist Robin Hanson in his paper If Uploads Come First. Presumably, of course, employers may like some variety in their work force, so that may help you out. But maybe not enough.

Indeed, you may face a worse problem than you think. Because uploads can consume any amount of computing power by creating new copies of themselves, or by running faster, the total demand for computing may be very, very high. And that, in turn, would drive prices up for you.

The meta-point here is this: an upload world, or a world with AIs, or a world with indefinite lifespans, won’t eliminate competition. At any given point, any of these future worlds will have finite resources, and that will mean economics of some sort. If anything, the world of AIs and uploads is likely to be more hypercompetitive than the physical world we occupy now, as AIs and uploads can consume an effectively unlimited amount of computation. The bounds on demand for computation will disappear, while supply will still at all times be finite.

The other side of this story is that we already live in a post-scarcity world. Worldwide, the number of calories available in the food supply per person on earth is at an all-time high. Yet we haven’t totally eliminated hunger or malnutrition. As a percentage of humanity, the number of hungry people is at an all-time low, but as an absolute number, it’s still over 800 million people – staggeringly high.

Post-scarcity worlds aren’t just about the ability to produce. We have the capability to produce more than enough on the planet to comfortably feed, house, transport, and educate every person on earth, but we don’t. I’m no pessimist here – the average welfare of people on earth has risen staggeringly in the last few centuries, and the last three decades have lifted 2 billion people out of poverty. Even Sub-Saharan Africa seems to be making forward progress in human welfare in the last decade. But the world is still very different in rural Kenya than it is in Silicon Valley.

What makes you think that technological progress alone is going to fundamentally change that?

——-

If the essay above sounds pessimistic, don’t take it as such. I remain optimistic about progress. Life expectancies around the world continue to rise at a quarter year per year that passes. More people are well fed, educated, and able to live happy lives and contribute to society than ever before. Warfare, if not vanquished, is far less of a blight on humanity than in any previous century. And the future does look bright. The convergence of biology with information technology holds out the hope for tremendous gains in human health and longevity.

I’m no doomsayer. I expect human welfare, human collective intelligence, and human collective capabilities to be substantially higher in 2050 than they were in 2000, and even higher yet in 2100 than in 2050. By 2100 we’ll have life expectancies of at least 100 years in developed nations, and close to that in developing ones, and perhaps (fingers crossed) far more. We’ll have seamless and intuitive access to each other, information, entertainment, and to interactive experiences far beyond anything we see today. We’ll be smarter than we are today, both because of the information access and communication tools we have, and because of things we’ve learned about our brains and how to increase our learning rate, memory capacity, pattern recognition, and attention by manipulating our brains. We’ll have more ability to choose who we are – altering our minds and bodies to choose personality traits, appearances, and mental and physical capabilities in ways we can only speculate on today.

Perhaps some of us will even have become digital, or we will have created digital intelligences, or both. If so, those entities will wrestle with whole new issues of individuality and identity when they can create new instances of themselves, and as they learn to integrate those instances into composite intelligences.

Those digital entities, if they do arrive before 2100, will coexist with a large number of physical flesh-and-blood intelligences.

None of this, to me, sounds like a divide by zero point. This doesn’t sound like a “Singularity”. It sounds like progress, and the start of another stage in evolution. But human hopes, desires, conflicts, and limitations (some existing ones, plenty of new ones) will continue to exist.

 

Ramez Naam is the author of More Than Human: Embracing the Promise of Biological Enhancement and the winner of the 2005 HG Wells Award for Contributions to Transhumanism.

 

See Also

The Smarter-Than-Human AIs Won’t Rule the World but More-Than-Humans Will

The Reluctant Transhumanist

Singularity 101 with Vernor Vinge

 

15 Comments

  1. Your comments about us having greater than human intelligence (GTHI) already are waaay off the mark. What you did was redefine the term so you could construe the present situation to be the way you wanted it.

    You say “These super-human intelligences are the distributed intelligences formed of humans, collaborating with one another …”. I don’t think so.

    The meaning for GTHI is something fully cognizant of its own intelligence, able to do all the things that a human does when it thinks, and more. There is not a single distributed intelligence on the planet that can, say, learn how to plan a trip to the store to get family groceries for a week. There is no GTHI that, after learning to do that, could generalize the plan so it worked in a different country where they do not have ordinary grocery stores. When it wasn’t shopping, it would also be able to pick up a book on calculus and learn it. And never mind a GTHI that can do all of that stuff quicker than a human. This is not even remotely possible today.

    No, the real meaning of GTHI goes like this. Imagine a computer that can do all the things that a smart human can do, including all of the autonomous learning and skill acquisition abilities. Then imagine that it could do these things about a thousand times faster than us. It does not have to be a thousand times *smarter* (whatever that means), just 1000x faster. Now, such a creature would start on Monday morning with no knowledge of physics: then, using a specially designed laboratory that allowed it to move rapidly, it would start learning physics, mathematics, etc. By Tuesday morning, assuming it worked only as fast as Einstein (times 1000), it would have invented Special Relativity. One day in the life of such a machine would be equivalent to all of Einstein’s early career.

    THAT is greater than human intelligence. Compared to that, we currently have nothing more than just us chickens. Your claim is, sadly, spurious.

    Richard

  2. I find this generally quite persuasive, but the stuff about ‘one person uploading multiple copies of themself’ is patent nonsense. When mind starts appearing on a vessel the scale of the internet, it’s not just going to carry on like our self-contained fleshy brains do. It’s going to realise that the distinction between one mind and multiple minds is nebulous, and the requirement for essential characteristics to define oneself even more so, and start expanding to fill the available computing space. This sounds more hostile than it necessarily is, only because we’re still trapped by our bodies into feeling that there’s something fundamental distinguishing one person from the next, so that ‘one’ mind ensconcing ‘another’ harms the other.

    Even if the mind(s) online don’t feel this way (and if they don’t, some amount of their neighbour will), by the time we’re clever enough to transfer whole personalities to artificial hardware, we’ll also be able to isolate many of the components that constitute them. So why will we bother uploading an existing mind, warts and all, when we can remove the bad bits and splice in some Einsteiny bits, then fiddle around with it later (as we do with existing computer programs) to continually improve it?

    Also, why ‘uploading’? The process is going to be like (selectively) copying and pasting data, not magically teleporting conscions from one place to another. The original will continue to exist (unless actively destroyed), and will continue to have all its old aspirations and fears as long as it does so.

    It’s all very well (and probably logical) to say as Naam does that he has no philosophical qualms with being deleted in the process, but if you were to wake the biological him up after a successful paste job and ask if he minds committing suicide to save on resources, he’d be unlikely to be so sanguine.

    As long as we’re retaining our quaint pre-upload personal identity-related morality, this raises the question of whose preferences we should consider – pre-op or post-op Naam’s.

    • Brilliant comments on a brilliant critique both tainted with a smorgasbord of dubious assumptions, specious reasoning, and unfounded opinions along with the good ones. The main problems seem to come of partial misunderstandings based on incomplete data, insufficient perception and resultant misconception about limitations, potentials, and theories. I wish I had time to address all the points here & now, but I’ll have to do it in a forthcoming book. Keep up the good work & re-examine all the basics!

  3. I think the author made some good points.

    In particular he made a point that I myself have tried to beat into the heads of non-responsive Transhumanists, to wit: the future will be first come, first served.

    Anyone that thinks there is going to be some sort of happy-happy, joy-joy, equality-of-man, sharing of resources is going to be very surprised.

    There is in essence, a race on right now. Those who gain the power of this technology first will be unlikely to share it with the rest of us, or if they do, it will be on their own terms.

    Ken
    http://www.kenstech.com

  4. I thought #3 was barely answered. You skipped over the fact that exponential curves start off flat and appear to be making no progress for quite some time until the growth explodes at the knee of the curve.

    If we never developed superhuman AI, well, then much of what you argue in the other answers is likely true.

    So I find it very convenient that you more or less skipped over that question.

    I call Bullshit.
