H+ Magazine
Covering technological, scientific, and cultural trends that are changing–and will change–human beings in fundamental ways.

Editor's Blog

Lisa Rein
October 4, 2009

At the beginning of his talk, Peter Thiel listed seven world-ending scenarios, and asked the audience to raise their hands to show which one they were most concerned about:

1- Robots killing all the humans
2- Smallpox, Ebola, or some other virus killing us
3- Gray Goo
4- Israel bombs Iran and starts WWIII
5- Computers are used in the future to control everyone
6- Runaway global warming killing us
7- The singularity taking too long to happen

Peter told the audience that he's more afraid of the Singularity not happening fast enough than anything else. At the end of the presentation, an audience member asked him exactly what catastrophic things he thought might take place as a result. It turns out that he's concerned about the technology that won't be invented, and the implications of that for mankind.

Another audience member asked him what he wakes up in the morning worrying about, and he gave the same answer, saying "I'd be more optimistic if other people were more worried."

Are you worried that the Singularity won't happen fast enough? Or do you think we have more pressing issues that we should be focusing on right now? Tell us in the comments.

9 Comments

    Is there no probability that the solution to the Fermi paradox is that civilizations have a high probability of creating black holes and being destroyed?

    I absolutely don't know my physics yet, but couldn't it maybe explain all those black holes that are out there?

    I am a lot more worried that the benefits of the onslaught towards a singularity (which will almost certainly be severe) will, by virtue of something silly like investment or intellectual-property issues, be reserved to small minorities. Or maybe because of ideology. I can easily see a slow, stuttering ascent to some kind of hard takeoff, and even a very turbulent aftermath, rife with "integration shockwaves" where billions still die of aging and general misery while the water takes time to rise or the manna takes time to rain down.

    Typically, I am precisely of the age that makes this a borderline case, which is deeply annoying. I might still find myself old and miserable, a few decades from now, having to watch the clock tick by as outside the rich are taking off in their flying cars left and right.

    And considering Thiel's political ideals, batshit leftist people like me might suffer years of percolation time we can't afford.

    Until coming across Ray Kurzweil's books, I was convinced that I would live to see humanity's final days. Since having read his work, my biggest worry is the same as Mr. Thiel's. We see the promise ahead, but cannot predict if we will snag or shipwreck on the way.

    The idea of the singularity is a polarizing one, eliciting strong emotions from both sides. Wars will be waged and fear will precede legislative mistakes that slow progress, all while the world's resources are depleted. These things are inevitable when there are so many of us with conflicting beliefs and goals.

    I hope against hope that we can keep it together long enough to ensure that at least a large portion of humans can transcend to our man-made heaven(s).

    I do not understand the scientific basis of the predicted singularity, but I wonder how it might solve conflicts of ideology and relationships. I have explored methods that support the development of a science of memetics for the domain of ideology/isms, and have noted theoretical and empirical convergence on a general theory of relationships. I invite correspondence on these topics.

    Where's the single most virulent and destructive influence in the world, i.e. the meme, mentioned? Approximately 100% of so-called "uncalled for" killing and torture happens because of memes gone wild.

    You call yourself a "batshit leftist" and it seems true enough. Your comments indicate that you believe that you, and everyone else, deserve to be in the singularity even though you contribute nothing to it. If anything will hold back the singularity, it is the meme you are infected with, because leftists (and rightists) always try to hold back technological progress in the name of some nonsense, like equality or tradition. Your comments about billions dying, and your disdain for rich people in flying cars, are so typical of your type of thinking. I fear the mass of humanity are like you and will try to stop the singularity.
    It also shows you have little regard for your fellow human beings. Once the singularity hits, there is no reason why a person who can live forever would not help his fellow human beings. Batshit leftist indeed.

    You're stupid.

    It's comments like that which make it obvious that a catastrophically unequal advent of the singularity is a serious possibility. Dildos like Capitalist clearly have a severe deficit when it comes to baseline human traits like empathy and fairness, and consequently the rest of us should be deeply concerned in the event that they acquire exponentially increasing capabilities.
    It's unclear exactly who Capitalist thinks has contributed enough to demand a share in the benefits a singularity could bring, but I doubt he thinks the people who, say, grow the food he eats, pave the roads he drives on, or dig the coal that powers his laptop qualify.
    Perhaps the toxic meme he decries is the self-preservation meme?

    It's difficult for an entrepreneur to see that we're in a tsunami of technology.

    The temptation is to look for specific projects; but weak AI... and what Vinge has called Intelligence Amplification... are happening all around us.

    It is also probably much safer than a single project built to accelerate intelligence.

    How do you time when machine intelligence will be so big that IT will be making decisions once thought exclusively the province of humans? When will machines become truly recursive, and self-modify at increasing speeds?

    I think it is possible to plot that:

    you just have to describe what those systems have to do.

    the answer's always the same:

    machines have to be able to manipulate X amount of data at Y speed.

    the manipulation isn't drastically complicated... it just means combining different data at high speed.

    Computers already do this.

    But not fast enough.

    And not nearly with enough memory.

    They have to be able to do internal experiments, because we aren't going to let them loose in the environment, where we do most of our learning.

    Moore's Law is enough to predict with. If it holds, post human self-modifying machines will be here before 2029.
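    As a rough illustration of the commenter's back-of-the-envelope reasoning (the baseline year, 18-month doubling period, and "before 2029" horizon here are illustrative assumptions, not figures from the comment), a Moore's Law extrapolation looks like this:

    ```python
    # Illustrative Moore's Law extrapolation: assume compute doubles every
    # 18 months from a 2009 baseline of 1 unit. These parameters are
    # assumptions for the sketch, not claims about actual hardware trends.

    def compute_multiplier(start_year: int, end_year: int,
                           doubling_months: float = 18.0) -> float:
        """Return how many times baseline compute has multiplied by end_year."""
        months = (end_year - start_year) * 12
        return 2 ** (months / doubling_months)

    # 2009 to 2029 is 240 months, i.e. 240/18 ≈ 13.3 doublings,
    # so raw compute grows by roughly four orders of magnitude.
    growth = compute_multiplier(2009, 2029)
    print(f"{growth:,.0f}x")  # ~10,321x
    ```

    Whether that much raw compute translates into self-modifying machine intelligence is, of course, exactly the contested question.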

    The dot-com boom really played out over 10 years, and it still looked like it came from nowhere.

    I see all these challenges as irrelevant, because it's going to happen, and if you're dead it will resurrect you (NB: there is no valid argument against this).

    It's sort of quaint living out the last era of Man as part of a special elite: the ones who know it's about to end.

    I wouldn't underestimate what the struggle to stay alert to what's coming has meant for us: we have gotten lucky, but we've also slaved at getting enough posits for a paradigm shift inside our heads, and we fight to strengthen it every day by chucking out assumed values and hammering in accurate ones, despite the crowd.

    It's OK to go fishing and just wait it out!

    I'm going to post this on Ray's MINDX, as I seldom express myself.

    Cheers, I look forward to the videos of the meet.

    Eldras
    http://sites.google.com/site/johnellisproject/home
