h+ Media: Elevating the Human Condition

Video Friday: Synergetic Omni-Solution (SOS)

You are the Synergetic Omni-Solution (SOS)

This is a video introduction to a multidisciplinary initiative based on Buckminster Fuller's thesis that "You never change things by fighting the existing reality. To change something, build a new model that makes the existing model obsolete." The Synergetic Omni-Solution (SOS) invites every individual to participate in this model-building process through simple acts of personal creativity and initiative – our every thought and action, however small, contributes infinitely to the whole.

We ARE the technology.

According to R. Buckminster Fuller, "the entire universe is in tension." If we could see a universe being born, perhaps this is what it would look (and sound) like.

The soundtrack is based on one of FM3's Buddha Machine loops.

Video Friday: Buckminster Fuller 1974

Buckminster Fuller on new ideas for man to survive, advance, design our world, and move forward.

Bucky touches on post-scarcity society, strict finitism, exponential growth, and his idea of “livingry” as well as a variety of other topics of interest to transhumanists.  He suggests a post-scarcity global society could be in place by 1995.

It didn’t happen, but many of his ideas about alternative energy from wind and solar power are starting to become real today.

 

Democracy+: Beyond Majoritarianism


Majoritarianism is Bad

“Democracy works on the basis of a decision by the majority,” they say.  Is that really the best we can do?

Take Donetsk.  One guy wants (a) unity with Russia; others prefer (b) independence, (c) more autonomy within Ukraine, or (d) the status quo.  So, what does this guy do?  Simple: he concocts a catch-all phrase to unite the (a), (b) and (c) supporters, something about self-determination, and holds a referendum.  The (d) supporters abstain; he wins; and two hours later, he announces a policy of (a).

Or take Scotland. In 1997, Tony Blair wanted the Scots (and Welsh) to want devolution. The SNP (and Plaid Cymru) argued for multi-option votes to include independence, but Blair said no. Devolution won by 48% (and in Wales, by 1%!). The SNP now controls the question, so it's back to majority voting.

Or take any majority vote. The obvious flaw of this blunt, divisive and adversarial instrument is this: you cannot thereby identify a majority opinion, because, to be on the ballot paper, that opinion must be identified earlier. You can ratify a majority opinion, perhaps, if you have consulted widely or guessed wisely. But even then, you cannot be sure.

In contrast, you can thus identify, with absolute certainty, the opinion of the person who wrote the question. Which is why, in referendums, parliamentary divisions or party meetings, majority voting has been used by umpteen dictators; they include Napoleon, Lenin, Mussolini, Hitler, Gaddafi, Duvalier and Khomeini. Some of them changed their party and/or electoral system; none adjusted the majority vote. He – it was always a he – chose the question, and the question was the answer. It works almost always. It backfired twice: Pinochet lost his third referendum, and Mugabe lost his one and only, which he ruled to be non-binding.

Majoritarianism was also the underlying doctrine of both Stalin and Máo Zédōng.  Indeed, on translation into Russian, the very word ‘majoritarianism’ comes out as ‘bolshevism’… (oops, so they have now concocted a new word: ‘majoritarnost’.)

However, democratic decisions need not be resolved by (simple or weighted) majority vote. If there are more than two options on the table, there are several other decision-making voting systems (and even more electoral systems, for the latter sometimes cater for more than one winner). In decision-making, then, the outcome could be the option with the most first preferences, or the fewest bottom preferences, or the best average; furthermore, there could be quotas, thresholds and weightings, with two or more rounds of voting. There are lots of possible systems.

 

Two Alternative Systems


Condorcet Ballot

Only two of the alternatives take all preferences cast by all voters into account: the Borda Count, a points system; and the Condorcet rule, a comparison of every pair of options, to see which wins the most pairings. Little wonder that the Borda and Condorcet rules are the most accurate. Indeed, the Borda winner is often the same as the Condorcet winner.

In both systems, people cast their preferences. Then, in the non-majoritarian Borda Count, the outcome is, at best, the option with the highest average preference. And an average involves everybody, not just a majority.
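
To make the tallying concrete, here is a minimal Python sketch (illustrative only; the ballots and the "Option B"/"Option C" names are invented, not the article's data) that scores the same set of ranked ballots under a classic Borda count and then checks for a Condorcet winner by pairwise comparison.

```python
# Illustrative ranked ballots: each voter lists all options, best first.
# (Invented data; only "Rosie Hackett" comes from the article.)
ballots = [
    ["Rosie Hackett", "Option B", "Option C"],
    ["Option B", "Rosie Hackett", "Option C"],
    ["Option C", "Rosie Hackett", "Option B"],
    ["Rosie Hackett", "Option C", "Option B"],
]
options = sorted({name for ballot in ballots for name in ballot})
n = len(options)

# Classic Borda count: with n options, a 1st preference earns n points,
# a 2nd preference n-1, ... and a last preference 1.
borda = {opt: 0 for opt in options}
for ballot in ballots:
    for rank, opt in enumerate(ballot):
        borda[opt] += n - rank
print("Borda scores:", borda)
print("Borda winner:", max(borda, key=borda.get))

# Condorcet rule: an option wins if it beats every other option head to head.
def beats(a, b):
    a_wins = sum(ballot.index(a) < ballot.index(b) for ballot in ballots)
    return a_wins > len(ballots) - a_wins

condorcet = [a for a in options if all(beats(a, b) for b in options if b != a)]
print("Condorcet winner:", condorcet[0] if condorcet else "none (a cycle)")
```

In this toy example both rules pick the same winner, which is the pattern the article notes: the Borda winner is often also the Condorcet winner.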

A form of Borda Count is used in elections in Slovenia and in Nauru. For the first time ever (as far as is known), it has now been used in decision-making. On 20 May 2014, Dublin City Council opened a new bridge across the Liffey. A ‘Naming Committee’ of six councillors used a Borda Count to get a short list of five names; and from this short list, a full meeting of Council used another Borda Count to identify their consensus opinion: Rosie Hackett.

Borda Ballot

In a plural democracy, on any contentious question, there should always be more than two options ‘on the table’. If a democratically elected chamber takes decisions by a non-majoritarian methodology, there is no longer any justification for majoritarianism – majority rule by majority vote – whether as single-party majority rule, majority coalition or even grand coalition.

Consider a consensual polity. One party moves a motion. Other parties may propose, not amendments to this clause or that, but a complete (even if similar) package. If, when the debate ends, a verbal consensus proves to be elusive, all concerned move to a vote.

No one votes no.

No one votes against anybody or any thing. Instead, everyone votes for one, some or hopefully all the options listed, albeit with varying degrees of enthusiasm. In a nutshell, the Modified Borda Count – that’s its full name – can cater for a more inclusive polity; it is ideally suited for power-sharing, for all-party coalition governments of national unity, and for international gatherings. It is more accurate; ergo, it is more democratic.
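
For the Modified Borda Count specifically, the detail that matters is how partial ballots are scored. The Python sketch below assumes the rule the de Borda Institute describes – a voter who ranks m of the n options gives m points to their first preference, m-1 to the second, and so on down to 1 – so declining to rank options costs the voter influence; the option names and ballots here are invented for illustration.

```python
# Modified Borda Count (MBC) sketch. Assumed scoring rule: a voter who ranks
# m of the n options gives m, m-1, ..., 1 points, so a full ballot carries
# more weight than a partial one. Options and ballots are invented.
options = ["Option A", "Option B", "Option C", "Option D"]

ballots = [
    ["Option A", "Option B", "Option C", "Option D"],  # full ballot: 4, 3, 2, 1
    ["Option B", "Option A"],                          # partial ballot: 2, 1
    ["Option C"],                                      # single preference: 1
    ["Option B", "Option C", "Option A", "Option D"],  # full ballot: 4, 3, 2, 1
]

scores = {opt: 0 for opt in options}
for ballot in ballots:
    m = len(ballot)
    for rank, opt in enumerate(ballot):
        scores[opt] += m - rank

print("MBC scores:", scores)
print("MBC outcome:", max(scores, key=scores.get))
```

The voter who named only one option contributes a single point rather than four, which is the built-in incentive to rank, and thereby engage with, everybody else's proposals.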

###

Peter Emerson is the Director of the de Borda Institute. His participation in the Northern Ireland Peace Process prompted him to join CND. His latest book is Defining Democracy, Springer, 2012.


This article is published under a Creative Commons Attribution-NonCommercial 3.0 licence and originally appeared here.

The LHC is back and it’s ready to probe the limits of matter

A 3D artist has dissected the LHC in this composite image, showing a cut-out section of a superconducting dipole magnet. The beam pipes are represented as clear tubes, with counter-rotating proton beams shown in red and blue. Daniel Dominguez/CERN

Since shutting down in early 2013, the most powerful particle accelerator on the planet, the Large Hadron Collider (LHC), has been sitting dormant. Over the past two years this scientific colossus situated at CERN near Geneva, Switzerland, has undergone a series of repairs and upgrades. But now it is ready to reawaken from its slumber.

This new era will see a collider with almost double the previous energy, with collisions at 13 TeV. Scaled up into our macroscopic world, the force of these collisions between protons is roughly equivalent to an apple hitting the moon hard enough to create a crater more than 9.5km (6 miles) across.

This new energy frontier will allow researchers to probe beyond the current boundaries of our understanding of the fundamental structure of matter in search of new discoveries.

Detector upgrades

In order to make the most of the new accelerator conditions, the discovery experiments, ATLAS and CMS, have undergone further upgrades during the shutdown period.

Most notably the ATLAS experiment has added an entirely new detector, the Insertable b-Layer, or IBL. This sits very close to the point where the protons slam into each other, creating a cascade of other subatomic particles.

A visualisation of particles colliding in the ATLAS detector back in 2012. New experiments will be run at a higher energy and may yield even more startling results. ATLAS team/CERN

Because the IBL sits closer to the action than the original detectors – which are also still in use – it provides an additional measurement point for particles originating from the collisions, allowing greater accuracy on the resulting measurements.

The IBL will be especially important for identifying heavy particles, such as bottom quarks, which are produced during decays of short-lived particles such as the Higgs boson and are crucial for measurements of the top quark (which decays to a bottom quark and W boson).

Beyond the Higgs boson

During the first run of the LHC in 2012, the ATLAS and CMS experiments ended the 50-year hunt for the Higgs boson, which was predicted by the Standard Model – a theory governing all particles, forces and interactions.

Having measured the mass of the Higgs boson by looking at the way it decays into other particles, LHC scientists then went one step further. In 2013 they measured the properties of the boson, all of which proved consistent with the predictions of the Standard Model.

Now physicists want to know if the Higgs they found is hiding any surprises. And, perhaps more importantly, what may be lurking beyond it. The increase in LHC energy is coupled with an increase in luminosity, which allows physicists to probe rare events with greater frequency.
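
As a rough rule of thumb (standard accelerator bookkeeping rather than anything specific to this article), the expected number of events for a process with production cross-section σ grows with the integrated luminosity delivered to the experiments:

N_events ≈ σ × ∫ L dt

So doubling the integrated luminosity roughly doubles the expected yield of any given rare process, while the jump in collision energy typically raises σ for the heavy, hypothetical particles being hunted.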

This high luminosity in concert with the increase in energy provides an unprecedented environment to interrogate fundamental physics beyond the limits of our current knowledge. The first thing to do with the new data is to study the Higgs boson in depth to see if anything disagrees with prediction.

This could be a window into new physics. Because the Higgs boson loves mass, scientists suspect that it might interact with a range of hidden, massive particles that we cannot see, such as potential candidates for dark matter.

If the Higgs boson is partying with as yet undiscovered particles, physicists hope that their newly improved particle collider and upgraded detector instruments will allow them to crash the party – and find out something about the attendees!

Candidate Higgs boson events from collisions between protons in the LHC. The top event in the CMS experiment shows a decay into two photons (dashed yellow lines and green towers). The lower event in the ATLAS experiment shows a decay into four muons (red tracks). ATLAS and CMS Collaborations

Supersymmetry, dark matter and other exotica

Even if the Higgs boson were to continue to agree with the Standard Model predictions, the value of its mass is still suggestive of other interesting goings-on in the universe.

When LHC physicists measured the Higgs mass, they found it was lower than what they anticipated. This might make sense if it was being caused – or protected – by one or more particles that exist at a higher mass and were governed by some new “symmetry”.

Supersymmetry is one such extension of the Standard Model that would yield additional partners of the known objects that may appear in high-energy LHC collisions.

These particles could act as “bodyguards” of the Higgs, influencing its measured mass. These supersymmetric particles could potentially be produced in the next run of the LHC, perhaps even as early as this year.

One natural consequence of certain supersymmetric models is the production of invisible stable massive particles that are weakly interacting. Such a particle would be an excellent candidate for dark matter, the mysterious invisible matter that we have thus far only detected via its gravitational effect.

Providing clues as to the nature of dark matter is one of the main motivators of the increased energy and intensity of the LHC collisions. Any evidence of dark matter and/or results consistent with supersymmetry would be hugely significant and would open up a new chapter in our understanding of the universe at a fundamental level.

But the experiments must be prepared for any possible signature to be manifested in their collisions, and subsequently mine the data for evidence of exotic resonant structures, extra dimensions or long-lived particles among many other possibilities.

So 2015 promises to be a once-in-a-lifetime opportunity for a generation of physicists who will turn on and commission a machine at unprecedented energies. With new discoveries potentially just around the corner, this may well be a defining time in the field of high energy particle physics.

###

Paul Jackson leads the experimental particle physics group at the University of Adelaide. He is a member of the ATLAS experiment, one of two multi-purpose discovery detectors situated at the CERN Large Hadron Collider, the highest energy particle collider in the world. He was a member of the team that discovered the Higgs boson in 2012 and works with colleagues nationally through involvement in the Australian Research Council “Centre of Excellence for Particle Physics at the Terascale”.

This article previously appeared here, republished under creative commons license.

AI Masters Classic Video Games Without Being Told the Rules

Imagine a machine that can learn things from scratch, no pre-programmed rules. What could it do? Flickr/Marco Abis, CC BY-NC-ND

Think you’re good at classic arcade games such as Space Invaders, Breakout and Pong? Think again.

In a groundbreaking paper published yesterday in Nature, a team of researchers led by DeepMind co-founder Demis Hassabis reported developing a deep neural network that was able to learn to play such games at an expert level.

What makes this achievement all the more impressive is that the program was not given any background knowledge about the games. It just had access to the score and the pixels on the screen.

It didn’t know about bats, balls, lasers or any of the other things we humans need to know about in order to play the games.

But by playing lots and lots of games many times over, the computer learnt first how to play, and then how to play well.
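
The engine behind this trial-and-error learning is reinforcement learning driven by the game score; DeepMind's Nature paper paired it with a deep convolutional network reading raw pixels, plus tricks such as experience replay. The Python sketch below is only a toy, tabular version of the core Q-learning update on a made-up two-state "game" – not DeepMind's code – but it shows how repeated play plus a score is enough for good behaviour to emerge.

```python
import random

# A made-up two-state "game", purely to illustrate the update rule:
# in state 0, action 1 moves towards the goal; in state 1, action 1 scores a point.
def step(state, action):
    if state == 0:
        return (1, 0.0) if action == 1 else (0, 0.0)
    return (0, 1.0) if action == 1 else (1, 0.0)

n_states, n_actions = 2, 2
Q = [[0.0] * n_actions for _ in range(n_states)]   # value estimates, all zero at first
alpha, gamma, epsilon = 0.1, 0.9, 0.1              # learning rate, discount, exploration

state = 0
for _ in range(10000):                             # "playing lots and lots of games"
    # Epsilon-greedy: mostly exploit what has been learnt so far, occasionally explore.
    if random.random() < epsilon:
        action = random.randrange(n_actions)
    else:
        action = max(range(n_actions), key=lambda a: Q[state][a])
    next_state, reward = step(state, action)
    # Q-learning update: nudge the estimate towards reward + discounted future value.
    Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
    state = next_state

print(Q)   # action 1 ends up with the higher value in both states
```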

A machine that learns from scratch

This is the latest in a series of breakthroughs in deep learning, one of the hottest topics today in artificial intelligence (AI).

Actually, DeepMind isn’t the first such success at playing games. Twenty years ago a computer program known as TD-Gammon learnt to play backgammon at a super-human level also using a neural network.

But TD-Gammon never did so well at similar games such as chess, Go or checkers (draughts).

In a few years’ time, though, you’re likely to see such deep learning in your Google search results. Early last year, inspired by results like these, Google bought DeepMind for a reported UK£500 million.

Many other technology companies are spending big in this space.

Baidu, the “Chinese Google”, set up the Institute of Deep Learning and hired experts such as Stanford University professor Andrew Ng.

Facebook has set up its Artificial Intelligence Research Lab which is led by another deep learning expert, Yann LeCun.

And more recently Twitter acquired Madbits, another deep learning startup.

What is the secret sauce behind deep learning?

Geoffrey Hinton is one of the pioneers in this area, and is another recent Google hire. In an inspiring keynote talk at last month’s annual meeting of the Association for the Advancement of Artificial Intelligence, he outlined three main reasons for these recent breakthroughs:

First, lots of Central Processing Units (CPUs). These are not the sort of neural networks you can train at home. It takes thousands of CPUs to train the many layers of these networks. This requires some serious computing power.

In fact, a lot of progress is being made using the raw horse power of Graphics Processing Units (GPUs), the super fast chips that power graphics engines in the very same arcade games.

Second, lots of data. The deep neural network plays the arcade game millions of times.

Third, a couple of nifty tricks for speeding up the learning such as training a collection of networks rather than a single one. Think the wisdom of crowds.
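
To see the "wisdom of crowds" trick in isolation: a common recipe (sketched here in Python with invented probability values, not DeepMind's code) is to train several networks independently on the same task and average their predicted class probabilities before choosing an answer.

```python
import numpy as np

# Hypothetical class-probability outputs from three independently trained
# networks for one input; the numbers are invented for illustration.
model_outputs = np.array([
    [0.60, 0.30, 0.10],   # network 1 leans towards class 0
    [0.25, 0.65, 0.10],   # network 2 leans towards class 1
    [0.55, 0.35, 0.10],   # network 3 leans towards class 0
])

ensemble = model_outputs.mean(axis=0)        # average the "votes"
print("Ensemble probabilities:", ensemble)   # [0.4667, 0.4333, 0.1]
print("Ensemble prediction: class", int(ensemble.argmax()))
```

Averaging tends to wash out the individual networks' quirks, which is why an ensemble usually edges out any single member.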

What will deep learning be good for?

Despite all the excitement about deep learning technologies, though, there are some limitations on what they can do.

DeepMind co-founder Demis Hassabis on the potential of artificial intelligence to solve some of the biggest problems that humanity faces.

Deep learning appears to be good for low level tasks that we do without much thinking. Recognising a cat in a picture, understanding some speech on the phone or playing an arcade game like an expert.

These are all tasks we have “compiled” down into our own marvellous neural networks.

Cutting through the hype, it’s much less clear if deep learning will be so good at high level reasoning. This includes proving difficult mathematical theorems, optimising a complex supply chain or scheduling all the planes in an airline.

Where next for deep learning?

Deep learning is sure to turn up in a browser or smartphone near you before too long. We will see products such as a super smart Siri that simplifies your life by predicting your next desire.

But I suspect there will eventually be a deep learning backlash in a few years’ time when we run into the limitations of this technology. Especially if more deep learning startups sell for hundreds of millions of dollars. It will be hard to meet the expectations that all these dollars entail.

Nevertheless, deep learning looks set to be another piece of the AI jigsaw. Putting these and other pieces together will see much of what we humans do replicated by computers.

If you want to hear more about the future of AI, I invite you to the Next Big Thing Summit in Melbourne on April 21, 2015. This is part of the two-day CONNECT conference taking place in the Victorian capital.

Along with AI experts such as Sebastian Thrun and Rodney Brooks, I will be trying to predict where all of this is taking us.

And if you’re feeling nostalgic and want to try your hand at one of these games, go to Google Images and search for “atari breakout” (or follow this link). You’ll get a browser version of the Atari classic to play.

A web browser version of Atari’s Breakout found in a Google Images search. Google Images

And once you’re an expert at Breakout, you might want to head to Atari’s arcade website.

###

Toby Walsh is an expert in the study of Artificial Intelligence. He is a Research Leader at NICTA in the Optimisation Research Group where he leads the Algorithmic Decision Theory project. NICTA is Australia’s Centre of Excellence for ICT Research. He is also an Adjunct Professor at UNSW. He has been Editor-in-Chief of two of the main journals in AI: the Journal of Artificial Intelligence Research, and AI Communications. He is currently Associate Editor of one of the leading journals in computer science, the Journal of the ACM covering the area of Artificial Intelligence.

This article originally appeared here, republished under creative commons license.

Interview: Lee Smolin and the Status of Modern Physics

I write a science and philosophy blog called Adams’ Opticks [1], and about a year and a half ago I published an in-depth critique of Lee Smolin’s Time Reborn: From the Crisis in Physics to the Future of the Universe, a radical reappraisal of the role of “the present moment” in physics [2,3]. My article was certainly critical of the book, but also something of a labor of love, and I’m completely thrilled to say that Lee has now read the piece and would like to respond. What follows is a Q&A, with most of the questions derived from the earlier post [4].

Adam’s Opticks: Hi Lee, central to your thesis as outlined in Time Reborn and in its recent follow-up The Singular Universe and the Reality of Time: A Proposal in Natural Philosophy (co-authored with Roberto Mangabeira Unger) [5], is a rejection of the “block universe” interpretation of physics in which timeless laws of nature dictate the history of the universe from beginning to end.
Instead, you argue, all that exists is “the present moment” (which is one of a flow of moments). As such, the regularities we observe in nature must emerge from the present state of the universe as opposed to following a mysterious set of laws that exist “out there.” If this is true, you also foresee the possibility that regularities in nature may be open to forms of change and evolution.

My first question is this: Does it make sense to claim that “the present moment is all that exists” if one has to qualify that statement by saying that there is also a “flow of moments?” Does the idea of a flow of time not return us to the block universe? Or at the very least to the idea that the present moment represents the frontier of an ever “growing” or “evolving” block as the cosmologist George Ellis might say?
 

Lee Smolin: Part of our view is that an aspect of moments, or events, is that they are generative of other moments. A moment is not a static thing, it is an aspect of a process (or vice versa) which generates new moments. The activity of time is a process by which present events bring forth or give rise to the next events.

I studied this idea together with Marina Cortes. We developed a mathematical model of causation from a thick present which we called energetic causal sets [6]. Our thought is that each moment or event may be a parent of future events. A present moment is one that has not yet exhausted or spent its capability to parent new events. There is a thick present of such events. Past events were those that exhausted their potential and so are no longer involved in the process of producing new events, they play no further role and therefore there is no reason to regard them as still existing. (So no to Ellis’s growing block universe.)

AO: Can you help me understand what you mean by a “thick present”? I’m confused because if the present moment is “thick” rather than instantaneous, and may contain events, it seems like you’re defining the present moment as a stretch of time, which looks like a contradiction in terms. Similarly, when you say that the activity of time is a process I’m left thinking that events, activities and processes are all already temporal notions, and so to account for time in those terms seems circular.

LS: I can appreciate your confusion but look, think about it this way: the world is complex. Whatever it is, it contains many elements in a complicated network of relations. To say what exists is events in the present does not mean it is one thing. The present is not one simple thing, it is the whole world, therefore it contains a vast complexity and plurality. Of what? Of processes, which are dual to events.

AO: One of your main objections to the idea of eternal laws comes in the form of what you diagnose as the “Cosmological Fallacy” in physics. Your argument runs that the regularities we identify in small subsystems of the universe — laboratories mainly! — ought never to be scaled up to apply to the universe as a whole. You point out that in general we gain confidence in scientific hypotheses by running experiments again and again, and define our laws in terms of what stays the same over the course of many repetitions. But this is obviously impossible at a cosmological scale because the universe only happens once.

But what’s wrong with the idea of cautiously extrapolating from the laws we derive in the lab, and treating them as working hypotheses at the cosmological scale? If they fit the facts and find logical coherence with other parts of physics then great… if not, then they’re falsified and we can move on. As an avowed Popperian yourself, are you not committed to the idea that this is how science works?

In addition, wouldn’t the very idea of “laws that evolve and change” make science impossible? How could we ever confirm or falsify a hypothesis if, at the back of our minds, we always had to contend with the possibility that nature might be changing up on us? Don’t we achieve as much by postulating fixed laws and revising them on the basis of evidence as we might by speculating about evolving laws that would be impossible to confirm or falsify?

LS: To be clear: the Cosmological Fallacy is to scale up the methodology or paradigm of explanation, not the regularities.

Nevertheless, there are several problems with extrapolating the laws that govern small subsystems to the universe as a whole. They are discussed in great detail in the books, but in brief:

  1. Those laws require initial conditions. Normally we vary the initial conditions to test hypotheses as to the laws. But in cosmology we must test simultaneously hypotheses as to the laws and hypotheses as to the initial conditions. This weakens the adequacy of both tests, and hence weakens the falsifiability of the theory.
  2. There is no possible explanation for the choice of laws, nor for the initial conditions, within the standard framework (which we call the Newtonian paradigm).

Regarding your questions about falsifiability, one way to address them is to study specific hypotheses outlined in the books. Cosmological Natural Selection, for instance, is a hypothesis about how the laws may have changed which implies falsifiable predictions. Take the time to work out how that example works and you will have the answer to your question.

Another way to reconcile evolving laws with falsifiability is by paying attention to large hierarchies of time scales. The evolution of laws can be slow in present conditions, or only occur during extreme conditions which are infrequent. On much shorter time scales and far from extreme conditions, the laws can be assumed to be unchanging.

AO: I’m actually a big fan of Cosmological Natural Selection (which suggests that black holes may give birth to new regions of spacetime, fixing their laws and cosmological constants at the point of inception [7]) — and I can see how that is both falsifiable in itself, and would still allow for falsifiable science on shorter time scales.

Far more radical, however, is your alternative theory which you dub the Principle of Precedence. The suggestion here is that we replace the metaphysical extravagance of universal laws of nature with the more modest notion that “nature repeats itself.” The promise of this idea is that it makes sense of the success of current science whilst leaving open the possibility that truly novel experiments or situations — for which the universe has no precedent — will yield truly novel results.

To my mind, however, this notion begs many more questions than it answers. You claim, for instance, that the Principle of Precedence does away with all needless metaphysics and is itself checkable by experiment. But is it? You suggest setting up quantum experiments of such complexity that they’ve never been done before in the history of the universe and seeing if something truly novel pops out. But how could we ever tell the difference between a spontaneously generated occurrence and one that was always latent in nature and simply unexpected on the basis of our limited knowledge? And once again, as a falsificationist, shouldn’t you count the thwarting of expectations as evidence against individual theories, rather than positive proof of a deeper principle?

LS: My paper on the principle of precedence is a first proposal of a new idea. Of course it raises many questions. Of course there is much work to do. New ideas are always fragile at first.

As to how to tell the difference between a spontaneously generated occurrence and one that was always latent in nature — this is a question for the detailed experimental design. Roughly speaking, the statistics of the fluctuations of the outcomes would be different in the two cases. I fail to see how such an experiment would violate falsificationist principles.

In addition, we believe we know the laws as they apply to complex systems: they are the same laws that apply to elementary particles. To posit new laws which apply only to complex systems, and are not derivative from elementary laws, would be as radical a step as the one I propose.

AO: Can you tell me how the universe is supposed to distinguish between precedented and unprecedented situations? On the face of it, it seems like unprecedented things are happening all the time. You and I have never had this conversation before. Are we establishing a new law of nature right now, and if not, why not?

Another objection: can you tell me where novelty is supposed to come from? If the “present moment” is both the source of all regularity in the universe, and the blank slate upon which formative experiences are recorded — then what could introduce any change? Are you assuming that human consciousness and free will may be sources of genuine novelty?

LS: How nature generates unprecedented events and how precedent may build up are important questions that need to be addressed to develop the idea of precedence in nature. What I published so far is just the beginning of a new idea.

It’s intriguing to speculate about the implications for intentional and free actions on the part of living things. But in my view this is very premature. I am not assuming that consciousness is a source of novelty; I am only making a hypothesis about quantum physics. There is a very long way to go before the implications could be developed for living things.

AO: Nevertheless, it seems readily apparent from your collaborations with the social theorist Roberto Mangabeira Unger, and also the computer scientist Jaron Lanier, that you see many connections between your conception of physics and the prospects of human freedom and human flourishing. It concerns me, however, that in pursuit of a singular — very beautiful — solution to so many problems in science, philosophy, politics and our personal lives, a lot of awkward details may get overlooked.

In philosophy, for instance, you claim to show that the reality of the present moment — conceived in terms of unresolved quantum possibilities — may at last solve the problem of free will. But what of the history of compatibilism in philosophy — from David Hume to Daniel Dennett — that purports to show that our freedom as biological and psychological agents is not only compatible with the regularity of nature, but may in fact depend upon it?

LS: There are certainly common themes and influences in my work and those of Jaron Lanier and Roberto Mangabeira Unger. And I’m happy at times to indulge in some speculation about these influences. But these are very much to be distinguished from the science. The point is that I am happy to do the scientific work I can do now and trust for future generations to develop any implications for how we see ourselves in the universe. There is much serious, hard work to be done, and it will take a long time. Especially given the present confusions of actual science with the science fiction fantasies of many worlds and AI (these two ideas are expressions of the same intellectual pathology).

I agree that we have to build a counter view carefully. I don’t claim to show that my work solves the problem of free will. I suggest there may be possibilities worthy of careful development as we learn more. As for compatibilism, I am unconvinced, but I haven’t yet done the hard work needed to develop the alternative. Dan Dennett is a generous, serious and warmhearted thinker who works hard to produce arguments which are crystal clear. But talking with him or reading him, both of which are great pleasures, I sometimes find that at the climax of one of his beautifully constructed arguments, the clarity fades and there is a step which I can’t follow. I hope someday to have the time to do the hard work to convince myself whether the fault is with his reasoning or my understanding.

AO: Since I have you here, let me try to make the compatibilist objection compelling with three more questions, inspired to a great extent by Dennett’s Freedom Evolves [8]:

  1. If we turn to physics (as opposed to biology or psychology) in search of free will, are we not likely to end up granting as much free will to rocks or tables or washing machines — or indeed computers — as we do to human beings? If we are to be able to change and adapt in response to the problems we face, surely the science of free will must be the science of a human plasticity that outstrips the plasticity of nature more generally?
  2. You claim that the openness of physics may enable us to transcend the fatalism inherent in predictions from climate science, for example: in 2080 the average temperature on earth will be six degrees warmer than it is now. But what of those other predictions stemming from climate science such as: a concerted effort to reduce carbon emissions will avert disaster? If the true nature of physics undermines the certainty of the first prediction, does it not also undermine the certainty of the second?
  3. Setting yourself against a long history of thinkers who would write off the sensation of “now” as a psychological quirk incompatible with timeless physics, you go so far as to call it “the deepest clue we have as to the nature of reality.” But I wonder what you make of the innumerable psychological and neuroscientific studies that demonstrate the problematic nature of human perception of time over short intervals? Benjamin Libet’s apparent prediction of conscious decisions from unconscious brain activity seems particularly troubling. Might you be persuaded to push in the direction urged by Dennett and resist such a conclusion by arguing that an instantaneous “you” cannot be contrasted with your slow-moving brain activity, and that the search for free will and consciousness in “the present moment” is fundamentally misguided? Can we not look, instead, to the mechanically-possible processes of decision making, learning and adaption that take place over seconds, minutes, weeks and years?

LS: I don’t see why grounding human capabilities in an understanding of what we are as natural beings implies that every capability we have is shared with rocks. We have a physical understanding of metabolism, or the immune system, but rocks and tables have neither. My guess is that when we know enough to seriously address these issues, the vocabulary of concepts and principles at our disposal will be greatly enhanced compared to what we have now. Certainly we are aspects of nature and every capability we have is an aspect of the natural world.

Regarding climate change, the first is a prediction of what could happen if we don’t take action to strongly reduce GHG emissions. My point is not that the climate models are completely accurate. My point instead is that the intrinsic uncertainties in their projections are the strongest reason to act to reduce emissions so we can avert disaster however the uncertainties develop. In national defense we prepare for war because the future is uncertain. Climate change is not an environmental issue, it’s a national security issue and should be treated as such.

As for the objections from neuroscience, I completely fail to see the force in this kind of argument. Those studies are fascinating but I don’t think they remotely show what is claimed. Certainly the present moment is thick and the self is not instantaneous. But giving up the instantaneous moment for the thick and active or generative present (as I sketched above) does not imply that consciousness or time or becoming are illusions.

AO: Lee Smolin — thank you!

_____

[1] Adam’s Opticks.

[2] Time Reborn: From the Crisis in Physics to the Future of the Universe, by L. Smolin, Houghton Mifflin Harcourt, 2013.

[3] For a good introduction to the basic ideas of Time Reborn, see this video.

[4] On Time Reborn as modern myth: Why Lee Smolin may be right about physics (but probably wrong about free will, consciousness, computers and the limits of knowledge), by Joe Boswell, 5 October 2013.

[5] The Singular Universe and the Reality of Time: A Proposal in Natural Philosophy, by R.M. Unger and L. Smolin, Cambridge University Press, 2014.

[6] The Universe as a Process of Unique Events, by M. Cortês and L. Smolin, arXiv.org, 24 July 2013.

[7] For a more critical take on the idea of cosmological natural selection see: Is Cosmological Natural Selection an example of extended Darwinism?, by M. Pigliucci, Rationally Speaking, 7 September 2012.

[8] Freedom Evolves, by D.C. Dennett, Viking Adult, 2003.

###

Joe Boswell is a writer and a musician trying to figure out how to make a living in a world where words and music are free. He has a degree in English literature, but having learned to bluff philosophy by listening to lots of podcasts, he enjoys picking fights with eminent scientists and philosophers on his blog, Adams Opticks (https://adamsopticks.wordpress.com). His songs are available on Bandcamp (https://joeboswell.bandcamp.com). He does Twitter too (https://twitter.com/joeboswellmusic).

Lee Smolin is an American theoretical physicist, a faculty member at the Perimeter Institute for Theoretical Physics, an adjunct professor of Physics at the University of Waterloo and a member of the graduate faculty of the Philosophy department at the University of Toronto.

This post originally appeared here. Republished under creative commons license.

 

Review: Our Technological Identity Crisis by Colin Marchon

Film student Colin Marchon delivers a nice three-part series on transhumanist themes.

Our Technological Identity Crises [sic] is a three-part series on transhumanist themes that h+ Magazine readers will enjoy.

The series includes interviews with biotechnologist Raymond McCauley, founder of the bio-hackerspace BioCurious and Chair of Biotechnology at Singularity University; transhumanist and Executive Director of the IEET James Hughes; futurist writer Federico Pistono; and more. The series focuses on the important ideas of the self and human exceptionalism. What makes us human, and what exactly defines who we are?

Many of the central projects and interests of transhumanism call into question conventional notions of humanness, individuality, personhood, and the nature of the self. At the same time, these notions are central to the success of transhumanist projects. Cryonic preservation, for example, seeks to preserve the person that resides in a body, not just the biological functioning of a body stripped of whatever it is that makes the living person who they are. But what exactly is that?

Part 1, Identity, Engineered, focuses on an interview with McCauley and the possibilities of engineering our own biology. The film offers a variety of introductory considerations but doesn’t explore some of the most radical recent research into engineering humans and biological systems at large.

In Part 2, Marchon explores the virtual world and considers the merger of virtual and physical worlds into a mixed reality. In fact, this is the world we now inhabit. In this second part, Colin not only explores the meaning of this novel environment where physical and virtual objects interact and causally influence each other, he delves into the deeper personal and social implications of hosting our digital selves on sites owned by corporations and open to spying by governments or others. Notably, Yale’s Wendell Wallach wonders about the future implications of this generation’s relinquishment of privacy en masse. This second part also includes well-known NYU posthumanist philosopher Francesca Ferrando talking about virtual reality and Second Life. Unfortunately, the film doesn’t delve very deeply into the full implications of a mixed reality universe in which not only humans but seemingly ordinary objects will exist in complex mixed reality states straddling the real and virtual worlds; it isn’t only humans that co-exist in both.

In Part 3, Identity, Digital, Marchon wonders how these ideas will change our notions about humanity. Marchon visits the NY Posthumanist Research Group and delivers one of the most accessible presentations of man-machine symbiosis I’ve heard to date. The idea that man plus machine can outperform machine or man alone is not covered frequently enough in the transhumanist literature. Marchon describes the emergence of Garry Kasparov’s man AND machine chess tournaments, which reveal that the best chess player in the world is still a human, but a human enhanced with a machine. Marchon doesn’t explore the broader research literature here, but this result is borne out in a variety of other fields; for example, my research in the area of facial recognition demonstrated that man plus machine outperforms either man or machine alone.

This series asks a variety of challenging questions that transhumanists sometimes forget. Who are we and who will we become? What and who do we want to be?

Are men and machines different? How can we leverage differences to our mutual benefit?

What makes us special or do we just imagine our specialness? What will happen as we realize we aren’t so special after all? In the end the film asks us to consider what we imagine is the value of being human, and how this might change as we become transhuman and even posthuman beings.

Marchon delivers an erudite and accessible presentation that also has a couple of minor glitches in spelling and audio to remind you that this is a film student project. I’m looking forward to seeing what Colin does in the future.

Watch all three parts below.

 

Two Interpretations of the Extended Mind Hypothesis

I’m trying to wrap my head around the extended mind hypothesis (EMH). I’m doing so because I’m interested in its implications for the debate about enhancement and technology. If the mind extends into the environment outside the brain/bone barrier, then we are arguably enhancing our minds all the time by developing new technologies, be they books and abacuses or smartphones and wearable tech. Consequently, we should have no serious principled objection to technologies that try to enhance directly inside the brain/bone barrier.

Or so some have argued. I explored their arguments in a previous post. In this post, I want to do something a little different. I want to consider how exactly one should interpret the claim that the human mind can extend into the external environment. To do this, I’m calling upon the help of Katalin Farkas, who has recently written an excellent little article entitled “Two Versions of the Extended Mind”. In it, she argues that there are two interpretations of the EMH, both extant in the current literature. The first makes a wholly plausible and, according to Farkas, uncontentious claim that can be endorsed by pretty much everyone. The second is rather more contentious and, arguably, more significant.

In the remainder of this post, I will go through both interpretations.

 

 

1. Some Key Concepts
Before I get to those interpretations, I need to run through a few key conceptual distinctions. First, I need to distinguish between different types of mental event. We all know what the mind is: it is that “thing” that thinks, feels, believes, perceives, dreams, and intends. Mental events are the events that happen within the mind, i.e. the thinking, feeling, believing, perceiving and so on. By describing it in this way, I do not mean to rule out the possibility that the mind is itself an extended set of events (and so not a “thing” per se). The mind could well be an extended set of events and still consist of sub-events like believing, perceiving, dreaming, intending and so forth.

Anyway, although there are many different mental events, they seem to fall into two broad categories:

Events in the Stream of Consciousness: As the name suggests, these are the mental events that form part of the subject’s occurrent conscious life. They include things like the taste of chocolate, the feeling of warmth, the perception of red and so on.

Standing Events: These are mental events that need not form part of the subject’s occurrent conscious life. The classic examples are beliefs and desires, which are generally taken to characterise a subject even when they are not directly conscious of them (they are taken to be dispositions). For example, I can be said to “desire a meaningful relationship with my children” even when I am asleep and not consciously representing that desire. (Farkas refers to these as standing “states” not “events”; her terminology may be more correct)

This distinction turns out to be important when it comes to understanding what is being “extended” when we talk about the extension of the mind. That is to say, it is important when it comes to understanding the content of the mental extension. It is, however, less important than the next distinction when it comes to understanding the two competing interpretations of the EMH.

That next distinction arises from the functionalist theory of mind. According to that theory, whether something counts as a mental event or not depends on the role that it plays in fulfilling some function. Thus, for example, something counts as a “belief” not because it is made of a particular substance (res extensa or res cogitans) but because it has a particular role in an overarching mental mechanism. It can be said to count as a belief because it is capable of producing certain conscious states, action plans and decisions.

Functionalists distinguish between two things that are necessary for mental events/states:

The Mental Realiser: This is the object or mechanism that realises (i.e. constitutes) the mental event. In other words, it is the physical or mental stuff that the event is made out of.

The Mental Role: This is the position (or locus of causal inputs and effects) that something occupies in the mental system.

This distinction is important when it comes to understanding the method or nature of mental extension. In fact, a very simple way to understand Farkas’s main contention is that when it comes to extending the mind, there is a significant difference between realiser-extension and role-extension. The former is trivial, and can arguably be embraced by non-functionalists. The latter is more significant. Let’s try to see why.

2. The Trivial Interpretation: Extending Mental Realisers
As mentioned in the introduction, the gist of the extended mind hypothesis is that the mind can extend out beyond the brain/bone barrier. There may be sound evolutionary reasons for our minds to be limited to that space, but according to proponents of the EMH there is simply no good, in-principle reason to suppose that the mind has to remain confined to the three-and-a-half-pound lump of squidgy biomass that we call the “brain”.

The easiest way to interpret that claim is to interpret it as a claim about mental realisers, i.e. as a claim that mental realisers can extend beyond the skull:

Extended Mind Hypothesis (1): The physical basis for mental events can extend beyond the boundaries of our organic bodies

Farkas appeals to an example used by Andy Clark (one of the original proponents of the extended mind hypothesis) to illustrate this version:

Diva’s Case: There is a documented case (from the University of California’s Institute for Nonlinear Science) of a California spiny lobster, one of whose neurons was deliberately damaged and replaced by a silicon circuit that restored the original functionality: in this case, the control of rhythmic chewing. (…) now imagine a case in which a person (call her Diva) suffers minor brain damage and loses the ability to perform a simple task of arithmetic division using only her neural resources. An external silicon circuit is added that restores the previous functionality. Diva can now divide just as before, only some small part of the work is distributed across the brain and the silicon circuit: a genuinely mental process (division) is supported by a hybrid bio-technological system.

(Clark, 2009 – quoted from Farkas)

Here, the hybrid bio-technological system constitutes an extended mental realiser for the performance of mental arithmetic. The word “constitutes” is important. The claim is not merely that the extended system causally precedes the mental event; it is that the extended system either is or grounds the mental event. What’s more, this claim applies to standing states just as much as it applies to events in the stream of consciousness. Indeed, in Diva’s case it is a mental event in the stream of consciousness that is getting its realiser extended beyond the brain/bone barrier.

Farkas argues that this version of the EMH is fairly trivial. Indeed, she goes so far as to say that non-functionalists and some dualists may be able to embrace it. All it is saying is that if some physical realiser is necessary for mental events (and many theories of mind accept that a physical realiser is necessary, even if it is not sufficient) then there is no reason to think that the realiser has to be made up of neurons, or glia or whatnot. Not unless you think that neurons have some magical mentality-exuding stuff attached to them.

3. The Significant Interpretation: Extending Mental Roles
The more significant interpretation of the EMH claims that more things can count as performing a mental role, even when they seem remarkably distinct from what traditionally seems to perform that role. As Farkas sees it, this is largely a claim about what counts as a standing mental state and, more precisely, as a claim about the possibility of extending the set of things that can count as a standing mental state.

Extended Mind Hypothesis (2): “the typical role of standing states can be extended to include states that produce conscious manifestations in a somewhat different way than normal beliefs and desires do.”

It will take a little longer to understand this version, but we can start by looking at the most famous thought experiment in the debate about the extended mind. This is the Inga vs Otto thought experiment from Chalmers and Clark’s original 1998 paper:

Inga and Otto: Imagine there is a man named Otto, who suffers from some memory impairment. At all times, Otto carries with him a notebook. This notebook contains all the information Otto needs to remember on any given day. Suppose one day he wants to go to an exhibition at the Museum of Modern Art in New York but he can’t remember the address. Fortunately, he can simply look up the address in his notebook. This he duly does, sees that the address is on 53rd Street and attends the exhibition. Now compare Otto to Inga. She also wants to go to the exhibition, but has no memory problems and is able to recall the location using the traditional, brain-based recollection system.

The essence of Chalmers and Clark’s original paper was that there is no significant difference between what happens in Otto’s case and what happens in Inga’s case. They can both be said to “believe” that the Museum of Modern Art is on 53rd Street, prior to “looking up” the information. It just so happens that in Otto’s case the recollection system extends beyond his brain.

Farkas argues that this is a very different type of extension when compared with that of realiser-extension. In this instance, it is not simply that the notebook replaces the brain with a functionally equivalent bio-technological hybrid; it is that the notebook mediates the recollection process in a very different way. Consequently, to say that Otto’s mind extends into the notebook is to say that we should be more liberal in our understanding of what kinds of system can count as fulfilling a mental role.

To see this, it helps to consider some of the important differences between the recollections of Otto and Inga. First, note how they are phenomenologically distinct. Inga gains access to the relevant information through direct mental recall, not mediated by any other sensory process; Otto needs to literally see the information written down in his notebook before he can be said to “recall” it. Second, note how Inga’s “belief” is more automatically integrated with the rest of her mental system than Otto’s. If Inga learns that she got the wrong address, this will affect a whole suite of other beliefs and desires she might have had. In Otto’s case, learning that he has the wrong address will simply involve deleting the entry from his notebook and correcting it. This will not immediately affect other entries in the notebook that relied on the same information.

One could point to other differences too, but these suffice for now. Some people would argue that these differences should lead us to re-evaluate the Inga-Otto thought experiment. In particular, they should lead us to say that Otto does not really believe that the Museum of Modern Art is on 53rd Street, whereas Inga does. The problem is that proponents of the EMH can come back and highlight how focusing on phenomenological and integration-based differences between Otto and Inga can affect how we interpret other cases. For example, the phenomenology of recollection varies greatly from case to case. I remember all of Macbeth's "full of sound and fury" monologue from Act 5, Scene 5. But in order to remember the fifth line ("To the last syllable of recorded time") I actually need to speak the first four lines out loud. Does that sensory intermediation deny me the status of an ordinary mental recollection? Likewise, with respect to automatic integration, it is possible that Otto could have a "smart" organiser that automatically updates other entries with the new information. This is increasingly a feature of smart devices with cloud-based syncing.
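To make the integration point concrete, here is a toy sketch in Python (the class names and the dependency scheme are entirely hypothetical, not a description of any real organiser app) contrasting Otto-style independent entries with a "smart" organiser in which correcting one entry automatically re-derives the entries that depend on it:

```python
# Toy illustration only: hypothetical classes, not a real organiser app.

class Notebook:
    """Otto-style store: entries are independent strings."""
    def __init__(self):
        self.entries = {}

    def write(self, key, value):
        self.entries[key] = value

    def correct(self, key, value):
        # Correcting one entry leaves every other entry untouched,
        # even if those entries relied on the old information.
        self.entries[key] = value


class SmartOrganiser(Notebook):
    """Adds Inga-like integration: corrections ripple to dependent entries."""
    def __init__(self):
        super().__init__()
        self.depends_on = {}   # entry -> (template, keys it was derived from)

    def derive(self, key, template, *sources):
        self.depends_on[key] = (template, sources)
        self.entries[key] = template.format(*(self.entries[s] for s in sources))

    def correct(self, key, value):
        super().correct(key, value)
        # Re-derive every entry that depends (directly) on the corrected one.
        for dep, (template, sources) in self.depends_on.items():
            if key in sources:
                self.entries[dep] = template.format(*(self.entries[s] for s in sources))


if __name__ == "__main__":
    org = SmartOrganiser()
    org.write("moma_address", "53rd Street")
    org.derive("friday_plan", "Meet at MoMA, {}", "moma_address")
    org.correct("moma_address", "11 W 53rd Street")
    print(org.entries["friday_plan"])   # -> "Meet at MoMA, 11 W 53rd Street"
```

In the first class, a correction is local, like crossing out a line in Otto's notebook; in the second, it propagates, which is closer to how a correction ripples through Inga's web of beliefs.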

And this is where the EMH becomes significant. By responding to critics of their position in such a manner, proponents of the EMH are arguing that we should be much more ecumenical when it comes to determining what can count as a standing mental state. That's what EMH (2) is claiming. Furthermore, this time round the claim is that the extension is limited to standing states and does not also encompass events in the stream of consciousness. There is a good reason for this. As Farkas sees it, EMH (2) isn't really about physical expansion outside the brain/bone barrier in the way that EMH (1) is. For all that proponents of EMH (2) care, the notebook-lookup system could be located entirely within the confines of Otto's skull. That wouldn't make a difference to their claim. What matters for them is that we are less restrictive when it comes to determining what counts as providing the basis for a standing mental state.

4. Conclusion
Where does that leave us? Well, it leaves us with two versions of EMH. The first is relatively straightforward and simply claims that mental-realisers need not be confined to the brain. The second is more contentious and claims that we should have a more expansive conception of what can count as a standing mental state.

To really understand the significance of the second version it helps if you consider Chalmers and Clark's original criteria for assessing whether something could count as a standing state. They suggested that anything that was readily accessible and automatically endorsed could form part of the extended mind loop that constituted a belief or desire. This could include something like the information in Otto's notebook, the information stored on the Web, and also, more controversially, the information stored in someone else's head. Imagine a really close couple who are in the habit of relying upon the mental content stored in each other's heads. In many ways, they would be just like Otto and his notebook.

But if it is possible for all such closely connected information-exchange partnerships to form part of an extended mind, we will find ourselves in some pretty tricky ethical and social waters. What does it mean for privacy? Individual autonomy? Responsibility and blame? Praise and reward? All of these concepts would need to be revised if we fully embraced the implications of EMH (2). That is some serious food for thought.

###

John Danaher is an academic with interests in the philosophy of technology, religion, ethics and law. John holds a PhD specialising in the philosophy of criminal law (specifically, criminal responsibility and game theory). He was formerly a lecturer in law at Keele University, interested in technology, ethics, philosophy and law. He is currently a lecturer at the National University of Ireland, Galway (starting July 2014).

He blogs at http://philosophicaldisquisitions.blogspot.com and can be found here: https://plus.google.com/112656369144630104923/posts

This article previously appeared here. Republished under a Creative Commons license.

The post Two Interpretations of the Extended Mind Hypothesis appeared first on h+ Media.

Will Super-intelligences Experience Philosophical Distress? http://hplusmagazine.com/2015/02/23/will-super-intelligences-experience-philosophical-distress/ http://hplusmagazine.com/2015/02/23/will-super-intelligences-experience-philosophical-distress/#comments Mon, 23 Feb 2015 17:37:33 +0000 http://hplusmagazine.com/?p=26709 Will super-intelligences be troubled by philosophical conundrums?


Will super-intelligences be troubled by philosophical conundrums?[1]
Consider classic philosophical questions such as: 1) What is real? 2) What is valuable? 3) Are we free? We currently don’t know the answer to such questions. We might not think much about them, or we may accept common answers—this world is real; happiness is valuable; we are free.
But our superintelligent descendants may not be satisfied with these answers, and they may possess the intelligence to find out the real ones. Now suppose they discover that they live in a simulation, or in a simulation of a simulation. Suppose they find out that happiness is unsatisfactory. Suppose they realize that free will is an illusion. Perhaps they won't like such answers.

So superintelligence may be as much of a curse as a blessing. For example, if we learn to run ancestor simulations, we may increase worries that we are already living in one. We might program AIs to pursue happiness, and find out that happiness isn't worthwhile. Or programming AIs may increase our concern that we are ourselves programmed. So superintelligence might work against us—our post-human descendants may be more troubled by philosophical questions than we are.

I suppose this is all possible, but I don't find myself too concerned. Ignorance may be bliss, but I don't think so. Even if we do discover that reality, value, freedom and other philosophical issues present intractable problems, I would rather know the truth than be ignorant. Here's why.

We can remain in our current philosophically ignorant state with the mix of bliss and dissatisfaction it provides, or we can become more intelligent. I'll take my chances with becoming more intelligent because I don't want to be ignorant forever. I don't want to be human; I want to be post-human. I find my inspiration in Tennyson's words about that great sojourner Ulysses:

for my purpose holds
To sail beyond the sunset, and the baths
Of all the western stars, until I die.
It may be that the gulfs will wash us down:
It may be we shall touch the Happy Isles …

I don’t know if we will make a better reality, but I want to try. Let us move toward the future with hope that the journey on which we are about to embark will be greater than the one already completed. With Ulysses let us continue “To strive, to seek, to find, and not to yield.”

________________________________________________________________________

1. I would like to thank my former student at the University of Texas, Mr. Kip Werking, for bringing my attention to these issues.

###

John G. Messerly, Ph.D., taught for many years in both the philosophy and computer science departments at the University of Texas at Austin. His most recent book is The Meaning of Life: Religious, Philosophical, Scientific, and Transhumanist Perspectives. He blogs daily on issues of futurism and the meaning of life at reasonandmeaning.com.

The post Will Super-intelligences Experience Philosophical Distress? appeared first on h+ Media.

Onion.City: Access the Deep Web From A Browser http://hplusmagazine.com/2015/02/23/onion-city-access-the-deep-web-from-a-browser/ http://hplusmagazine.com/2015/02/23/onion-city-access-the-deep-web-from-a-browser/#comments Mon, 23 Feb 2015 17:28:22 +0000 http://hplusmagazine.com/?p=26705 Onion.City the new search engine specialized for the black markets in the Deep Web simply accessible from a common browser.



Onion.City is a new search engine specialized in the black markets of the Deep Web, and it is accessible from a common browser.

We have described the non-indexed portion of the web, known as the Deep Web, several times: an impressive amount of content that the majority of netizens totally ignore.

The Deep Web is also known for the anonymity it offers; for this reason, the actors that use it daily include groups of cyber criminals.

Law enforcement and intelligence agencies are spending great effort trying to de-anonymize users on the Deep Web and to index its content. Recently the Defense Advanced Research Projects Agency (DARPA) publicly presented the Memex Project, a new set of search tools that will also improve research into the Deep Web.

Recently a new search engine dubbed Onion.City appeared on the Surface Web; it's a Google-like tool that allows users to easily search for content on the Deep Web.

Onion.City is a new search engine for online black markets that allows users to easily find and buy illegal goods in the underground.


It seems very easy to buy drugs, stolen credit cards and weapons using only common browsers, including Chrome, Internet Explorer or Firefox, without installing or browsing via the Tor Browser.

The Onion.City Deep Web search engine was presented by Virgil Griffith on the Tor-talk mailing list; the tool is able to search content from nearly 650,000 pages on the Tor network and display the results in a normal browser.

The Onion.City search engine is based on the Tor2web proxy: the author exposes all the tor2web onion pages in his sitemap, so Google is able to crawl them and index the content.
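Tor2web-style proxies generally work by exposing each hidden service at a subdomain of the proxy, so that an ordinary browser (and Google's crawler, via the sitemap) can reach the page without Tor. The sketch below illustrates that mapping in Python; the onion address is made up, and the exact rewriting rules used by Onion.City itself may differ, so treat this as a general illustration of the tor2web pattern rather than a description of its code.

```python
from urllib.parse import urlsplit, urlunsplit

# Assumption: hidden services are exposed as <address>.onion.city,
# following the usual tor2web convention; Onion.City may differ in detail.
PROXY_DOMAIN = "onion.city"

def to_proxy_url(onion_url: str) -> str:
    """Rewrite a .onion URL into its tor2web-proxy equivalent."""
    parts = urlsplit(onion_url)
    host = parts.hostname or ""
    if not host.endswith(".onion"):
        raise ValueError("not a hidden-service URL")
    base = host[: -len(".onion")]            # strip the .onion suffix
    proxied_host = f"{base}.{PROXY_DOMAIN}"  # e.g. abcdef234567.onion.city
    return urlunsplit(("http", proxied_host, parts.path, parts.query, parts.fragment))

# Made-up onion address, for illustration only:
print(to_proxy_url("http://abcdefghij234567.onion/listing/42"))
# -> http://abcdefghij234567.onion.city/listing/42
```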

“Everything available on the Google Custom Search is also available on a regular google search with the qualifier: “site:onion.city”” said Griffith.

The current Onion.City design, as explained by the author, is suboptimal because clients connect directly to Google.

“Alas no. I’m aware this is suboptimal. I see GOOG search engine as a temporary-ladder just to get the ball rolling. I am open to using any other index. For what it’s worth I’m very pleased with GOOG’s performance—right now it’s searching an index of 650k onion pages and the number grows every day.”

Another issue debated about Onion.City is that the search engine uses plain HTTP only, which means it lacks traffic encryption and exposes users to eavesdropping.

“It’s especially crazy if you allow your clients to submit HTTP forms over onion.city, since it basically means that onion.city gets to see *all*the usernames and passwords. I bet there are many people out there who don’t really get the tor2web threat model, and it’s nasty to read their passwords.” said one user in the discussion. 

Griffith explained that Onion.City doesn't maintain logs of users' traffic and that he understands users' concerns; unfortunately, he doesn't have sufficient funds at the moment to implement HTTPS.

Onion.city isn’t the first ever Deep Web search engine, last year appeared on the Surface web Grams, the first search engine specialized in black markets.

I close with a curiosity: looking at the Frequently Asked Questions (FAQs) on the Onion.City website, the author explains that users can report content that may be illegal.

Enjoy Onion.City …

###

Article originally appeared here. Republished with permission of the author.

The post Onion.City: Access the Deep Web From A Browser appeared first on h+ Media.

http://hplusmagazine.com/2015/02/23/onion-city-access-the-deep-web-from-a-browser/feed/ 1