
Teaching “Humanity+” to Students

I often get asked by undergraduates, “How can someone be posthuman?”  Whenever I hear this question, I give the student an invisible gold star.  Leaving aside the linguistic squabble over defining the varieties of humanism (anti, neo, trans, post), the primary issue concerns what this concept of representing “the human” entails.

I usually respond with my own question: “What, exactly, is humanity+?”  The college course I teach on representing the human through popular and literary science fiction texts centers on this sort of reimagining of humanity+.  It typically ends with even more questions, such as, “Is it better to view the posthuman as science fantasy?” or “Won’t we always just be human?”

The course, Technoscience and The Human, focuses on a few key texts featuring complex representations of individuals fully enmeshed in technological worlds.  Before jumping into Blade Runner, Neuromancer, Battlestar Galactica, Brave New World, Ghost in the Shell, etc., a quick historiography of modern humanism helps students understand how new the concept is.  “Humanism” as a category emerged in 19th-century Europe.  Thinkers of that era imagined what “the human” was to the ancient Greeks through readings of the earlier Renaissance umanisti (teachers/scholars) in their encounters with the past.  Burckhardt’s liberal, humanist subject, for example, helped create the idea of the modern individual at the very moment the Industrial Revolution was drastically changing society.  The human and technology have been wed ever since.

However, from its inception as a construct, humanism was suspect.  Nietzsche’s project of irrationality gave us the first fully articulated anti-humanism in his challenge to bourgeois values, while Freud continued by decentering the self as the seat of rationality.  We can skip through the 20th century and see a variety of challenges to standard humanism, from the structuralists’ removal of the human for the love of system, to the evangelical theists’ insistence that humanism is an evil that removes God from the center of society.

With the rise of the concept of transgressing, transcending or transforming the human with technology, we see a shift from conversations centered in the humanities to those coming from the life and social sciences.  Contemporary philosophers of technology like Don Ihde and Andrew Pickering have provided metaphors of lifeworlds and mangles to articulate the complexities of how science, technology and society intersect.  Thanks to the speculations of academics, futurists, technologists, science fiction writers, and fantasists, the notion of humanity+ allows us to imagine encountering the human changed by technology.

On that hopeful note, a jump into the texts provides a look at what contemporary imaginative thinkers see as important in the intersection of technology and the human.  A good place to begin is with two of the 20th century’s most important dystopian satires: George Orwell’s Nineteen Eighty-Four and Aldous Huxley’s Brave New World.

As Francis Fukuyama notes in Our Posthuman Future, these two novels help us gauge our responses to two imagined and opposite frightening effects of technology: the threat of total war or total peace.  Of course, neither of these truly happens in either novel.  But Orwell’s text represents what happens when a near-totalitarian society reduces the liberal human subject to an object with little rationality or human love.  The Party destroys Winston’s ability to insist that 2 + 2 = 4, forcing him to accept that it equals 5, and makes him betray his love, Julia.  In the end, it conquers his mind and heart, and the novel closes with him loving Big Brother.

This dehumanization theme also works well as an introduction to the human representations in Brave New World, because in that novel the challenge to what we value as human beings differs from the one in Orwell’s.  In Huxley’s scientifically organized society of the World State, old age, disease, senility, crime, depression, hunger, etc., have been cured, but the cost is high.

With conditioning-induced happiness, fueled by constant drug use (soma), the people of the World State have lost the ability to feel intensely.  They have found happiness but lost passion.  This critique of American culture tells us more about Huxley’s mid-century fear of the influence of radio, TV and bubblegum than it does about the real threats of biotechnology, but the representation of human beings as happy-but-hollow is the other side of the Orwellian nightmare.

Dehumanization, whether through hardship or enhancement, runs through many of the texts at the human/technology intersection.  The duality ranges from key scenes in the latest Battlestar Galactica, in which humans must contend with Cylons who are their equals or betters, to the representations of the Borg in Star Trek: The Next Generation, Voyager, and the film First Contact.  Even though the antihuman Borg are slightly humanized as the shows progress, they represent an almost clichéd idea of the horrors of the machine, in which the individual is subsumed within a collective of technology.  In William Gibson’s Neuromancer, Case is already dehumanized, lost in a world of crime and mundane flesh, when he encounters the two halves of a super-AI (Wintermute and Neuromancer).  And in Anthony Burgess’s A Clockwork Orange, the recidivist Alex loses his incorrigible humanity when the tools of science work to cure him.  The list goes on, each text providing arresting images of how technology affects what it means to be a human being.

In one of the key scenes of the course, the synthetic replicant Roy Batty in Blade Runner confronts his maker, Dr. Eldon Tyrell.  Before giving Tyrell a kiss of death and crushing his face with Luciferian fury, Batty asks, “Can the maker repair what he makes?”  He wants more life than the few years afforded him.  Here the theme of dehumanization has been transferred to humanity’s technologies: the enhancements created to help humanity with off-world labor have come back seeking justice.  When Batty learns there is no way to extend his life, he exacts his vengeance on the frail old man.  This violence reflects a very human rage, one that we identify with and understand later when Batty sits dying, reminiscing about seeing “attack ships on fire off the shoulder of Orion.”

What this course reinforces is that many representations of technology and the human demonstrate fears and hopes about our own humanity, or lack thereof.  True posthuman representations, such as impersonal collectivities or super-intelligences, often tell us what we value in ourselves, and what we despise.  In this way, these texts act as current correctives as well as future warnings.  Humanity+ then becomes a concept for this clarification and intensification of the very parts of us that we consider human, even as technology changes how we experience the world.  In the end, humanity+ is as much a constructed concept as modern humanism.  And the construction is taking place today.


4 Comments

  1. “You should not decide that your utopia is the utopia for everyone else.”

    Unabomber, DAS NETZ

    You should not force things on, or decide for, someone else.

    Say I construct AI and robots and take everyone’s jobs in the world (and the universe); then I “kill” everyone, because the “job/salary” doesn’t exist anymore, and I don’t want people to get “my capital”:

    “I want them to die.”

    WELCOME TO YOUR BRAVE NEW WORLD,

    WHERE HUMAN BEINGS AND “UNSLAVED BEINGS” ARE NOTHING, and should die for the “system.”

    Take care of yourself.

  2. “I’ve seen things you people wouldn’t believe. Attack ships on fire off the shoulder of Orion. I watched C-beams glitter in the dark near the Tannhauser gate. All those moments will be lost in time… like tears in rain… Time to die.”

    I come back to this image often in my mind. I think you’re right… it very much helps clarify and intensify the parts of us that we consider human. It’s ironic that it’s in the moment of Batty’s death that he becomes most identifiable as human.

  3. Robots and AIs are being developed massively and in parallel throughout the world. (The very purpose of H+ is to make them as popular as possible – not in some closed military facilities, but in every home. The risk is not in the technologies per se, but in people not understanding how to use them.) No one can just “take all the jobs”. In the long term everyone can own such a robot, just as we all now have mobile phones. So relax, and throw your Luddite stuff back into the 18th century :)
