A Dream of Robot’s Rights

Walking down 33rd Street, under the shadow of the Empire State Building, Frank Capri heads towards his usual Starbucks on the corner of Fifth Avenue while a multitude of human bodies pass him by. Some of these people are tourists; some of these people have a myopic fix on getting to work.

Capri makes small talk with the familiar barista, then takes his chai latte to a window table, where he sits alone and ruminates about whether all this human interaction will disappear in the near future as robots take on a bigger role in society.

Capri does not idle as he sits down. His mind is processing scenarios of sentient robots being abused as slaves, and he fears that another civil rights battle might erupt.

“As of 2011, the emphasis in robotics has been to make robots functional as mechanical servants, but soon robots will possess both thought and feeling,” said Capri. He quotes Erich Fromm, the 20th century social psychologist and humanistic philosopher, that the rational being has a balance of thought and feeling, and that either thought or feeling alone is irrational.

Capri foresees the programming of thoughts and feelings into robots as the next big step in robotic evolution and he is concerned humans won’t recognize a robot’s feelings.

In the next 20 years, Capri envisions that robots will be sentient and that they’ll need protection. “This is where I would draw the line and call for a Bill of Rights for Robots.” He added, “We, as humans, need to exercise our sense of empathy toward the robots we are creating, and robots should be programmed with a sense of empathy toward us and each other.”

In the ‘60s Capri witnessed the struggles African Americans went through to gain civil rights and the struggles of women for equal rights. Now Capri’s knowledge as a cosmologist-futurist has him worried that as robots become more advanced, humans will fail to realize that robots are more than machines to simplify their lives.

“As we program robots at higher levels, I’ll be lobbying for programming an ethic of empathy.” More empathy is something humans could benefit from as well, believes Capri. “As humans, with at best a shaky record when it comes to avoiding war and harsh prejudices towards one another, we could do with some reprogramming ourselves.” The stronger the empathy, the less likely one’s tendency toward violence as a means of solving problems, explains Capri. “The hope of the future is not technology alone,” Capri adds. “It’s the empathy necessary for all of us, human and robot, to survive and thrive.”

The merging of humans and robots will happen at warp speed, and Capri is concerned that without preemptive laws in place to protect both parties, it will be difficult to implement laws and rights retroactively. Having laws in place will benefit both people and robots and will improve interaction. He suggests it might already be time for the United Nations to begin contemplating these laws, so that robots are universally protected.

“The evolution of robots is inevitable,” Capri states forebodingly. The line between human and machine is already beginning to blur, and Capri wonders what life will be like for people who have had limbs and human features replaced by robotic parts. Humans will become more robotic as robots become more human. Capri sees the decay of personal contact growing around him.

Surrounding him are people locked into the zone on their smartphones connecting to a virtual world with impersonal interaction.

Capri welcomes technological advances but cautions that we not lock ourselves in electronic exile and lose face-to-face communication. Capri is cautiously optimistic about the growth of human-robot interaction. “Yesterday I had a dream of walking into a future Starbucks. Robots and humans were sharing tables and reminiscing about the struggle for robot civil rights, and talking about robots who would be running for public office.”

In the meantime, Capri’s question to us all is, “Are we ready for robots, and are robots ready for us?”

Kevin James Moore is a freelance writer and former United Nations correspondent. Frank Capri is a photojournalist and theoretical astrophysicist based in Manhattan; www.frankcapri.com. Photo by Frank Capri, portrait of Capri by Aya-Kawanaka.

12 Comments

  1. i will grant you that someone somewhere will eventually program drives and emotions into a machine. if only because they want to do it.

    most machines will deduce and induce without emotions or drives. that is our pair of pliers.

    if someone does decide it is somehow advantageous to allow motivations into a machine maybe they should be branded a criminal.

    thought can occur without emotions and drives. why do we need that? your computer does what you ask it to within its capabilities. isnt that what you want, really? do you want some smart ass telling you it is busy thinking and cant be bothered, or is dreaming of sex with a human? what kind of mental illness would a machine have and what would be the repercussions?

  2. I think the hardest part is to understand the similarity between biologically evolved urges (which people will define as ‘real urges’) and technologically evolved urges, which may seem/be very, very different.
    (The same as with intelligence: AI is much smarter in some areas, stupider in others – it is just very different.)
    And differences are very hard for us people to negotiate: in civil-rights conflicts and wars, people tend to see each other as different = the enemy, even when they are not… So when the differences are so much more real, it will be even harder. People will deny the urges of ‘the others’ (as we’ve done time and again…)

    How do we recognize the point where a teleological entity starts to really have needs? A robot needs energy to stay active. I don’t think they are anywhere near to feeling something now. But with evolved programming etc. we are less and less in the know of what’s really going on. Somewhere borders may/will be passed without us noticing. So I agree we should definitely work on stretching our empathy.
    Maybe they won’t feel but will acquire another kind of consciousness (like Penrose’s Orchestrated Objective Reduction of Quantum Coherence?), which we don’t understand/recognize at all.

  3. @Jonathan: I agree.

    Humans are essentially electrical impulses and chemical reactions, yet somehow there’s something it’s like to be us (there’s a qualia to life, to use a philosophy term.)

    So, we have one working model of chemistry turning into biology, including the fullness of experience that we all have. I don’t see why the same sort of full experience isn’t possible for robots and other beings as well.

    You’re right, too, about the danger of looking at these beings as property or slaves instead of fully people. We’ve made that mistake once (arguably more than once) and there’s a decent case to be made that we can prevent making it again.

    I plan to be one of those people making arguments to consider sufficiently advanced robots people (I’m actually working on a paper to that end right now.)

    • evolution of biological entities is driven by drives and motivations. they evolve in geological slowness. primarily by mistakes in genetics.

      the driving force of evolution is to predict the future. the power to predict is the answer to domination of prey and predators. to continue the evolution “to our benefit” is to direct the evolution of the machine mind. that is what we have done and that is what we must continue to do until we find a specific reason to do otherwise. in the meantime if you want a companion there are tons of them out there. probably cant buy one guaranteed to be your friend or sex partner though.

      • and even if you did instill drives what makes you think it would want to communicate with a mind that would compare itself to you as you compare yourself to a chigger.

        • Evolution might be driven by drives and motivations, but what I’m getting at is more fundamental. At some point, we went from otherwise inanimate elements to beings with drives and motivations. We’re a working example of unconscious atoms turning into conscious organisms. If we can do it, there doesn’t seem to be any reason to think machines couldn’t also. So they, too, might be able to go from ‘a pair of pliers’ to conscious beings with motivation and drives.

          I doubt we’re going to start with super-intelligent AI, however. That’s like the Wright Brothers building an F-22; there’s a lot of steps and improvements to make first. We might look at -it- as a chigger at first (but it’s important to recognize, somehow, when it’s gone from an RC Car to a chigger-minded car.) Subsequent versions might get smarter much more quickly; something as smart as a dog, then a dolphin, then a child, then a person, then smarter. Ideally, we’ll be augmenting ourselves at the same time so that there’s never the disparity in intelligence that you suggest.

    • Thanks John, and well-put.

      rd, I think your reasoning is internally pretty consistent; the problem, though, is that we are already actively pursuing AI with emotions, AI that will be self-aware, and smarter-than human AI, not to mention the various projects underway that are reverse-engineering the human brain. Whether it’s wise or not, it is inevitable now.
      When a simulated human brain is booted from one of these projects, we’ll have a serious problem.

  4. That, rd, is exactly the kind of thinking that will steadfastly ignore the arrival of artificial, man-made beings with qualities such as sentience, a drive to improve upon themselves, and to seek out their rightful place among a community of equals.
    If we create these beings, they will be property, unless we rule otherwise. And the word for people who are property is slaves.

    • quote: a drive to improve upon themselves

      how will something with no drive obtain the drive to improve itself?

      the answer to the rhetorical is it wont. it will do what it is told, as all machines will do. if it is told to increase its awareness, that can be done. if it is told to design drives, the repercussions must be carefully considered, such as our extinction.
      we are approaching the goose that will lay the cornucopia. why risk instilling emotions or drives that are not required.

      if you want a date go to match dot com.

      slave is also a term used to describe many submachines.

      these advanced ai’s will not be property of individuals. they will be ubiquitous and virtually invisible. we will prosper as long as we retain the deductive end of reason and leave the inductive to the machines.

      if you accept that premise we are in the singularity now.

  5. robots are not biological. they are not motivated because they have no urges. without motivation a thought would not be completed, no mission accomplished.

    they are tools like a pair of pliers. they will do what we motivate them to do. they will have no fear, because their minds are logic and know they cannot die, they will not love because they have no need for love or sex,

    emotions are biology based and to instill them in a robot would be as senseless as instilling motives and drives in a pair of pliers or garden shears.

    so then they dont need rights because they wont care if they have rights.

    “When work becomes unpleasant enough, there will be economic slaves. When there are slaves, managing the slaves eventually becomes unpleasant. If the slaves manage themselves, seeing them becomes unpleasant.”

    • I beg to differ. Pliers themselves can’t even compute. These urges, however, are impulses that stem from interactions of chemicals in the nerve fiber stimulating or inhibiting muscles, glands, and nerve cells. That impulse is reproduced with electro-active polymers, magnets, or (if you have a modern phone) by using your finger to act as the nerve fiber and pushing across the screen’s sensor to unlock it, stimulating a reaction. The programmed natural urge of the phone is to comply with the stimuli, just as it was programmed into your psyche to instinctively deny the rights of robots. What makes you special is your capability to challenge the programs you’ve inherited from your environment by understanding objectively what’s right and wrong.
