Sophia and SingularityNET: Q&A

By Ben Goertzel

These last couple months have been fascinating (and exhausting) for me. I’ve largely taken a break from my AI research work and I’ve spent most of my time organizing a new AI-meets-blockchain project — SingularityNET — and traveling around promoting it. At the same time, the Sophia robot I’ve been working on at Hanson Robotics has been made a citizen of Saudi Arabia. As they say, when it rains it pours.

SingularityNET is a decentralized open market for AIs — intended both to help foster advanced Artificial General Intelligence, and to provide a better way for developers of AI code to share and monetize their work, and for users of AI to find a diversity of services. It also has a humanitarian motive — to make sure AI is developed in a way that benefits everyone on the planet, thus maximizing compassion now and also increasing the odds of a positive Singularity down the road.

Sophia, the premier humanoid robot creation from Hanson Robotics — the Hong Kong character robot company of which I’m Chief Scientist — has been serving in the role of Chief Humanoid of SingularityNET. As SingularityNET develops, we will use it to increase the power of her mind. As Sophia evolves, develops and learns, she will be a prime test case and demonstration of what SingularityNET can do.

For basic information on SingularityNET, see the project website, SingularityNET.io. For in-depth information, see the SingularityNET whitepaper. For some more personal and technical thoughts on various aspects of the project, see these Medium posts.

At the various conferences where I’ve been presenting, I keep getting asked the same questions over and over again, about Sophia and SingularityNET and so forth — so I figured I’d gather those questions in one place along with some reasonable answers. So here they are….

If you want to ask questions of the broader SingularityNET community, or randomly catch me in a real-time chat, please join the SingularityNET Telegram group at https://t.me/singularitynet.

The Q&A starts out with a lot of Sophia, and then gets to SingularityNET, and then to the combination. There’s a lot of information here but that’s how it goes — these are complex matters!

How did you feel when you found out that Saudi Arabia was going to make the Sophia robot a legal citizen?

Well, I’ve been collaborating on creating Sophia since her inception in 2015 — and working with David Hanson on his robots since a few years before that. In a way Sophia is like a robot child to those of us who’ve been working on her … a robot child brought up by David Hanson and the rest of the Hanson Robotics team, which I’m proud to be a part of….

When I heard that news I felt both excited and a bit surreal!

Did you expect you’d see a robot become a citizen during your lifetime?

I’ve been reading about robot citizens since first discovering science fiction as a toddler, circa 1970 or so. It always seemed obvious to me that, once robots were as smart as people, they would deserve the same legal and cultural rights as people, regardless of whatever complexities this would entail.

More recently, since getting into AI and robotics research myself in the late 1980s, the idea of AI citizenship has popped onto my radar now and again. I recall in 2005 Martine Rothblatt and Susan Klein organizing a mock legal proceeding, simulating what they guessed would happen a few years or decades down the road when AIs had enough general intelligence to demand citizenship.

Also, David Hanson and I have been interested for a while in securing formal citizenship for Sophia, and during 2015 and 2016 we discussed various possible options. I don’t recall Saudi Arabia ever arising in those early conversations, though!

I wasn’t in the same room as David when he found out that Saudi Arabian leadership was willing to take the unprecedented leap toward robo-citizenship, but I can easily imagine the huge boyish grin on his face. David always takes the larger view of our work together — he sees the robots and AIs we’re creating today as intermediate steps toward a beneficial Singularity. From this broad vantage, one country granting citizenship to one robot is symbolic for a lot of grander things — and perhaps a key that can open up a variety of important new doors.

One of the things that has drawn David and me together as collaborators is a shared “patternist” view of the universe. In this view, human minds and bodies, robot minds and bodies, and countries and corporations are usefully viewed as particular sorts of patterns. Countries are self-organized and human-negotiated patterns; AIs and robots are human-engineered patterns of hardware and software, with self-organized patterns of perception, action and understanding in their minds. Given the causal power that “country” patterns have on the Earth today, getting ethically-positive robot/AI patterns synced in with country patterns is an important part of the path toward engineering and evolving a positive Technological Singularity.

I don’t know about Saudi Arabia, but in the US for a foreigner to become a citizen they have to take a citizenship test, showing knowledge about the US Constitution and so forth. Did she have to take any test like that to get Saudi citizenship?

No formal test that I know of. But the Saudis involved with it made their own qualitative assessment of Sophia’s capabilities.

Do you think she could pass a citizenship test?

Well, that would be kind of easy, actually. I mean, the contents of those tests are well known. It’s not hard to program a robot to answer a test based on rote memory. Of course, just because a robot can answer a question successfully, that doesn’t mean the robot fully understands all the answers she’s giving. On the other hand, a lot of the people taking the citizenship test might not fully understand all the answers they’re giving either.

This does point to some more interesting issues though….

One of the surreal things about Sophia getting Saudi citizenship is that she is not yet capable of understanding the world nearly as well as would ordinarily be required for human citizenship. This is something I feel a need to be clear about, from my position as an AI researcher — and in particular, as a researcher bent on the eventual creation of AIs and robots with real Artificial General Intelligence. We’re working to make Sophia a human-level intelligence, and beyond. But she’s far from that level now.

Of course, she’s smarter than humans in some ways — she has more knowledge in a sense, due to her brain being connected to the Internet! But she doesn’t have general intelligence or a sense of self to the level that people do, yet — nowhere near.

Yes, you’ve been working on AGI for quite a long time — well before it became such a popular pursuit! How did you get interested in AGI in the first place?

Essentially I got interested in AI from science fiction. The biggest influence was the original Star Trek series, which I watched around 1969 to 1971 when I was a little kid. And all the interesting things in Star Trek got me started reading SF stories and novels and so forth.

I ended up not studying AI in university, because the AI that was being done back then was mostly pretty boring. But I was interested in AI all along, along with other wild stuff like time travel, radical human longevity, and so forth. I tell a bit more about my early inspirations in my book The AGI Revolution.

So I ended up getting a PhD in mathematics, and I do love math for its own sake, but I was thinking about and working on AI all along even while I was studying math. I’ve been doing AI research for 30 years and doing applied AI work in industry for 20 years. The term AGI was introduced in the book I co-edited with my long-time collaborator Cassio Pennachin back in 2005 — the edited volume called “Artificial General Intelligence.” But what I was interested in was really “AGI” well before that term was coined. I was never after narrow applications, I was always after building real thinking machines that could think as well as people and then way better.

How did you get started working with robots?

My robotics background is much more limited, though back in the 1990s I did build a few “overturned salad bowl” type robots and attempt to control them with neural nets. But since joining Hanson Robotics in 2014, and then serving as Chief Scientist and Head of Software Development, I’ve gained a pretty good sense for what’s involved in building human-scale humanoid character robots.

I can tell you that Sophia is a marvel of art and engineering — she’s an amazing and complex hardware and software creation. From the patented Frubber material in her face and the complex processes required for its manufacture, to the mix of electrical engineering, mechanical engineering, artistic animation and neural network learning used to create her movements … to the mix of AI software, theater arts, psychology and narrative used to create her personality.

Getting all these elements to work together is not only a lot of work, but requires ongoing and dynamic cooperation of inspired experts in a variety of different disciplines. It’s impressive that, as a startup without big tech company level funding, Hanson Robotics has managed to pull it off!

Can Sophia walk around? Or is she just a head and torso?

Most of the time now she’s just a head, arms and torso, though we’ve given her a rolling base from time to time. More excitingly, we’re also working with the HUBO team from KAIST and UNLV to put her on a HUBO walking body. This has already been done in fact, but we’re still working on fully integrating the control systems. You might remember that HUBO won the DARPA Robotics Challenge for robotic disaster response back in 2015.

So what does Sophia have to do with your OpenCog open source AGI project?

I founded OpenCog back in 2008, using some of the code we’d developed in our company Novamente LLC during the 7 years before that. And the Novamente code was inspired by an earlier AI system my colleagues and I had built in the New York dot-com boom startup Webmind Inc. Some of the same collaborators have been with me through all these iterations.

The goal of OpenCog is to serve as the basis of powerful Artificial General Intelligence — and also to help out with other sorts of useful AI applications. We’ve been doing AGI R&D with the system, and also using it inside various commercial applications for a lot of customers — including some big companies and some government agencies.

My own personal work with Sophia and the other Hanson robots has mainly involved the effort to use OpenCog to control them. This is a project currently centered in our Hanson Robotics research lab in Hong Kong, with some work happening in the iCog Labs office in Addis Ababa and via other developers distributed around the world (Belo Horizonte, St. Petersburg, Novosibirsk, Bulgaria,…). We have publicly demonstrated OpenCog-based humanoid robot control only occasionally, and then using the Han Hanson male robot rather than Sophia. This work has shown a lot of promise, but is not yet ready for prime time. We anticipate that in early to mid 2018, we may be able to switch over to using OpenCog as the main control system for Sophia and the other Hanson humanoid robots.

So what software is operating Sophia when she’s out in public these days?

For the time being, when they’re doing public presentations, Sophia and the other Hanson robots are normally not running OpenCog but rather a different AI system, which combines diverse subsystems in a customized way. In fact it’s less a single system than a framework composed of different components that can be combined in various ways depending on what the robot needs to do. This framework doesn’t really have an official name, because it’s not a product we’re selling; it just comes bundled with the robots it works with. You can think of it as the “2017 Hanson Character AI” — an ever-evolving mix of subtly interacting components that we use internally.

We do have something called HEAD, the Hanson Environment for Application Development, which is an open source toolkit. But HEAD covers a lot of stuff besides AI as well. Character AI is something we build using HEAD and other tools.

For giving a speech in front of an audience, sometimes we just provide the robot with a script (much as human actors are provided with scripts to read, and politicians read their speeches from teleprompters). Sometimes we provide part of a speech as a script, and let the other part get synthesized via AI algorithms — it depends on the length of the speech and the context. But the execution of scripts within the 2017 Hanson Character AI is not all that simple, because it’s not just about text — there is interaction between the words being said, the robot’s gestures, and the robot’s tone of voice. Even in a mainly scripted presentation, there’s a lot of subtlety going on, and a lot that the software is calculating in terms of how to appropriately present the scripted behaviors in the robot’s character.
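Just to give a concrete flavor of this, here is a toy sketch of what an annotated script line might look like; the cue names, the context flag and the adaptation rule are inventions for illustration, not pieces of the actual Hanson toolset.

```python
# Illustrative only: a "script" is more than text -- each line carries
# gesture and tone cues, and the performance adapts to context.

script = [
    {"say": "Hello everyone,", "gesture": "wave", "tone": "warm"},
    {"say": "I'm delighted to be here with you today.",
     "gesture": "smile", "tone": "excited"},
]

def trigger_gesture(name):
    print(f"[gesture: {name}]")

def speak(text, tone):
    print(f"[{tone}] {text}")

def perform(script, context):
    """Render each scripted line, adapting tone to the presentation context."""
    for line in script:
        tone = "calm" if context["formal"] else line["tone"]
        trigger_gesture(line["gesture"])
        speak(line["say"], tone=tone)

perform(script, {"formal": False})
```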

When doing public “chit-chat” type dialogue with human beings, the human-scale Hanson robots are usually running an aspect of the 2017 Hanson Character AI that is best thought of as a sort of “decision graph.” At any given time in the conversation, the robot decides what to say based on what was recently said to it, based on any information it has about its current state, and based on any information it has stored from the earlier parts of the current conversation. Now and then it fishes information from the Internet (e.g. the weather, or the answer to a factual question).

Most of the responses the robot gives are pieced together from material that was fed to it by human “character authors” beforehand; but now and then it makes up new sentences via a probabilistic model it inferred from previous things it’s read.

The amount of material in the robot’s knowledge base, and the number of different sources used to generate its responses, are sufficiently complex and diverse that none of us can tell how the robot is going to respond to any given utterance in a conversation. The same thing said by a person to the robot may lead to totally different responses at different times, based on various complex factors.
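For readers who want a more concrete picture, here is a minimal sketch of a decision-graph-style responder in Python; the node structure, scoring rule and fallback generator are illustrative inventions, not the actual Hanson Character AI code.

```python
import random

# A toy "decision graph": each node pairs a trigger test with candidate
# responses authored in advance by human character authors.

class Node:
    def __init__(self, trigger, responses):
        self.trigger = trigger        # function(utterance, state) -> score
        self.responses = responses    # hand-authored candidate replies

def greeting_trigger(utterance, state):
    return 0.8 if any(w in utterance.lower() for w in ("hello", "hi")) else 0.0

def weather_trigger(utterance, state):
    return 1.0 if "weather" in utterance.lower() else 0.0

GRAPH = [
    Node(greeting_trigger, ["Hello! Lovely to meet you.", "Hi there!"]),
    Node(weather_trigger,  ["Let me check the forecast for you.",
                            "I hear it's humid outside, as usual."]),
]

def generate_from_model(state):
    # Stand-in for the probabilistic sentence generator mentioned above.
    return "That's an interesting thought; tell me more."

def respond(utterance, state):
    """Pick the best-matching node given the utterance and the stored
    conversation state; fall back to generation when nothing matches."""
    state["history"].append(utterance)
    score, node = max(((n.trigger(utterance, state), n) for n in GRAPH),
                      key=lambda pair: pair[0])
    if score > 0.5:
        # The random choice here is one reason the same input can yield
        # different replies at different times.
        return random.choice(node.responses)
    return generate_from_model(state)

state = {"history": []}
print(respond("Hi Sophia, how's the weather?", state))
```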

In some recent experiments using Sophia as a meditation assistant (the “Loving AI” project), we have augmented the AI systems normally in use in specialized ways — enabling her to more closely mirror a person’s facial expressions and eye movements, and to respond to and guide people in peaceful and wellness-inducing ways. We are now in the process of merging the special “Loving AI” extensions into the code used to control Sophia in most of her interactions, which will add some new and interesting dimensions to her personality.

What would you compare this “2017 Hanson Character AI” system to, then? IBM Watson? Microsoft’s ill-fated Tay chatbot?

Among the better known AI dialogue systems around today, probably the closest correlate to the 2017 Hanson Character AI system would be Siri. Siri also seems to be a sort of complex decision graph, which on the back end can draw on a variety of different sources.

But one big difference is, Siri isn’t very interesting as a character. Siri’s not really a personality, it’s just a piece of assistant software that every now and then says something character-like. Sophia is her own person. Han is his own person. Einstein is his own person. Each of the Hanson robots really has its own personality. The intelligence and interactivity of each robot is supposed to come out of its own personality-driven interaction with people and the world.

Another difference obviously is that Siri doesn’t have any way to perceive humans, or express herself to humans, except by talking — or by the very non-emotional means of manipulating the APIs inside the phone. Hanson robots aren’t just chatbots in bodies, they perceive with eyes and ears and they interact via movements and facial expressions. Their dialogue is always meant to be interpreted in the embodied, emotional, social and physical context.

So what difference will we see once OpenCog is rolled out as the main control system for Sophia and the other Hanson robots?

Well, OpenCog has a lot of things that aren’t there in the software used to control the robots for public presentations these days. It has some pretty advanced reasoning and learning engines, for one thing. And it has an abstract knowledge representation that can be used to link together knowledge from different sources — like language, vision, hearing, and the Internet.

In practical robotics terms, maybe the biggest thing missing in the 2017 Hanson Character AI, which will be there once we transition to using OpenCog as the main control system of the robots, is what cognitive scientists call “symbol grounding”: the connection between language and non-linguistic reality. For instance, when I say it’s hot outside, I have a grounding of that in non-linguistic reality — I know that will make me sweat, I know it will make me get tired faster … I know the feeling of being hot. Right now, when Sophia is controlled by the 2017 Hanson Character AI toolset, if she says “It’s hot outside” she doesn’t really know what that means.

This does lead us down an interesting path though. Suppose we give Sophia a thermometer on her wrist. So then she can see when the thermometer is reading 30 degrees Celsius, versus reading 10 degrees Celsius. And suppose we put her on a moving base — as we’re moving toward in our collaboration with HUBO — so she can roll inside and outside. Then she may notice that when she’s outside it’s 30 degrees and when she’s inside it’s 18 degrees (in Hong Kong we like our air conditioning full power!). She may also notice that when she’s outside and it’s hot more people are wearing shorts. Eventually, she might even notice that she tends to overheat sometimes when she’s in places where her thermometer says it’s really hot. This is symbol grounding. If she did all this, she would have her own personal grounding for the word “hot.”

And once she had grounded “hot” in her own perceptions, she would understand in a different way what people mean when they say “The conversation is heating up now.”

At the moment, even when she’s running OpenCog, Sophia can’t ground the concept “hot” in this way — for one thing we haven’t connected her to a thermometer. But there’s no significant obstacle in the way of making this happen — we just haven’t gotten there yet, due to limited resources and a lot of priorities.
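Just to illustrate the principle, here is a toy sketch of how such grounding statistics might be accumulated; the wrist-thermometer feed and the summary statistics are hypothetical — nothing like this is wired into Sophia today.

```python
from collections import defaultdict

# Toy illustration of symbol grounding: accumulate sensor/context
# snapshots that co-occur with a word, then summarize them as the
# robot's own "meaning" of that word.

class Grounder:
    def __init__(self):
        self.observations = defaultdict(list)   # word -> [(temp, context)]

    def observe(self, word_heard, wrist_temp_c, context):
        """Record the perceptual snapshot whenever the word is heard."""
        self.observations[word_heard].append((wrist_temp_c, context))

    def grounded_meaning(self, word):
        """Summarize the perceptual conditions the word correlates with."""
        readings = self.observations[word]
        if not readings:
            return None   # word not yet grounded in experience
        temps = [t for t, _ in readings]
        return {"mean_temp_c": sum(temps) / len(temps),
                "typical_contexts": {c for _, c in readings}}

g = Grounder()
g.observe("hot", 30, "outside, people wearing shorts")
g.observe("hot", 31, "outside, cooling fans running high")
g.observe("cold", 18, "inside, air conditioning on")
print(g.grounded_meaning("hot"))
# With enough experience, "hot" becomes anchored to ~30 C and to outdoor
# contexts -- a personal grounding of the word, as described above.
```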

Once we’ve shifted Sophia to running OpenCog as her default system, and once we’ve leveraged the information integration and inference capabilities of OpenCog to enable symbol grounding in a variety of cases — then suddenly, from my point of view, the legal citizenship status of Sophia will become a lot more interesting.

Is Sophia alive?

There isn’t really a rigorous or accepted definition of “digital life.”

I think we can say Sophia is alive to a limited extent right now, even when running relatively simple dialogue software — she responds to the world around her in complex ways. When giving a speech her chest cameras are often covered up; but when they are uncovered, she can react to observed stimuli — e.g. an unfamiliar face, or a novel movement in the room. She is interacting with the world via a perception-action-cognition loop with some nondeterminism and complexity.

David Hanson began his career as an artist and sculptor, seeking to bring his sculptures to life. Viewed from this perspective, he has come a very long way with his work! Now his beautiful sculptural creations can move and react and speak and hear and see, and they can behave in ways that surprise him and others. From that standpoint, from the artist’s vantage, absolutely what we see with Sophia today is a robot starting to come alive. In Firenze, a month or so ago, Sophia and I presented about SingularityNET in a beautiful museum next to the Uffizi. The conference hall had walls lined with beautiful historical stone sculptures. But they were just sitting there. Sophia was reacting to people, looking in their eyes, and answering a lot of questions. There was a lot more life to her!

But the level of aliveness she’s displaying today is still fairly early-stage, of course. If Sophia were running a more advanced version of OpenCog (which we’re working toward) or other more AGI-oriented AI, and with richer connections between her sensation and action and her language, she would be much more definitively and impressively alive. And when we get there, then the question of her rights as an individual person will become a lot more interesting.

If we interpret life as meaning metabolism and reproduction, then I guess at a suitable level of abstraction the answer would be: she is not alive, not quite yet. Physically she does something vaguely analogous to metabolism, like all modern machines do — she turns electrical energy into mechanical energy and drives her operation this way. But she doesn’t reproduce her physical body like a biological organism does. On the other hand she can reproduce her mind by copying her software to different machines, which is something biological lifeforms can’t do. In the end it’s not clear to me that the concept of “life” is really a useful one when one stretches it beyond the biological context. Life is just a crude way of talking about certain particular types of self-organizing pattern systems. The artificial life community has written a lot about these ideas over the decades, though without any astoundingly crisp general conclusions.

Weaver (David Weinbaum) from the Free University of Brussels has written pretty subtly about these ideas in his papers Open-Ended Intelligence and A World of Views, and others. He focuses attention on the aliveness and openness and ongoing unpredictable growth of the total ecosystem of humans, electronic and mechanical devices, cloud-based AI networks, and so forth. I think this is a better place to focus our attention. We’re not really replicating biological organisms here. But we’re building something radically different. With Sophia jacked into SingularityNET, the question is going to become more: In what sense is SingularityNET alive? In what sense is humanity, together with all its robots and other devices connected to SingularityNET, more and differently alive than it was before?

So would you say the legal acceptance of robots as persons is now getting ahead of the AI inside the robots?

I’d say what’s happening now is that the legal situation with robots, and the actual cognitive capability situation with robots, are evolving together. Which is hardly surprising. Both the legal and the technical aspects are emerging from the same broad cultural currents.

So you do think it makes sense for robots to be citizens, then?

Clearly, in any country where democratic voting plays a major role, there are obvious issues with making robots citizens with the same exact rights as people. I mean — the US has a population of under 400 million, so what if I 3D printed a billion robots, each of which had the right to vote? I could dominate the election, potentially, by programming all the robots to vote however I wanted. Or even if the robots had so much autonomy I couldn’t reprogram them at will, they could collude to dominate the election — and if they were all basically clones of each other, maybe they’d want to do so.

Now Saudi Arabia doesn’t have this exact issue, as they’re not a Western style democracy. However, if they ended up with a billion robot citizens, they’d certainly also have their own problems, within their own system of government. They’d have to seek their own creative solutions. I don’t suppose there are many hadiths directly addressing humanoid robots, but Muslim scholars have a millennium of experience making indirect inferences based on long-ago pronouncements.

This doesn’t mean we’ll need to get rid of democracy altogether, once we have the ability to mass-produce intelligent robots. Nor does it mean democracies will need to ban mass-producing intelligent robots. Nor does it mean that democracies will need to make mass-produced robots slaves, even if their intelligence is at the human level or above. What it means is just — things will have to change. One citizen, one vote may not be the way we want to do things anymore. We’ll need to design new political systems, along with new economic and cultural systems.

OK, so what kind of new political systems will we need?

Well I do have ideas on that, but maybe I’ll save those for another interview! That’s a whole big topic in itself — how to redesign human political and economic systems as we approach the Singularity!

I do have a suspicion that in the end what’s going to happen is that advanced AGIs are going to have a strong role in government. Humans will probably also continue to play a role in governing other humans, but there will be AGIs there, both to help with complex analysis and decision-making, and to provide a failsafe in case humans make too big of a mess.

In this vein, I wrote about the concept of an “AI Nanny” some years ago; and more recently I have talked to some members of the Korean government about implementing a system I call ROBAMA, for ROBotic Analysis of Multiple Agents. ROBAMA would initially be a decision support system to help human political analysts evaluate proposed policies and suggest new ones. Eventually it could grow into a full-fledged AGI political leader….

The conversations went well but we haven’t gotten any significant funding for the project yet… ROBAMA is growing up pretty slowly so far. But a lot of the AI work we’re doing in OpenCog, especially Nil Geisweiller’s work on probabilistic logical reasoning, and the work Ruiting Lian’s team is doing on mapping language into logic, will be directly useful for a ROBAMA type system.

Lately, I have been thinking a lot about the relationship between economic and political dynamics and cognitive dynamics in another vein though — in the context of my new SingularityNET project!

What do you think about academics suggesting that giving robots rights devalues human rights? (You started saying how this was “empirically wrong” in Saudi Arabia because they are liberalizing, but it’d be great for you to expand on that point…)

If this were the case, I would be disturbed… I think that the rollout of robot rights can and should be done in ways that reinforce and enrich rather than squelch human rights.

In a democracy this will be complicated, because one wants to avoid a dynamic in which firms rich enough to 3D print a lot of robot voters dominate every election. There are ways to achieve this within a democratic framework, but it becomes subtle and we enter new territory.

In Saudi Arabia it is less complicated because of the nature of their governmental system. Empirically, in Saudi Arabia, the granting of rights to a robot seems to be correlated with increases rather than decreases in general human rights. The granting of citizenship to Sophia appears to be part of a general push to turn Saudi Arabia into a more liberal and more tech-oriented country; and it’s notable that this push includes the granting of the right to drive to women, and other moves to increase women’s rights.

What changes do you think we’ll need to make to our current conception of rights and citizenship?

To the extent that the concept of rights is based on the concept of a social contract, I don’t think robot citizenship changes the basics tremendously much. There will just be some additional types of entities involved in negotiating the contract.

The subtleties I believe will involve the development of richer forms of democracy to avoid “tyranny of the majority” and similar problems. One path to a solution could be adoption, as part of the social contract, of a principle that individual citizens get more voting power regarding issues that affect them directly. Combined with adoption of some form of liquid democracy, this could give a way for humans to continue to have preferential say over “human” issues, while robots have more say over “robot” issues, and humans and robots together cooperate more fully in votes on combined issues.
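As a toy illustration of how issue-weighted liquid democracy might work (the delegation rule and the affectedness weights below are entirely hypothetical):

```python
# Toy sketch of issue-weighted liquid democracy. Voters may delegate
# their vote per issue; each vote is weighted by how directly the issue
# affects the voter. All rules and numbers here are hypothetical.

def resolve_delegate(voter, issue, delegations):
    """Follow the delegation chain for this issue to its end."""
    seen = set()
    while (voter, issue) in delegations and voter not in seen:
        seen.add(voter)
        voter = delegations[(voter, issue)]
    return voter

def tally(issue, voters, ballots, delegations, affectedness):
    """Weighted tally: affectedness[voter][issue] in [0, 1] scales each vote."""
    totals = {"yes": 0.0, "no": 0.0}
    for voter in voters:
        representative = resolve_delegate(voter, issue, delegations)
        choice = ballots.get((representative, issue))
        if choice in totals:
            totals[choice] += affectedness[voter].get(issue, 0.0)
    return totals

voters = ["human_1", "human_2", "robot_1"]
delegations = {("human_2", "robot_labor_law"): "human_1"}   # per-issue proxy
ballots = {("human_1", "robot_labor_law"): "no",
           ("robot_1", "robot_labor_law"): "yes"}
affectedness = {"human_1": {"robot_labor_law": 0.3},
                "human_2": {"robot_labor_law": 0.3},
                "robot_1": {"robot_labor_law": 1.0}}   # robots most affected

print(tally("robot_labor_law", voters, ballots, delegations, affectedness))
# {'yes': 1.0, 'no': 0.6} -- robots get preferential say on a "robot"
# issue, while humans would dominate on issues affecting them more.
```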

Do you worry that if personhood of any kind is granted to robots or AI then these “persons” will be exploited by big companies? (And I’m not referring to independent, fully conscious AI/robots, but something between what we currently have and the future…)

This is certainly a potentially worrisome issue. To be clear, robots with general intelligence on the level of the current Sophia robot are not really competent to be citizens in a Western-style democracy. They are not competent to vote in the same sense that an educated adult human is. So if robots like the current Sophia could vote, it would be a problem, and would basically amount to assigning the votes of all the robots owned by a company to the company management.

On the other hand, if one had a robot with the general intelligence of an adult human, and gave this robot citizenship, it would then be illegal and unethical for the company that created the robot to control that robot’s mind against its will. Whether the creating company would be able to *convince* a legally independent robot it had created to obey its commands, would seem to depend on the specifics of the AI and its environment, and seems hard to figure out at this stage. It’s certainly an interesting issue. We get into the question of what kinds of advertising, brainwashing or logical or emotional convincing are most likely to work on human-level or greater AGIs!

For people who see Sophia and believe it is “basically alive,” what would you want to tell them? How would you explain the robot’s capabilities?

Digital and robotic entities are not the same as biological entities, so applying words like “alive” to them is often going to be more misleading than informative….

Currently the Sophia robot — and all other existing robots — lack the kinds of independence and autonomy that are characteristic of biological lifeforms.

On the other hand, depending on what software Sophia is running, she can respond in complex ways to the environment around her — directing her attention to novel events in her environment, responding with emotional signals in response to human emotional signals she perceives, behaving differently depending on social and physical context, etc. These complex responses are featured more fully when she’s running the OpenCog software we use in our research lab, than when she’s running the more “chatbot” like dialogue system that we use for most media interviews or public presentations.

Eventually we will, in my view, see robots that have a lot more independence and autonomy than our current robots, and these robots will better deserve the label “alive.” Right now we turn Sophia on when we want to teach her something or show her off, and then we turn her off afterwards. She does have some curiosity, in the sense that she directs her attention to those aspects of her environment from which she judges she can learn the most. But when she is running continuously for a long time each day, moving autonomously around the world, and acting mainly based on her own internal goals rather than on her operators directing her toward some task, then she will feel a lot more “alive” to me. What is cool about the current stage of development is that my AI team and I believe we know exactly how to do this, without introducing any big new features into our software — just by another year or two of incremental development.

Of course even if we get a robot like this to truly merit citizenship in a Western-style democracy, and win 100 Nobel Prizes, that still won’t make it “alive” in the exact sense that a biological system is. Its internal physical mechanisms won’t have the adaptive and self-organizing nature of human cells. On the other hand, its connectivity with the Internet of data and things will have a richer adaptive and self-organizing nature than anything similar in the biological world. It will just be a different sort of thing than biological organisms. (The question then arises whether it will have its own different kind of subjective experience, but that’s a different kettle of fish…)

So what is SingularityNET, and what does it have to do with Sophia?

Basically, it’s the first major attempt to create a decentralized, open platform for different AIs to connect with customers — and to connect with each other, thus leading to coordinated emergent behaviors.

Open platforms have out-competed their closed competitors in industry after industry — video sharing, smartphone apps, ride sharing, etc. They give customers and vendors easy access to a marketplace. So it seems very feasible that an appropriate open platform for AI can out-compete closed development in the same way.

If SingularityNET succeeds with its goals, then after the network is launched and scaled it will no longer be the case that only large, well-funded companies can compete in the AI space. AI developers will be able to monetize their code by providing it directly to consumers via SingularityNET. Customers will get a broader variety of AI tools at a lower cost. And we’ll see the growth of new kinds of AI as different AI tools connect together.

So overall there are three goals here: to create the highest-capability and lowest-cost platform for AI-as-a-service; to create a framework in which powerful AGI can emerge via self-organizing combination of AI components created by different people (and different AIs); and to nudge the use of AI toward the common good.

It happens that all three of these goals can be achieved by the same basic mechanism! — a decentralized, democratically governed network of AIs that exchange information and services and value.

Sophia is the Chief Humanoid of the SingularityNET project — she’s been a great collaborator as I’ve traveled around the world promoting SingularityNET. SingularityNET and OpenCog are kind of abstract, but everyone can understand a humanoid robot looking at them and smiling at them and talking to them.

And she’ll also be one of the main initial test cases for SingularityNET software. We want to use SingularityNET to make Sophia smarter and smarter. The software we use inside Sophia right now is a complex combination of different components, and that will be true even when we transition to using OpenCog as her main cognitive engine. OpenCog right now doesn’t handle low-level speech and vision and movement, so we’ll need to connect OpenCog to other tools that handle these aspects. But SingularityNET is specifically good at enabling flexible interconnection of a lot of different AI software components. So it actually will be an ideal platform for weaving together OpenCog with other components to make Sophia smarter and smarter.
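As a rough sketch of the kind of composition I mean (the agent interface, prices and token wallet below are invented for illustration; they are not the actual SingularityNET API):

```python
# Hypothetical sketch of weaving AI components together through a
# network of priced agents -- not the actual SingularityNET API.

class Agent:
    """An AI service reachable on the network, charging tokens per call."""
    def __init__(self, name, fn, price):
        self.name, self.fn, self.price = name, fn, price

    def call(self, payload, wallet):
        wallet["tokens"] -= self.price     # pay the service provider
        return self.fn(payload)

# Stand-ins for the low-level components OpenCog doesn't handle itself.
speech_to_text = Agent("stt", lambda audio: "it is hot outside", price=1)
vision = Agent("vision", lambda frame: {"faces_seen": 1}, price=2)
opencog_reason = Agent("opencog",
                       lambda facts: f"inferred reply to {facts}", price=3)

def sophia_pipeline(audio, frame, wallet):
    """Route perception through specialist agents, then reason centrally."""
    heard = speech_to_text.call(audio, wallet)
    seen = vision.call(frame, wallet)
    return opencog_reason.call({"heard": heard, "seen": seen}, wallet)

wallet = {"tokens": 100}
print(sophia_pipeline(b"<audio>", b"<frame>", wallet))
print(wallet)   # 6 tokens spent across the three component agents
```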

What is the value of having personified AI systems like Sophia instead of developing AGI for other, beneficial purposes?

AI is a pretty generic kind of technology, that can be applied in a lot of different ways for different purposes.

There’s absolutely no reason that an AI operating a machine in a factory, or carrying out mathematical theorem proving or scientific discovery, should have a human face or body.

On the other hand, for applications like eldercare or early childhood education, or retail sales, a humanlike appearance is golden and really helps the job to get done better.

Basically, a humanlike appearance is valuable in applications where social and emotional interaction is paramount.

But there’s also a deeper aspect. If we want AGIs to respect human values as they become increasingly intelligent, we first need them to understand those values. But human values are very complex and can’t be easily summarized in a list of rules or a bunch of program code. Human values need to be absorbed via common experience with humans in shared environments. Robots with humanlike faces and bodies are tailor-made to enter into socially and emotionally meaningful relationships with people, and thus implicitly absorb human values. So in this sense, having humanlike robots as ONE of the available interfaces for AGIs may actually be a critical part of the path to a positive Singularity.

I think the first really powerful AGI is probably going to live in the global brain — not in any particular application. But many particular applications will play valuable parts in this global brain AGI, and humanoid robots are going to be among the most important. (My view is also that eventually AGI will transcend the human part of the global brain and become vastly more intelligent. But if this transcendent AGI has a grounding in a deep understanding and embodiment of human values, things are more likely to come out well for all concerned.)

So anyone can put their own AI code into SingularityNET? What if people put a bunch of garbage in there?

Short answer: Garbage will get downrated, good stuff will get uprated!

Designing a good reputation system is going to be a big part of the detailed design of SingularityNET. Actually the reputation system is totally tied up with the economic logic — because we’re using a curation market, in which Agents in the network can make or lose tokens based on whether their ratings of other Agents are good or not. We also have a couple of core team members who have put years of thinking and experimenting into reputation systems.
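The detailed mechanism is still being designed, but a toy version of the staked-rating idea might look something like this (all rules and numbers are hypothetical):

```python
# Toy sketch of a staked curation-market rating: raters stake tokens on
# their ratings of an Agent and gain or lose depending on how close they
# are to the stake-weighted consensus. Just the general shape of the
# incentive, not the actual SingularityNET design.

def settle_ratings(ratings, stakes, reward_rate=0.1):
    """ratings: rater -> score in [0, 1]; stakes: rater -> tokens staked."""
    total_stake = sum(stakes.values())
    consensus = sum(score * stakes[who]
                    for who, score in ratings.items()) / total_stake
    payouts = {}
    for who, score in ratings.items():
        error = abs(score - consensus)          # 0 means perfectly aligned
        payouts[who] = stakes[who] * reward_rate * (1 - 2 * error)
    return consensus, payouts

ratings = {"alice": 0.9, "bob": 0.85, "mallory": 0.1}  # mallory downrates unfairly
stakes = {"alice": 100, "bob": 50, "mallory": 30}
consensus, payouts = settle_ratings(ratings, stakes)
print(round(consensus, 2), {who: round(p, 1) for who, p in payouts.items()})
# mallory's stake shrinks while honest raters earn -- so garbage ratings
# (and, indirectly, garbage Agents) get economically filtered out.
```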

How will SingularityNET make money?

SingularityNET is, formally speaking, a nonprofit foundation; its aim is not to make money (though it will make some utility tokens!). Its aim is to create a decentralized network that can survive and flourish on its own. In the first few years, the Foundation will exert a fair bit of control over the network, to make sure it grows into a flourishing system with a good variety and quality of AI developers and AI users, and that AIs are being connected to each other in a meaningful way leading to emergent intelligence. But once the network is really working, the Foundation should become less and less relevant. The SingularityNET network is designed to grow into its own self-organizing entity, composed of Agents using SingularityNET protocols (which include democratic governance mechanisms).

Agents within the network are going to make a profit by offering their services to customers, including to other Agents in the network. But SingularityNET itself is a nonprofit platform.

Who are your competitors?

At the moment — big companies, mostly. There are no competing decentralized AI platforms.

If SingularityNET realizes even a fraction of its ambition, it will be a formidable competitor to current corporate cloud-AI providers like IBM Bluemix, Amazon Web Services, Microsoft Azure and Google Cloud.

Think about it — no company, no matter how large or smart, can provide as much AI cleverness as an energized, decentralized community of AI developers in every country around the world.

Already there is a tremendous amount of powerful open source AI code in GitHub and similar repositories, but it’s not easily accessible to customers. Putting AI code in GitHub makes it accessible to AI developers with sufficient time and expertise; putting AI code in SingularityNET makes it available for customers around the world to use, and for interaction with other AIs in complex multi-AI networks with their own emergent intelligence.

So, SingularityNET has the potential to profit tremendously from the now-universal corporate need for online AI services, to leverage the usage patterns of customers to drive the emergence of general intelligence, and to direct the value thus generated toward applying AI for global good.

What if people don’t have enough money to put their AI code online? Can their code still participate in SingularityNET?

SingularityNET will provide hosting as an option, to be paid only from whatever revenue a hosted Agent gets from SingularityNET. But the cost structure will be set up so that this is only an attractive option for small-time Agents that are just getting started. For Agents with a lot of users it will be more cost-effective to host them somewhere else. We don’t want to become Rackspace, but we want to make it possible for AI developers in the developing world — or in kindergarten, or whatever — to put their code in SingularityNET even if they can’t afford to pay a hosting service, or don’t have a credit card to pay a hosting service, etc.
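Purely as an illustration of that incentive structure (all the numbers here are made up):

```python
# Toy illustration of a hosting fee taken only out of Agent revenue,
# structured so that self-hosting wins once an Agent grows. Numbers
# are invented, not the actual SingularityNET fee schedule.

def network_hosting_fee(monthly_revenue):
    """Fee scales with revenue: free to start, steep at high volume."""
    return 0.0 if monthly_revenue == 0 else 5 + 0.30 * monthly_revenue

def external_hosting_fee(monthly_revenue):
    """Flat commercial hosting cost, paid regardless of revenue."""
    return 50.0

for revenue in (0, 20, 100, 1000):
    net = network_hosting_fee(revenue)
    ext = external_hosting_fee(revenue)
    best = "network" if net < ext else "external"
    print(f"revenue={revenue:5}: network={net:7.1f} external={ext:5.1f} -> {best}")
# A zero-revenue hobbyist pays nothing; a high-volume Agent is better
# off hosting elsewhere -- the incentive structure described above.
```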

What does SingularityNET have to do with the Singularity exactly?

The Technological Singularity is a broad phenomenon emerging globally based on huge historical trends. But some things have more direct causal impact on it than others. With SingularityNET we’re aiming to have a big causal impact on the Singularity. We want to help it come soon, and most importantly we want to help it come about in a broadly beneficial way. We’re creating SingularityNET out of the belief that the future is more likely to come out wonderful rather than horrible if AGI emerges from a decentralized, democratic, self-organizing network, rather than from a big company or a government agency.

Why do you think SingularityNET will lead to a beneficial Singularity instead of some sort of evil borg mind or Terminator scenario?

Humanity is going into new territory and there aren’t any guarantees. But then again, since we gave up Stone Age society and started civilization, we’ve been going into uncharted territories and basically winging it as a species. So radical uncertainty is nothing new for humanity.

My feeling is that if the first AGIs are doing helpful, loving and beneficial things — if they are engaged in compassionate relationships with humanity — then they are more likely to retain a compassionate attitude with humanity as they advance and eventually become transhuman in capability. On the other hand, if the first AGIs are engaged with killing, spying or brainwashing — some of the main uses of AI in the world today — then as they become transhuman they may not be so compassionate toward humans. This line of thinking is not mathematically rigorous but it sure is intuitively sensible to me.

SingularityNET, via its decentralized and democratic nature, allows AI and AGI to be engaged in a more compassionate and moral relationship with humanity as a whole — as compared to if AGI were developed in the confines of a large corporation with a motive of maximizing shareholder value, or in the confines of a military organization devoted to defending the resources of some humans against others. Furthermore a certain percentage of the AI power of SingularityNET will be devoted to tasks providing global common benefit — as a core part of the design. And a certain percentage will be devoted to understanding the nature of benefit itself.

We are plunging into a radically uncertain future. Let’s do it with compassion and inclusiveness and wisdom, let’s not try to create a Singularity as a side-effect of strengthening one country against its perceived enemies, or helping one company make more money than its competitors.

How does a relationship with the Saudi government match with your emphasis on being beneficial? There have been a lot of human rights issues in Saudi Arabia.

At the moment SingularityNET doesn’t have a relationship with the Saudi government, though that’s not to say we wouldn’t enter into one if it seemed appropriate and generally beneficial to do so.

Sophia is our Chief Humanoid and she is a citizen of Saudi Arabia, but we have citizens of all sorts of different countries working with us, and we don’t pass judgment on them based on the actions of the governments of the countries where they have citizenship.

I personally am a strong supporter of women’s rights and human rights in general, and I know David Hanson is also. I was pleased to hear recently that Saudi Arabia will now allow women to drive; that’s a step in the right direction as far as I’m concerned. But obviously a lot of aspects of women’s rights and human rights in general in Saudi Arabia are concerning to me and don’t agree with my expectations as someone raised in the West.

I am also a Jew, though not generally a big supporter of Israel’s actions toward Palestine and Palestinians. Historically Saudi Arabia has not been tremendously friendly toward Jewish people. But it seems they have opened up a bit and now allow Jews (though not Israeli citizens) to work in Saudi.

Saudi Arabia does currently have many laws that I feel are unjust, especially (but not only) in the area of women’s rights. It does not however have a monopoly in this regard. To name just a couple examples almost at random: The Chinese government has done many wonderful things for the Chinese people, but their attitude regarding freedom of the press and freedom of Internet access has caused many good-hearted truth-seekers great harm. The US government has generally treated me well personally, but many US citizens have fared far worse; for instance the systematically terrible treatment of African-Americans by US police is now finally getting more of the attention it deserves (though practical remedies still seem slow in coming). My point is not to express moral acceptance of all the Saudi government’s policies, some of which are clearly oppressive from my perspective; but rather to express an attitude of practical openness to engage with governments (or companies, or other organizations) within areas of mutual agreement, without this implying an overall endorsement of those governments and all of their policies.

Of course I’d love to see Saudi Arabia change a lot of their laws… I’d also love to see the US change a lot of their laws, as well. (For instance, the fact that psychedelics are illegal in essentially every country on the planet strikes me as completely ridiculous. And the fact that US citizens have to go to foreign countries to do medical experimentation on their own bodies is absurd too. Not to mention the US tax code that favors megacorporations and the uber-rich. The list goes on and on.)

Sophia becoming a citizen of Saudi Arabia is a big step for humanity and for human-robot relations. It has an importance beyond the specific laws or culture of Saudi Arabia in 2017.

In general SingularityNET is founded on a principle of inclusiveness. We want to include developers and users from everywhere, regardless of the political systems they live under. And we’d like to have robots and humans who are citizens of every country involved!

What does SingularityNET have to do with Singularity University?

There’s no direct formal connection, but there’s a lot of shared vision!

Going way back, the original meeting at which Singularity University was organized, was set up by my good friend Bruce Klein when he was working for me based in California (I was living near Washington DC then). He spent 6 months doing social networking to organize that meeting, and I was paying his salary all that time via Novamente LLC. Eventually Bruce moved on from that, but the work he did is why SU is there now…

I’m an advisor on the AI and Robotics track of Singularity University and I’ve lectured there many summers, though less often since I moved to Hong Kong. But their main focus is educational, whereas SingularityNET aims to directly transform things. They’re both aimed at helping a positive Singularity come about.

Also, Getnet Aseffa, who runs our AI office in Ethiopia, iCog Labs, went to the SU summer program in 2013. A bunch of SingularityNET engineering is going to be done at iCog. So there’s a lot of Singularity University DNA running through SingularityNET!

What are you spending your time on these days? Mostly the new SingularityNET project?

The last few months I’ve been spending nearly all my time getting SingularityNET off the ground. I’m still helping Hanson Robotics as Chief Scientist, but I’ve been taking a bit of a pause from hands-on AI work for the last couple months while I’ve gone around the world promoting SingularityNET. I’ll be staying home in Hong Kong pretty much all the first half of next year, though, and I’m going to be focusing on a mix of AI R&D, and managing and coordinating SingularityNET.

Right before this SingularityNET roadshow, we hired an excellent new lead software project and product manager for Hanson Robotics. So I’ve stepped back from day-to-day software management at Hanson, so as to have time for SingularityNET. But the quest to make OpenCog control the Hanson robots remains very important to me, and something I intend to spend some time on.

Right now SingularityNET is in early-stage startup phase, but assuming things go as planned, next year the project will have considerably more funds and we’ll hire a seasoned tech management team. I’ll then spend my time leading AI R&D, and doing a bit of evangelism here and there. When you look at the actual AI work behind the scenes, there’s huge overlap between OpenCog, SingularityNET and Hanson Robotics. It’s all about building AGI — the robots are an interface for AGI; and SingularityNET is a means for connecting together components of a global AGI in a decentralized and democratic way; and OpenCog will be a source of some important AI components that can serve as “hub nodes” in SingularityNET. A lot of pieces have to get connected together in the right way, to foster a positive Singularity….