The Benefits of Open Source for AGI

The open source approach to software development has proved extremely productive across numerous domains – the Linux operating system, which powers the large majority of the world’s web servers, is perhaps the best-known example. Since the creation of powerful Artificial General Intelligence (AGI) is a large task that doesn’t fit neatly into the current funding priorities of corporations or government research funding agencies, pursuing AGI via the open-source methodology seems a natural option.

Some have raised ethical concerns about this possibility, feeling that advanced AGI technology (once it’s created) will be too dangerous for its innards to be exposed to the general public in the manner that open source entails. On the other hand, open source AGI advocates argue that there is greater safety in the collective eyes and minds of the world than in small elite groups.

I’m hardly a neutral party in this matter, being myself a co-founder of a substantial open-source AGI project, OpenCog. Furthermore, my interviewee here, Dr. Joel Pitt, is one of my closest colleagues in the OpenCog project – the leader of Hong Kong Polytechnic University’s current project applying OpenCog to create intelligent game characters, and my collaborator on various AI projects off and on since 2001. While we each have our individual orientations, through long years of dialogue we have probably come to understand the relevant issues considerably more similarly than a random pair of AGI researchers would. So this interview is best viewed as a (hopefully) clear presentation of one sort of perspective on the intersection between open source and AGI.

Joel has a broad background in science and engineering, with contributions in multiple areas beyond AI, including bioinformatics, ecology and entomology. His PhD work, at Lincoln University in New Zealand, involved the development of a spatially explicit stochastic simulation model to investigate the spread of invasive species across variable landscapes. He is also a former Board member of Humanity+, and co-founded a startup company (NetEmpathy) in the area of automated text sentiment classification. In 2010 he won the student of the year award offered by the Canadian Singularity Institute for Artificial Intelligence.
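(As an aside for the technically inclined: a spatially explicit stochastic spread model of the general kind Joel’s thesis describes can be sketched in a few lines of Python. The grid size, spread probability, and habitat-suitability values below are illustrative placeholders, not parameters from his actual work.)

    import random

    def simulate_spread(suitability, occupied, p_spread=0.2, steps=10, seed=42):
        # suitability: 2D list of habitat quality in [0, 1] (the "variable landscape")
        # occupied:    set of (row, col) cells currently invaded
        rng = random.Random(seed)
        rows, cols = len(suitability), len(suitability[0])
        for _ in range(steps):
            new_cells = set()
            for r, c in occupied:
                # Each invaded cell may spread to its four neighbours, with a
                # probability weighted by the suitability of the neighbouring habitat.
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < rows and 0 <= nc < cols:
                        if rng.random() < p_spread * suitability[nr][nc]:
                            new_cells.add((nr, nc))
            occupied |= new_cells
        return occupied

    # Example: a uniform 5x5 landscape seeded with one invaded cell in the centre.
    landscape = [[0.8] * 5 for _ in range(5)]
    print(sorted(simulate_spread(landscape, {(2, 2)})))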

Ben:

What are the benefits you see of the open-source methodology for an AGI project, in terms of effectively achieving the goal of AGI at the human level and ultimately beyond? How would you compare it to a traditional closed-source commercial methodology, or to a typical university research project in which software code isn’t cleaned up and architected in a manner conducive to collaborative development by a broad group of people?

Joel:

I believe open source software is beneficial for AGI development for a number of reasons.

Making an AGI project OSS gives the effort persistence and allows some coherence in an otherwise fragmented research community.

Everyone has their own pet theory of AGI, and providing a shared platform on which to test these theories invites collaboration, I think. Even if the architecture of a project doesn’t fit a particular theory, learning that fact – along with where the approaches diverge – is itself valuable.

More than one commercial project with AGI-like goals has run into funding problems. If the company then dissolves, there will often be restrictions on how the code can be used, or it may even be shut away in a vault, never to be seen again. Making a project OSS means that funding may come and go, but the project will continue to make incremental progress.

OSS also prompts researchers to apply effective software engineering practices. Code developed for research can often end up a mess, since it’s typically worked on by a single developer without peer review. I was guilty of this in the past, but working and collaborating with a team means I have to comment my code and make it understandable to others. Because my efforts are visible to the rest of the world, there is more incentive to design and test properly instead of doing just enough to get results and publish a paper.

Ben:

How would you say the OpenCog project has benefited specifically from its status as an OSS project so far?

Joel:

I think OpenCog has benefited in all the ways I’ve described above.

We’re also fortunate to have had Google sponsor our project for the Summer of Code in 2008 and 2009. This initiative brought in new contributors and helped us improve our documentation and guides, making OpenCog more approachable for newcomers. As one might imagine, there is a steep learning curve to the ins and outs of an AGI framework!

Ben:

In what ways would you say an AGI project differs from a typical OSS project? Does this make operating OpenCog significantly different from operating the average OSS project?

Joel:

One of the most challenging aspects of building an OSS AGI project is that, unlike many other OSS projects, it doesn’t have a clear and simple end use. A music player plays music, a web server serves web pages, and a statistical library provides implementations of statistical functions.

An AGI, on the other hand, doesn’t really reach its end use until it’s complete, which may be a long time from project inception. Thus the rhythm of creating packaged releases, and the traditional development cycle, are less well defined. We are working to improve this with projects that apply OpenCog to game characters and other domains, but the core framework is still a mystery to most people. It takes a certain investment of time before you can see how you might apply OpenCog in your own applications.

However, a number of projects associated with OpenCog have made packaged releases. RelEx, the NLP relationship extractor, and MOSES, a probabilistic genetic-programming system, are both standalone tools spun out from OpenCog. And we do tentatively plan an OpenCog 1.0 release for sometime around the end of 2012 – that will be an exciting milestone, even though it’s still a long way from our ultimate goal of AGI at the human level and beyond. The release is part of our broader roadmap, which sketches a path from the current state of development all the way to self-improving human-level AGI – something we hope may be achievable by the early 2020s.

Ben:

Some people have expressed worries about the implications of OSS development for AGI ethics in the long term. After all, if the code for the AGI is out there, then it’s out there for everyone, including bad guys. On the other hand, in an OSS project there are also generally going to be a lot more people paying attention to the code to spot problems. How do you view the OSS approach to AGI on balance — safer or less safe than the alternatives, and why? And how confident are you of your views on this?

Joel:

I believe that concerns about OSS development of AGI are exaggerated.

We are still in the infancy of AGI development, and scare-mongering – insisting that no such efforts should happen at all – won’t solve anything. Much like prohibition, making something illegal or refusing to do it will just leave it to more unscrupulous types.

I’m also completely against the idea of a group of elites developing AGI behind closed doors. Why should I trust self-appointed guardians of humanity? This technique is often used by the less pleasant rulers of modern-day societies: “Trust us – everything will be okay! Your fate is in our hands. We know better.”

The open-source development process allows developers to catch one another’s coding mistakes. By the time a project reaches fruition it typically has many contributors, and many eyes on the code can catch what smaller teams may not. Open source also allows other Friendly AI theorists to inspect the mechanism behind an AGI system and make specific comments about the ways in which Unfriendliness could arise.

When everyone’s AGI system is created behind closed doors, these sorts of specific comments can neither be made nor shown to be correct.

Further, to a great extent the trajectory of an AGI system will depend on its initial conditions. Even the apparent intelligence of the system may be influenced by whether it has the right environment and whether it’s bootstrapped with knowledge about the world. Just as an ultra-intelligent brain sitting in a jar with no external stimulus would be next to useless, so would a seed AI that doesn’t have a meaningful connection to the world… (despite potential claims otherwise, I can’t see seed AI developing in an ungrounded null-space).

I’m not 100% confident of this, but I’m a rational optimist. Much as I’m a fan of open governance, I feel the fate of our future should also be open.

Ben:

So what is our future fate, in your view?

Joel:

That’s a good question, isn’t it? “When will the singularity occur?” would be the typical question the press would ask, so that they could make bold claims about the future.

But my answer to that is NaN. ;-)

(For the non-programmer reader: NaN means “Not a Number” – the special value a floating-point computation yields when there is no meaningful numeric result.)
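To make the joke concrete, here is a minimal illustration of how NaN behaves in Python (standard floating-point behavior, nothing OpenCog-specific):

    import math

    x = float("nan")      # constructing NaN directly; many languages also return NaN from 0.0/0.0
    print(x + 1.0)        # nan   – NaN propagates through arithmetic
    print(x == x)         # False – NaN is the one value not equal to itself
    print(math.isnan(x))  # True  – the reliable way to test for NaN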

Ben:

You reckon we just don’t have enough knowledge to make meaningful estimates, so we need to be open to whatever happens and keep moving forward in what seems like a positive way?

Joel:

Indeed.