AGI, Open Source and Our Economic Future

One of the benefits of involvement with open source software projects is the opportunity to interact with a huge variety of interesting people. My intersection with Dr. Linas Vepstas via our mutual engagement with the OpenCog open-source AGI project is a fine example. A few years back, Linas and I worked together on a commercial application of OpenCog’s natural language toolkit; since then, as his professional career has moved on, he has remained a major contributor to OpenCog’s codebase and conceptual framework, particularly through his work maintaining and improving the link parser, an open source syntax parsing system originally developed at Carnegie Mellon University, which is wrapped up inside OpenCog’s natural language comprehension system.

Linas’s career started out in physics, with a PhD from SUNY Stony Brook for a thesis on “Chiral Models of the Nucleon,” but since then he’s had a diverse career in software development, management and research, including multiple stints at IBM in areas ranging from graphics to firmware development, and CEO and CTO positions at startups. He was also the lead developer of a major open-source project, the GnuCash free accounting software, and has been very active on Wikipedia, creating or substantially revising over 400 mathematics articles, along with dozens of articles in other areas. Currently he’s employed at Qualcomm as a Linux kernel developer, and he participates in several other open source projects, including OpenCog. As he puts it, “I’m crazy about open source! Can’t live without it!”

Beyond his technical work, Linas’s website contains essays on a range of themes, including sociopolitical ones. One of his persistent foci is the broader implications of open source, free software and related concepts. In “Is Free Software Inevitable?”, for instance, he argues that “Free Software is not just a (political) philosophy, or a social or anthropological movement, but an economic force akin to the removal of trade barriers, we can finally understand why its adoption by corporations promises to be a sea-change in the way that software is used.”

Given the deep thinking he’s put into the topic, based on his wide experience, it seemed apropos to interview Linas about one of the topics on my mind these days: the benefits and possible costs of doing AGI development in the open source modality.

Ben: Based on your experience with OpenCog and your general understanding, what are the benefits you see of the open-source methodology for an AGI project, in terms of effectively achieving the goal of AGI at the human level and ultimately beyond?

Linas: To properly discuss AGI, one must distinguish “what AGI is today” from “what we hope AGI might be someday.” Today, and certainly in the near-term future, AGI is not unlike many other technology projects. This means that the same pros and cons, the same arguments and debates, the same opinions and dynamics apply. Open source projects tend to move slower than commercial projects; they’re often less focused, but also less slapdash, sloppy and rushed. On the other hand, commercially valuable projects are well funded, and can often be far more sophisticated and powerful than what a rag-tag band of volunteers can pull together. Open source projects are usually better structured than university projects, for many reasons. One reason is the experience level of the programmers: a grad student may be smart, and a professor may be brilliant, but both are almost surely novice programmers. And novices are, as always, rather mediocre compared to seasoned professionals.
When open source projects are big enough to attract experienced programmers, the resulting focus on quality, maintainability and testability benefits the project. All projects, whether closed-source, university, or open source, benefit from strong leadership and from a team of “alpha coders”; when these leave, the project often stalls. When a grad student leaves, whatever source code they wrote goes into almost immediate decline: it bit-rots, it doesn’t run on the latest hardware and OSes, and one day it no longer compiles. Open source projects have the breadth of interest to keep things going after the departure of a lead programmer. These kinds of points have been made many times, in many places, in discussions of open source. Many more could be made, and they generically apply to OpenCog.

Ben: Yeah, there are many similarities between open-source AGI and other OSS projects. But it seems there are also significant differences. In what ways would you say an AGI project like OpenCog differs from a typical OSS project?

Linas: In many ways, OpenCog is quite unlike a typical OSS project. Its goals are diffuse and amorphous. It’s never been done before. Most open-source projects begin with the thought “I’m gonna do it just like last time (or just like project X), except this time I’ll make it better, stronger, I’ll do it right.” AGI has never been done before, and the “right way” to do it is highly contested. That makes it fundamentally researchy. It also makes it nearly impossible for average software programmers to participate in the project: there’s simply too much theory and mathematics, and too many obscure bleeding-edge concepts woven into it, for the typical apps programmer to jump in and help out. Consider an open-source project focused on music, or accounting, or database programming, or 3D graphics, or web serving. There are hundreds of thousands of programmers, or more, with domain expertise in these areas.
They’ve “done it before.” They can jump into an open-source project and be productive contributors within hours, if not minutes. The number of experienced AGI programmers, by contrast, is minuscule. There’s no pool to draw from; one must be self-taught, and the concepts that need to be learned are daunting and require years of study. Because of this, neither OpenCog nor any other open source AGI project is going to derive the typical benefits of open participation. Perhaps OpenCog might be comparable to gcc or the LLVM compilers: the theory of compilation is notoriously complex and arcane, and pretty much requires PhD-level experience in order to participate at the highest levels. On the other hand, the theory of compilers is taught in classes, and there are textbooks on the topic. That cannot be said for AGI. If OpenCog seems slow compared to some other open-source projects, this is the reason why: it is fundamentally difficult.

More interesting, perhaps, is to ask what other projects an open-source AGI project should be compared to. Clearly, there are some other overtly AGI-ish projects besides OpenCog out there, and these are more or less directly comparable. But what about things that aren’t? A self-driving car is not an AGI project, but it certainly solves many of the problems that an AGI project is interested in. The funding and development staff for such a project are certainly much, much larger than those for AGI projects. What about automatic language translation? AI avatars in video games? Might such a project “accidentally” evolve into an AGI project, by dint of cash flow? When one has (tens of) thousands of paid employees, managers, and executives working towards some end, it is not uncommon to have skunk-works, research-y, high-risk projects taking some pie-in-the-sky gamble on wild ideas.
Personally, I believe there is a very real possibility that one of these areas will sprout an AGI project, if not soon, then certainly in the coming years or decade. Generalizing the AI “problem” is just too commercially valuable not to take the risky plunge. It will happen. And, mind you, the “winner take all” meme is a powerful one.

Ben: Hmmm… I see what you’re saying, but I have to say I’m a bit skeptical of a narrow-AI OSS project “turning into AGI.” I tend to think that AGI is going to require a large body of work that is specifically AGI-focused rather than application-focused. Thinking about OpenCog in particular, I can see that once it gets a bit further along, there could be some extremely interesting hybridizations between OpenCog and some application-focused OSS projects; I can certainly see a lot of promise there! But that would be leveraging OpenCog work that was done specifically with AGI in mind, not work done wholly under the aegis of pursuing some particular application.

Linas: What I meant to say was that corporations with large narrow-AI projects are likely to have small (secret, proprietary) skunk-works AGI projects.

Ben: Yes, that seems plausible. But I suppose these are very rarely open-source projects, and generally, if they fail to bear dramatic fruit quickly, they’ll probably end up being dropped, or advancing extremely slowly in researchers’ spare time. Without naming any names, I know of a couple of AGI skunk-works projects in large corporations that fit exactly this description. They’re good projects led by great researchers, but they don’t get much time or resource allocation from their host companies, because they don’t show promise of leading to practical results quickly enough. And because they’re not open source, they can’t easily take advantage of the community of AI programmers outside the host company.
Linas: Indeed, my experience with skunk-works projects is that they are very unlikely to be open source, and are unlikely ever to see the light of day as a product or part of a product; even if an impressive demo or two is created, even if some executive shows interest for a while, they tend to dry up and wither. There are occasional exceptions; sometimes the wild idea proves strong enough to become a real-life product. So all I meant to say is that any corporation with more than a few dozen programmers is going to have some secret lazy-Friday-afternoon project, and in bigger companies these can even become full-time funded projects, with two, three, five or six people on them. They go on for a while, and if successful, they morph. I figure the same dynamic happens in narrow-AI companies, and that the engineers at a narrow-AI company are likely to take some random stab at some vaguely AGI-like project. But no, not open source.

Ben: I think AGI is a bigger problem than skunk-works efforts like that will be able to address, though. So if a big company isn’t willing to go OSS with its skunk-works AGI project, then it will need to substantially staff and fund it to make real progress. And so far as I can tell, that hasn’t happened yet. Although the various AGI skunk-works projects in existence probably still advance the state of the art indirectly in various ways.

Linas: Yes, exactly.

Ben: Now let me shift directions a bit. Some people have expressed worries about the implications of OSS development for AGI ethics in the long term. After all, if the code for the AGI is out there, then it’s out there for everyone, including bad guys. On the other hand, in an OSS project there are also generally going to be a lot more people paying attention to the code to spot problems. How do you view the OSS approach to AGI on balance: safer or less safe than the alternatives, and why? And how confident are you of your views on this?
Linas: Here, we must shift our focus from “what AGI is today” to “what we believe AGI might be in the future.” First of all, let’s be clear: as history shows, the “bad guys” are always better funded, better organized, and more powerful than Robin Hood and his band of Merry Men. Were Robin Hood to create something of material value, the bad guys would find some way of getting their fingers on it; that’s just how the world works. We could be talking about criminal gangs, corrupt cops, greedy corporations, rogue foreign governments, or, perhaps scariest of all, the paranoia of spy agencies out of control. The paranoia is justified: the Soviets used wiretapping to keep the population under control, the Nazis used networks of informants, the current Iranian government uses the Revolutionary Guard quite effectively, and J. Edgar Hoover exercised some extraordinary powers in the US. Let’s not mention McCarthy and the Red Menace. It happens all the time, it could happen anywhere, and it could happen in the US or in Europe. Some have argued that steps down this slippery slope have already been taken; few sleep better at night because of Homeland Security.

Let me be clear: I really don’t think the “bad guys” are going to be some terrorists who use open-source AGI tech to unleash a holocaust against humanity. If there’s one thing I do trust Homeland Security for, it is to catch the terrorists. They may be expensive and wasteful and coldly unfeeling, but they’ll (probably) catch the terrorists. Even in the most optimistic of circumstances, one might imagine a “benevolent” capitalistic corporation developing some advanced technology. Capitalism being what it is, this means “making money,” even if that has deleterious effects on the environment, on employment, or, even “benignly,” on customer satisfaction.
Apple totally and completely dictates what can be found on the Apple App Store: you cannot sue, you cannot appeal, you cannot in any way force them to open it up. There is no case law to stand on; you have no rights in the App Store. Apple is not alone: Facebook has leaked any amount of information, Hotmail accounts have disappeared as if they never existed, and Amazon has deleted books from Kindles even though the customers had already paid for, and “owned,” them. AT&T has silently and willingly gone along with warrantless wiretaps. Are you sure you want to entrust the future of AGI to players such as these?

Ben: Obviously I don’t, which is one of the many reasons I’m enthused about OSS as a modality for AGI development.

Linas: And the intersection of AGI with the modern economy may have some much more dramatic consequences than these. The scariest thing about AGI, for me, is not the “hard takeoff” scenario, or the “bad guys” scenario, but the economic dislocation. Is it possible that, before true AGI, we will have industrial quasi-AGI reshaping the economy to the point where it fractures and breaks? There’s an old, old worry that “a robot will take my job.” In American folklore, John Henry the steel-driver lost to a steam-powered drill in the 1870s, and it’s been happening ever since. However, the industrialization of AGI is arguably of a magnitude greater than anything before, and massive economic dislocation and unemployment is a real possibility. Basically, I believe that long before we have “true AGI,” we will have quasi-AGI, smart but lobotomized, feelingless and psychotic, performing all sorts of menial, and not so menial, industrial tasks. If a robot can do a job for less than $5/hour, a human will lose that job. When robots get good at everything, everyone will lose their jobs. The robots don’t have to be “classic AGI” to get there. What makes this a tragedy is not the AGI, but a political system that is utterly unprepared for this eventuality.
The current rhetoric about welfare, taxes and social security, and the current legal system regarding private property and wealth, are, I believe, dangerously inflexible with regard to a future where an industrialized, lobotomized, almost-but-not-quite-AGI robot can do any job better, and cheaper, than a human, and the only beneficiaries are the owners and the CEOs. Times are already tough if you don’t have a college education; they’ll get tough even for those who do. A 10% unemployment rate is bad. A 50% unemployment rate would result in bloodshed, civil war, and such political revolt that democracy and the rule of law, as we currently know them in America, would cease to exist. Optimistically, everyone would be on the dole. Realistically, all of the previously charted paths from here to there have led through communism or dictatorship or both, and I fear both of those paths.

Ben: Certainly, where these practical economic implications are concerned, timing means a lot. If it’s 3 years from the broad advent of “quasi-AGI” programs that relieve 80% of humans of their jobs until the advent of truly human-level AGI, that’s one thing. If it’s 10 years, that’s another thing. If it’s 30 years, that’s another thing again. But I wonder how OSS will play into all this. One obvious question is whether the potential for economic dislocation is any more or less if the first very powerful quasi-AGIs are developed via an OSS project rather than, say, a proprietary project of a major corporation. Or will the outcome be essentially the same one way or another? Conceptually, one could make a case that things may go better if the people are more fully in charge of the process of their own gradual obsolescence as workers. But of course, this might require more than just an open-source quasi-AGI framework; it might require a more broadly open and democratic society than now exists.
This brings to mind R.U. Sirius’s intriguing notion of an Open Source Party, bringing some of the dynamics and concepts of OSS, and of initiatives like WikiLeaks, to mainstream politics. Plenty to think about, indeed!


  1. I am an applied mathematics PhD student working on visual cognition and machine vision. How can I participate in projects like these, and what are the ways to pursue jobs in AGI for organizations that are not large and consumption-driven?

    Finding advice on how to work in AGI is unbelievably difficult. I don’t know for sure, but I think I am probably exactly the kind of rising grad student that AGI projects would want (math background, machine learning, probability, and psychology, with programming experience). But every AGI project I read about seems exceedingly “closed doors.” Any advice for getting past this barrier?

  2. Thanks for your efforts. I can only wish there were more documentation for OpenCog.
