Land of Fire, Ice and Thinking Machines: The Recent Rise of AI in Iceland, and an Interview with the Thorisson Brothers who Helped Make it Happen

This year I’m dividing my time between Hong Kong and DC, and one of the advantages of this somewhat exhausting lifestyle is the opportunity it gives me to stop at interesting places en route between the two cities. This past August, for example, I spent nearly a week in Iceland – the beautiful, rugged “land of fire and ice” – where I not only hiked on lava-coated glaciers and bathed in silica-mud-filled hot springs, but also attended the AGI & Constructivist AI Summer School organized by Kristinn Thórisson and his colleagues at Reykjavik University. The summer school was a joint initiative between Reykjavik University and the recently created Iceland Institute for Intelligent Machines (IIIM).

Reykjavik University, home of the 2012 AGI and Constructivist AI Summer School

The author experiencing some ice in Iceland

I was particularly interested in this Summer School because it was, in a sense, a follow-up to the 2009 AGI Summer School that I organized at Xiamen University in China. The Icelandic AGI summer school had a different slant, and a location in a different corner of the world, but a similar purpose: to spread the meme of AGI, and to give students and researchers the chance to absorb conceptual and technical information about AGI directly from AGI researchers, face to face. The annual AGI conferences provide an opportunity to sample the range of contemporary AGI research, but a summer school gives a chance to drill deeper into the fundamentals, and into a sampling of specific approaches.

The Iceland AGI / Constructivist AI Summer School is just the latest phase in the recent growth of AI R&D in Iceland – a growth that is, to a notable extent, attributable to two AI researchers who happen to be brothers, Kristinn and Hrafn Thórisson. In this article I’ll review a bit of the recent history of AI in Iceland, and then present an interview I did with Kristinn and Hrafn, where they go into a bit more juicy detail on their own backgrounds and ideas, as well as the rise of Icelandic AI.

The Rise of Icelandic AI

Iceland is a relatively small country population-wise, with only around 300,000 people, so it’s not entirely shocking that, until recently, there wasn’t a lot of AI R&D going on there. In 2005, for example, there was only one university AI course offered, and public awareness of AI was weak. Most of the AI in Iceland before then was in the commercial sector, consisting of fairly narrow – though sometimes quite successful – practical AI applications.

Among the commercial success stories is Ossur, founded in 1971, which has since become a pioneer in bionic prosthetics, employing over 16,000 people at 14 different locations. The company uses advanced AI software techniques to aid in designing its products, and produces intelligent software to power the products themselves: an example is the Rheo Knee, whose built-in AI software continuously learns and adapts to its user’s walking style.

Another Icelandic AI company that has excelled on an international level is Marel. Beginning in 1980, it developed unique food-processing equipment that used software for analyzing, grading and processing food. Modern versions of Marel equipment contain a small arsenal of AI technologies and robotics systems.

Smaller Icelandic AI companies founded in this same period included Gavia, creators of autonomous underwater vehicles, and Hex Software, pioneer of Icelandic-language speech synthesizers and speech recognition software.

The emergence of AI as a major research field in Iceland, beyond these scattered commercial applications, dates largely to the creation of two organizations in 2005:

  • The Center for Analysis and Design of Intelligent Agents (CADIA) [http://cadia.ru.is]. Iceland’s first AI research lab, founded by Kristinn R. Thórisson and Yngvi Björnsson at Reykjavik University (RU)
  • The Icelandic Society for Intelligence Research (ISIR) [http://isir.is]. Iceland’s first society for artificial intelligence, founded by Hrafn Thórisson, Freysteinn Alfredsson, Arnar Óskarsson and Agust Holmgeirsson

Among the many individuals helping with these efforts, the energy put in by the Thórisson brothers, Kristinn and Hrafn, was notable.  At that time, Kristinn had just returned to Iceland from New York, having received his PhD from the MIT Media Lab eight years earlier, and Hrafn was still pursuing his education. The two undertook a concerted effort to bring AI to life in Iceland, which has paid off dramatically.

The list of CADIA’s accomplishments is impressive, including:

  • Collaborating with HONDA on developing software for the ASIMO robot – one of the world’s most advanced humanoid robots (2007)
  • Beginning a collaborative effort on the development of intelligent software for the largest unified online game in the world, EVE Online by CCP Games (2007)
  • Developing general game-playing (GGP) AI that won the World GGP Championship in 2007 and 2008, against institutions with years of experience in the field
  • Collaborating on a project named one of Science magazine’s Top 10 most important discoveries of 2007: solving checkers by creating an intelligent system that could calculate all possible moves
  • Receiving a prestigious EU grant for the HUMANOBS project in 2009 – the largest grant ever awarded to a computer science research project in Iceland

All this led up to the current phase of AI research in Iceland – the creation of the IIIM as a powerful and well-funded AI research institute (www.iiim.is), and – as highlighted at the recent summer school — the emergence of “constructivist AI” as a unique approach to developing artificial general intelligence.

Constructivist AI

So what is this “constructivist AI” I keep mentioning? In essence, it has to do with building AI systems that learn how to build and modify themselves, rather than relying as heavily as other AI systems on human programming. In the constructivist approach, the job of the programmer is just to set up the basic equations governing the system’s self-organization – its ongoing self-creation. Then the system, exposed to an appropriate environment from which it receives ongoing perceptions, carries out actions and reasons about their implications, relying on a vast number of predictions and inferences. These processes constitute a self-organizing activity that produces intelligent structures and behavioral dynamics as an emergent property of the operation of the whole system.
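
To make the flavor of this concrete, here is a minimal toy sketch in Python – my own illustration for this article, emphatically not Replicode or any actual CADIA code, with the rule and confidence mechanics invented for the example. The programmer supplies only the bootstrap learning loop; the rule set itself is grown, rewarded, and pruned by the system as it observes its little world:

```python
# Toy constructivist learner (illustrative only -- not Replicode/CADIA code):
# the human writes this bootstrap loop; the system writes its own rules.
import random

class Rule:
    def __init__(self, condition, prediction):
        self.condition = condition    # a previously observed state
        self.prediction = prediction  # the state predicted to follow it
        self.confidence = 0.5

def step(rules, prev_state, new_state):
    """Update or grow the rule set from one observed state transition."""
    matched = False
    for r in rules:
        if r.condition == prev_state:
            matched = True
            # Reward rules whose prediction came true; punish the rest.
            delta = 0.1 if r.prediction == new_state else -0.1
            r.confidence = max(0.0, min(1.0, r.confidence + delta))
    if not matched:
        # Self-construction: the system writes a new rule for itself.
        rules.append(Rule(prev_state, new_state))
    # Self-modification: prune rules that experience has refuted.
    rules[:] = [r for r in rules if r.confidence > 0.0]

# A toy environment: a repeating sequence of states with 10% noise.
world, rules, prev = [0, 1, 2], [], 0
for t in range(1, 300):
    state = world[t % 3] if random.random() > 0.1 else random.randint(0, 2)
    step(rules, prev, state)
    prev = state

for r in sorted(rules, key=lambda r: -r.confidence):
    print(f"if state == {r.condition}: predict {r.prediction}  (confidence {r.confidence:.2f})")
```

Real constructivist architectures of course operate on far richer structures than condition/prediction pairs, but the division of labor is the same: the human writes the bootstrap dynamics, and the system constructs the rest.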

Many AI approaches have this conceptual aspect to them – nearly all AGI researchers would agree that a human-level intelligence, at some level, needs to self-organize its own cognitive structures and dynamics based on its experiences. But most approaches advocate building more into a system, in order to augment, constrain and guide the self-organization. Constructivist AGI, as practiced by Kristinn Thórisson and his colleagues, pushes the envelope in the minimalist direction, aiming to leave as much as possible up to self-organization and self-construction.

At the AGI & Constructivist AI Summer School, Reykjavik researcher Eric Nivel presented the Replicode language and AI framework, which he has developed with Kristinn over the last few years as part of their FP7 HUMANOBS project. Students at the summer school learned to program in the Replicode language, creating simple self-reconstructing AI code and concretizing the abstract ideas of constructivist AI.

The constructivist AI paradigm is still at an early stage, and doesn’t yet equal the practical achievements of some other approaches to AI.   However, as a research direction, I find it fascinating due to its extreme nature.  It is a serious, well-thought-out attempt to pursue the question: Can we get human-level intelligence to emerge from a large collection of simple entities, interacting and modifying each other, as they collectively interact with an appropriate world, if we conceive of and implement an appropriate integrating architecture for their operation?

Interview with Kristinn and Hrafn Thórisson

As well as attending the AGI and Constructivist AI Summer School in Reykjavik, I recently had the opportunity to ask Kristinn and Hrafn Thórisson, the brothers responsible for a significant portion of Iceland’s recent surge in AI R&D, a few questions about their own work and also their vision for the future of AI and its implications.

Kristinn Thórisson

Ben: How did you two get interested in AI in the first place?  Why choose AI over some other field of research?  Did you have any early mentors in the AI field?  If AI wasn’t your initial career plan, what were your plans before that?

Kristinn: From an early age I was greatly intrigued by science and technology – not sure why, but there it was. Very early on I had this intuition that computers and robots would be really important in the future, and I wanted to work on something that was important. I thought the coolest technology of all – and at the same time the most mysterious – was the concept of intelligent robots.

I come from a family of non-techies, so this technology and science interest was somewhat out of place – I certainly did not discuss robots very much with family members. They are quite religious, actually, and the thought of making an intelligent machine may have seemed strange to them at the time.

I remember sitting one day on a bus in Reykjavik, on my way home from school, thinking about robots, and it hit me: If thought is a physical process then it should be possible to build an artificial mind! Making robots that can talk and think and do stuff seemed the obvious way forward for almost any human activity. With the ultimate cool factor! From then on my interests drifted increasingly in the direction of the most promising technologies for achieving that goal: computer science, programming, cameras for eyes, microphones for ears, etc. Interestingly, for a long time – until fairly late in my twenties – I was absolutely certain that by the time I would be educated enough on the key topics I would be too late to the game: Surely, with such an important technology, someone would already have built the robot of the future well before I was educated enough to even start contributing. To this day it still surprises me how wrong I was on that assumption. But I am very happy about it, because this is an enormously interesting subject to study. I consider myself quite lucky to be able to contribute to the field.

Hrafn Thórisson

Hrafn: Like Kristinn I was always interested in mechanical things. To my parents’ woe, I think most of my toys ended up on the dissecting table and were turned into something else entirely – perhaps brought on more by Indiana Jones (booby traps & contraptions) than by sci-fi at the time. Although Kristinn was away in the US for most of my childhood, I couldn’t help but be inspired by his work in robotics and artificial intelligence. In fact, if you search for a video online named “Gandalf MIT 1996” you can see me as a kid testing his conversational agent!

However, I began my work in AI years later, when I had an idea of how creativity might be approached in machines. To my joy, Kristinn was moving back to Iceland around the same time, which did a lot to kindle the flames: encouraging me, advising me, and pointing me to good books to read. In a very short amount of time I learned programming, implemented an emergent system, and to my own surprise won Iceland’s National Young Scientist Competition, going on to represent Iceland in the international competition. Later that same year I published my first paper on artificial creativity at an international conference. One thing led to another until I was leading a social movement here in Iceland encouraging the growth of artificial intelligence, while Kristinn was doing amazing things to rally the academic community to embrace AI as a serious research subject.

Ben: What do you think are the biggest obstacles facing the AGI field at the moment?  And what are the biggest unanswered questions?

Kristinn: The biggest problem is researchers ignoring the systems view of intelligence: academia and industry alike build systems that solve parts of the intelligence puzzle but, while successful on some narrow tasks, leave out critical parts of what makes intelligent systems intelligent. Because these systems have been engineered in isolation, there is no way to expand them with more functionalities, or to connect them to the other systems that are necessary for bringing their operation to a higher level of intelligence.

Hrafn: Lack of collaboration, and tearing loose from old habits in how to pursue AI – these are legitimate concerns. But what concerns me is the low level of attention being paid to artificial creativity in its own right (invention, problem solving, etc.). Little progress has been made in understanding what creativity is, and why and how it evolved. Creativity and imagination are oftentimes treated as a byproduct of intelligence or logic, while in fact they are an interconnected part of the overall system. Attempts to uncover creativity’s roots, or to explore less complex organisms capable of creative activities, are scarce. Thankfully there are a few who dare venture over the fences of anthropocentric, classical approaches to this subject. For surely, as an inherent part of intelligence, clues to creativity’s origins are evident in other species.

It’s obvious to me that to build a broadly creative system, its design will have to be an AGI. And conversely, to build an AGI system it’s of vital importance that creativity be taken into account as an integral part of it. So my answer, in short, is: how general creativity works, and how it relates to general intelligence, is one of the largest unanswered questions and obstacles.

Ben: If you were given $100 billion in research funding, to spend on AGI according to your own wishes, how long do you think it would take you to get to a human-level thinking machine?   How much of the money would you expect to be left over?

Kristinn: It would take somewhere between 1 and 2 decades, it would produce a higher-than-human intelligence, and there would be at least $75 billion left to spend on mojitos.

Hrafn: I would give the funds to Kristinn and wait for the mojitos!

Ben: You mention the company Ossur (www.ossur.com)… What kind of AI technology do they use?

Kristinn: They use real-time temporal pattern classification techniques to detect what the user of an artificial ankle/foot system is likely to be doing – walking? running? going up stairs? – so that the limb can adapt automatically to their style of movement at any point in time.
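
For the curious, here is a heavily simplified sketch of what this kind of real-time temporal pattern classification might look like – emphatically not Ossur’s proprietary algorithm; the single “sensor” stream, the two crude features, and the per-mode templates are all invented for illustration. A sliding window of recent samples is reduced to a couple of features and matched against the nearest gait-mode template:

```python
# Hypothetical sketch of gait-mode classification (NOT Ossur's algorithm).
from collections import deque
import math

# Invented feature templates per gait mode: (activity, mean load).
TEMPLATES = {
    "walking": (0.40, 0.30),
    "running": (0.80, 0.60),
    "stairs":  (0.20, 0.50),
}

def classify(window):
    """Match the window's features to the nearest gait-mode template."""
    s = list(window)
    activity = sum(abs(a - b) for a, b in zip(s, s[1:])) / (len(s) - 1)
    load = sum(s) / len(s)
    return min(TEMPLATES, key=lambda m: math.dist(TEMPLATES[m], (activity, load)))

window = deque(maxlen=20)                               # last ~20 samples
stream = [0.1, 0.5, 0.2, 0.6, 0.1, 0.7, 0.3, 0.9] * 5   # fake sensor data
mode = None
for sample in stream:
    window.append(sample)
    if len(window) == window.maxlen:
        mode = classify(window)
        # A real prosthetic controller would adapt damping, stiffness,
        # etc. to the detected mode here, in real time.
print("detected mode:", mode)
```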

Ben: Have any of the Icelandic AI companies found markets for their AI software/services outside Iceland?

Kristinn: There are a few – I know of a one-man company that a while back designed the fraud detection system for VISA Europe, a massive implementation of ANNs running on a giant cluster, tracking real-time transactions and looking for patterns indicating illegal behavior. And Össur and Marel, two Icelandic companies that are highly successful in international markets, are examples of perhaps only a handful of companies that rely on some level of AI technology in their products.

Ben: If you were advising a young researcher just starting in the AGI field, what advice would you give them?  And what questions would you suggest they spend their evening and weekend spare time thinking about?

Kristinn: First I would tell them that the next 20-30 years are going to be the most exciting the field of AI has ever seen – and that their intelligence will probably be matched and surpassed by an artificial one before they grow old. They should spend their time thinking about how a system architecture – ‘architecture’ essentially meaning a systematic network of operations, and how the system works as a whole – can be implemented to understand the passage of time in the context of tasks to be done, perform various forms of reasoning, handle system-wide learning at runtime, and work to expand its own capabilities based on experience – and to do all of these at the same time. Even if you come up with a solution to any single one of these features in isolation, it is highly unlikely that you can then somehow work the others into it afterwards, forcing you to start over from scratch. In other words, the only way is to address the problem holistically.

Ben: What are your thoughts on the risks of developing advanced AGI?  Do you think an AGI could get out of a researcher’s control and wreak some kind of havoc?

Kristinn: I think the highly common fear of “AI run amok in the lab” is a severely misplaced one – it is not likely to ever happen. Even if we assume that it *may* happen, there are other risks that are far more likely and much closer in time: we should pay attention to those first. The obvious and highest risks lie essentially in the undesired empowerment of devious individuals, groups, and governments: powerful technologies can always be abused, and that is a hazard to the common man.

The empowerment that comes from AI technologies is very different from that resulting from many technologies we are familiar with, such as nuclear power, guns, dangerous biological agents, etc. AI running on a computer or a cluster provides new capabilities in the acquisition, digestion, and use of information; AI controlling a robotic body opens up yet another kind of possible use and abuse. In some ways it is more powerful than anything that came before, because none of the prior dangerous technologies can enable, for example, the active and ongoing acquisition of information about people’s behaviors, whereabouts, potential intentions, and activities at one-minute, one-meter resolution. Devious AGIs could reduce the power of open and free information access by producing false information, twisting the truth, blocking information in clever ways, etc., like some governments are actually doing with human intelligence today. But AGI could bring this to levels otherwise only possible by employing every single citizen of a small nation like Finland. And there are countless other scenarios one could come up with – but not very many people are working on coming up with them. We should start thinking about the most obvious abuses right now, because we may never get the chance to do it after the fact – by then it may be forever too late. The general rule is: don’t give a psychopath a free run for the gun rack. We have legal frameworks to control the use of highly dangerous materials, technologies, and outfits – doing the same for AGI would be a start.

Hrafn: We have too many problems today—many of which threaten our extinction. We need help solving them. Having machines capable of inventing solutions to these problems at the risk of them wreaking havoc… that danger seems much less imminent than nuclear meltdowns.

If we’re talking intentional hostility then, frankly, I think an AGI could just as well consider Earth’s natural ecosystem a more sentient enemy. It’s a system of interconnected components that self-organizes (sort of like our brain) and can produce stuff like ice ages & new species, after all. But I don’t know – it would take an AGI’s smarts to know exactly what to think of evil ecosystems.

Ben: Do you think we’ll ever have AGIs massively smarter than people?  If so, what are your thoughts about the role of humanity in a world populated by such AGIs?

Kristinn: I think it is almost certain that we’ll have AGIs massively smarter than humans, and there are many potential ways to achieve that. With an artificial system we can engineer, e.g., a million AGIs to behave like a single mind – more or less – and if you do that you already have a mind that is vastly smarter than a single human. Add on top of this more efficient use of the thought processes – to, e.g., make use of facts stored in gigantic databases – and you take yet another step beyond single-human intelligence, or even group intelligence. If the AGI has very flexible meta-learning methods it will be implementing a feedback loop that can, over time, generate vastly more efficient ways of learning skills and facts – even more efficient meta-learning methods! If we – or a more primitive AGI – can come up with a doubling in meta-learning efficiency, then you don’t need very many cycles to vastly outpace human intelligence. At least one of these possibilities may play out over the next 40 years – possibly all.

The role of humanity in a world populated by such AGIs might not be hugely different from today, but it will be different in some respects. We would essentially have solutions for many of the major immediate threats to both individual and group health and livelihood; the vast majority of shortages of materials, food, water, etc. could be solved by one or more of these AGIs in cost-effective ways. Overpopulation would potentially also be solved. Some of the necessary solutions would come at some cost to some subsets of the human population, but most negatives would disappear soon, as the AGIs found better solutions. There is no doubt in my mind that these AGIs would be employed for the good of humanity in the same way that democracy is – for the good of the majority, with an eye towards minimizing the negatives for the minority.

One major difference between today’s reality and the one populated by AGIs – and one which is very difficult to discuss, because it is hard to come up with believable scenarios of how and why it would happen – is the possibility that somehow the AGIs could come into a position of controlling the world. As these are AGIs, this is certainly a possibility. I suspect, though, that no AGI will ever be produced that has the same level, power, and quality of motivations as humans, and thus they will never ever ‘have the urge’ to take over the world. The one exception to that is if a devious individual or group specifically sets it as their target to produce just that.
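
Kristinn’s earlier point about compounding meta-learning is easy to make vivid with back-of-the-envelope arithmetic. The sketch below assumes – purely for illustration – a baseline of 1.0 “human units” of learning efficiency and one doubling per improvement cycle:

```python
# Hypothetical compounding of meta-learning improvements: the 1.0 baseline
# and the one-doubling-per-cycle rate are assumptions for illustration only.
efficiency = 1.0  # learning efficiency, in notional human-equivalent units
for cycle in range(1, 11):
    efficiency *= 2.0  # each meta-learning cycle doubles efficiency
    print(f"after cycle {cycle:2d}: {efficiency:6.0f}x baseline")
# Ten doublings already yield a 1024x gap -- as Kristinn says, you don't
# need very many cycles to vastly outpace human intelligence.
```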

Ben: Any thoughts on futurist Ray Kurzweil’s projections of a technological Singularity?

Kristinn: I disagree with the idea of a totalitarian singularity – but I agree with the general idea that there are some aspects of a post-AGI world that are rather difficult to imagine and predict. However, they are conditional upon particulars which may or may not happen: if they don’t happen, the world will be fairly unchanged, for a while anyway, with the introduction of AGIs. An example would be if AGIs turn out to be extremely expensive for a number of decades, in which case we will have only a handful for a rather long time, and they would be more like museum pieces than anything else – a curiosity for entertainment, like a monkey in a zoo. My suspicion is that life on Earth will not be vastly different for the majority of its human population 100 years from now, in spite of the introduction of AGIs in the 2050s or thereabout.

Ben: Any general comments on changes you’ve seen in the AI and AGI fields during your career – internationally, and not just specifically in Iceland?

Kristinn: I am surprised and dismayed at the lack of progress on, and lack of attention to, fundamental questions about the nature of intelligence, and how that can be introduced into man-made entities for solving all kinds of difficult problems. I am happy to be part of the AGI Society springing into being and providing a counterweight to the narrowly focused AI research that permeates both industry and academia – in some sense it can be understood in the case of the former, but in the case of the latter it is quite astonishing how little work is being done on integrated skills, general skill acquisition, attention mechanisms, flexible and dynamic control of manipulators, etc. – looking at the big picture.

Ben: What do you mean by “constructivist” AI? Why is it an important idea? And the same question about “self-programming”… I understand how Replicode in particular works (well, sort of), but I’m more curious about the general perspective.

Kristinn: The methodology that people have been employing in AI research is based on ideas developed in the software and hardware industry over the last 40-50 years – the centerpiece of this approach is the human programmer. All programming languages that have ever been created, with hardly an exception, are created for human programmers. This means that you need human-level intelligence to understand them to the point where you can use them. Which means none of these can be used to implement a system that could improve itself at the code level. That is the paradox of self-programming. Yet self-programming of some sort – certainly at the architectural level, and quite possibly at the code level – is necessary for AIs that can evolve on their own, without the aid of a programmer (human or otherwise) constantly tweaking and changing the system from the outside. So all that we can do with current tools is program AI systems by hand; the level of self-programming possible within present methodologies is limited to automatic tweaking of a small number of parameters. To me it is clear that this will never suffice for creating AGIs. And therein lies the challenge: we need systems that can manage their own cognitive growth. That is what is meant by the constructivist view, as discussed by Piaget and others.

The new constructivist AI that I am arguing for extends well beyond the system managing its own growth: equally importantly, it extends to the tools and methodologies that we use. I agree with prior researchers who say that we need to build AIs that can manage their own growth, but in order to do that we *must* bring in a new set of tools. These tools include programming languages that give some hope of being self-inspectable: their operational semantics must be simple enough for automatic inspection and characterization, yet powerful enough to implement sophisticated cognitive functions at the architectural level. Why do we need self-inspection? Well, a growing system needs to be able to assess its own status now and then, and to do so it must inspect its own architecture. And an AGI architecture will – for the foreseeable future – be implemented as software. So the platform must support self-managed manipulation at many levels of detail, from code to architecture. There is some reason also to think this new programming language should be fractal, since then the same self-inspection and self-manipulation principles could be applied at many levels of detail; and that is important because there will likely be several – at least 5 or 6 – levels of architectural detail that matter in the evaluation and modification of the system during its growth. The platform must also support operations across a distributed network of processing nodes, to get the benefits of true parallelism, since achieving the needed speed serially is, practically speaking, a remote possibility. As can already be seen from this list of features, present methodologies are very, very far from addressing these needs. My team and I, and in particular Eric Nivel, have already been working for some years on these principles, and the results are very promising. If we are right, we should be demonstrating a significant step towards AGIs in 2013.
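
To illustrate why self-inspectable semantics matter, here is a toy sketch of my own (again, not Replicode – the rule format and the synthesis-by-analogy step are invented for the example): each rule is plain data that the running system can enumerate, score, and extend, which would be impossible if the rules were opaque compiled functions.

```python
# Toy illustration of "code as inspectable data" (hypothetical, not Replicode):
# because each rule is plain data, the system can read, score, and rewrite
# its own "program" at runtime.
rules = [
    {"if": "hungry", "then": "eat",   "uses": 0},
    {"if": "tired",  "then": "sleep", "uses": 0},
]

def act(state):
    for r in rules:
        if r["if"] == state:
            r["uses"] += 1
            return r["then"]
    # Self-inspection found no rule for this state: synthesize one by
    # analogy with the existing rule set (trivially, copy the most-used rule).
    template = max(rules, key=lambda r: r["uses"])
    new_rule = {"if": state, "then": template["then"], "uses": 0}
    rules.append(new_rule)  # the system has just reprogrammed itself
    return new_rule["then"]

print(act("hungry"))  # eat
print(act("bored"))   # a freshly synthesized rule fires
print(rules)          # the "architecture" can be read back as ordinary data
```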

Ben: Singularity Institute futurist Eliezer Yudkowsky believes the biggest, scariest risk to humanity is precisely self-reprogramming AGI systems that rewrite their own goal systems, so that their newly re-written goals don’t include anything related to being nice to humans.   But you’re advocating precisely self-reprogramming AGI.  Wherein lies the crux of the difference between your perspectives, would you reckon?

Kristinn: Another misplaced worry. Of course, self-programming is not dangerous in itself. Like with most other things in the world, the danger lies in particular combinations of various – often supernumerary – things. Self-programming has in fact been possible for quite a number of decades, and many students have implemented self-programming systems. For self-programming to become even remotely dangerous, quite a large number of things have to come together. To put this into the right perspective – and at the same time make a potentially very long story quite a bit shorter – consider the virus-host system known as a human with the common cold, also a self-programming system: the virus relies on effective and stable physical principles to maintain its status in the world, and has had a very long time to establish itself as such in the world of animals. Because of the obscurity of those principles, and their stability, it is difficult for human intelligence to do anything about it, at least in the amounts of mind-months we are willing to spend on that issue at present. Yet the virus-host system in general has produced very few cases resulting in guaranteed death of the host. An AGI rewriting its goal system would be based on much less effective principles, and certainly vastly less stable ones. While possibly not zero, the likelihood of this becoming a real threat to humanity is oh-so-much lower than any of the abuses I have already mentioned that it is not worth spending much time on it, if any at all. By that I am not saying that science fiction writers should not write stories that explore such possibilities. But I would prefer we leave it at that, and focus first on the immediate and most obvious – and much more likely – threats of AGI, especially since it is not even clear that we can solve those. And actually, in another field, primarily biology, a much greater threat of self-programming exists: the meddling with the genetic code of what keeps us alive – food and the ecosystem at large. We can worry about all sorts of things – and sometimes we do. But even today we are not doing enough to understand the ramifications of various manipulations of the food chain, the Earth’s ecosphere, the oceans, the rain forests – how about doing first things first?

