Big Budget AI is Back
It was in 1956 that the Dartmouth Summer Research Project on Artificial Intelligence convened, stating as its objective: “We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire. The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” It was a meeting of just ten leading researchers of the time, but it launched the entire field of enquiry now known as artificial intelligence.
By the 1980s, the initial ambitions had grown into an industry of sorts, with numerous companies developing enabling technologies, artificial intelligence toolkits, and specialized hardware, as well as a burgeoning, largely consultant-based business of “knowledge engineering,” which entailed extracting knowledge from human experts and encoding it in a machine-readable format using specialized languages. Companies built specialized hardware around the programming language LISP to support these efforts. But demand for these devices collapsed by 1986-87, leading to what was termed the “AI Winter,” in which research budgets were slashed and expectations for the technology were drastically lowered across the board. The rise and fall of AI became the archetypal case of the Silicon Valley “hype cycle.”
Fast forward to today, and big budget and high ambition artificial intelligence research is back.
Of course, IBM’s impressive Watson software, with its appearance on Jeopardy!, re-ignited people’s imagination about what AI software could do. Watson is now being further developed for medical applications, holding out the promise of a radical alteration in how people get medical advice. Ray Kurzweil became Director of Engineering at Google shortly after publishing his most recent book on intelligence and AI.
Many people today already have hands-on experience with AI software such as Google’s search engine and Apple’s Siri. The idea is no longer strange or frightening to people who have used these existing early systems. But these consumers are savvy, and they also know that the performance of all of these systems could still use a lot of improvement.
Best-known AIs: Watson and Siri, not HAL
This week things heated up further in the already overheated AI space.
First came an announcement by Paul Allen that he was hiring Dr. Oren Etzioni, a professor in the University of Washington Computer Science Department, to become Executive Director of a new artificial intelligence institute (AI2) to be based in Seattle. Etzioni went to Harvard and received his doctorate from Carnegie Mellon University; his MetaCrawler was a popular early internet search engine, and he founded several other start-up companies. Allen also indicated that he hoped AI2 would follow in the footsteps of his 200-person, $400 million Allen Institute for Brain Science. Etzioni stated, “Our goal is to revolutionize the field, and with Paul’s vision and support, the sky’s the limit.” Etzioni’s initial work may extend Project Halo, which is developing software that can acquire knowledge from science texts, crowdsourcing, and manual input, with the goal of producing an AI that can successfully pass high-school-level biology tests.
In further developments, the National Science Foundation (NSF) announced that one of three new research centers funded through its Science and Technology Centers Integrative Partnerships program will be the Center for Brains, Minds and Machines (CBMM), based at MIT and headed by Tomaso Poggio, the Eugene McDermott Professor in MIT’s Department of Brain and Cognitive Sciences. The five-year award will enable the center’s researchers to draw on the expertise of neuroscientists, engineers, mathematicians, and computational scientists through a global network of academic, industrial, and technological partnerships. Like all the centers funded through the program, CBMM will initially receive $25 million over five years and will partner with 19 other institutions and organizations.
CBMM grew out of the MIT Intelligence Initiative, an interdisciplinary program aimed at understanding how intelligence arises in the human brain and how it could be replicated in machines. The NSF release states, “Recent advances in areas ranging from artificial intelligence to neurotechnology present new opportunities for an integrated effort to produce major breakthroughs in fundamental knowledge.” Work at the center will have the specific goal to build more human-like machines and work towards establishing a theory of intelligence. While perhaps not a household name, Poggio’s research was extremely influential including on some of my early work in applications of neural networks.
In another AI-related development, Intel announced it was acquiring the natural language processing startup Indisys for $26 million and change. Indisys makes software for processing and understanding language, and the company’s website features examples of expressive virtual assistants capable of answering questions, similar to Apple’s Siri. The acquisition follows other moves by Intel into the general area of perceptual computing.
BMW’s just-launched “i Genius” is a “software system employing artificial intelligence technology to interact with potential customers of the BMW i3 or i8.” The idea is essentially to use a Siri-style, text-based virtual assistant to answer questions about the cars. The i Genius service (currently available only in the UK) works over text message: users simply text their question to 84737, and the intelligent software system responds the same way.
Dmitry Aksenov founded the technology company London Brand Management, an AI service for big brands that want to outsource customer or staff interactions to computers. Mr. Aksenov predicts that the software will soon outperform human customer service agents: “Within five years we will have a system that truly knows more than a human could ever know and is more efficient at delivering information,” he said. “It will replace many of the boring jobs that are currently done by humans. Unfortunately, this may take some jobs from the economy by replacing human beings with a machine. But it is the future.”
Even Microsoft is getting into the game, according to ZDNet’s Mary Jo Foley. Foley shares CEO Steve Ballmer’s excitement over an upcoming user interface that would be “deeply personalized, based on the advanced, almost magical, intelligence in our cloud that learns more and more over time about people and the world.” The project is apparently codenamed “Cortana”:
Cortana takes its codename from Cortana, an artificially intelligent character in Microsoft’s Halo series who can learn and adapt.
Cortana, Microsoft’s assistant technology, likewise will be able to learn and adapt, relying on machine-learning technology and the “Satori” knowledge repository powering Bing.
Cortana will be more than just an app that lets users interact with their phones more naturally using voice commands. Cortana is core to the makeover of the entire “shell” — the core services and experience — of the future versions of Windows Phone, Windows, and the Xbox One operating systems, from what I’ve heard from my contacts.
Reports about Cortana are sparse and seem somewhat exaggerated so far. However, leaked images of what is supposedly an “early” build of the next-version Windows Phone OS clearly show something called “zCortanaApp” among a list of infrequently used items.

As if to fuel the fire, Google’s recent decision to release its deep-learning-based text-analysis software word2vec to the open-source community promises to transform the way applications can employ and manipulate text. Beyond this, machine learning applications and various narrow AI applications are making inroads into industries and businesses around the world with little fanfare.
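What makes word2vec striking is that the word vectors it learns support simple arithmetic on meanings, as in the oft-cited king − man + woman ≈ queen analogy. A minimal sketch of how such an analogy query works, in plain Python with tiny hand-made toy vectors (real word2vec vectors have hundreds of dimensions and are learned from large text corpora):

```python
import math

# Toy 3-dimensional "word vectors" (hand-made for illustration,
# not actual word2vec output).
vectors = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.1, 0.8],
    "man":   [0.1, 0.9, 0.1],
    "woman": [0.1, 0.2, 0.8],
}

def cosine(a, b):
    """Cosine similarity: 1.0 for identical directions, 0 for orthogonal."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Analogy query: king - man + woman should land nearest to queen.
target = [k - m + w for k, m, w in zip(vectors["king"],
                                       vectors["man"],
                                       vectors["woman"])]
nearest = max((word for word in vectors if word != "king"),
              key=lambda word: cosine(target, vectors[word]))
print(nearest)  # -> queen
```

The same nearest-neighbor query against real word2vec vectors is what produces the analogy results Google reported.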
You might have heard about the launch of Japan’s AI-based Epsilon rocket, but numerous small AI projects are also finding success in vertical and niche markets, importantly including games and defense.
Taken together, these projects illustrate a global initiative to accelerate the development of general artificial intelligence, machine learning, and applications of intelligent systems. Even without a general theory of intelligence, these products and services are going to be increasingly pervasive and will advance quickly. We can also expect some great new minds to enter the field as a result of NSF investment and the development of centers of excellence such as the CBMM.
Small-budget AI is still going strong, and with the e-book http://www.amazon.com/Singularity/dp/B00F8F1FG0, the memes of small-budget AI algorithms are embedded within a semi-science-fiction story.
Nice article, but it makes me sad that the only pictures of females are eye candy. I know this is male dominated field, but this is confirming our own stereotypes.
Hi thanks for the feedback.
The article depicts the virtual characters “Maya” and “Cortana” as they are shown on the Indisys website and within the game Halo.
I do agree that choices about avatars have implications beyond the obvious and can also present racial or gender biases of various sorts. There is some literature on this subject, e.g. http://vhil.stanford.edu/pubs/2009/groom-racial-embodiment.pdf
“…based on the advanced, almost magical, intelligence in our cloud…”
“Reports about Cortana are sparse and seem somewhat exaggerated so far.”
Yeah, it is getting to the point where every app and program needs to be sold as AI for it to be viewed as cutting edge.
“We can in addition expect some great new minds to enter the field as a result of NSF investment and development of centers of excellence such as the CBMM.”
As usual, most of the AI offerings are hyped, whereas a minority are cutting edge, and just a few are revolutionary. I personally expect big things from the D-Wave Quantum Computing AI Lab that Google and NASA are opening soon. It is my belief that proprietary AI algorithms are the solution, and if we can just cobble together a working SAI, it will then go back and simplify/amplify its own programming.
In other words, the true AI pioneers are the ones who ask the D-Wave the best questions.
What question would you ask?