It’s been an interesting year so far. And today, as I write this, it’s the day Watson beat the pants off some very smart humans at Jeopardy!, and I just became obsolete.
But I’ll get to Watson in a moment. First, think about this: just a few days ago, scientists created a working nanoprocessor. They also solved the last major problem with constructing graphene processors: a low “band gap”. If you aren’t familiar with how a transistor works, visualize a stream with a dam across it, and a gate in the middle of the dam. When the gate is closed, it stops the flow of water, and when it’s open, it passes water through. That “dam” also has a “height”: if the height is too low, water simply flows over the top, even when the gate is closed. That “height” is the band gap.
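To make the dam analogy concrete, here is a toy sketch of why a low band gap ruins a transistor’s “off” state. This is my own illustration, not real device physics, and every number in it is arbitrary:

```python
def transistor_current(gate_open, band_gap_ev, supply_ev=0.5):
    """Toy model of the dam analogy: current flows when the gate is open,
    but if the band gap (the "dam height") is below the supply energy,
    current leaks over the top even with the gate closed."""
    if gate_open:
        return 1.0   # full "on" current
    if supply_ev > band_gap_ev:
        return 0.8   # leakage: the dam is too low to block the flow
    return 0.0       # clean "off" state

# A near-zero band gap, like plain graphene's: the switch can't turn off
print(transistor_current(gate_open=False, band_gap_ev=0.0))   # 0.8 (leaks)
# With an engineered band gap, the closed gate actually blocks current
print(transistor_current(gate_open=False, band_gap_ev=1.1))   # 0.0
```

The point of the sketch: a transistor with no real “off” state is useless as a switch, which is exactly why graphene’s low band gap was the last barrier.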
For all its advantages, graphene had a very low band gap, which meant that it didn’t make a good “dam”. However, if you’ve read my article on graphene, you’ll recall I talked about how the shape of graphene affects its electrical properties. Well, it turns out that a squared U-shaped bend is all it takes to raise the band gap high enough to create graphene transistors every bit as functional as silicon ones. Additionally, a new material called molybdenite is looking to be just as useful as graphene in nanoelectronics. The irony is that we discovered molybdenite as a possible answer to graphene’s low band gap at the very same time we solved the low band gap problem itself.
So how does that make me obsolete? Well, the answer involves several factors. First, it means that most of the barriers to manufacturing THz-speed computers have been overcome. One way or another, we are about to make computers a thousand times more powerful than any that have ever existed before. Think about that while you are looking at the monitor you are reading this on. Imagine that the computer you are using is just one node of a thousand, and that behind the screen you are looking at is a computer with all the power of the new Chinese supercomputer. Have you done that? Good. Now look at your cell phone, and imagine all that computing power packed into something exactly the same size. Then, once you’ve managed to wrap your mind around that, watch the reruns of IBM’s Watson playing Jeopardy.
If you failed to make the connection there, don’t be surprised. Watson is an entirely new type of program, a class of programs whose ramifications many people probably do not yet comprehend. In fact, I had a friend completely dismiss Watson’s importance by comparing it to Deep Blue, a highly specialized computer doing a highly specialized task. What she failed to understand is that Watson is a demonstration of a semantic UI, or what John Smart calls a “Conversational User Interface.” In other words, unlike any program before it, Watson understands English. Watson isn’t comparing individual letters like most search engines. It comprehends what words mean in a way far closer to the way humans do, and it does so at the same speed a human does.
In my opinion, Watson has been misnamed. He’s much more like Sherlock than the good doctor. Watson represents the first primitive version of what I would call an “Artificial Expert.” Unlike A.I., which is about recreating “Intelligence”, A.E. is about recreating “Expertise”, which is a much more narrowly defined concept. It’s also significantly different from an “expert system”: while the goals are the same, the methods are more advanced, and the results are much more impressive.
You see, Watson isn’t using solely “hand coded” rules like an expert system, an approach that tends to require prohibitively large “rule sets”. It has actually “read” hundreds to thousands of general knowledge books and built a database in which it comprehends some of the context of the data, not just the letter combinations. It uses a combination of statistical analysis of text and speech and an English-language “Expert System” to make judgments based on semantic structures. While Google’s search engine can offer you thousands of links in which the specific letters you typed can be found, Watson “understands” the question and can provide answers. Like Sherlock Holmes, given the correct database, a fully mature “AE” would be able to answer nearly any question, and explain things to the “Watson” asking it. This approach has some very definite limits: answering a question outside the scope of the database would be entirely impossible, and unlike the famous detective, such a program would not be able to extrapolate from its existing knowledge to “deduce” new answers. But within these limits, AEs will still enable some remarkable changes in our world.
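The difference between matching letters and “understanding” a question can be shown with a deliberately tiny sketch. The documents and the hand-made synonym table below are my own inventions; a real system like Watson uses statistical models over enormous corpora, not a lookup dictionary:

```python
# Two toy "documents" and one hand-made synonym entry (illustration only).
documents = {
    "doc1": "The capital of France is Paris.",
    "doc2": "Berlin is the seat of government of Germany.",
}
synonyms = {"capital": {"capital", "seat of government"}}

def literal_search(query, docs):
    """Naive letter-matching: return docs containing the exact query text."""
    return [name for name, text in docs.items() if query.lower() in text.lower()]

def semantic_answer(query, docs):
    """Score each doc by how many query words it matches, allowing synonyms,
    and return the best-scoring document."""
    def score(text):
        text_l = text.lower()
        return sum(
            any(phrase in text_l for phrase in synonyms.get(word, {word}))
            for word in query.lower().split()
        )
    return max(docs, key=lambda name: score(docs[name]))

print(literal_search("capital Germany", documents))   # [] -- no letter match
print(semantic_answer("capital Germany", documents))  # doc2 -- meaning matched
```

The literal search finds nothing, because no document contains the exact string; the “semantic” lookup knows that “seat of government” means “capital” and picks the right document anyway. Watson’s actual machinery is vastly more sophisticated, but this is the shape of the difference.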
And no, Watson is not even close to being a fully functional Artificial Expert yet; it’s more like the equivalent of DOS or Windows 3.11, and it has a lot of “maturing” to do before it’s “market ready”. But I think it is a major step towards AEs, especially in combination with the computer advances I described above.
Now, I’m sure you’ve read about how the internet is a vast information network connecting all of humanity. And I’m sure you are just as aware that there is so much information on it that finding any specific piece of information can be difficult. Put any query you wish into Google, and odds are good you will find at least a hundred links, each of which may only partially apply to your question. From there, you have to sort out for yourself what information is relevant, and what is not.
A fully mature Artificial Expert can change that. Right now, Watson runs on a fairly sophisticated supercomputer of considerable size. But given all those advances in computing power I discussed above, how long do you really think it will be before a Watson-comparable AE can be run by any computer? And long before we see it on our personal “computerphone”, we will see it used by Google and Bing and all the other search engines to create a “Semantic Web.” Over the next decade, Watson’s “descendants” are going to be evolving and improving far beyond what we have today.
Now, you’re probably still wondering why I can say that any of this is making me “obsolete,” but I haven’t given you all the pieces of the puzzle yet, so let me have you think about what the ramifications of computers able to understand semantics actually are. Semantics is all about interpreting “meaning”, which is actually a very fluid concept. Words can sometimes say one thing and mean another. We humans excel at comprehending “puns” and “sarcasm” and various other forms of “wordplay.” In fact, Jeopardy is well known for these very kinds of wordplay. By having Watson play Jeopardy, IBM is demonstrating that Watson is not just able to comprehend basic English; it’s capable of interpreting many of the nuances as well. You won’t have to talk to it in idiot-simple language; you can communicate with it almost like you can with a fellow human. As time goes by, Watson is only going to improve in this ability and evolve into an AE. A future AE running on a THz processor network, able to query the web, would be far more revolutionary than, say, Google.
All of which suggests that future versions of such AEs will be able to compete with humans in many jobs that cannot currently be automated. It also means they will be used to make many jobs that are currently only partially automated fully so.
I used to be a Customer Service Rep for SONY. During my employment, SONY rolled out a new system it called SONY Max. Max was a voice-based “operator” that could understand customer complaints in a very limited way and, through a menu-based system, enable customers to get answers to common “problems”, mostly those that we service reps called “ID-10T” or “PEBKAC” (Problem Exists Between Keyboard And Chair) errors. Only those calls which actually required a human, capable of comprehending the nuances of language, were passed through this system to the CSRs. And even most of those were of the “This doohickey that makes the arrow move about the screen isn’t working” variety, or the “I just clicked yes on a whole bunch of neat stuff I found on the web, and now I can’t get this purple ape off my screen” kind of “problems”. Most of us could provide solutions to customers without even needing to bother searching the database, but we still had to document step by step what we did to fix the problems. Every day, every call, we built a database of solutions for nearly every possible problem a customer could have, from the most ridiculous to the actually serious. I myself contributed hundreds of problem solutions to that database, mostly fixes and workarounds for issues caused by various software programs, along with the locations where I found those fixes.
A mature enough AE could probably do my job, with very, very few cases that it would have to pass along to an actual human tech. With one very crucial difference. SONY had several thousand CSRs scattered around a dozen or so call centers. With an evolved AE and sufficient THz-speed computing power, SONY could probably do the exact same job with fewer than a dozen top-notch techs. An AE could read through that entire massive solutions database and provide an answer to nearly any question a customer could ask. And it could do so 24 hours a day, 7 days a week, regardless of call volume, without ever needing a break, or food, or having to use insurance, or getting into a fight with another CSR, or having a bad day, or showing up to work drunk, or having any other human foible. After all, we spent the majority of our day making that database so comprehensive that even a child could use it, all so that we could make our own jobs easier. We’ve already done 90% of the work needed to automate the job. An AE would simply be the final 10%.
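To make this less abstract, here is a toy sketch of how an AE front end to a CSR solutions database might behave. The entries and the match threshold are invented for illustration (not SONY’s actual database), and a real AE would use genuine language understanding rather than crude word overlap:

```python
# Invented example entries: symptom phrase -> canned solution.
solutions_db = {
    "mouse pointer not moving": "Check that the mouse is plugged in, then reboot.",
    "popup ape on screen": "Uninstall the unwanted program from Add/Remove Programs.",
    "no sound from speakers": "Unmute the volume and reinstall the audio driver.",
}

def answer(question, db, threshold=2):
    """Return the best-matching canned solution, or escalate to a human
    when no entry shares enough words with the customer's question."""
    words = set(question.lower().split())
    best_key = max(db, key=lambda k: len(words & set(k.split())))
    if len(words & set(best_key.split())) < threshold:
        return "Escalating to a human technician."
    return db[best_key]

print(answer("my mouse pointer stopped moving", solutions_db))
print(answer("help with my taxes", solutions_db))  # nothing matches: escalate
```

The escalation branch is the “very, very few cases” from the paragraph above: everything the database covers gets answered automatically, and only the genuinely novel calls ever reach a person.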
And for those of you who I hear protesting that humans would give “better customer service,” let me clue you in on a fact of life. The “Customer Service” division was controlled by “Marketing”, and was primarily concerned with getting the customers’ registration info, like names, addresses and phone numbers. No one was concerned with “fixing” a customer’s problem. Remember that “Purple Ape”? It was called the Bonzi Buddy, and it was actually fairly easy to uninstall. SONY’s “solution” to it? Reformat the computer to its original configuration, wiping all the customer’s data. Had a driver conflict? Wipe. Had a malfunctioning program? Wipe. Had any software-related problem at all? Wipe. We CSRs would get lower “grades” for actually taking the time to walk a customer through an actual fix. SONY had just two “accepted” solutions to any problem: wipe and reformat, or send the unit in to be serviced, where the first step of repair would be to wipe and reformat anyway. Every other solution in the database was “optional” if we were “busy” with high call volume.
Are you still so certain that corporations actually give a damn about customer service? Personal experience has shown me that they only pay it lip service. Otherwise, you wouldn’t be calling support these days to get someone who only barely speaks English. Believe me, a fully functional “Artificial Expert” is the answer to the corporations’ problems with those pesky customers, who have the unmitigated gall to actually want the company to provide support for their products.
Now, understand, in order to have a CSR job at SONY, you were required either to possess an A+ certification or to prove equivalent knowledge. That means it was a “Knowledge Based Job” for a “skilled worker”, not an “unskilled labor position.” It’s not a “blue collar job”, like those manufacturing jobs I discussed in my last article that were threatened by 3D printers. It’s a “White Collar Job” that is supposedly “safe” from automation. And if a complex enough AE can do a CSR job for a large, extremely high-profile company like SONY, what CSR job anywhere is safe? They all work exactly the same way SONY does, after all, with those massive databases of “solutions” created by decades of CSRs. This makes them ideal candidates for exactly the kind of “data mining” that Watson does.
And if future AEs can do a “knowledge based job” like “IT technician”, what makes you think any “knowledge based job” would be beyond them? Like, for example, law? We have hundreds of years’ worth of “databases” of “solutions” on file. With massive increases in computing power, and equally massive increases in memory storage, how long do you really think it will be before law firms are using those improved Watsons to build cases for them? And how long after that do you think they will be firing all those legal researchers, paralegals, and junior attorneys? And how long after that do you imagine a few disgruntled ex-employees will start using their own AEs to file lawsuits against those companies for various reasons, so those “obsolete professionals” can get back into the “financial status tier” they were so abruptly tossed out of?
How about medical technicians? How many of them do you think can compete with a mature AE? When, after digesting millions of medical records, it could read an X-ray or a CAT scan better than a human? We are already developing gesture-based UIs for surgeons, so that a robot can replace scrub nurses at handing them instruments, and using Kinect to provide force-feedback information for remote robotic surgery. So why would anyone assume that other computer-based advances, like AE, would leave the medical industry alone?
And how many other knowledge based jobs could we find for an AE to do? We like to tell ourselves that humans are too versatile to ever be replaced by robots or software in the labor market, but for how much longer will this really be true? Computers actually able to comprehend language certainly are not “strong AI” as commonly defined, and are nowhere near capable of, as Ben Goertzel put it in his article on Watson’s inner workings, “creative improvisation in the face of the fundamentally unknown”. But I know from experience how little “improvisation” is actually needed, or even desired, to perform many “knowledge based jobs.” All it takes is being able to provide a correct answer to a properly asked question. All the years of training, study and memorization it takes to enable a human to perform this task are meaningless to a computer, which needs only to access a database: a database that we humans have been building for decades already. No, the modern-day “Watson” isn’t ready for this, but how much are you willing to bet that the technology will not improve?
Feel free to find whatever excuses you want and give yourself some hope. I’m simply looking at this logically. Once AEs have evolved their way out of the lab and are mature enough to be made into real-world applications, your education will become meaningless. All those years you spent going to college to learn your specialized knowledge can be condensed by a future AE into however long it takes to connect it to a database with that same specialized information and digest it the same way today’s “Baby AE” Watson digested all those books to play Jeopardy. Somehow, I have this feeling that it will take considerably less time for an AE to become an “expert” in any given field than it takes a human. No, it won’t be today, and it won’t be tomorrow, but I just have this sinking suspicion that it’s not going to be that many years down the road either.
Will every job be able to be reduced down to a “question asked/answer given” format? Of course not, but how confident are you that your job is one of the “safe” ones? Especially given today’s business environment? Especially when you consider the advantages that a fully functional, mature AE will give to nearly any business?
There is, however, a flip side to the development of fully mature Natural Language interfaces and Artificial Experts, which empowers the common person. Because, consider this- a fully mature Watson descendant, capable of understanding Natural Language at even a grade school level, will make an “Instant Expert” out of anyone.
If you think about that for a few seconds, it should be obvious that, by connecting a mature “Artificial Expert” to a database and creating a program that can answer nearly any question you could ask about a particular subject… you’ve basically enabled anyone, not just corporations, to gain the benefit of all that knowledge. Not only will mature AEs “automate” millions of jobs that require years of specialized learning, they will enable good ole Joe Schmoe to have access to that same knowledge. Remember how, in my last article, I talked about how anyone with a 3D printer and some design knowledge could compete on an equal footing with a giant corporation? A mature AE could enable anyone to have every bit as much “design knowledge” as the “pros”. In fact, it would be like having a “pro” on hand 24 hours a day, 7 days a week, able to answer any and every question you might have. Yes, the corporations will have the advantage at first, but as time passes, and those AE programs improve and increase in functionality, that advantage will rapidly vanish.
How long that will take I can’t tell you, but I don’t expect it to be very long, because once AEs begin replacing all those professionals in the job market, what do you think those experts are going to be doing? I know what I would be doing: making improvements to the open source versions of AEs, to put the company that sacked me out of business. Revenge is, after all, such a human emotion.
But that mature AE would have a lot more uses than just technical ones. Like I mentioned earlier, if a mature AE could replace legal researchers, it could act as a “Software Attorney”, which could not only advise you on the proper way to file your case, but provide the necessary “precedents” to ensure your case wins. Given that possibility, I’m sure you can also see how many other “professional services” could be provided by a similar “Artificial Expert.” In fact, a fully mature AE combined with limited A.I.s will make such “Personal Digital Advisors” as John Smart’s “Cybertwin” possible. By creating a “Software Secretary” able to use any AE as a plugin, you could make a “Virtual You” that could live on the web and hunt down information, songs, video, and any other kind of data it knows would be of interest to you.
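The “Software Secretary with AE plugins” idea can be sketched in a few lines. Everything here is hypothetical: the class, the topic keywords, and the stub functions standing in for actual Artificial Experts are all my own invention:

```python
# Hypothetical sketch: a thin dispatcher that treats each Artificial
# Expert as a plugin for one domain. Real AEs would sit behind the stubs.
class SoftwareSecretary:
    def __init__(self):
        self.experts = {}  # topic keyword -> expert callable

    def register(self, topic, expert):
        """Plug in an AE that handles questions mentioning `topic`."""
        self.experts[topic] = expert

    def ask(self, question):
        """Route the question to the first expert whose topic it mentions."""
        for topic, expert in self.experts.items():
            if topic in question.lower():
                return expert(question)
        return "No expert plugin registered for that question."

secretary = SoftwareSecretary()
secretary.register("law", lambda q: "Legal AE: here are the relevant precedents.")
secretary.register("music", lambda q: "Music AE: here are new songs you might like.")
print(secretary.ask("Can you find music for my commute?"))
```

The design point is the plugin boundary: the secretary itself knows nothing about law or music; it only routes. Swap in a better AE behind any topic and the “Virtual You” improves without changing anything else.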
And then think about how AE technology could be applied in the field of education. Imagine having your professor on hand, regardless of subject. Imagine a child having access to AE tutors from early childhood. Imagine having a forum discussion with your “Cybertwin” giving you data to support your arguments, or proving that data fails to support them before you stick your foot in your mouth. Imagine how useful AEs are going to be for self-education. Imagine a child in Africa having access to their own AE. Imagine a soldier in the field getting strategy advice from his own personal Sun Tzu. Imagine having your own personal Sigmund Freud, or Stephen Hawking. Imagine that any time you have a question about any subject imaginable, from where to get good pizza to how virtual particles interact between bosons, you have your AE there, willing to whisper the answer in your ear.
Then imagine a world in which ignorance and poor education no longer exist. In which everyone everywhere has the potential to be your intellectual equal, where the geek and the jock are on equal footing taking that physics test. A world where everyone is Einstein. Humanity just took its first steps towards real cognitive enhancement, and eventually, Watson’s future children will have the potential to make us all “transhuman”.
So yeah, as a “knowledge specialist”, I just became obsolete today. And that’s a good thing.