Don’t Fear the AI?

What is this all about?

Everyone seems to be talking about their fear of AI these days, including Stephen Hawking, Elon Musk and many others. Let’s stipulate this: building a malevolent super intelligence is probably not a good thing. OK, so booting up Skynet is a bad idea. Check.

On the other hand, Ray Kurzweil is telling a different story: real AI (let’s call it Artificial General Intelligence, or AGI) will turn up in less than 15 years, in 2029 to be precise. Better still, rather than Skynet, AGIs will be the best technology ever (for some value of “ever”). As an aside, AGI is worth looking into if you are interested in a glimpse of one possible future.

AGI by 2029?

Kurzweil is basically predicting that we will see real AIs by 2029. While I’m also optimistic (we’re clever folks), I’m less convinced that we’ll see AGI that soon. Let’s be precise here: people are really worried about AGIs, not “narrow” AI. For example, if you look at Baidu’s recent state-of-the-art speech recognition system (see Deep Speech: Scaling up end-to-end speech recognition), you will notice that, in contrast to AGI, the system is very much engineered and optimized for the “narrow” task at hand (in this case speech recognition). BTW, there are many hard problems still to be solved in machine learning, many of which have to do with the sheer computational complexity you run into when trying to build systems that generalize well while contending with high-dimensional input spaces (consider, for example, solving an optimization problem with a billion parameters across 1,000 machines / 16,000 cores).

Quite simply, the progress made by these deep learning systems is impressive, and the same can be said about the machine learning community in general: while many hard problems remain, both applied and theoretical progress in machine learning (and deep learning in particular) has been spectacular. BTW, there is also plenty of code around if you want to try any of this yourself; see for example Caffe, where you can find state-of-the-art pre-trained neural networks (there are many others).
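If you want to get hands-on, here is roughly what running one of those pre-trained networks through Caffe’s Python interface looks like. To be clear, this is a minimal sketch, not Baidu’s system: the model and file names (deploy.prototxt, bvlc_reference_caffenet.caffemodel, cat.jpg) follow Caffe’s stock CaffeNet image classification example and are assumptions about your local setup.

```python
# A minimal sketch of classifying an image with a pre-trained Caffe network.
# The file names follow Caffe's stock CaffeNet example and are assumptions
# about your local setup -- substitute your own paths.
import caffe

caffe.set_mode_cpu()  # use caffe.set_mode_gpu() if you have a GPU build

# Load the network definition and pre-trained weights in inference mode.
net = caffe.Net('deploy.prototxt',
                'bvlc_reference_caffenet.caffemodel',
                caffe.TEST)

# Caffe's ImageNet models expect 0-255, BGR, channels-first input; the
# Transformer handles the reshape and channel swap (mean subtraction is
# omitted here for brevity).
transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
transformer.set_transpose('data', (2, 0, 1))     # HWC -> CHW
transformer.set_channel_swap('data', (2, 1, 0))  # RGB -> BGR
transformer.set_raw_scale('data', 255)           # [0, 1] -> [0, 255]

image = caffe.io.load_image('cat.jpg')           # any test image you like
net.blobs['data'].data[0] = transformer.preprocess('data', image)

# Run the forward pass; 'prob' is the softmax output blob in CaffeNet.
output = net.forward()
top5 = output['prob'][0].argsort()[::-1][:5]
print('Top-5 ImageNet class indices:', top5)
```

The same load / preprocess / forward pattern applies to most of the pre-trained networks in the Model Zoo, which is part of what makes trying this stuff out so accessible.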

Of course, the fact that these systems are optimized for the task at hand doesn’t mean that we haven’t learned a great deal from that work or that the work/progress isn’t impressive; on the contrary. The capabilities of state-of-the-art machine learning are nothing short of spectacular. However, if you check out the machine learning literature, you will quickly realize just how much task-specific engineering goes into a deep learning system like Baidu’s Deep Speech (or robots that learn from watching YouTube). These systems are far from being general purpose (or, perhaps more importantly, self-aware). So is this progress in “narrow” AI a necessary precursor to AGI? Not surprisingly, you can find an opinion on every side of this question.

On the other hand, AGI itself, while making great strides, is still in its infancy (notwithstanding decades of work across a wide variety of disciplines; it is an ambitious undertaking, after all). An excellent example is the OpenCogPrime architecture from Ben Goertzel and team. While all of this stuff is incredibly cool and progress is coming quickly, there would seem to be quite a way to go before we see real AIs.

Now, it probably goes without saying that some set of technological breakthroughs, or hey, maybe even synthetic neurobiology breakthroughs, could lead to the “boot up” of an AGI much sooner than anticipated (BTW, take a look at what Ed Boyden is doing in the synthetic neurobiology space; pretty amazing stuff). In any event, if such an AI were coupled with some ability to recursively self-improve, for example the advent of an AI that can rewrite its own code, we could stumble into the Skynet nightmare. There is also no reason to believe that such a malevolent super intelligence would look anything like human intelligence. Notwithstanding the significant challenges that lie between here and AGIs, people like Kurzweil, Ben Goertzel, Randal Koene and many others certainly believe the development of AGIs is positive, inevitable, and likely within the “one or two decades” time frame.

So how do we place odds on both the potential development of AGIs and the danger they pose?

Those are, of course, the questions at hand. As I mentioned, I’m optimistic about the progress we’ve seen in both narrow AI and in AGI, so 2029 seems, well, possible.

Tech Is Always A Double-Edged Sword

Regarding the danger question, here I agree with Kurzweil: now is the time to understand the issues and put safeguards in place; admittedly, not too reassuring if what you are concerned about is malevolent super intelligences. However, like every other technology, AI will have both good and bad aspects. Our job is to understand the threat, if any, while maximizing the benefit and minimizing the damage (assuming that can be done). There is no shortage of literature in this area either (start here, or check out Nick Bostrom et al.’s The Ethics of Artificial Intelligence if you are interested). No small task we’ve embarked on here, folks.