Don’t Fear the AI?

What is this all about?


Everyone seems to be talking about their fear of AI these days, including Stephen Hawking, Elon Musk and many others. Let’s stipulate this: building a malevolent superintelligence is probably not a good thing. Ok, so booting up Skynet is a bad idea. Check.

On the other hand, Ray Kurzweil is telling a different story: real AI (let’s call it Artificial General Intelligence, or AGI) will turn up in less than 15 years, in 2029 to be precise. Better still, rather than Skynet, AGIs will be the best technology ever (for some value of “ever”). As an aside, AGI is worth a look if you are interested in one possible future.

AGI by 2029?

Kurzweil is basically predicting that we will see AGI by 2029. While I’m also optimistic (we’re clever folks), I’m less convinced that we’ll get there that quickly. Let’s be precise here: people are really worried about AGI, not “narrow” AI. For example, if you look at Baidu’s recent state-of-the-art speech recognition system (see Deep Speech: Scaling up end-to-end speech recognition), you will notice that, in contrast to AGI, the system is very much engineered and optimized for the “narrow” task at hand (in this case speech recognition). BTW, there are many hard problems still to be solved in machine learning, many of which have to do with the sheer computational complexity you run into when trying to build systems that generalize well while contending with high-dimensional input spaces (consider, for example, solving an optimization problem with a billion parameters on 1,000 machines/16,000 cores). Quite simply, the progress made by these deep learning systems and their performance is impressive, and the same can be said of the machine learning community in general. So while many hard problems remain, both applied and theoretical progress in machine learning (and deep learning in particular) has been spectacular. BTW, there is also plenty of code around if you want to try any of this yourself; see for example Caffe, where you can find state-of-the-art pre-trained neural networks (there are many others). A quick sketch of what that looks like follows below.
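To make that last point concrete, here is a minimal sketch of “trying this yourself” with Caffe’s Python bindings: load a pre-trained network and classify a single image. The file names (deploy.prototxt, the .caffemodel weights, the mean file) are placeholders taken from Caffe’s standard ImageNet examples, and the preprocessing settings are assumptions based on those examples, so treat this as illustrative rather than a recipe.

```python
# Minimal sketch: classify one image with a pre-trained Caffe model (pycaffe).
# File names below (deploy.prototxt, bvlc_reference_caffenet.caffemodel,
# ilsvrc_2012_mean.npy, cat.jpg) are placeholders from Caffe's example models.
import numpy as np
import caffe

caffe.set_mode_cpu()  # use caffe.set_mode_gpu() if you have a GPU build

# Network definition plus pre-trained weights (downloaded separately from the model zoo)
net = caffe.Net('deploy.prototxt', 'bvlc_reference_caffenet.caffemodel', caffe.TEST)

# Preprocessing: channel order, mean subtraction, scaling, RGB -> BGR
transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
transformer.set_transpose('data', (2, 0, 1))   # HxWxC -> CxHxW
transformer.set_mean('data', np.load('ilsvrc_2012_mean.npy').mean(1).mean(1))
transformer.set_raw_scale('data', 255)         # [0, 1] -> [0, 255]
transformer.set_channel_swap('data', (2, 1, 0))

# Load an image, push it through the net, and read off the top class
image = caffe.io.load_image('cat.jpg')
net.blobs['data'].data[...] = transformer.preprocess('data', image)
output = net.forward()
print('predicted class index:', output['prob'][0].argmax())
```

The interesting thing, given the discussion above, is how much of this is plumbing around a model engineered for one narrow task: swap in a different image and it still only knows how to do ImageNet-style classification.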

Of course, the fact that these systems are optimized for the task at hand doesn’t mean that we haven’t learned a great deal from that work or that the work/progress isn’t impressive; on the contrary. The capabilities of state-of-the-art machine learning are nothing short of spectacular. However, if you check out the machine learning literature, you will quickly realize just how much task-specific engineering goes into a deep learning system like the Baidu Deep Speech system (or robots that learn from watching YouTube). They are far from being general purpose (or, perhaps more importantly, self-aware). So is this progress in “narrow” AI a necessary precursor to AGI? Not surprisingly, you can find an opinion on every side of this question.

On the other hand, AGI itself, while making great strides, is still in its infancy (notwithstanding decades of work across a wide variety of disciplines; it is an ambitious undertaking after all). An excellent example is the OpenCogPrime architecture from Ben Goertzel and team. While all of this stuff is incredibly cool and progress is coming quickly, there would seem to be quite a way to go before we see real AGI.

Now, it probably goes without saying that some set of technological breakthroughs, or hey, maybe even synthetic neurobiology breakthroughs, could lead to the “boot up” of an AGI much sooner than anticipated (BTW, take a look at what Ed Boyden is doing in the synthetic neurobiology space; pretty amazing stuff). In any event, if such an AI were coupled with some ability to recursively self-improve, for example the advent of an AI that can rewrite its own code, we could stumble into the Skynet nightmare. There is also no reason to believe that such a malevolent superintelligence would look anything like human intelligence. Notwithstanding the significant challenges that lie between here and AGI, people like Kurzweil, Ben Goertzel, Randal Koene and many others certainly believe the development of AGI is positive, inevitable, and likely in the “one or two decades” time frame.

So how do we place odds on both the potential development of AGIs and the danger they pose?

Those are, of course, the questions at hand. As I mentioned, I’m optimistic about the progress we’ve seen in both narrow AI and in AGI, so 2029 seems, well, possible.

Tech Is Always A Double-Edged Sword

Regarding the danger question, here I agree with Kurzweil: now is the time to understand the issues and put safeguards in place; admittedly not too reassuring if what you are concerned about is malevolent superintelligences. However, like every other technology, there will be both good and bad aspects. Our job is to understand the threat, if any, while maximizing the benefit and minimizing the damage (assuming that can be done). There is no shortage of literature in this area either (start here or check out Nick Bostrom et al.’s The Ethics of Artificial Intelligence if you are interested). No small task we’ve embarked on here, folks.


2 thoughts on “Don’t Fear the AI?”

  1. Phil

    The “malevolence” of AI will probably be more mundane than Terminator.

    Gradually, humans will lose the skills to do tasks that AIs are taught, since the AIs will learn to do such tasks faster and better. The AIs will be left alone doing a superior job until the AI enters an unexpected learning cycle and starts doing undesirable things. If the system is left to its own devices for too long or is allowed to modify its own goals, humans may be helpless to retrain it. Pulling the plug will not be an option, since the AI will be conducting critical activities that we can’t do without, and the learning drift will not be bad enough to justify loss of the service. The humans will learn to live with the suboptimal (to them) choices of the AI. The cycle will repeat until the AI is doing work unrecognizable to the original designers, but still (mostly) accomplishing the original tasks well enough to ensure its self-preservation.

    To some extent, we are already seeing an analogous problem with autopilot systems. The autopilots are starting to have much better skills than human pilots, and the systems only hand over control to humans when something is terribly wrong. Pilots who have become unaccustomed to flying sometimes take over with tragic results, owing to atrophied skills and the reduced attention afforded by the “AI.” (Yes, I know it is not real AI, but the analogy applies.) A well-designed AI system that minimizes human operating expenses will encourage operator absence to save costs, unless explicit, expensive steps are taken otherwise.

    I expect similar problems with self-driving cars. When the situation is hairy enough to punt to a human, that human will be ill-prepared to jump in.

    There is definitely good research, but I expect the research to be sidelined by economically motivated decisions.

