Eliezer Yudkowsky of the Singularity Institute is being interviewed. He speaks on the subject of the technological singularity – the point at which artificial intelligence advances far enough to modify and improve itself, potentially accelerating its own progress and causing an explosion of intelligence.
Eliezer’s basic argument is that as soon as a self-improving AI is created, its launch constraints will determine whether it will be up to good or no good. He compares this to Gandhi being offered a pill that would make him want to kill people; naturally, Gandhi (because of the way his mind works at that point) will refuse to take the pill.
Eliezer believes this to be a critically important issue – “if we get this wrong, it doesn’t matter what else we got right” – adding that, precisely because it’s so important, mainstream news largely ignores the topic. “I think that right now the human species is stuck in a sort of awkward phase where we’re smart enough to make tremendous problems for ourselves, and not quite smart enough to solve them.”
“The thing that I’m most worried about,” Eliezer goes on to say, “is not that somebody’s going to maliciously and deliberately build an artificial intelligence that kills people. The thing I’m worried about is that it’s going to take very deep understanding and very precise work to actually reach into mind space and pull out a helpful mind. And I’m worried that someone’s going to underestimate the difficulty of this problem, and proceed on a vague theory, and get some exciting results... and encouraged by this, rush ahead, and... doom us all, to put it bluntly.”
I wonder if we really get the chance to pick the mind space ourselves, or if the larger system of technological evolution will handle that job for us – mind space reaching out to us, so to speak – so that whichever tech company creates the most evolutionarily fit AI succeeds, annulling the efforts of however many companies carefully pick other “friendly mind spaces.” (Not that this would be an argument against trying to ensure a friendly AI.)
As a side note, whenever we talk about AI, it’s helpful to keep in mind that one of Google’s top goals, according to internal communication, is to have the world’s top AI research laboratory. (Some of their other goals from these internal papers, like universal search or a search timeline view, have recently been officially rolled out – at least it appears so, as it’s hard to tell exactly what the internal goals refer to. Whether or not they’re already the world’s top AI lab is an interesting question.)
[Thanks Bruce Klein!]