Google Blogoscoped

Wednesday, November 9, 2005

The Technological Singularity

Read this document from 1996 by Eliezer S. Yudkowsky to understand what the technological singularity is all about. In a nutshell, the text ponders what could happen once an Artificial Intelligence starts producing another, better AI, which in turn produces another, better AI... and so on. As this intelligence grows exponentially, it would very quickly lead to an end of some sort – the singularity. Eliezer considers older estimates that this singularity is bound to happen by 2035 to be conservative, and instead prognosticates the year 2021. So the point when “humanity will become capable of surpassing the upper limit on intelligence that has held since the rise of the human species,” as the Singularity Institute puts it, is in our lifetime.
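The runaway shape of that argument is easy to picture with a toy calculation. This is only an illustration, not Yudkowsky's model: it assumes, hypothetically, that each AI generation designs a successor that is better by a fixed proportional gain, which compounds into exponential growth.

```python
# Toy model of recursive self-improvement (a hypothetical illustration,
# not Yudkowsky's actual model): each AI generation designs a successor
# whose intelligence is larger by a fixed proportional gain. The numbers
# are made up; only the compounding shape of the curve matters.

def recursive_self_improvement(start=1.0, gain=0.5, generations=10):
    """Return the intelligence level after each design generation."""
    levels = [start]
    for _ in range(generations):
        # Each generation improves on its designer by the same factor,
        # so the absolute jump gets bigger every round.
        levels.append(levels[-1] * (1 + gain))
    return levels

if __name__ == "__main__":
    for i, level in enumerate(recursive_self_improvement()):
        print(f"generation {i}: intelligence {level:.2f}")
```

After ten generations the toy "intelligence" is already more than fifty times the starting value; that ever-steepening curve is what the singularity argument rests on.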

But what will be the first true AI to start the process? Well, that’s hard to tell. It could be nanotechnology, the author argues. It could also be the global brain – that is, the vast interconnected array of computers we call the internet. A data miner (and, in a way, intelligence miner) like Google Inc. could play an important role here. Google engineers are interested in AI if only for the purpose of creating the perfect answer machine; just an input box, perhaps, but one that will seemingly personally answer the questions of millions of users at the same time – highly addictive, and thus highly lucrative (see my video of the Google Brain for an illustration).

Aren’t there any limits imposed upon us by nature, or by technology? Again, hard to tell. It might be there’s a limit we don’t see; it might be we’re never able to program a real, working AI. It might also be that we can create that AI, but because of the underlying mechanisms we used to create it (like semi-natural evolutionary algorithms) we won’t be able to control it. In Isaac Asimov’s sci-fi stories, as you may know, there were three (later four) laws of robotics... a basic moral rule-set, such as requiring the AI not to hurt humans, and so on. But implementing such a rule-set isn’t trivial at all. Truly intelligent beings learn, and change their views by doing so – if they couldn’t do that, they wouldn’t adapt well. They’d fall into infinite loops, bumping against walls endlessly because there was something their creator, an imperfect human programmer, didn’t think of. In fact, these “bugs” were what made Asimov’s robot stories interesting.

The singularity has already started, in a sense, because humans today can communicate faster than ever before. We can connect around the globe and put our brains together. There’s no sci-fi-like brain scanner needed to do that. We’re actually using the same old mechanism invented by Gutenberg hundreds of years ago: we communicate with high efficiency by writing something once which can then be read countless times. Emails are one-to-one communication, and they’re already good at that. Blogs, more importantly, are a one-to-many channel. If you follow blogspace, you can see it often acting like a giant brain; rehashing ideas, mashing up inventions, digesting thoughts, spitting them out, or building upon them.

Who do we do it for? In the land of science fiction, we’re all doing it for an evil entity; our brain power has been enslaved to feed an AI. In the land of science fiction, we might escape such an AI by revolution... but in the real world, we’re delivering our energy voluntarily. At least, we’re doing so for now.

Consider these words by Eliezer from his article – written when he was 16:

“It began four million years ago, when brain volumes began climbing rapidly in the hominid line.

Fifty thousand years ago with the rise of Homo sapiens sapiens.
Ten thousand years ago with the invention of civilization.
Five hundred years ago with the invention of the printing press.
Fifty years ago with the invention of the computer.

In less than thirty years, it will end.”

[Thanks Siggi Becker, Eliezer Yudkowsky.]
