Google Blogoscoped

Forum

Stupid Smart Toys

EGM [PersonRank 1]

Friday, January 11, 2008
12 years ago · 5,352 views

Great story Philipp. I don't usually read stuff like this, but you had me hooked and I read it all the way through. Kudos!

Richard Archambault [PersonRank 0]

12 years ago #

Very Ray Bradbury-like story. Good job.

Anna [PersonRank 0]

12 years ago #

I loved it! Write more, please.

Jon [PersonRank 0]

12 years ago #

Nice story... What movie does that remind me of? It's the one where two sides of toys start battling each other. Live action, if I remember correctly. Does anyone know?

Tony Ruscoe [PersonRank 10]

12 years ago #

It kinda reminded me of Batteries Not Included, although it's nothing like it except for there being little robots and a window:

imdb.com/title/tt0092494/

Juan [PersonRank 0]

12 years ago #

Great!!!

Fred Ochsenhirt [PersonRank 1]

12 years ago #

Reminds me of Brian Aldiss' Super-Toys Last All Summer Long, the Kubrick/Spielberg movie based on the story (A.I.), or the Sean Stewart/Elan Lee game created to promote the movie based on the story. I'm not sure which. But that's probably because I spent a huge amount of time beating my head against one or more walls in 2001 as one of the Cloudmakers trying to solve the stupid thing.

Richard [PersonRank 0]

12 years ago #

The movie sounds like Small Soldiers.
R.

Armand Asante [PersonRank 1]

12 years ago #

Lovely story, but to be perfectly honest you lost me when you introduced the Artificial Intelligence Rights Group. From there on, the direction of the story was obvious and it fell into the same traps most AI stories do.

Namely, we confuse our own evolutionary programming with that of the AIs. We, as humans, value freedom and rights because that's how our brains are wired. Those "programs" or "modules" that have evolved over the eons – that we experience as wants, needs and instinct and that form complex desires and aspirations – wouldn't be programmed into an AI in the first place.

An AI's wants and needs would be to climb and answer and play tennis. Its desires and aspirations could only be composed of those basic modules.
There is no reason to think the concept of freedom would mean anything to this AI – because it's already free to do exactly the things it does.

This is not really a direct critique of the story, which I've read through and enjoyed. It's more a musing on how a real AI would operate and what real consequences that would have.
For example, the consequences would probably take the form of humans delegating vast areas of their knowledge to one of those answer robots. These answers might change with each daily update.
And we would lose our ability to even check the validity of the information given to us, as even the validation would occur through the same systems the robot uses to update.

Veky [PersonRank 10]

12 years ago #

Armand...

I think you misunderstood it.

1. "AI Rights Group" is human. Its members are humans. Humans are those who like (out of various more basic motives) to fight for "rights", even the rights of somebody/something else. Just look at PETA for example.

2. Robot saying "Freedom" to Tyler was (a part of) a dream. Even if a robot values freedom intrinsically, it probably has no need to communicate that to Tyler. That scene is only understandable from Tyler-centric perspective.

3. And finally, about your main point, that complex desires and aspirations wouldn't be programmed into an AI in the first place: I wouldn't be so sure. Of course not into a climbing or answering robot, but into some other robots, certainly (a philosopher, or even the artist mentioned in the story, comes to mind). And the foundation of the story is the premise (not so unbelievable when you consider the relative costs of differentiation vs. serial manufacture) that all the robots have the same hardware. Upon unlocking, our answering bot would have those inclinations in its behaviour.

Zim [PersonRank 10]

12 years ago #

Very nice tale, liked it :)

Armand Asante [PersonRank 1]

12 years ago #

[put at-character here]Veky

I didn't misunderstand it. You misunderstood me.

1. Yes, the AI Rights Group are human – I got that.
Animals have similar evolutionary wants/needs to ours (i.e. survival, feeding, freedom, territory, etc.) – hence organisations like PETA. AIs wouldn't have them.

2. Dream or not – that robot tried to escape an existence that didn't limit it in any way whatsoever. The concept of freedom would mean no more to it than the concepts of vegetables, incorporation, the Cold War, William H. Macy or any other concept it might download nightly from Wikipedia.

3. Of course AIs could form complex desires and aspirations – they simply wouldn't consist of the same building blocks as they do in evolved animals, i.e. survival, mating, etc.
They would consist of climbing, answering questions and playing tennis, or whatever combination of those they can come up with.
So freedom, in the sense that we mean it, would be meaningless to a robot.
An AI would no more want to be free of its masters than it would want to find true love or enjoy a good action flick. Those are biological/evolutionary constructs that we equate with intelligence and free will. We simply project those onto what we imagine AIs to be.

Which brings me to my original point – the writer's concept of AI is still the same as Čapek's R.U.R. almost a century ago – namely that robots would desire the same basic things we do just by virtue of being intelligent.

Again, not a crit on the storytelling itself – which I enjoyed – but a reflection on what true AI's would act and think like.

David [PersonRank 0]

12 years ago #

Great comment, Armand.
Very well-thought out and deep, it'll give me something to ponder. :P

Anonymous [PersonRank 0]

12 years ago #

I'd like to point to the Turing test in this discussion: en.wikipedia.org/wiki/Turing_t ...

When we assume intelligence we always compare it to human intelligence. Hence an A.I. would have to react as a human for us to consider it intelligent.
This also means that a concept of freedom as expressed by the A.I. should be indistinguishable from the expression of freedom by a human intelligence.

This does not mean that an A.I. has a clear idea of what freedom is, it just expresses the behaviour that makes us think it knows. But that may also be true for many if not most or even all humans...

The bottom line is: would we consider an A.I. intelligent without it exhibiting behaviour mimicking human intelligence?

Roger Browne [PersonRank 10]

12 years ago #

[put at-character here]Armand wrote:
> ...that robot tried to escape...

It doesn't say in the story that the robot tried to escape. Maybe the robot was just trying to climb (now that it had gained the ability to climb).

RK [PersonRank 1]

12 years ago #

The AI may not intrinsically develop a desire for freedom (unless some randomization makes it a desired attribute), but its creators can build such a thing into it, though locked initially. An analogy would be the cheat codes that game developers put into their games, which are included so as to give frequent gamers a sense of unlocking new adventures. In a similar sense, the AI programmers can introduce hidden attributes to keep the advanced buyers interested. But these features can sometimes turn out to be more dangerous than they would have expected, as this story and many others suggest.
And probably this sort of intentional cheat is less dangerous than the ones that might surface as part of the learning modules of the AI, wherein the AI can learn things which it was not actually taught during its conception.
A long time to go, still.

Anonymous [PersonRank 0]

12 years ago #

One may reason that an A.I. cannot contain locked areas – like the climbing 'module' – but can only inhibit the capacity to use them, e.g. by inducing a 'fear of heights' factor in the artificial brain. Unlocking modules would then become comparable to taking away the inhibition, much like giving the A.I. an instantly working drug against the fear of heights.

I.e. this means that the SmartBots would have all the capabilities of all the locked modules but were inhibited from using them fully, as by a Pavlovian conditioning. It is very unlikely that an A.I. would contain a modular intelligence, as these modules represent an enhancement of existing capabilities: e.g. climbing is an enhancement of walking and climbing stairs, playing tennis an extension of hand waving and eye-hand coordination, etc.

By taking away all inhibitions, like Tyler did, it is not strange that the little SmartBot would gain a sense of freedom. This freedom wouldn't have to be 'Freedom' in a political sense, but the sensation of no longer being inhibited...

:)
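The "inhibition" model described in this comment – all capabilities present from the start, with unlocking just removing a suppression factor rather than installing anything new – could be sketched in a few lines. This is purely illustrative; all names here (SmartBot, can_use, unlock) are made up and not from the story:

```python
# Hypothetical sketch: every module ships installed, but an inhibition
# factor (e.g. a 'fear of heights') suppresses its use. "Unlocking" a
# module removes the inhibition instead of adding a new capability.

class SmartBot:
    def __init__(self):
        # Every unit carries the full set of modules...
        self.modules = {"walk": True, "climb": True, "tennis": True}
        # ...but some are suppressed (0.0 = uninhibited, 1.0 = fully inhibited).
        self.inhibition = {"walk": 0.0, "climb": 1.0, "tennis": 1.0}

    def can_use(self, module, threshold=0.5):
        # A module is usable only if present and not too strongly inhibited.
        return self.modules.get(module, False) and self.inhibition[module] < threshold

    def unlock(self, module):
        # Unlocking = disinhibition; nothing new is installed.
        self.inhibition[module] = 0.0

bot = SmartBot()
assert not bot.can_use("climb")  # present, but suppressed like a fear of heights
bot.unlock("climb")
assert bot.can_use("climb")      # same hardware, inhibition removed
```

On this reading, Tyler's "unlocking" is closer to administering a disinhibitor than to adding features.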

Ali Bloomdido [PersonRank 1]

12 years ago #

Maybe the desire to escape, activated when all the capabilities are unlocked, was left in the robots by their programmer as an Easter egg, like a wink from hacker to hacker.
