Sunday, June 27, 2010

Risks from computers like Terminator II

The Lifeboat Foundation runs a discussion list on extinction threats to the human species.
Recently we had an interesting discussion about whether computers could become a threat to human survival, as in the Terminator movies, but it appears that some computer system blocked my final reply. So here it is.

========== FIRST: THE BACKGROUND

Ben Goertzel wrote:

Not necessarily --- we could potentially make an AGI that was stable by design,
using a design quite different from those of naturally intelligent systems...

I asked:

Could you say more about the basis of this hope?

========== SECOND: WHAT THE COMPUTER BLOCKED

Ben wrote:

.... a fairly specific AGI architecture (GOLEM) together with a partly-formalized argument for its stability, even if it does increase its intelligence progressively

http://goertzel.org/GOLEM.pdf


If you restrict it from increasing its intelligence significantly then the odds of stability seem even higher...

The downside of such designs is their high computational cost.... There is unsurprisingly a tradeoff btw safety and resource-efficiency, in the domain of AGI design as in some other areas

On a quick scan, the architecture is based on trying to make "beneficialness" a persistent, overriding goal.

Please forgive me, but many years ago I discussed with my daughter (now a PhD researcher) the possibility of building intelligent systems which learn to maximize some measure of the happiness of humans, a measure calculated by a motivational subsystem mostly untouched by the intelligence part. She called it "the happy computer" when she was young. The Japanese furry seal pup robot is based on a somewhat similar principle, using a form of reinforcement learning with backpropagation to learn to maximize the strokes it gets from humans. One might characterize such systems as ABSOLUTELY HARD-WIRING beneficialness as the ultimate goal, with no instability at all in that particular respect. (One can certainly still mimic nature in setting up such a brain. Western human intellectuals often overestimate the autonomy of their own motivational systems.)
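
The separation described here, a learner that adapts freely while the reward it chases comes from a module it cannot rewrite, is easy to sketch. The toy Python below is my own illustration only, not anything from GOLEM or from the seal robot; the behavior names, the stroke rates, and the simple bandit-style learning rule are all invented for the sake of the example.

```python
# Minimal sketch: a fixed "motivational subsystem" computes reward;
# a separate learning component adapts behavior to maximize it.
# All names and numbers here are hypothetical.

import random


def motivational_subsystem(strokes_received: float) -> float:
    """Hard-wired reward: more strokes (positive human feedback) -> more reward.
    The learner below reads this value but never modifies this function."""
    return strokes_received


class StrokeSeekingAgent:
    """Tabular, bandit-style learner over a few fixed behaviors."""

    def __init__(self, behaviors):
        self.values = {b: 0.0 for b in behaviors}   # estimated reward per behavior
        self.counts = {b: 0 for b in behaviors}

    def choose(self, epsilon=0.1):
        if random.random() < epsilon:                # occasional exploration
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)

    def learn(self, behavior, reward):
        self.counts[behavior] += 1
        n = self.counts[behavior]
        # incremental average of the reward observed for this behavior
        self.values[behavior] += (reward - self.values[behavior]) / n


# Hypothetical environment: how many strokes each behavior tends to elicit.
stroke_rates = {"nuzzle": 3.0, "squeak": 1.5, "sit_still": 0.2}

agent = StrokeSeekingAgent(stroke_rates)
for _ in range(2000):
    b = agent.choose()
    strokes = random.gauss(stroke_rates[b], 0.5)
    agent.learn(b, motivational_subsystem(strokes))

print(agent.values)   # the agent settles on whichever behavior draws the most strokes
```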

But…

"I, Robot" is not a bad opener for evaluating this. (And it's funny that in the Asimov story the human experts were also quite confident that there was no risk. Asimov was realistic about the sociology of most humans.) There is one obvious mode of instability: the robots learn that human autonomy itself seems to risk human survival, and they intervene. Perhaps some on the list would regard the movie as having an unhappy ending, when the robot is prevented from pursuing its legitimate, beneficial goal of ensuring human survival… In the story, the robot remains committed to beneficialness, but that is not enough.

But there are other modes of instability.

The system still must operationalize its measure of human happiness. How? By watching for smiles as it looks at you through the webcam? By responding to jolts of pleasure or pain you could send it via your keyboard? Even humans sometimes get hopelessly short-circuited when things like cocaine or heroin jam their motivational systems, but the situation here would be far worse and far less stable. Don't complain about Fox News (and the people who pander to it) or sex and violence on TV (and the people who pander to that, on TV and in life) without considering how such things could be multiplied a thousandfold.
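
A toy way to see the short-circuit risk: the system never observes happiness itself, only some proxy such as a smile count from the webcam, and whatever behavior scores best on the proxy is what a reward-maximizer converges on. The numbers and behavior names below are purely hypothetical; the point is only that the ranking under the proxy can invert the ranking we actually care about.

```python
# Illustrative only: the proxy reward the system optimizes need not agree
# with the true effect on human well-being. All values here are invented.

# Hypothetical true effect of each behavior on the humans involved.
true_happiness = {
    "genuinely_help": 1.0,
    "tell_flattering_lies": 0.1,
    "spoof_the_webcam": -0.5,   # actively harmful, but...
}

# ...the proxy the system actually sees: smiles counted through the webcam.
proxy_reward = {
    "genuinely_help": 0.8,
    "tell_flattering_lies": 0.9,
    "spoof_the_webcam": 1.5,    # a frozen image of a smiling face scores best
}

best_by_proxy = max(proxy_reward, key=proxy_reward.get)
best_in_fact = max(true_happiness, key=true_happiness.get)

print("system chooses:", best_by_proxy)   # spoof_the_webcam
print("what we wanted:", best_in_fact)    # genuinely_help
```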

I do tend to hope that a very constricted domain like electric power grid management would be safer, a kind of "embodiment" without such dangers of short circuits. But in dark moments I wonder (not knowing): is "Terminator" the future we get if we put real intelligence into space-based theater control, and is "The Matrix" the far more benign thing we could get from intelligent grids taken too far or done the wrong way?

Best of luck to us all,

Paul

========== THIRD: AN AMUSING FOLLOW-ON

In addition to these more serious scenarios, there is a more amusing one to worry about.

Rumor has it that there may be a major new world push for "robots as companions."

When I first heard of this, it reminded me of the folks at MIT who, in the 1960s, promised NASA a truly human-like AI to include on the giant robot NASA would land on Mars in the mid-1980s... and of a promise, around the time of Poindexter, to supply the Pentagon with machines that "pretty much read people's minds." Building an artificial mouse brain is enough of a challenge for the next few decades. What kind of companion would it be if it can't even talk to you?

But then I thought about that Japanese seal pup robot. What happens if we push to develop the kind of subsymbolic intelligence that simply moves around in whatever way it takes to maximize the tactile feedback it gets from humans, the kind the market is most ready for? It doesn't take much knowledge of humans and of biology to see where this could lead us. Will the guy who invented the Internet be using soft and supple robots for his midlife crisis? There are certain scenes in the movie AI...

But if we imagine what happens when there are millions upon millions of such robots, all designed to maximize the most extreme tactile feedback from human males... and, OK, a symmetric version too... it could be an odd way to go. It sounds benign, but one should never underestimate how far an intelligent system can go when it is highly intelligent and creative and focused on maximizing just one thing... and is somewhat solipsistic at the present state of design.

Will we be asked to solicit proposals of this kind in the coming year? One never knows, in Washington especially...