Wednesday, February 3, 2016

killer drones: reply to guy who wants to prosecute their developers

Some international discussions have raised the question: should the international community try to prosecute those who use and develop killer drones, on the grounds that they contribute to the "bad AI" or "Terminator" risk to the survival of our species?

My response: 


[This suggestion] brings me back to the very severe "damned if you do, damned if you don't" dilemmas which have left me relatively paralyzed since July 2014.

Some of you have long been familiar with that kind of dilemma. I remember, circa 1975, when I was a new assistant professor attending a conference where a man from the State Department was looking for people to fund for "research into defining the national interest." The basic idea was... nations sometimes play hard to win, but later often find that "winning" was actually losing, and vice versa. It would be nice to get past that. Mathematically, a person's underlying utility function U(R), representing what they value for its own sake as a function of the state of the world R, may be very different from the correct value of J(R), the so-called "value function" assessing how good the present situation really is when future consequences are fully accounted for. How to assess J(R) more accurately is a serious area of research. As one special case... how do we assess progress J(R) towards reducing (or raising) the probability that the human species goes extinct within the next 10,000 years or so, if we focus for now on that particular ultimate value U(R)?
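For readers who want the formalism: in dynamic programming, J is defined from U by the Bellman equation. A generic discounted form (my paraphrase of the standard textbook equation, using R for the state, u for the action, and r for an interest or discount rate) is:

$$ J(R(t)) \;=\; \max_{u(t)} \left[\, U(R(t), u(t)) \;+\; \frac{1}{1+r}\,\bigl\langle J(R(t+1)) \bigr\rangle \,\right], $$

where the angle brackets denote the expected value over the next state of the world. "Defining the national interest" well is, in this language, the problem of approximating J when U itself is hard to pin down and the dynamics of R are only partly known.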

And so... when you think of a specific action like this Wiesenthal Center proposal... the unintended consequences get to be very, very tricky in the world we now live in.

Would the idea be to prosecute all members of the "chair force," the folks who have jobs with places like the Air Force, sitting in offices in the rural US operating drones by wire, much like folks spending all day on the PlayStation? The other day, I read that more new pilots were being trained as chair-force pilots than as pilots of actual airplanes.

What about the people actually developing the next generation of drones, particularly new, more autonomous drones which do not require so much chair force, and which get to be more and more similar to the aerial drones depicted in the movie Terminator 3: Rise of the Machines?
(It was an awful movie, aesthetically, but still worth watching for folks serious about extinction.
I was both turned off and amused at times by scenes which then looked like a dogfight between Hillary Clinton and Arnold Schwarzenegger... but Hillary is older now, and I suppose Trump has taken over a lot of the image which was Schwarzenegger's. The technology descriptions were brief but specific enough to be thought-provoking.) Strictly speaking, it is the autonomous drones, not drones in general, which relate to the "end game" of human extinction by wrong AI.

Here is where it gets to be especially tricky.

What if an idealistic group works hard to find out, and to inform the whole world, which group really is closest to the most dangerous and powerful form of drone autonomy, the kind which could kill all humans on earth? Should they publicize that very widely and try to stop those activities somehow?

Unfortunately, there are so many power-crazed leaders and narcissists in so many nations all over the world that the effect may be to substantially accelerate the movement in that direction, even if, yes, logic clearly says it may kill all people on earth, including all the descendants of those very same leaders and of the people behind them.

Many technology people aware of the dilemmas of their jobs are now putting hope into Elon Musk's new AI center (OpenAI), which intends to "reduce the risk of AI by making all tools universally available." Again, I do not believe that universal availability of autonomous drones would make for more peace any more than universal availability of cannons did... and these things are a lot more dangerous than cannons!!!

But the dilemma goes even further. "Sterility memes" like the Watson computer system may help avert the risk of artificial intelligence, but putting the entire physical world (aka future planned internet of things) under the control of an implacable glorified voicemail system... poses a very serious risk which I have sometimes called "artificial stupidity" (AS). It may sound funny, just as H2S reminds some folks of stinky smells ... but certain types of control systems really can crash a complex nonlinear system like the web of human economy and life.    
The daily news lately reminds me ever more of the AS kind of risk...
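To make the AS point concrete, here is a minimal toy sketch (my own invented example, not anything from a real deployed system): a rigid, rule-based controller managing a nonlinear resource stock, acting on a stale measurement with a fixed quota that ignores the plant's regeneration capacity. Under these assumed numbers, the rule drives the stock to collapse:

```python
# A toy sketch of "artificial stupidity": an inflexible, rule-based
# controller crashing a nonlinear system it was meant to manage.
# The plant is a logistic resource stock (think fishery, grid margin,
# or credit supply); all numbers here are invented for illustration.

def simulate(delay_steps=8, quota=30.0, steps=200):
    r, K = 0.5, 100.0        # logistic growth rate, carrying capacity
    # Maximum sustainable yield is r*K/4 = 12.5; the rigid quota ignores it.
    x = 60.0                 # initial stock
    history = [x] * (delay_steps + 1)
    for t in range(steps):
        measured = history[-(delay_steps + 1)]       # stale sensor reading
        harvest = quota if measured > 50.0 else 0.0  # inflexible rule
        x = max(0.0, x + r * x * (1.0 - x / K) - harvest)
        history.append(x)
        if x == 0.0:
            return t         # the "voicemail system" crashed the plant
    return None

print("collapsed at step:", simulate())   # collapses within a few steps
```

The point of the toy is not the numbers but the failure mode: a controller that never re-examines its own rule can destabilize a nonlinear plant which a more adaptive policy would keep alive.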

Best of luck,

     Paul

P.S. I have also agreed with my son that "Captain America: The Winter Soldier" is worth thinking about seriously in this connection. At www.werbos.com/Mind.htm, I post slides from a talk at SPIE last year which got into some specifics, and even a few of the technical details.
