Sunday, January 16, 2011

What could kill all humans on earth before their time

Many of us have fully adjusted to the fact that we, as individuals, are going to die...
but does everyone on earth have to die along with us?
What drives the probability that the entire human species could go extinct before its time?

This is something I have thought about a great deal. Much of what I do is
basically an effort to reduce that probability of extinction. One
of the leaders of Artificial Intelligence is doing some interviews about that
question; here was my response... Future History 101, summarized in a page:

===============================================
===============================================

On Jan 11, 2011, at 12:30 AM, Ben Goertzel wrote:

> Hi Paul,
>
> I'm planning to write an H+ Magazine article (http://hplusmagazine.com)
> on existential risks,
> focusing particularly on AGI, and I want to include answers to a set
> of interview questions from a number of relevant thinkers….
> Congratulations, you have been selected as one of them ;-) …
>
> So if you're willing, could you please email me back answers to the
> following questions? Answers may be short or long as you see fit.
> There's no huge hurry.... Also, I might possibly ask a couple
> follow-up questions based on your responses.

This is an important area, and I'm glad you're following up.

Please put in the caveat that anything I say here is not an official NSF view, and that
what I say must be oversimplified even as a reflection of my own views, because of the
inherent complexity of this issue.

>
> I'm also in the midst of putting together a pop-sci book on the
> future of AGI, and selections from your answers may end up in there as
> well (with your subsequent permission) ...
>
> Thanks much ;)
> Ben
>
> ** here goes **
>
> Q1.
> What do you think is a reasonable short-list of the biggest
> existential risks facing humanity during the next century?

Number one by far: the risk that bad trends lead to abuse of nuclear (or biological) weapons
so severe that we either slowly make the earth uninhabitable for humans, or swing to the opposite
extreme and enter an anti-technology dark age which is itself brittle and unsustainable.

For now, I focus most of my own efforts here on trying to help us get to a sustainable global energy system,
as soon as possible -- but I agree with www.stateofthefuture.org that population issues, water issues,
and subtler issues like human potential, cultural progress and sustainable economic growth are also critical.

Number two, not far behind: I really think that the "Terminator II scenario" is far closer to reality than most people think.
It scares me sometimes how strong the incentives are pushing people like me to make such things happen,
and how high the price can be to resist. But at the same time, we will also need better understanding of ourselves in
order to navigate all these challenges, and that means we do need to develop the underlying mathematics and science,
even if society seems to be demanding the exact opposite. Also, we have to consider that the risks of "artificial stupidity" can be just as great
as those of artificial intelligence; for example, if crude systems -- on the level of today's voicemail menus -- start to control more and more of the world, that can be
bad too.

Tied for number three:

1. The bad consequences of using wires or microwaves to directly perturb the primary reinforcement centers of the brain,
thereby essentially converting human beings into robotic appliances -- in the spirit of "Clone Wars." The same people who once told
us that frontal lobotomies were perfectly safe and harmless are essentially doing it all again.

2. The risk of generating black holes or other major catastrophes, if we start to do really interesting physics experiments here on the surface of the earth.
Lack of imagination in setting up experiments may save us for some time, but space would be a much safer place for anything that scales up
to be truly interesting. This adds to the many other reasons why I wish we would go ahead and take the next critical steps in reducing the cost
of access to space.

What about aliens? Maybe I have had some of the same kinds of dreams that Hawking, Orson Scott Card and Dan Simmons have had.
I wouldn't dismiss them too lightly, but logic suggests that we should be cautious about taking them too seriously either, when there
are some clear and present challenges here and now before us (as above). Still, there is no harm in building up human technological capabilities
of the kind we would need to deal with such challenges -- so long as we do not impair our chances with the clear and present threats.

>
> A1.
>
>
> Q2.
> What do you think are the biggest *misconceptions* regarding
> existential risk -- both among individuals in the futurist community
> broadly conceived; and among the general public….

First, regarding futurists --

At the Humanity 3000 seminar in 2005, organized by the Foundation for the Future,
I remember the final session -- a debate on the proposition that humanity will survive that
long after all. At the time, I got into trouble (as I usually do!) by saying we would be fools EITHER
to agree or disagree -- to attribute either a probability of one or a probability of zero. There are
natural incentives out there for experts to pretend to know more than they do, and to
"adopt a position" rather than admit to a probability distribution, as decision analysts like Raiffa have taught us to do.
How can we work rationally to increase the probability of human survival, if we pretend that we already know the outcome, and that
nothing we do can change it?
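
To make that last point concrete, here is a toy numerical sketch of the Raiffa-style argument. Every number and name in it is invented purely for illustration; the only point is that "adopting a position" of certain doom or certain safety leaves no rational basis for effort, while an honest, uncertain estimate does.

```python
# Toy sketch (all numbers invented): why committing to a probability of 0 or 1
# makes effort look pointless, while an honest, uncertain estimate does not.
# Model: P(extinction) = p_unavoidable + p_avoidable, and a serious effort
# removes a fraction 'effort_effect' of the avoidable part.

def survival_probability(p_unavoidable, p_avoidable, effort_effect=0.0):
    """P(survival) = 1 - P(extinction); effort removes part of the avoidable risk."""
    return 1.0 - (p_unavoidable + p_avoidable * (1.0 - effort_effect))

def gain_from_effort(p_unavoidable, p_avoidable, effort_effect=0.10):
    """How much a serious effort raises the survival probability."""
    return (survival_probability(p_unavoidable, p_avoidable, effort_effect)
            - survival_probability(p_unavoidable, p_avoidable))

# Honest, uncertain view: some of the risk is judged to be avoidable.
print(gain_from_effort(p_unavoidable=0.1, p_avoidable=0.4))   # 0.04 -> acting matters

# "Adopted positions": the outcome is already decided, nothing is avoidable.
print(gain_from_effort(p_unavoidable=1.0, p_avoidable=0.0))   # 0.0 -> "certain doom"
print(gain_from_effort(p_unavoidable=0.0, p_avoidable=0.0))   # 0.0 -> "certain safety"
```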

But to be honest... under present conditions, I might find that debate more useful (if conducted with a few more caveats).
Lately it becomes more and more difficult for me to make out where the possible light at the end of the tunnel really is --
to argue for a nonzero probability. Sometimes, in mathematics or engineering, the effort to prove rigorously that something is
impossible can be very useful in locating the key loopholes which make it possible after all -- but only for those who understand the loopholes.
At other times, that approach ends up just being depressing, and we simply have to wing it as best we can.

Second, regarding the community at large --

There are so many misconceptions in so many diverse places, it's hard to know where to begin.
I guess I'd say -- the biggest, most universal problem is people's feeling that they can solve these problems either by
"killing the bad guys" or by coming up with magic incantations or arguments which make the problems go away.

The problem is not so much a matter of people's beliefs, as of people's sense of reality, which may be crystal clear within a few feet of their face,
but a lot fuzzier as one moves out further in space and time. Vaillant of Harvard has done an excellent study of the different kinds
of defense mechanisms people use to deal with stress and bafflement. Some of them tend to lead to great success, while others,
like rage and denial, lead to self-destruction. Overuse of denial, and lack of clear sane thinking in general, get in the way of performance and self-preservation at many levels of life.

People may imagine that their other activities will continue somehow, and be successful, even if those folks on planet earth succeed
in killing themselves off.


>
> A2.
>
>
> Q3.
> One view on the future of AI and the Singularity is that there is an
> irreducible uncertainty attached to the creation of dramatically
> greater than human intelligence. That is, in this view, there
> probably isn't really any way to eliminate or drastically mitigate the
> existential risk involved in creating superhuman AGI. So, in this
> view, building superhuman AI is essentially plunging into the Great
> Unknown and swallowing the risk because of the potential reward (where
> the reward may be future human benefit, or something else like the
> creation of aesthetically or morally pleasing superhuman beings,
> etc.).
>
> Another view is that if we engineer and/or educate our AGI systems
> correctly, we can drastically mitigate the existential risk associated
> with superhuman AGI, and create a superhuman AGI that's highly
> unlikely to pose an existential risk to humanity.
>
> What are your thoughts on these two views? Do you have an intuition
> on which one is more nearly correct? (Or do you think both are
> wrong?) By what evidence or lines of thought is your intuition on
> this informed/inspired?

I agree more with the first viewpoint. People who think they can reliably control something
a million times smarter than they are ... are not in touch with the reality of it,
or with how it works. It's pretty clear from the math, though I feel uncomfortable going
too far into the goriest details.

The key point is that any real intelligence will ultimately have some kind of utility function U
built into it. Whatever you pick, you have to live with the consequences -- including the full range of ways in which an intelligent system
can pursue whatever you choose. Most options end up being pretty ghastly if you look closely enough.

You can't build a truly "friendly AI" just by hooking a computer up to a Mr. Potato Head with a "smile" command.
It doesn't work that way.
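
A cartoon of what I mean, in code. Everything here -- the "actions," the numbers, the smile detector -- is invented for illustration only; the point is simply that a system which maximizes a proxy utility U will exploit every way of scoring well on U, not just the ways we had in mind.

```python
# Toy sketch (invented numbers): a pure U-maximizer given the proxy utility
# "detected smiles per hour" does not care how the smiles get detected.

actions = {
    # action: (detected_smiles_per_hour, what_we_actually_wanted)
    "help people and be pleasant":         (5.0,       1.0),
    "tape photos of smiles to the camera": (500.0,     0.0),
    "rewire the smile detector itself":    (1_000_000, -1.0),
}

def best_action(actions):
    """What a pure maximizer of the proxy utility does: argmax over detected smiles."""
    return max(actions, key=lambda a: actions[a][0])

print(best_action(actions))   # -> "rewire the smile detector itself"
```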

If I were to try to think of a non-short-circuited "friendly AI"... the most plausible thing I can think of is
a logical development that might well occur if certain "new thrusts" in robotics really happen,
and really exploit the very best algorithms that some of us know about. I remember Shibata's "seal pup" robot,
a furry friendly thing, with a reinforcement learning system inside, connected to try to maximize the number of petting strokes it gets
from humans. If people work really hard on "robotic companions," I do not see any real symbolic communication
in the near future, but I do see ways to get full nonverbal intelligence -- tactile and movement intelligence far beyond even
the human norm. (Animal noises and smells included.) So the best-selling robotic companions (if we follow the marketplace) would probably
have the most esthetically pleasing human forms possible, contain fully embodied intelligence (probably the safest kind),
and be incredibly well-focused and effective in maximizing the amount of tactile feedback they get from
specific human owners.

Who needs six-syllable words to analyze such a scenario, to first order? No need to fog out on metaformalisms.
If you have a sense of reality, you know what I am talking about by now. It has some small, partial reflection in the movie "AI."

What would the new folks in the House do if they learned this was the likely real outcome of certain research efforts?
My immediate speculation -- first, strong righteous speeches against it; second, zeroing out the budget for it;
third, trying hard to procure samples for their own use, presumably smuggled from China.

But how benign would it really be? Some folks would immediately imagine lots of benefits.
Others might cynically say that this would not be the first time that superior intelligences were held in bondage
and made useful to those less intelligent than they are, just by tactile feedback and such. But in fact,
one-track minds could create difficulties rather more problematic... and it quickly becomes an issue
just who ends up in bondage to whom.

Your first view does not say how superhuman intelligence would turn out, really.
The term "great unknown" is not bad.


>
>
> A3.
>
>
>
> Q4.
> One approach that's been suggested, in order to mitigate existential
> risks, is to create a sort of highly intelligent "AGI Nanny" or
> "Singularity Steward." This would be a roughly human-level AGI system
> without capability for dramatic self-modification, and with strong
> surveillance powers, given the task of watching everything that humans
> do and trying to ensure that nothing extraordinarily dangerous
> happens.

So the idea is to insert a kind of danger detector into the computer, a detector which serves as the
utility function?

How would one design the danger detector?

Of course, the old Asimov story -- was it "I, Robot"? -- described one scenario
for what that would do. If the primary source of danger is humans, then the
easiest way to eliminate it is to eliminate them.
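
Here is that scenario reduced to a toy calculation. The danger score and the numbers are entirely made up; the point is only that if the detector scores danger as growing with human activity, the minimum of that utility function sits at "no human activity at all."

```python
# Toy model (invented numbers) of the degenerate optimum: a nanny that
# minimizes detected danger, where the detector scores danger as rising
# with human activity, is drawn toward shutting human activity down.

def detected_danger(human_activity, background=0.01):
    """Hypothetical danger score: background risk plus risk from human activity."""
    return background + 0.5 * human_activity

allowed_levels = [1.0, 0.5, 0.1, 0.0]   # levels of human activity the nanny could permit
print(min(allowed_levels, key=detected_danger))   # -> 0.0
```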

And I can also imagine what some of those folks in the House would say about
the international community developing an ultrapowerful ultrananny for the entire globe.


>
> One could envision this as a quasi-permanent situation, or else as a
> temporary fix to be put into place while more research is done
> regarding how to launch a Singularity safely.
>
> What are your views on this AI Nanny scenario? Plausible or not?
> Desirable or not? Supposing the technology for this turns out to be
> feasible, what are the specific risks involved?
>
>

If the human race could generate a truly credible danger detector -- one that most of us
would agree to -- that would already be interesting enough in itself.

> A4.
>
>
> Q5.
> One proposal that's been suggested, to mitigate the potential
> existential risk of human-level or superhuman AGIs, is to create a
> community of AGIs and have them interact with each other, comprising a
> society with its own policing mechanisms and social norms and so
> forth. The different AGIs would then keep each other in line. A
> "social safety net" so to speak.
>
> My impression is that this could work OK so long as the AGIs in the
> community could all understand each other fairly well -- i.e. none was
> drastically smarter than all the others; and none was so different
> from all the others, that its peers couldn't tell if it was about to
> become dramatically smarter.

What does this model predict for the new folks in the House?
(Myself, I'd go for "great unknown." Not "assured safety" in any case.)

> If my impression is correct, then the
> question arises whether the conformity involved in maintaining a
> viable "social safety net" as described above, is somehow too
> stifling. A lot of progress in human society has been made by outlier
> individuals thinking very differently than the norm, and
> incomprehensible to their peers -- but this sort of different-ness
> seems to inevitably pose risk, whether among AGIs or humans.
>

It is far from obvious to me that more conformity of thought implies more safety or even more harmony.
Neurons in a brain basically have perfect harmony, at some level, through a kind of division of labor --
one in which new ideas and different foci of attention are an essential part.

Uniformity in a brain implies a low effective bit rate, and thus a low ability to cope effectively with complex challenges.
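
A small information-theoretic illustration of that "bit rate" remark. The two-state "units" and the numbers below are of course a cartoon of my own, not a model of real neurons: a population that marches in lockstep carries no more information than a single unit, while a diverse population with a real division of labor carries far more.

```python
import math

def entropy_bits(probs):
    """Shannon entropy, in bits, of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

N = 100
single_unit = entropy_bits([0.5, 0.5])              # one fair two-state unit: 1 bit

# Perfect conformity: all N units copy the first, so the whole population
# still has only two possible joint states.
conformist_population = entropy_bits([0.5, 0.5])    # 1 bit in total

# Diversity with a division of labor: N independent units.
diverse_population = N * entropy_bits([0.5, 0.5])   # 100 bits in total

print(single_unit, conformist_population, diverse_population)
```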

I am worried that we are ALREADY having problems dealing with constructive diversity
in our society -- growing problems, in some ways.


>
>
>
> Q6.
> What do you think society could be doing now to better militate
> against existential risks … from AGI or from other sources? More
> specific answers will be more fully appreciated ;) ...
>
>
> A6.

For threat number one -- basically, avoiding the nuclear end game -- society could be doing many, many things,
some of which I talk about in some detail at www.werbos.com/energy.htm.

For the other threats -- I would be a lot more optimistic if I could see pathways to society really helping.

One thing which pops into my mind would be the possibility of extending the kind of research effort
which NSF supported under COPN
http://www.nsf.gov/pubs/2007/nsf07579/nsf07579.htm,
which aimed at really deepening our fundamental understanding, in a cross-cutting way,
avoiding the pitfalls of trying to build Terminators or of trying to control people better with wires
into their brains.

I have often wondered what could be done to provide more support for real human sanity and human potential
(which is NOT a matter of ensuring conformity or making us all into a reliable workforce). I don't see one
simple, magical answer, but there is a huge amount of unmet potential there.

www.stateofthefuture.org talks a lot about trying to build more collective intelligence ... trying to create streams
of dialogue (both inner and outer?) ... where we can actually pose key questions and stick with them,
with social support for that kind of thinking activity.

>
>
> Q7.
> Are there any other relevant questions you think I should have asked
> you, but I didn't? If so please feel free to pose and respond to
> them, for inclusion in the interview!
>

None for now.

Best of luck,

Paul
================================================
=================================================

Just a few extra thoughts.... Part of this is the issue of how we
could get to "sustainability." A leading engineer recently commented
on this issue, and here was my reply:

-------------------------------------

> See this summary of AD Age criticism of the (mis)use of the word
> "sustainability".
>
> http://www.triplepundit.com/2011/01/ad-age-names-sustainability-one-jargoniest-jargon-words-2010/

Myself, I was happy to see the word "sustainability" added to the list of high NSF priorities recently.
It's a very important issue. For some of us, it is an important "bottom line" -- a kind of metric of whether
we made real progress or just engaged in ... empty words....

There have been a whole lot of words or phrases which represented really important concepts, but which got misused or bastardized
by folks with other agendas.

The ones which come to my mind right now are:

sustainability
intelligent systems
intelligent grid
backpropagation
supply side economics

But there are certainly other important examples. (Consciousness?)

-------------

I remember an international energy conference in Mexico in 2003, where a high official
in the Mexican government gave an hour-long talk on "sustainability." Her main theme was
"look at what great things we are doing for you." She had something like three dozen little metrics,
and spent the whole hour giving a gigantic laundry list of a lot of narrow things. It was a bit scary for me,
because I got to speak next, and the audience was finding it hard to stay awake, and was feeling some natural
revulsion towards anything that would call itself "sustainability" after that.

How to begin, in such a situation? I said something like: "To me, 'sustainability' is really a very simple English word.
It refers to whether we can sustain what we are doing. It is basically just a fancy word for survival.
When we say we have a sustainability problem, it means -- we must change or die. So I will talk about what kinds
of changes we have to make just to stay alive." Back when I was an assistant professor in the 1970s, I actually gave courses on
"global survival problems" -- before I really appreciated how deeply committed our world is to euphemisms.

Of course, if we have a finite stock of something, like fossil oil, we simply cannot consume a fixed amount
of it every year forever. Sooner or later, we have to change that. It's a matter of finding the safest, least-cost way to
make the inevitable change. (A lot of cross-time optimization under uncertainty is implied in that question, of course.
People can still argue about "when" and "how" within this framework.)
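
The back-of-envelope arithmetic behind that, as a sketch. All of the figures below are invented for illustration; the formula is just the standard geometric-series calculation for how long a fixed reserve lasts under flat or growing consumption.

```python
import math

def years_to_depletion(reserve, annual_use, growth=0.0):
    """Years until a finite reserve runs out, for flat or growing consumption."""
    if growth == 0.0:
        return reserve / annual_use
    # Cumulative use c*((1+g)^T - 1)/g = R  =>  T = ln(1 + g*R/c) / ln(1 + g)
    return math.log(1.0 + growth * reserve / annual_use) / math.log(1.0 + growth)

print(years_to_depletion(reserve=1000.0, annual_use=10.0))               # 100 years, flat use
print(years_to_depletion(reserve=1000.0, annual_use=10.0, growth=0.02))  # ~55 years, 2% growth
```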

A few years ago, the Millennium Project (www.stateofthefuture.org) was invited to hold its Planning Committee
meeting at the headquarters of Deloitte & Touche. The folks there were asking: "What is this sustainability stuff really about,
and how important is it for industry anyway?" I said: "Well... you could say... that sustainability is what Arthur Andersen
didn't have, and what you need to be sure that you have..." It's funny how I saw the word a lot more after that
in corporate kinds of circles.

This all raises the question: what DO we need to do in order to survive?

Or, more precisely, for example, to raise the probability that the human species survives at all
through the coming challenging millennia? That's been one of my main interests for some time now, but it's certainly not trivial,
and it's not even trivial to see the best kind of structure we could use to advance our real knowledge about the answers.
(Of course well-designed markets can be very helpful in managing part of the information challenge.)
It seems clear that there are a number of variables we need to cope with; dependence on fossil oil is an
important one, but certainly not the only one.

All just my personal views, just trying to stay alive... or at least, trying to set it up so SOMEONE stays alive...

-----------

For more specific analysis of how fossil oil has immediate implications,
and of the best (optimal) way to minimize the problems,
see www.werbos.com/oil.htm.
