Thursday, November 9, 2017

The danger is IOT, not just AI. More on risk of human extinction and what we could do.

This was sent to the Lifeboat list today:

Since we should ALWAYS keep remembering the basic mission of this list
-- to address threats of extinction of the human species -- let me
first repeat my view that there are four really big "final" threats of
species extinction for the next few millennia, whose importance dwarfs
all else:

1. Catastrophic climate change, especially the H2S threat, the main
cause of actual mass extinctions of life on earth in the past (per
Peter Ward, Under a Green Sky)

2. Misuse of nuclear technology, most obviously through nuclear war or
terrorism, but perhaps through other channels as well

3. Misuse of information technology, which includes BUT IS NOT limited
to the "Terminator scenarios"

4. Misuse of biotechnology, an area I do not know as well but which I
hear about from time to time.

Because I am scheduled to give a plenary talk next week at the ICONIP
conference on the future of deep learning and brain research, I have
decided this week to face up more completely to the very sticky issues
associated with threat number 3.

It is extremely important to understand that AI as such IS NOT the
whole threat. That became starkly apparent to me 17 years ago, from
discussions with leading roboticists at a workshop I organized for NSF
and NASA in the year 2000:

https://www.nsf.gov/awardsearch/showAward?AWD_ID=0075691

There was great excitement then about the "minimal payload needed
to extract materials from the moon for use in the economy (including
earth orbit)." The obvious way to minimize payload would be to develop
and land self-replicating robots, which would bootstrap from lunar
materials and spread to exploit the entire planet without human
involvement. But it was clear that we should expect Darwinian
evolution in such a large-scale system. The vision of an entire planet
infested by metallic cockroaches... well, they don't need an IQ of 200
to be a threat to humans.

A better formulation of the threat at hand, in my view, is given at:

http://drpauljohn.blogspot.com/2017/10/questions-needing-addressing-as-iot.html

building on the six slides at www.werbos.com/IT_big_picture.pdf.

The key point is that the Internet of Things (IOT) **IS** on course to
take over the entire world, VERY QUICKLY, and that this entails
really serious risks as well as opportunities, of which rogue AI is
only one. And many of the people hoping to gain money or power via IOT
are uninhibited or unscrupulous in ways which should make us worried,
here and now. Some of the worst risks are for events which could
happen within the year!!

A key slide simply depicts a binary choice (OK, oversimplified a bit)
between the disastrous path we are on now, regardless of lots of lip
service about a human-centered internet, and what we could achieve if
we worked very consciously to develop a new kind of platform, at
least, and take other related steps.

The positive side is SO hard, and requires SO MUCH in the way of new
efforts and thinking, that it really does depress me at times,
especially as I see more deeply into many of the major political
players in the world today, and possibilities like that of my own home
in Virginia being blotted out if we ultimately do nothing about North
Korea's plans (an illustration of how poorly the world works together,
even on threats which a child could understand). I am not
surprised that folks like the Third Caliphate movement hope to just
knock out our technology, and create a stabler world their way. It was
also depressing this week to read the last 20% of Dan Brown's new
novel, Origin, which basically concludes "Borgs are us. You WILL be
assimilated. Resistance is futile." (Yes, misuse of the brain-computer
interface, BCI, is VERY much on the horizon right now, as big medical
companies lick their chops and exert power to rush things through the
FDA, and military folks actually write about the benefits of better
controlled soldiers.)

The decision tree here reminds me a bit of the first major decision
tree, in Von Neumann's Theory of Games and Economic Behavior, where we
are asked to choose between the certainty of a hamburger and a 50-50
gamble between starving and steak. (Again, please forgive the
simplification.) Given a 50-50 gamble between great IT and spiritual
and material extinction, versus the certainty of sharia, which would
we prefer?
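
To make the mechanics of that comparison concrete, here is a minimal
sketch in Python. The utility numbers are purely illustrative
assumptions of mine, not anything from von Neumann's book.

```python
# Toy expected-utility comparison in the spirit of von Neumann and
# Morgenstern.  The utility numbers are illustrative assumptions only.

utilities = {
    "hamburger": 1.0,   # assumed utility of the certain outcome
    "steak": 2.0,       # assumed utility of the good gamble outcome
    "starve": -100.0,   # assumed utility of the catastrophic outcome
}

certain = utilities["hamburger"]
gamble = 0.5 * utilities["steak"] + 0.5 * utilities["starve"]

print(f"certainty of hamburger: {certain:+.1f}")
print(f"50-50 steak or starve:  {gamble:+.1f}")
# Once one outcome is weighted like extinction, the 50-50 gamble loses
# badly: a small chance of catastrophe dominates the average payoff.
# Replace "starve" with spiritual and material extinction, and the
# same arithmetic applies to the IT choice above.
```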

But as I reflect on this, I do not really think that stability is a
choice anymore. For example, both sharia and the Christian equivalents
force unsustainable population growth (already a HUGE threat in the
Middle East, where unemployed youth may well cause the beheading of
ALL the folks in power in coming decades). In a world with nuclear
weapons and other such capabilities, decay into war would mean simply
near-certain extinction, in my view. Interdependency has grown far
beyond what most people seem to know about, such that it really is
extinction, not the loss of half the population, that would be implied.

For those of us who believe there is SOME kind of higher intelligence on earth
(see http://drpauljohn.blogspot.com/2017/11/international-debate-on-what-is-god-and.html
for several serious views), we might expect SOME kind of good
luck or break in trying to avoid the very worst development of the
IOT. But for Dan Brown's "clones R us" scenario, what is our basis for
avoiding that? Hope, only hope. But regardless of whether he is right
or not, it seems we have no choice but to accept that the IOT WILL
take over the earth, and that the best we can do is fight to put a
spin on its development which avoids the worst.

============================

So what is needed to do that?

In my view, the first and most urgent step is still simply to develop
openly verified, internationally used unbreakable operating systems,
as described in www.werbos.com/NATO_terrorism.pdf.

Next, to go with that, we need quantum communication systems openly
designed to extend the unbreakability to the entire network of hard
servers worldwide. (IBM has such a network, but it is not yet
unbreakable, or open, or shared as much as needed.)
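
For readers who have not seen where the "unbreakability" of quantum
key distribution comes from, here is a toy Python simulation of the
BB84 idea. It is a sketch of the protocol's logic only, using
classical random numbers; it says nothing about IBM's actual network
or about real quantum hardware.

```python
import random

# Toy classical simulation of the BB84 key-exchange idea: Alice sends
# bits in random bases, Bob measures in random bases, and they keep
# only the positions where the bases happened to match.  Real security
# comes from quantum physics, which no classical simulation reproduces.

N = 32
alice_bits  = [random.randint(0, 1) for _ in range(N)]
alice_bases = [random.choice("+x") for _ in range(N)]
bob_bases   = [random.choice("+x") for _ in range(N)]

bob_bits = [
    b if ab == bb else random.randint(0, 1)  # wrong basis -> random result
    for b, ab, bb in zip(alice_bits, alice_bases, bob_bases)
]

# Publicly compare bases (never the bits) and keep matching positions.
key = [b for b, ab, bb in zip(alice_bits, alice_bases, bob_bases)
       if ab == bb]
print("shared key bits:", key)
# In the full protocol, Alice and Bob also sacrifice some key bits to
# check for the elevated error rate that an eavesdropper would cause.
```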

And then perhaps an international agreement on a distributed ledger
system, simply to define currency holdings, for a diversity of
currencies controlled by their primary owners, to run on the secure
servers, to provide extra security and scalability.
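
Just to fix ideas, here is a minimal Python sketch of a hash-chained
ledger of currency holdings. Every name and field here is a
hypothetical illustration, not the international design itself; a real
system would add signatures, consensus, and access control.

```python
import hashlib
import json

# Minimal sketch of a hash-chained ledger of currency holdings.
# All names below are hypothetical illustrations.

def entry_hash(entry: dict) -> str:
    """Deterministic hash of an entry (sorted keys for stability)."""
    payload = json.dumps(entry, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

ledger = []

def append_entry(owner: str, currency: str, amount: float) -> None:
    prev = ledger[-1]["hash"] if ledger else "genesis"
    entry = {"owner": owner, "currency": currency,
             "amount": amount, "prev": prev}
    entry["hash"] = entry_hash(entry)   # hash covers owner/currency/amount/prev
    ledger.append(entry)

append_entry("central_bank_A", "currency_A", 1_000_000.0)
append_entry("central_bank_B", "currency_B", 500_000.0)

# Tampering with any earlier entry breaks the chain of hashes, which
# is what gives the shared record its extra security.
print(json.dumps(ledger, indent=2))
```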

And then, most crucial, an Integrated Market Platform (IMP) to run on
those servers with that operating system, to perform transactions in
transparently designed securities AND in all objects to be controlled
by the "internet of things" (IOT), PLUS standards for qualifying apps
to run in distributed instances of the unbreakable operating system,
guaranteeing that the owner of an object has full control of it.
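
As one way to picture the "owner has full control" standard, here is a
minimal Python sketch using an HMAC check. The device and key names
are invented for illustration, and a real standard would need far more
(key management, revocation, auditing).

```python
import hmac
import hashlib

# Sketch of the rule: a qualified IOT app may issue a command to an
# object only with a valid signature under that object's owner key.

def sign(owner_key: bytes, command: str) -> str:
    return hmac.new(owner_key, command.encode(), hashlib.sha256).hexdigest()

def execute(owner_key: bytes, command: str, signature: str) -> bool:
    """Refuse any command the owner did not authorize."""
    if not hmac.compare_digest(sign(owner_key, command), signature):
        return False  # app is not acting for the owner: reject
    print("executing:", command)
    return True

thermostat_owner_key = b"secret-owner-key"   # held by the homeowner
cmd = "set_temperature 20"
ok = execute(thermostat_owner_key, cmd, sign(thermostat_owner_key, cmd))
bad = execute(thermostat_owner_key, cmd, "forged-signature")
print(ok, bad)   # True False
```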

I remember someone who thought this was an academic issue, until he
learned how many things in his house already are not under his
control.

AI should ONLY be used for decisions which come too fast for humans to
make at acceptable levels of performance. Self-driving cars should not
be part of it, in my view. Minimal real value to society, huge risks.
(Folks serious about the AI threat should also watch Terminator 3!).

But I must run now. More later.

====================================================================

This was a slightly cleaner if less detailed version of some thoughts on the same subject a few days earlier:

Discussions which .... got started have made me wonder more and more: how could we design a new Integrated Market Platform (IMP), a software system including an operating system, which would really be sustainable in the face of all the incredible security AND OTHER challenges (like risks of Terminator II or death by Artificial Stupidity) which are growing rapidly now? And yes, for trading financial instruments as part of it, which Frank and his friend Tom said might be more virgin, open territory than health care is.

I don't have a clear enough answer, even now, but a couple of weeks ago I posted a list of some key requirements and questions:


I am intrigued that someone, who may have gone to MIT, provided a much more beautiful and literate response and explanation for one part of one of the four questions, related to the AI aspect:

On a very quick scan, he does seem to have understood one piece which I have taken for granted: the use of neural network types of design, not just as a local black box, but as a system design. But the truth is that this guy at MIT, much brighter than most people, is still not one of the developers of that type of design, and the details really matter a lot when you want a system to work. I am one of those developers. In fact, the book I cited in my post, by Lewis and Liu, begins with a chapter by me giving a thorough review of what is now known about this type of technology, which goes FAR beyond the simple designs which Google used for its Go-playing machine (still intelligent enough to learn how to beat any human player, following an approach to Go which I published long before the Google folks got involved).

One of the key elements of that technology, which I didn't notice in his post the first time around, is the central, crucial role of foresight, and of asset valuation based on foresight.

This morning, I woke with an idea for a place to start in trying to make sense of this incredibly difficult design challenge: interest rates.

Many people think that interest rates, like operating systems, are simple things not really important to the design of software for exchanging financial instruments or commodities or control or information content. This is a great mistake.

The simple issue of interest rates and risk premiums for commodities like mortgages is a very deep issue, beyond the simplified version of efficient market theories as known to formal economics. (For example, people tell me that the new Dynamic Stochastic General Equilibrium models still assume that everyone has the same probability distribution.) I have looked somewhat into what caused the great market collapse of 2008, and would claim that the treatment of correlated risk was one of the major factors. Certainly mortgage defaults were the trigger, and human tendencies to lie and steal when people can get away with it were also crucial. But at the end of the day a few points emerge:

1. If we develop a new ironclad IMP, secure from the worst meddling of governments, what happens to it **IF AND WHEN** such an instability occurs and government meddling, the historical deus ex machina, is no longer there?

2. The risk estimates of 2008 were based on machine learning systems used to assess risk. (I know the guy who developed the first version, Hecht-Nielsen, who is still in touch with me; the later "improved versions" due to Vapnik were more marketable to the Wall Street/Oprah crowd but less mathematically solid for the application.) But the assumption was made, in packaging the mortgages together and in using the risk estimates, that the risks were INDEPENDENT, not statistically correlated between one mortgage and another. BUT THEY WEREN'T INDEPENDENT. At the critical time in late 2008, I saw many ordinary people interviewed on TV who said: "If gasoline prices keep rising as fast as they are doing now, I will have to default on my bills, maybe my mortgage." So many of them did -- not most, but enough to screw up the statistical assumptions, and put the whole world into default. The issue has always been with us: "Who insures against system-wide risks?" But in any case, risk assignment and types of risk are absolutely central here.
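
A toy Monte Carlo in Python can illustrate the point. All the
probabilities below are illustrative assumptions of mine, not
estimates of the actual 2008 numbers.

```python
import random

# Toy illustration of correlated versus independent mortgage defaults.
# Both scenarios have the SAME average default rate; only the tails
# differ.  All probabilities are illustrative assumptions.

random.seed(1)
N_LOANS, TRIALS = 1000, 2000
P_SHOCK = 0.10                    # chance of a system-wide shock (e.g. gas prices)
P_CALM, P_STRESSED = 0.02, 0.17   # per-loan default probability in each regime
P_AVG = (1 - P_SHOCK) * P_CALM + P_SHOCK * P_STRESSED

def pool_losses(correlated: bool) -> list:
    losses = []
    for _ in range(TRIALS):
        if correlated:
            stressed = random.random() < P_SHOCK   # one shock hits ALL loans
            p = P_STRESSED if stressed else P_CALM
        else:
            p = P_AVG                              # independence assumption
        losses.append(sum(random.random() < p for _ in range(N_LOANS)))
    return sorted(losses)

for label, corr in (("independent", False), ("correlated", True)):
    losses = pool_losses(corr)
    mean = sum(losses) / TRIALS
    p99 = losses[int(0.99 * TRIALS)]
    print(f"{label:12s} mean={mean:6.1f}  99th percentile={p99:4d} defaults")
# The independence assumption prices the pool as if the bad tail were
# mild; the shared shock makes the real tail several times worse.
```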

Two VERY TENTATIVE new thoughts:

On the technical design, maybe it is crucial to think in terms of
three software layers, bound together but distinct (a minimal sketch
follows this list), holding off for now on the content-providing part:

(1) the enabling platform, operating system, and communications
system, where the hardest security would live and where AI would be
unlimited, on a global network of central servers;

(2) the trading level, where all foresight decisions are made by humans, living on that platform;

(3) the physical control (IOT) system made of certified market-compliant apps, still using the same operating system and communication system and key variable definitions (like current prices, but more) but running remotely. (Luda reminds me that Ganesh Venayagamoorthy has some recent patents on technology for the electric power grid, which should ultimately be integrated into this kind of system, but which is also a source of relevant technology.)
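
To make the three-layer split concrete, here is a skeleton in Python.
The class and method names are invented for illustration only; the
point is the separation of duties around shared variable definitions
like current prices.

```python
# Skeleton of the three proposed layers.  All names are hypothetical.

class Platform:
    """Layer 1: secure OS + communications; holds shared definitions."""
    def __init__(self):
        self.current_prices = {}   # key variables shared by all layers

class TradingLevel:
    """Layer 2: all foresight decisions are made by humans here."""
    def __init__(self, platform: Platform):
        self.platform = platform
    def submit_order(self, trader: str, instrument: str, qty: float):
        # A human decides; the platform only executes and records.
        print(f"{trader} orders {qty} of {instrument}")

class IOTControl:
    """Layer 3: certified market-compliant apps, running remotely."""
    def __init__(self, platform: Platform):
        self.platform = platform
    def react_to_price(self, device: str, instrument: str, threshold: float):
        price = self.platform.current_prices.get(instrument)
        if price is not None and price > threshold:
            print(f"{device}: curtailing demand at price {price}")

platform = Platform()
platform.current_prices["electricity"] = 0.31
TradingLevel(platform).submit_order("human_trader", "electricity", 10.0)
IOTControl(platform).react_to_price("water_heater", "electricity", 0.25)
```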

On a larger scale, Luda reminds me that we really need to take account of other human dimensions, as in:


I have not assessed that yet, but certainly any new global computer transactions system must take account of some of the most basic values of key players in the Middle East, which may be just as well. It may not be possible to preserve the narrowest ethnocentric values (if such are present in this new statement, I don't know), but it may actually HELP that broader, more universal concerns, such as not killing humans with usury, may be REQUIRED as part of broad adoption.

But EXACTLY HOW? That is part of the design challenge. 

==================

Why no AI (let alone advanced quantum AI) in the trading level? That requires further thought and discussion, and maybe a little tweaking. Human politics in the world today is very discouraging, showing more signs of destructive, pathological, mundane groupthink and identity politics all over the world than of the true collective intelligence which I think is possible, which I have seen in the best-run NSF review panels, and which Doug may have some thoughts about too.

But the use of AI to replace humans in that critical function entails MANY dangers, not least of which are those related to the unavoidable need to specify utility functions governing the AIs. Who gets to vote? And of course, who is to keep individual traders using the system from using AI in ways that could be risky? Again, more thought is needed, to prevent really serious systemic risks. A few kludges have been sold in recent years to restrain the worst rule by twitch traders, but we need something more principled and resilient.

(Added later: if we just use "vector intelligence" in the trading level for the highest speed transactions, it should be OK. PARALLEL efforts to develop better human dialogue are important but not part of IMP design as such.)

============ EXAMPLE OF CORRELATED RISK
Meanwhile, the issue of correlated risk is not just an academic one. What happens even to a perfect IMP if H2S goes nuts and all humans die? There was yet another piece of bad news on the climate front yesterday:


Please forgive me if I copy over a few paragraphs of what I sent to the Lifeboat Foundation on this yesterday, updating the previous analysis in the second part of www.werbos.com/Atacama.pdf

=========================
It does look (in the absence of that more precise research) as if we
are well on course to the mass death of humans, with little chance of
global GHG programs cooling the Antarctic in time to reverse that (or
the sea level rise which Hansen once called the "worst case," now
underway).

A few months ago, I started saying: "It may be time NOW to start
implementing Teller's geoengineering plan." So I simply did what I
should have done long ago, googling Caldeira's latest work on that
subject. It is more depressing than I thought. The materials HE is
pushing are sulfates, the very worst materials to start pumping into
the oceans when they MIGHT be crucial to H2S archaea proliferation.
(That makes the aquarium-level research all the more essential, along
with the need for research to consider other aerosols.) Even worse,
the method works less well at the poles, where we need it most.

Sadly, what's really needed is a major international research program
aimed at deeper strategic push/study on OTHER hopes for
geoengineering... with or without US...  Maybe SSP is not our best
venue now for low cost launch R&D, among others, though we WOULD need



