Monday, September 13, 2021
Terrible news on out-of-control AI
I was ever so excited, months ago, when John Kerry and UN Secretary General Guterres called for a new climate extinction office in UN HQ, which would generate real information about the biggest real threats and technically valid solutions (using peer review, two-way communications and the other tools maximized by "the old NSF"). That seemed to peter out, but I found it even more exciting that the office of the Secretary General asked my friend for a more substantive, detailed response:
================================================
Paul,
As an addendum to the letter to the UN Sec-Gen (attached) on the proposed UN Office of Strategic Threats, we were asked to give a paragraph on each of these, with weblink(s) to the best research so far:
-- Weakening of the Earth's magnetic shield that protects us from deadly solar radiation
-- Massive discharges of hydrogen sulfide (H2S) from de-oxygenated oceans, caused by advanced global warming
-- Malicious nanotechnology (including the "gray goo" problem)
-- Loss of control over future forms of artificial intelligence
-- A single individual acting alone, who could one day create and deploy a weapon of mass destruction (most likely from synthetic biology)
-- Nuclear war escalation
-- Uncontrollable, more-severe pandemics
-- A particle accelerator accident
-- Solar gamma-ray bursts
-- An asteroid collision.
=============
I replied with technical details, and then summarized the big picture:
----
Key points:
1. It is not merely a risk that AI or the internet of things will go out of control. It is a certainty. The competitive forces (both political and economic) guarantee that higher and higher levels of true artificial general intelligence will be deployed. Smarter systems will dominate over less intelligent and capable systems, in a fast-growing world where the number of controllable "things" has already surpassed the human population and "intelligence" is what controls the signals which get through.
2. The massive transformation of the earth already underway is almost certainly NOT unique in our galaxy, let alone our cosmos. It is part of a massive, long-standing system of evolution, depicted in many new actual photographs like the one attached. If we adapt intelligently enough to the new cyberearth, before entropy catches up with it (as it is already doing in our out-of-control world internet), we can become like the living connected nodes in this vast network of dark matter connecting and shaping small nodes like our galaxy. But if we reject the needs of adaptation, and do not keep up with greater intelligence and connection between humans, cybersystems and the living natural world of our solar system, we can easily be washed away by the kinds of currents already destabilizing markets, legal frameworks and battlefields. Highly intelligent systems can learn to be cooperative, or even be designed for true social connection, but to try to "control" them is utterly unrealistic. No one should trust those who promise to do so.
3. Only the highest, most connected exercise of our human potential, our connections with nature, and the very highest levels of artificial general intelligence (AGI) could give us hope of surviving and keeping up with the visible competition across our cosmos. True AGI is based on GOALS or VALUES, as in the most powerful new breakthroughs we now have in RLADP, defined in what I sent you already, and in the links. (I hope some of you can recall/retrieve that, and use it somehow.)
Just FYI, I attach two totally new papers, one an abstract accepted to the IEEE QCE21 conference and one submitted yesterday to an Elsevier journal (for which the odds look quite good). These give links to the best AGI known to most people yet, but also a pathway to build Quantum AGI (QAGI), a level of AGI which has not been discussed by anyone else before. There is simulation work, and there are new experiments, which I hope I can discuss more publicly before too long.
================================
The SLIGHTLY more complete explanation of the risk I sent before this:
Hi, Jerry!
You asked me for climate, and James for gray goo. But I feel I owe you a few comments on item 4, uncontrolled AI.
By the way, IEEE has just widely distributed the announcement of my winning the Frank Rosenblatt Award, their top award for Computational Intelligence:
2021 IEEE Frank Rosenblatt Award
“For development of backpropagation and fundamental contributions to reinforcement learning and time series analysis”
=======================================
The truth is that very few policy makers understand the real tradeoffs and possibilities, and what is going on under their noses, regarding uncontrolled "new types of AI." That includes the Lifeboat Foundation and the vast communities of enthusiasts and worriers. A major part of the threat is that so many people are making decisions and forming opinions without knowing key basics, even in the software development community itself. By the way, Karl Schroder in the MP circle has a kind of natural understanding of where AGI might REALLY be going, one much more valid than what the verbal policy "experts" seem to imagine.
But for me, the problem is where to begin in summarizing the full story, and which of the thousands of backup documents to begin with. Maybe it is better done by accepting Metta's invitation to do a (citeable) Zoom DISCUSSION, based on your questions, after you have a little time to scan this email and sources like http://www.werbos.com/How_to%20Build_Past_Emerging_Internet_Chaos.htm. Or some of the YouTube videos, which range from attempts to simplify to hard-core technical reality in the very best venues.
ONE WAY to pose the issue is to ask: "What could AGI do to us? Will it be uncontrolled?"
One way to DEFINE AGI is: a type of information processing system designed to LEARN to maximize some kind of hardwired utility function. True AGI is an integration of two hard-core universal technologies, the PREDICTION aspect and the OPTIMIZATION aspect. ANYONE WHO INSTALLS A TRUE AGI must hardwire a decision or a system to evaluate the bottom line of what it is supposed to maximize. This is an absolute, unavoidable reality; efforts to wiggle around it just hide the path and raise the risks.
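To make that definition concrete, here is a minimal toy sketch in Python (purely my illustration, under assumed toy dynamics, not any deployed design): a learner that builds a PREDICTION of how its actions change a toy world, and an OPTIMIZATION step that picks the action whose predicted outcome maximizes a hardwired utility U. The target value, dynamics and names are all assumptions for illustration.

# A minimal sketch (an illustration, not any deployed system) of the two
# ingredients named above: a PREDICTION model learned from experience, and
# an OPTIMIZATION step picking actions to maximize a hardwired utility U.
import random

# Hardwired utility: the designer must commit to SOME bottom line.
# Here the toy "world" is an integer state, and U rewards being near a target.
TARGET = 10
def utility(state):
    return -abs(state - TARGET)

# PREDICTION: learn the average effect of each action from observed transitions.
effect = {+1: 0.0, -1: 0.0}        # estimated state change per action
counts = {+1: 0, -1: 0}

def true_step(state, action):      # hidden dynamics the learner must discover
    return state + action + random.choice([-1, 0, 1])

state = 0
for _ in range(500):
    # OPTIMIZATION: choose the action whose PREDICTED next state maximizes U.
    action = max(effect, key=lambda a: utility(state + effect[a]))
    if random.random() < 0.1:      # occasional exploration
        action = random.choice([+1, -1])
    nxt = true_step(state, action)
    # Update the prediction model from the observed transition.
    counts[action] += 1
    effect[action] += (nxt - state - effect[action]) / counts[action]
    state = nxt

print("final state:", state, "learned effects:", effect)

Note that nothing in the loop questions U itself: change the hardwired utility and the same machinery pursues a different bottom line, which is exactly the point about who decides what gets maximized.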
There are LEVELS and LEVELS of learning and cognitive capability, from the lowest lamprey kind of artificial brain, to rat level, to human level, to sapient level, and beyond, to new types of quantum and multimodular systems which most people think are science fiction but which others are quietly deploying already in ways which start to control human lives.
LIMITING the level of AGI intelligence is no solution. NSF and NASA once studied plans for lunar development which would exploit the moon by deploying hordes of "artificial metal cockroaches" -- systems intelligent enough to survive and dominate, but not refined enough to benefit humans or even avoid longer-term risks. The PRESENT TRENDS in AI deployment, both in governments and corporations, are very similar or worse, moving towards what we call a "Nash equilibrium" -- an outcome very popular among many program designers but likely to cause fatal instability in the global system, by many paths, not least of them slaughterbots and human unemployment. Many believe slaughterbots are just sci-fi, but some of us have seen them.
One source proving that you cannot trust what most hopeful "experts" tell you is: http://1dddas.org/activities/infosymbiotics-dddas2020-october-2-4-2020/dddas2020-video-presentations
The sheer diversity is amazing. RTX and PWC, however, already demonstrate capabilities others consider impossible, and the new IEEE work goes well beyond that. Luda and I have new papers on how to build quantum AGI (QAGI), a huge step beyond the AGI we designed in the past, and I am very worried about where to place them to navigate between being too open and being too restrictive. IEEE QCE2021 has accepted my abstract connecting QAGI to climate, but where to put the mathematical details, and how to navigate the dysfunctional politics? "Nash equilibrium" is another term for the war of all against all.
Best of luck,
Paul
More on the actual AGI situation:
My concern is that AGI development is like riding a bicycle, where slowing down OR EVEN NOT GOING FASTER ON THE DEEP LEVEL may be riskier and more unstable.
The PRESENT trend is a "Nash game" of developers working for governments and corporations, moving fast to deploy top-down, low-intelligence solutions. Ironically, part of this is my fault, because they are moving on the path up to mammal-level AI maximizing a central U over time, treating humans more and more as things in the internet of things. They do it because the math and the tools are THERE.
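A toy game shows why such a "Nash game" can trap everyone in a bad outcome. This sketch uses payoffs I have assumed purely for illustration (not from any cited source): each player chooses to deploy FAST or with CARE, racing is each side's best reply no matter what the other does, and so the unique Nash equilibrium is (FAST, FAST) even though both sides would prefer (CARE, CARE).

# Toy "race to deploy" game, with illustrative payoffs assumed by the editor.
from itertools import product

ACTIONS = ["FAST", "CARE"]
# payoff[(row, col)] = (row player's payoff, column player's payoff)
payoff = {
    ("FAST", "FAST"): (1, 1),   # reckless race: unstable, low value for both
    ("FAST", "CARE"): (4, 0),   # racer dominates the careful side
    ("CARE", "FAST"): (0, 4),
    ("CARE", "CARE"): (3, 3),   # cooperative, stable, better for both
}

def is_nash(a, b):
    # Nash equilibrium: neither player gains by unilaterally switching.
    row_ok = all(payoff[(a2, b)][0] <= payoff[(a, b)][0] for a2 in ACTIONS)
    col_ok = all(payoff[(a, b2)][1] <= payoff[(a, b)][1] for b2 in ACTIONS)
    return row_ok and col_ok

for a, b in product(ACTIONS, ACTIONS):
    if is_nash(a, b):
        print("Nash equilibrium:", (a, b), "payoffs:", payoff[(a, b)])
# Prints only ('FAST', 'FAST'): the "war of all against all" outcome,
# even though ('CARE', 'CARE') would leave both players better off.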
SOME hope lies in the Sustainable Intelligent Internet, a different KIND of optimization design, based on RLADP with maximum full use and development of mundane human potential. That's another level of design challenge, NOT BEING advanced in a mathematically well-grounded and integrated way ANYWHERE yet. It is POSSIBLE, as I outlined.
But in truth, Pareto optimality and issues like climate survival require MORE, and that is why quantum and noosphere connections are important. The utility-determining system is crucial to the outcome.
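One way to see why the utility-determining system is crucial, again as a toy sketch under the same assumed payoffs as above (an illustration, not a claim about any real deployment): if the hardwired utility is one player's payoff alone, the optimizer picks the exploitative race outcome; if the utility weighs both players' bottom lines, the same optimizer picks the Pareto-superior cooperative outcome.

# Reusing the toy payoff matrix above: the outcome depends entirely on WHICH
# utility the optimizing system is hardwired to maximize.
from itertools import product

ACTIONS = ["FAST", "CARE"]
payoff = {
    ("FAST", "FAST"): (1, 1),
    ("FAST", "CARE"): (4, 0),
    ("CARE", "FAST"): (0, 4),
    ("CARE", "CARE"): (3, 3),
}

def best_joint_outcome(weights):
    # A single optimizer scoring joint actions by a weighted sum of payoffs.
    def score(cell):
        p = payoff[cell]
        return weights[0] * p[0] + weights[1] * p[1]
    return max(product(ACTIONS, ACTIONS), key=score)

print(best_joint_outcome((1.0, 0.0)))  # selfish U  -> ('FAST', 'CARE')
print(best_joint_outcome((0.5, 0.5)))  # shared U   -> ('CARE', 'CARE')

In this toy, the optimization machinery never changes; only the hardwired weights do, and the whole outcome changes with them. That is the point.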