===================
===================
On Wed, Aug 15, 2012 at 4:27 AM, Stuart Armstrong wrote:
The verdict? It looks like AI predictors are fond of predicting that AI will arrive 15-25 years from when they made their prediction (and very fond of the 10-30 range). And there is no evidence that recent experts are any different than non-experts or those giving (incorrect) past predictions.
This is not an isolated phenomenon. When I was at DOE, people had a saying: "Fusion is the energy source of the future.
It is 20 years in the future, and always will be." And similar things have been said, on and off, about Brazil.
None of this is the fault of intelligent systems technology (which I do not equate with "AI," which is a human culture),
or of fusion. It's more a matter of the specific folks who declare that precognition does not exist to any degree, in any form, and then
declare that they have a perfectly exact understanding of what the future will be. Or folks who need
research funding, who calculate that they need to say these things to get it.
With intelligent systems, as with access to space, it seems to me that objective reality allows some radically different possible pathways,
and that human whims will play a central role in deciding which path we take.
=========================
=======================
Then after more discussion:
Maybe I should say a little more about this... It's a bit like China, where I've been involved in the details
(and with sensitive personalities) deeply enough that it's a bit awkward to respond to, but I'll try.
On Wed, Aug 15, 2012 at 3:13 PM, Brian Wang wrote:
Memristors can be made into synapses. It seems like HP and Hynix will commercialize terabit memristor memory in 2014 that will be faster than flash and denser.
Synapse-like memristors are among the things DARPA is working on for neuromorphic chips. It would seem that by 2017 there will be terascale neuromorphic chips.
Europe could fund the Human Brain Project for 1 billion euro. Announcement in February 2013.
http://nextbigfuture.com/2012/08/human-brain-project-awaiting-february.html
Google has used a neural net to identify images without human assistance.
http://nextbigfuture.com/2012/06/google-develops-artificial-intelligence.html
==================================
First, let me stress that nothing I say on this subject reflects the views of NSF, which really does
contain a great diversity of views. However, it is informed by lots and lots of NSF resources, as you can see by looking at:
http://www.nsf.gov/pubs/2007/nsf07579/nsf07579.htm
Proposals for "Cognitive Optimization and Prediction" (COPN) are also still welcome in the core
program I handle at NSF ("EPAS", a valid search term at www.nsf.gov), though: (1) the core program
usually doesn't fund anything larger than $300,000 over three years or so; (2) interesting proposals in that area have come in and been funded
from the core, but not as many as under the COPN group, which is now in its last year; (3) most folks seriously interested
would benefit from additional material which I routinely send out in response to inquiries; (4) I am only one
of three people running EPAS, and COPN is only one of the topics I handle.
----------------------------
Hard to keep it short....
The human culture of AI in computer science has its own deeply embedded folklore, in some cases similar in my view to the
deeply entrenched belief in Adam and Eve in the human culture of Christianity (much of which would probably be seen as an abomination
by Jesus Christ himself, who really wasn't into stoning women).
Part of that ancient folklore is the snappy phrase: "You don't build modern aircraft by flapping the wings."
(Actually, if aircraft designers had been as religious about that as some orthodox AI folks, I wonder whether
airplanes would have flaps? How much were flaps a key part of the breakthrough by the Wright Brothers,
which was NOT a matter of inventing lift but of inventing a way to control the system? Please forgive the digression...)
One of the key accomplishments of COPN was that we got critical funding to work by Yann LeCun which the
orthodox folks were bitterly opposed to -- work which probably would never have reached any degree of visibility without
that intense special effort by us, and which some orthodox folks even threatened to sue me personally for allowing to get through.
(Did I mention "sensitivities"?...) There were many orthodoxies opposed to the work... but perhaps the strongest
was the "flappers" who noted that LeCun wanted to use straightforward clean neural network designs to move in on
territory which orthodox classical feature designers viewed as their own. If that orthodoxy remains in power, then, in my view,
we will never get so far as mouse-level artificial general intelligence, let alone human.
Once LeCun, and his fellow PI Andrew Ng of Stanford, had a sufficient level of funding ($2 million), and a mandate, it didn't
take them long to achieve tangible breakthrough results, disproving a whole lot of the ancient folklore. They broke the world's record
on tough benchmarks of object recognition, in images of many thousands of pixels, using clean neural networks to input
those complex images directly. Since it was a general-purpose learning approach, they found it relatively easy to just try other
benchmarks, in phoneme recognition and even natural language processing. A year or two ago, there was a list of seven world's records..
but it has grown... and of course, there have been a few groups replicating the basic tools, and a few groups doing tweaks
of what produced the breakthrough.
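For readers who want a concrete picture of what "clean neural networks" that input complex images directly means in practice, here is a minimal sketch of that style of end-to-end learning: a small convolutional network trained on raw pixels, with no hand-designed features in between. To be clear, this is not LeCun's or Ng's actual code; the architecture, the random stand-in data, and every parameter choice below are illustrative assumptions only.

import torch
import torch.nn as nn

class TinyConvNet(nn.Module):
    """A toy convolutional network that consumes raw pixels directly."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(start_dim=1))

# Random stand-in data: 64 RGB images of 32x32 pixels, labels in 0..9.
images = torch.randn(64, 3, 32, 32)
labels = torch.randint(0, 10, (64,))

model = TinyConvNet()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

for step in range(20):                  # a few gradient steps, just to show the loop
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()                     # gradients via backpropagation
    optimizer.step()
    if step % 5 == 0:
        print(f"step {step}: loss {loss.item():.3f}")

The point of the sketch is only that the same generic learning loop, unchanged, can be pointed at images, phonemes or text once the inputs are encoded as arrays of numbers -- which is why it was relatively easy for that group to jump from one benchmark to the next.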
The story gets more complicated from there, but a few key highlights:
(1) There is a small but important "deep learning" group which has drawn energy and respect from that success, but is certainly NOT
assured continued support in the future, now that COPN has expired, and massive reactionary forces are quietly licking their chops;
(2) Seeing that success, DARPA immediately funded a whole new program, in which they tell me the bulk of the money
went to two awards -- one under LeCun and one under Ng, with an evaluation component led by Isabelle Guyon, whom we
previously funded to run some important competitions -- and then Google got into it, did further follow-on funding,
and seems to have effectively integrated the new technology into an interesting and promising
larger plan.
(3) Important as this was, it was just the first of about a dozen new steps we would need in order to get
artificial general intelligence (AGI, to use Ben Goertzel's expression) as powerful as that of the mouse.
See www.werbos.com/Erdos.pdf for a discussion of further required steps, on the "cognitive prediction" half of the roadmap.
(I also have a couple of papers in Neural Networks, mainly the one in 2009 which probably led to my Hebb award, on this roadmap.)
WHAT OF PROGNOSTICS?
Ignoring the hardware side for now, and considering only the NECESSARY architecture/design/algorithm aspects...
If I look at the actual rate of progress of human society in this field over the past 40 years, and compare that with what the roadmap requires,
I'd say that the probability we get to mouse-level artificial general intelligence by the year 2100 is somewhat less than 50%.
In fact... COPN was quite unique, and simply would not have happened without me. Likewise, NSF's earlier Learning and Intelligent Systems initiative,
which helped resurrect the field of machine learning (which orthodox AI was then strongly opposed to, with a huge amount of Adam-and-Eve folklore),
would not have happened without me. (I still remember the exact conversation between me, Joe Young, Howard Moraff and Andy Molnar where
we worked this out -- and what led up to that conversation. Long story. I have not claimed that it was ONLY me, of course; other players, like
Joe Bordogna at NSF, also played special roles.)
Since I get to the usual retirement age in about two weeks... it is not at all obvious to me that anyone else is really about to push people ahead
on this kind of roadmap. There are certain obvious forces of inertia, stakeholder politics and conservative thinking which have caused just about every other funding
effort out there to gravitate towards a different course. Even if COPN itself were to be re-run at the special 2008 level of funding or more,
not many people would fund work like LeCun's and Ng's under those circumstances. Most government funding people are simply not so deep into
things. On balance, I see hope that the rate of progress on the roadmap MIGHT be made faster... but the trends now in play look more
like a slowdown than a speedup.
More on the larger implications below, but first...
WHAT OF HARDWARE?
Mouse-level intelligence (let alone human level) requires a COMBINATION of algorithm/design/architecture technology (above)
plus the sheer horsepower of "having enough neurons."
As Brian mentioned, there is a lot of rapid progress on the hardware front.
An old friend of mine, Jim Albus (whose memorial/funeral conference session I attended a few months ago) happened to be close to
Tony Tether, director of DARPA under Bush. At the funeral, they played a video where he laid out the real history and rationale behind the
big SyNAPSE program, under Todd Hylton, which was DARPA's first big plunge into this specific area. The program funded
three big projects, at Hughes, HP and IBM. (Tether had a philosophy of not funding universities... "That kind of high risk basic research is for NSF.
We do translational research." I have the impression he made it much harder to do joint DARPA-NSF things than it had been in the past.)
IBM certainly put out press releases saying things like "we have already built a cat-brain-level system, and human level is coming soon." But so far as I
can tell, what they MEANT was that they were building hardware to implement that number of neurons, FOR A SPECIFIC model
of how neurons work; many of us believe that these early simplistic neuron models have almost no hope of providing general intelligence
at the level of a mouse, no matter how many such units get wired together. What's more, a lot of the hardware designs were very specific
to those specific models. Many of us were reminded of the many early efforts to go "from wet neuroscience straight to VLSI, without math or
functional analysis in-between" -- efforts which were technically unsuccessful but politically very successful (like the popular image of Solyndra, but actually worse).
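To make that criticism concrete: the kind of "simplistic neuron model" at issue is, roughly, a fixed spiking unit such as the leaky integrate-and-fire neuron sketched below. This is my own illustrative reduction, not IBM's actual design or code, and every parameter value is an arbitrary assumption; the point is only that scaling up a fixed dynamical rule like this says nothing, by itself, about the learning and prediction architecture a mouse-level intelligence would need.

import numpy as np

def simulate_lif(input_current, dt=1e-3, tau=0.02, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Simulate one leaky integrate-and-fire neuron driven by an input current trace."""
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        # The membrane potential leaks back toward rest while integrating the input.
        v += dt * (-(v - v_rest) / tau + i_in)
        if v >= v_thresh:              # threshold crossing: emit a spike and reset
            spike_times.append(step * dt)
            v = v_reset
    return spike_times

# Drive the unit with a constant current for one second of simulated time.
current = np.full(1000, 60.0)
print(f"{len(simulate_lif(current))} spikes in 1 s of simulated time")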
In the end, HP learned a lot from this effort, and from other things they were doing independently, and chose to depart from the SyNAPSE program,
so as to get more freedom to develop more flexible hardware, more relevant in my view to the goal of mouse-level AGI.
The other key development was HP's discovery of memristors out there in the real world, inspired by their understanding that decades-old papers (in this case by Chua)
can still be relevant today. The nearly 40 years from Chua's 1971 paper to HP's 2008 discovery are another example of how fast science is moving these days.
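For readers who have not met the device: Chua's original idea was a two-terminal circuit element whose resistance depends on the charge that has flowed through it, and the HP result was a physical device that behaves roughly that way. Below is a minimal simulation sketch of the simple linear-ion-drift memristor model commonly associated with that result; all parameter values are illustrative assumptions, not measured HP device data.

import numpy as np

R_ON, R_OFF = 100.0, 16e3    # resistances of the fully doped / undoped device (ohms), assumed
D = 10e-9                    # device thickness in meters, assumed
MU_V = 1e-14                 # dopant mobility in m^2/(V*s), assumed

dt = 1e-4
time = np.arange(0.0, 1.0, dt)                  # one second of simulated time
voltage = np.sin(2 * np.pi * 1.0 * time)        # 1 Hz, 1 V sinusoidal drive

w = 0.5 * D                                     # state variable: width of the doped region
memristance = np.zeros_like(time)
current = np.zeros_like(time)
for k, v in enumerate(voltage):
    # Resistance is a mix of doped and undoped regions, weighted by the boundary position.
    m = R_ON * (w / D) + R_OFF * (1.0 - w / D)
    i = v / m
    memristance[k], current[k] = m, i
    # Linear ion drift: the boundary moves in proportion to the current, so the device
    # "remembers" the charge that has passed through it.
    w += dt * MU_V * (R_ON / D) * i
    w = min(max(w, 0.0), D)

# Under AC drive the i-v curve traces the pinched hysteresis loop that is the memristor's
# signature; here we just report how far the resistance swings as the state drifts.
print(f"memristance ranged from {memristance.min():.0f} to {memristance.max():.0f} ohms")

The synapse analogy in Brian's note is essentially this: a resistance that changes as a function of its own signal history is the kind of element you would want for implementing an adjustable connection weight directly in hardware.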
As I understand it, there are two new seminal books on the memristor (and neural-capable hardware) story out there. I think of them
as "the HP book" and the "Chua book," though both are edited collections, the former edited by Robert Kozma et al (and certainly now out,
from Springer) and the latter by Adamatzsky (past the galley stage, but I am not sure if it's out quite yet). Maybe the progress will
be faster on the hardware part of the roadmap, but I wouldn't take it for granted. They really need the architecture side to get the large-scale
apps needed to keep these things moving forward. Also, there are questions about the overall rate of progress in science...
LAST BUT NOT LEAST: THE CULTURAL/SOCIAL CONTEXT
In predicting the future of artificial general intelligence (AGI), as
in predicting other technologies, it's also important to think hard
about the larger cultural
and social context. I've seen that over and over again, in 24 years at
NSF (not representing NSF), across a wide variety of fields.
The US is going through a pretty tough time right now, in many ways.
(The US, EU and China are not even out of the woods yet
on the threat of Great Depression II happening within a couple of years.)
When I do think about what's going on, I do sometimes think about
Atlas Shrugged or the Book of Revelation (as do many people),
but I also think about Spengler's Decline of the West, which was a
bigger part of my own childhood upbringing. (I was lucky
to go to a school where lots of people were into Toynbee, but being
proud of German roots, I did have to read the German version of the
story,
just as I much preferred Goethe's Faust over Marlowe's character who
reminds me more of the Irish side of the family and Paul Ryan.)
The real history, from Toynbee or Spengler or other sources, makes it
abundantly clear that technology does not always move forwards.
I have seen the same myself, right down in the trenches, even in very
hard core science and technology.
Spengler argued that the core living culture of a society, like the
green wood of a tree, tends to die off much sooner than the hard, petrified
technology which that culture initially produced. And so... the
EXISTING successes of folks like LeCun and Ng seem well-poised to migrate into
Google. Google and its best competitors seem well-poised to thrive for
a very long time...but what about the next big things?
A lot of the next big things do come to NSF. Again, not speaking for
NSF, I do see some things here. The little division where I work
has about ten technical people, responsible for the advanced, next
generation basic research all across computational intelligence
(including COPN), electric power, chips, photonics, wireless
communications, nanoelectronics (and the biggest single chunk of
nanotechnology in general),
and a lot of advanced sensors and so on. It used to be that NSF only
accounted for about 30% of university basic research in such areas,
because of basic research from places like DARPA and NASA and the
triplet of ONR/AFOSR/ARO. (Four of those six agencies are now
within just a few blocks of each other in Virginia.) But Tether made a
massive shift at DARPA, Griffin made one at NASA, and 6.1/6.2 basic and applied research funding came under
heavy pressure in a lot of places elsewhere, which in any case had a
very focused agenda. As it happens, both Tether and Griffin
shifted more of the money of their agencies to Big Stakeholder corporations.
This was one of the reasons why I saw funding rates in my area drop
practically overnight in the "Halloween massacre" of 2003, down from
about
30% to 10%. If the number of people working in a field has a
connection to the amount of money... one might expect the underlying
communities to shrink by a factor of three, as discouragement and such
work their way through the system. But other factors were at work.
Also, folks in Congress concerned about the erosion of America's
technological strength (which depends not only on new ideas
but on students produced by university faculty in these fields)
promised to fill in the vacuum, bit by bit.
For myself, I had to zero out at least two emerging areas as a result
of the Halloween massacre -- advanced control for
plasma hypersonics, and quantum learning systems (which we had
pioneered for a bit). We learned a lot from those efforts,
but who knows what will survive? The latest crash of the X-51A is an
example of what DOD is now doing on more uncorrected politically
correct lines.
Perhaps the longest-lead indicator of new directions is the mix of
what we see in CAREER proposals, which I looked at
just yesterday. They are our future.
============
=============
But at the end of the day, I keep telling myself that I shouldn't
worry so much about the low rate of progress
(or negative progress) towards hardware AGI. After all, there was that
Terminator risk, which was MUCH closer
to reality than most folks would imagine.
http://drpauljohn.blogspot.com/2009/10/could-terminator-ii-come-true-true.html
Maybe the most important value of this work is how it can inform our
understanding and use
of our own natural intelligence (as in one of my papers this month in
Neural Networks).
But then again, we could really use more intelligent logistics
systems, power grids and financial market systems
(as partially reflected in one of the other papers), if we can
take understanding and avoiding the risks
more seriously than we seem to yet. My wife sometimes says I should
just go ahead, retire from NSF
and build some of these systems myself -- but there are other things
to think about. Like what to do realistically
about those risks, as in Bernanke's recent question: "How can we
arrange things so that this giant machine we are
building really serves the larger goal of serving human happiness, and
not just producing some new kind of slavery?"
Best of luck,
Paul
Myself, I don't think we'll see "human-level intelligence" in the traditions of sci-fi, ever. This is, admittedly, colored by my own bias towards where I'd like to see computational technology go, but it's not pessimistic.
I think we will see computers and human interface advance in a few breakthroughs that will naturally evolve as our CI tech becomes more advanced and more integrated into our existing technology infrastructure. In particular, I do think we'll eventually see neural interface, and with that, the integration of the computational speed and memory of "traditional" computers with the latest CI refinements and the consciousness of the human.
This isn't some trans-humanist daydreaming (though it IS daydreaming); we'll still be humans in our own bodies. We'll just be able to much more directly interface with, and in some ways expand our personal "processing power" into, our personal computational devices. I don't think we'll see "true human-level AI" develop after that except, perhaps, as an interesting pure research experiment, and even then it will be somebody's personal project with the funding that usually accompanies such things.
Instead, we won't have a real need for independent AIs that can think like we do. We'll provide the consciousness to control our computers and our machines with human-like behavior ourselves.