Monday, August 27, 2012

views of the election from analytical to humorous

I do not have time today to make this coherent...

I am committed, both as a government employee and as a Quaker, to being as helpful as possible to good people (and even to bad people when they are trying to do good or redeem
themselves), regardless of party. I am registered as Independent. I worked
for Arlen Specter in 2009 -- making the choice when he was Republican -- because of
some similarity in philosophy.

But in 2012... I am really worried both about the possibility of Great Depression II
and about war from the Middle East. Doing due diligence as best I can... well, the truth is that I gave a little money to the Obama campaign the morning after Romney picked Ryan,
and a smaller bit even before.

There is some irony here because Ryan may even be a kind of cousin, through the Donohue-Ryan clan on my mother's side. Most other political Ryans we heard of
when I was younger did turn out to be relatives... but that won't stop a depression.
Like Ryan, we're into numbers... but if you pick the wrong numbers to maximize, you're dead. You can't run a planet the way you run a grocery store in a big city.
Nothing personal.

As I think about the lack of the deep dialogue we need to do better...

I think about the Koch brothers.

A lot of amusing things seem to be coming my way about them.

Luda and I have debated: "How do you pronounce Koch in this case anyway?"
I'd say it's a bit like how you pronounce "Paris"... Paree originally,
Pariss if you Americanize. So the original Koch may have "ch" as in "charisma,"
but these particular folks... aren't they now "ch" as in cheat?
As for my intellectual friend Koch... HE is a "charisma"-type Koch, but I don't
know if they are relatives.

A conservative newspaper, the Examiner, showed up on our door with a portrait
of Koch labelled "the Demon of the Left." I have to admit I found it quite striking.
From the press on the Koch brothers, it is true that I had unconsciously imagined
something more like Cheney, like a kind of Sith Lord. But the face I saw this
time was VERY different... and I basically recognized it, every unconscious
twitch portrayed by the apparently very able artist. It looked exactly
like a friend I was once very close to (who, by the way, would resent the comparison,
just as one of the folks I fund is aghast that everyone sees him as a kind of
hidden brother of Romney). That guy, like Khamenei, is partly in trouble because
of the way he is always in the shadow of his father... and because his awe and sense
of obligation keep him from true independent thinking and from the kind of energy his father had. There is much more to be said... but it's scary.

In a similar vein... I just went to Barnes and Noble, which I don't do all that often
any more, but Luda suggested...

And as I went to buy a book, I bumped into a book prominently displayed:

"An Alien in the Family, by Gini Koch."

OK, I did have to follow up... I bought volume I, along with a paperback by the author
I was actually looking for (a guy from Salt Lake City).

In fact... I hate to say... there is a certain degree of mental aberration
in that face... and that friend of mine was medicated for manic-depressive tendencies.
(I suppose Khamenei needs stronger stuff... unless he is now just a hopeless captive of Stalin-type ideological apparatchiks.) It did remind me of some novels by
Van Vogt and whatnot... also near the Koch book.

As we left, Luda pointed out another book we should read someday, by Ronson, on psychopaths... I had enjoyed his previous book, The Men Who Stare at Goats.
Ronson's quick description of what a psychopath is, on the cover..
did remind me of what kind of risk we do seem to be facing...


Just a couple of days ago, the local Express newspaper reported how Paul Ryan
chose "Welcome to the Machine" as his favorite music, and said how he had been working
to prevent us all being eaten alive by that great machine. The original musician said: "You don't get it, man. You have BECOME the machine." Exxon has become
a big international government, making rules over its minions, at least as serious
as the one in Washington. Totally getting rid of checks and balances makes
things worse, and REDUCES human freedom. As an undergraduate and an active Young Republican, I was a bit disgusted when I saw Galbraith's book... arguing that
the best hope for freedom now is a kind of three-way balance of power, with
democratic government, industry and labor all providing some balance.

But now, in retrospect... I wish
I had advised Specter to go ahead and vote for card check in 2009, when I was close
to his legislative assistant covering that area... but we all live and learn.

Reagan really wanted to attack corporate welfare and corruption... and it is
incredibly sad how diametrically opposite so many of the people in the party have gone
in just the past few years....


In 2009, I saw the characteristic failings of the left every day...
but Obama seems to have learned a lot on the job, while analysis
of the Ryan plan... well, it doesn't really work. We can't afford a new
great depression right now. What worries me are not the similarities to
the Irish side of the family, but to some parts of the German side.

Best of luck,


Saturday, August 18, 2012

how did we lose the world's lowest cost solar farm producer?

I would still be grateful for more information on this story, and even some
blunt and detailed one-on-one feedback, but I do begin to see the picture, I think.
Great thanks to those who have sent me some information.

COST is the biggest issue in deciding whether we can really make the switch to renewable, carbon-free sources of electricity, which will never run out of fuel.
The difference between 10 cents per kwh and 20 cents may not sound like much, but if we met the whole world's electricity generation needs (20,000 terawatt hours in 2008,
according to IEA/OECD data) that would be the difference between the world paying $2 trillion per year versus $4 trillion per year. Going carbon free will be a whole lot easier if we don't have to ask for an extra $2 trillion/year, and can even compete in a large part of a free energy market (if we ever get one).
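That arithmetic is easy to check. Here is a minimal sketch, assuming only the 20,000 TWh IEA/OECD figure quoted above:

```python
# World electricity generation cited above: 20,000 TWh/year (IEA/OECD, 2008).
# 1 TWh = 1e9 kWh, so this is 2e13 kWh/year.
kwh_per_year = 20_000 * 1e9

for cents_per_kwh in (10, 20):
    dollars_per_year = kwh_per_year * cents_per_kwh / 100
    print(f"{cents_per_kwh} cents/kwh -> ${dollars_per_year / 1e12:.0f} trillion per year")
```

So a 10 cent gap in price per kwh really is a $2 trillion/year gap at world scale.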

So I was shocked when the only group building solar farms that could hit 13 cents per kwh was killed this year, while other companies selling for 20 cents and up were doing just fine, in the US and Europe. This is as serious as it gets, folks, if you really care about the future of humanity.

I think I have a better idea today what really happened, but am still looking into it...

Here is a posting to some expert energy lists on what happened and what we can learn from it:


-- though it is far from definitive yet, I have begun to get some feedback
from folks in Silicon Valley who have looked a bit more into the SES story.

One of the toughest problems in real world science and technology is how to filter
out various kinds of herd instinct and groupthink.

As an example, a few years back, I was a little proud when I funded a small project
which made progress in telling us how to build useful carbon-tolerant alkaline fuel cells
(AFC). (The report by Urquidi-MacDonald, Sen and Pat Grimes was posted at, easy to find, though I recently noticed a book and articles by Patrick Grimes
on fuel cells, via Google Scholar, that Pat never told me about).

I was pretty happy when that report was filed, and immediately contacted the guy who runs
the main fuel cell program at NSF.

Roughly, I said, "Hey, here is something which works. You might want to follow up."
(None of those PEM fuel cells, the main thrust of the big Al Gore PNGV program,
ever really did. More precisely, unsolved problems with catalysts resulted in continued
hydrogen peroxide formation, which made them unable to achieve "whole systems" efficiency
greater than internal combustion systems, and which seriously cut lifetime and raised cost.)

His response: "Yeah, maybe it works, but that's not where the money is." (And that's not where the main thrust
of Al Gore's PNGV program was either.) NOTE THAT I DID NOT NAME NAMES: that would be unfair in this case,
for a number of reasons. That guy did some good and important things, and also missed a few things, which averages
out a lot better than most people.

I was a little shocked at the time. It's our job at NSF to focus on things which DOE is NOT already doing,
to pay special attention to things which are new and unique, which offer breakthroughs.
To compensate for the mindless herd conformity, and short term needs, which exist elsewhere.
It's supposed to be a little like contrarian investing, where we look especially hard for new opportunities which result
from holes in the market. It is a highly competitive system (so competitive it may overstress human nature
in many ways).

In fact, the market economy, like NSF, is also supposed to be a competitive system. Sometimes,
in making funding decisions or orienting panelists, I use the analogy of a supermarket.
There are certain products (like some spices and some hygiene products and some medicines)
which most people would vote against including, if it were a majority vote; indeed, some products
are a matter of life or death for some customers, but a majority of people who come to the store
would even pretend not to see them. Stores can be more efficient than a simple majority vote on
what to put on the shelves, because they account for diversity, the many needs or market segments.
They don't work perfectly, but they do work a lot better than a simple majority vote by all shoppers on
every product as to whether to put it on the shelves or not.

But again and again in science and technology, I have seen cases where
vague and shallow conventional wisdom, held by folks who have not looked deeply into an issue,
ends up freezing entire directions for a long time or even forever. Lately,
that often seems to be the rule rather than the exception in the US.
I have seen generation after generation of recent NSF leadership exhorting us all to be a bit less
conservative, and really pay special attention to breakthroughs and
transformative new opportunities -- and also seen lots and lots of backlash, some open and some
ala the Good Soldier Schweik (the famous story of a Czech soldier who subverted his superiors simply
by actually following their orders VERY diligently in his own unique way).

I just posted one set of examples, involving intelligent systems, on the Kurzweil list,
which might amuse a few of you:

It seems that a major problem for SES was simply a conventional wisdom, not unlike the
former belief in PEM fuel cells, with cars carrying hydrogen as such in a fuel tank, as the
one and only possible future for transportation.

I knew it was there, but not how far down it had gotten, to the level where it could
bankrupt a company like SES.

There is a quasi-religious belief, particularly strong in Silicon Valley, that "of course PV solar farms
must be cheaper than any kind of solar thermal, if not now then in the near future -- so we should
not waste any money or opportunity on the latter. This is true, because PVs are chips,
and chips are governed by Moore's Law. Chips are inherently cheaper than weird things
with pistons and temperature and pressure in them. Much simpler and more elegant.
We don't need numbers to know that solar thermal must be more expensive.
Besides, even though Arun Majumdar admits PV solar farms cost 20 cents per kwh, he has a clear target
of 5 cents per kwh, which would be less than any solar thermal solar farm on earth today."

Of course, it helps that lots and lots of PV people are getting money today, just as PEM people were ten years
ago, and are always testifying to everyone about how theirs is The True Path, and getting cited as
The Experts (qualified of course because they got money from a PEM program).

But in fact... 5 cents is a TARGET for ARPA-E. In fact, lower PV costs have been a target at DOE for a long long time.
There was a brief time not so long ago when OMB required that agencies actually POST quantitative official targets,
and be "held accountable" for actually reaching them. At that point, the cost target for solar thermal
was much lower than the target posted by DOE for PV solar farms, and the targets DOE was willing to
be held accountable for were much higher than what NREL, for example, was posting and projecting at that time.
It makes good sense to AIM for very ambitious targets, and to try to set things up so that one can
make a good profit even in a relatively worse case scenario. (That's part of why I propose 15 cents per kwh
as a price/market guarantee for solar farms of all kinds. Much better can almost certainly be achieved,
in my view, with really effective solar thermal solar farms using dish solar, but an open market would
allow PV folks equal access.) See a follow-on posting below on why the conventional wisdom here is
pretty silly...

In my view, SES did make a business mistake circa 2000 when it signed a purchase contract with SCE and PGE
committing to a price well under the 11.5 cents per kwh those utilities then paid for peaking power from natural gas.
(That's from the Business Week story where SCE or PGE announced the contract.)
When they found they couldn't replicate STM technology as easily as they had expected (as happened to
a few bigger companies who tried to reinvent STM's Stirling engine), Sandia tells me that they were
basically stuck with costs of 13 cents per kwh for the initial solar farm, not enough to make a net profit
or build up any capital reserve under the initial contracts. That by itself would be enough to
kill them, unless they could find new sources of funding to tide them over to their projected 10 cents cost,
or unless they could restructure the contract to a little more than 13 cents. At that point, the local conventional wisdom and
the PUC (with the new low-cost gas option) certainly kick in, with or without NIMBY and right-of-way issues to make it worse.
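To see the squeeze in numbers, here is a back-of-envelope sketch using only the figures quoted above. It treats 11.5 cents as an upper bound on the contract price (the actual commitment was "well under" that), so the real margin was even worse:

```python
# All figures in cents per kwh, taken from the account above.
contract_price = 11.5   # upper bound: SES committed to a price "well under" this
actual_cost    = 13.0   # Sandia's figure for the initial solar farm
projected_cost = 10.0   # SES's projected mature cost

print(f"margin at actual cost:    {contract_price - actual_cost:+.1f} cents/kwh")
print(f"margin at projected cost: {contract_price - projected_cost:+.1f} cents/kwh")
```

Even at the projected 10 cent cost the margin is thin; at the actual 13 cent cost, every kwh sold lost money, which is why new funding or a renegotiated contract was the only way out.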

Thus it really looks like two causes at work -- this math, operating on a small company without that solid
15 cents market guarantee, AND the NIMBY or right-of-way regulatory stuff which prevented them from just
going ahead and quickly completing the solar farm they were into before the funding fell apart.
The investors mainly blame the problem on the latter, because the regulatory stall is certainly
what happened first. (I DID track the primary sources back at that stage.) If the regulatory stall had not happened,
the working numbers from a working if money-losing solar farm might well have changed the final
financial outcome for the firm.

If the regulatory obstacles which stopped the construction could be basically eliminated for this kind of badly needed new thing, or substantially streamlined, our chances would be a whole lot better... in California or wherever else such problems get fixed.

Again, more detailed data from STM, Infinia and from the Guang Dong investors is also consistent with
this picture, and clearly unknown to the general herd here.


In my view, solar Stirling is the LEAST risky
and most definite source of affordable renewable electricity on a scale large enough to meet all
the world's needs... but I personally have some special responsibility to do what I can to explore
higher risk things which the herd would not really bellow at, because they would not even see them.

Best of luck,


More on the conventional wisdom about PV versus solar thermal:

In that posting on what happened to SES, I forgot to add some comments
on the conventional wisdom I was deploring. I am not about to give full cost breakdowns here,
but when people get confused by false analogies, it is important to try to balance them with some
other facts of life, which folks who are REALLY aware of the technology realities should already know.

PV cost is not governed by Moore's Law, which has mainly been about ever shrinking feature size
to process bits of information with greater density and speed. The primary cost problem is
not the cost of PV panels as such, but the cost of "balance of system."

For example, people talk a lot about cost goals like $1 per watt. (As you know, I have paid a lot
more attention to cost per kwh, so I can't be as precise about per watt. Both also vary a lot as a function
of local sun conditions.) But last I checked with one of the nation's real front line experts on power electronics
for solar and wind, he said it already costs about 75 cents per watt just to convert DC to AC. That's important,
because PVs produce DC, while Stirling generators produce AC directly. The power electronics with PV
farms are actually much more complicated than what I have just discussed, but there is certainly no
law of nature that systems which require extra power electronics must be cheaper than those which do not.
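A crude illustration, using only the two numbers above (the $1/watt panel goal and the roughly 75 cents/watt quoted for DC-to-AC conversion) and deliberately ignoring all other balance-of-system costs:

```python
panel_goal = 1.00   # $/watt: the widely discussed PV panel cost goal
dc_to_ac   = 0.75   # $/watt: the expert estimate quoted above for DC-to-AC conversion

# Even with panels at goal, and counting NOTHING else (mounting, wiring,
# land, tracking), power electronics alone are a large share of system cost --
# a share a direct-AC Stirling generator does not carry.
share = dc_to_ac / (panel_goal + dc_to_ac)
print(f"inverter share of (panel + inverter) cost: {share:.0%}")
```

That comes to over 40 percent of the panel-plus-inverter cost, before the rest of the balance of system is even counted.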

Some folks talk about how our new R&D thrust can result in PVs as cheap as paper, coming out on rolls.
"What could be cheaper than that?" But in fact, the class of PVs which can now be rolled out like paper
are lower efficiency and weaker systems, which require more area than more solid PVs, and thus MORE
of the balance of system cost, which is the main problem here. (R&D to improve their efficiency,
and to improve power electronics is worthwhile, but not exactly a bird in hand. Better power electronics is
more likely to produce big breakthroughs in plug-in hybrids, for technical reasons, but the underlying R&D has value
in any case.)
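The area argument can be made concrete with a sketch. The efficiency figures here are illustrative assumptions, not numbers from the post; the point is only that if balance-of-system cost scales with collector area, halving module efficiency doubles that cost:

```python
def collector_area_m2(kw_needed, efficiency, peak_sun_kw_per_m2=1.0):
    """Collector area needed for a given output at peak sun (hypothetical helper)."""
    return kw_needed / (peak_sun_kw_per_m2 * efficiency)

# Hypothetical comparison: a solid panel at 20% efficiency vs a roll-out
# film at 10%, both sized for the same 100 kW of output.
solid_area = collector_area_m2(100, 0.20)
film_area  = collector_area_m2(100, 0.10)
print(solid_area, film_area)   # the film needs twice the area, and hence
                               # roughly twice the area-driven balance-of-system cost
```

So "PV as cheap as paper" does not escape the balance-of-system problem; it can make it worse.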

What's more, if you ask about cost per meter, I would ask whether a square meter of PV AND required structural support
will ever be able to compete with the cost per meter of simple dumb reflectors (mirrors) made in conventional auto body parts factories.
Infinia and folks linked to STM and GM have shown how the dishes in a Stirling-dish system can be made that way.
This has HUGE implications for scaling up. If Stirling engines are produced in existing underutilized automotive engine
factories, and reflectors in underutilized auto body panel factories, it becomes possible to really scale up and achieve mass production
economies of scale a whole lot faster than any other path, without the time, delay and cost of building new
"green field" factories. No weird and exotic materials needed.

With lower cost per square meter, higher efficiency in converting collected light to electricity,
and less cost in power electronics...

It is certainly not an option to be ruled out a priori!!!


But again, I have no personal economic stake in that, on my own or via NSF.
It's all just a matter of trying to pay serious honest attention to larger needs which will
affect every one of us.

I do have a personal stake in an alternative technology, from the world of Moore's Law,
which I will try to do my feeble best to push ahead today, in parallel with this other stuff.
I most likely will screw up the next stage, since I am neither a lawyer nor a corporation,
and have neither of those I can rely on for this purpose right now, but there are times when one
has a duty to try.


Also -- I really do not mean to question the value of major investment in PV technology. Breakthroughs may be possible. In fact, I have recommended funding for
a number of breakthrough-oriented PV projects myself, which have been funded.
But all the data I see says that the best solar thermal is cheaper now, and a lot more solar thermal has been deployed around the world in actual use in electric utilities.
We do need a diversified portfolio -- but it's a real shocker that we have neglected
the most definite, least risky low-cost option we have.

questions about my post on stages of intelligent systems

I am impressed by a lot of the very intelligent questions you and others have been asking, so I do owe some response,
even though it has to get more controversial at some point. (Not that the last one represents any kind of consensus
position either!)

Roughly, I group them in two parts:

1. "There are certain details not clear to me, from, about the roadmap
to mouse-level AGI. Can you fill in?"

2. What about human-level AGI, consciousness, and the possibility or reality of AGI at a higher level?

Were I to devote the rest of my life to this area, maybe I would split my energy 50-50 between those
two questions. I mainly advocate focused research in two coordinated parallel tracks, reflecting these two.
(In a way, there is an analogy here to "energy policy." People who look for solutions to
"the energy problem" don't engage so well with reality, because there are basically TWO big energy challenges,
which are far less tightly coupled than most people imagine: the problem of getting to sustainable fuel for
cars and trucks, primarily, and the problem of getting to renewable electricity -- in my view. Car technology
is central to the first of these. Another story.) 1 and 2 here are more coupled, logically, but it's still
important to keep them distinct.

I view 1 as the number one grand challenge to mathematically well-defined science as a whole in the coming century,
as important, clear and discrete a target as the target which Newton achieved in his time. We need to fill
in the details to the point where we have working examples of universal AGI systems which capture the main fundamental capabilities
of the cognitive optimization and prediction (COPN) abilities of the mouse brain. (That URL
I posted to the NSF COPN web site gives more details of what I mean here. Those details, unlike these emails, represent some kind of
official NSF position; I could say more about "how official," but that would be a complex digression for now.)
In a more rational world, the COPN topic would be receiving much larger, continuing focused support, with decisions made by people who actually understand
that brief program announcement.

For 1 -- I cited, which isn't light reading and which was written for pure mathematicians and only deals with the cognitive prediction half, because
the mathematical principles are what really matter here, and because some other papers with more details are mostly
harder to read and less recent. HOWEVER: for those of you who have access to university libraries, my paper in
the journal Neural Networks 2009 says a lot more, and may appear easier to read. Last month, in China, I wrote up
a VERY simple description of the roadmap, for folks who are totally nonmathematical:


To explain that a little more, I need to start moving to the question of consciousness, which I will now get more into here.

Because we humans have SUCH powerful vested interests and hot buttons on the subject of consciousness and humanity, it
is no wonder that objective thinking is harder for us in that area. For example, I used to have a friend who would approach any computer
program or AGI design or idea by asking: "Ah yes, but if you build it, WILL it be conscious?" I would claim that
the question itself is flawed. It is the kind of question which is so ill-defined that focusing on it too strictly destabilizes your understanding and sets you behind.
The reason is that the attribute "consciousness" is not a binary variable. Likewise, it is not a continuous variable,
a matter of the AMOUNT of intelligence (IQ), as B.F. Skinner once claimed. Why do I assert that? Because if we look around us
with open eyes, to the world of experience we are actually grounded in, we see LEVELS and LEVELS of intelligence and consciousness,
like a creaky staircase.

For a more complete discussion of that, click on "pdf":

That book chapter was a plenary talk at the first international conference on consciousness, held that year
in Tokyo; it goes much further than the talk by Chalmers, but is generally consistent with what Chalmers said.
Greenfield attended, and has since said similar things (and spent more time on the "consciousness circuit").

That's rough... but the roadmap to mouse-level AGI in my view involves four stages:

1. "Vector intelligence"
2. "Spatial complexity intelligence," which mainly exploits symmetry principles as possible
with ordinary parallel hardware in order to better approximate what you may call the Solomonoff prior
3. "Temporal complexity intelligence," which fully exploits multiple time intervals in life
4. "Mouse-creative intelligence," which uses a kind of spatial cognitive mapping to
solve a certain aspect of the "local minimum" or creativity problem.

Cartoon caricatures of 2 and 3 are long familiar in AI -- "spatial chunking" with
"world modelling" and "temporal chunking." But the math to make it work in a general way in a learning
system requires a lot more than cartoons and clever hacks. Also, the use of mathematics does NOT
require an assumption that "the mind equals the brain"; mathematics is far more universal than that. Those of you who
know what a Pythagorean is might reasonably put me in that pigeonhole.


But then what of 2, AGI beyond the mouse? And where is the human in this?

In both of my recent single-authored papers in Neural Networks, 2009 and 2012, I have the same color figure with a mountain on
it which depicts all these levels.

Beyond the mouse level, the next big level up is pictured as the "semiotic intelligence," the kind
of brain or mind which makes full use of symbolic reasoning. So far, that fits with what most
people believe about the world around them.

Then comes zinger number one. Konrad Lorenz once said something like:
"I spent my whole life looking for the missing link, the half-man half-ape creature which
has not yet really achieved sapience. And now, Eureka, I have found him! He is all around me, the
normal modern human walking the streets of any major city."

That's basically my position. If semiotic intelligence exists at all as a discrete stage in brain hardware
design, the human simply is not born at that level. The new underlying principles are all based on the concept of
"vicarious experience," a symmetry not between objects we see but between our selves and other intelligent actors.
We can learn to EMULATE the semiotic level, through learning and discipline -- but not exactly like
that simple URL I just sent you. The 2012 Neural Networks paper says a lot more.

But then, zingers two or three.

It seems clear from the math that human-like semiotic intelligence is not the highest level of AGI which can be built.
It has two clear limitations:

1. It does not exploit what quantum mechanics offers us in increasing fundamental computing capability, which can be assimilated into
AGI to give it more power;

2. The treatment of symmetry is inexact and a weak approximation in some ways, which can be improved by a kind
of "multimodular" design (NOT LIKE today's cartoon multimodular or multirobots coordination systems) which
better exploits symmetry of such modules. (This is maybe more like those science fiction novels where
people swap brain implant disks to quickly learn from other folks -- a beautiful metaphor, but I'm not advocating doing it
with humans.)

An AGI at any level would be justified (crudely) in viewing an AGI at a lower level as "not really conscious."
Thus it is possible to build computers who might assert that human brains, even semiotic disciplined human brains,
are not really "conscious." I see no evidence that the human brain has any hardware whatsoever to
make it into a quantum-level or multimodular-level AGI.

BUT.. I do see evidence that it may be a SUBSYSTEM of such, even with hardware to make that possible.
The same kind of discipline and capability training which allows us to emulate the semiotic level can
naturally extend, in my view, to a certain kind of natural collective intelligence, VERY distinct from the ugly
destructive forms of groupthink and social systems which suppress creativity which often try to hide
behind that sort of concept.


But: this email is already too long, and the details to fill in that last paragraph are not just a sentence or two.

Best of luck,


Thursday, August 16, 2012

"When will AI reach human-level intelligence?" -- My response

On one of the Kurzweil lists, a futurist asked us for a timeline on when AI would achieve various things, like human-level artificial intelligence. First I gave a quick initial response:


On Wed, Aug 15, 2012 at 4:27 AM, Stuart Armstrong wrote:

The verdict? It looks like AI predictors are fond of predicting that AI will arrive 15-25 years from when they made their prediction (and very fond of the 10-30 range). And there is no evidence that recent experts are any different than non-experts or those giving (incorrect) past predictions.

This is not an isolated phenomenon. When I was at DOE, people had a saying "Fusion is the energy source of the future.
It is 20 years in the future, and always will be." And similar things have been said about Brazil, on again off again.

None of this is the fault of intelligent systems technology (which I do not equate with "AI," which is a human culture),
or of fusion. It's more a matter of the specific folks who declare that precognition does not exist to any degree in any form, and then
declare they have a perfect exact understanding of what the future will be. Or folks who need
research funding, who calculate that they need to say these things to get it.

With intelligent systems, as with access to space, it seems to me that objective reality allows some radically different possible pathways,
and that human whims will play a central role in deciding which path we take.


Then after more discussion:

Maybe I should say a little more about this... It's a bit like China, where I've been involved in the details
(and with sensitive personalities) enough that it's a bit awkward to respond to, but I'll try.

On Wed, Aug 15, 2012 at 3:13 PM, brian wang wrote:

Memristors can be made into synapses. It seems like HP and Hynix will commercialize terabit memristor memory in 2014 that will be faster than flash and more dense.

Synapse like memristors are things that DARPA is working on for neuromorphic chips. It would seem by 2017 there will be terascale neuromorphic chips.

Europe could fund the Human Brain Project for 1 billion euro. Announcement in February 2013.

Google has used a neuronet to identify images without human assistance.

First, let me stress that nothing I say on this subject reflects the views of NSF, which really does
contain a great diversity of views. However, it is informed by lots and lots of NSF resources, as you can see by looking at:

Proposals for "Cognitive Optimization and Prediction" (COPN) are also still welcome in the core
program I handle at NSF ("EPAS", a valid search term at, though: (1) the core program
usually doesn't fund anything larger than $300,000 over three years or so; (2) interesting proposals have come in and been funded in that area
from the core, but not so many as the COPN group which is now in its last year; (3) most folks seriously interested
would benefit from additional material which I routinely send out in response to inquiries; (4) I am only one
of three people running EPAS, and COPN is only one of the topics I handle.


Hard to keep it short....

The human culture of AI in computer science has its own deeply embedded folklore, in some cases similar in my view to the
deeply entrenched belief in Adam and Eve in the human culture of Christianity (much of which would probably be seen as an abomination
by Jesus Christ himself, who really wasn't into stoning women).

Part of that ancient folklore is the snappy phrase: "You don't build modern aircraft by flapping the wings."
(Actually, if aircraft designers had been as religious about that as some orthodox AI folks, I wonder whether
airplanes would have flaps? How much were flaps a key part of the breakthrough by the Wright Brothers,
which was NOT a matter of inventing lift but of inventing a way to control the system? Please forgive the digression...)

One of the key accomplishments of COPN was that we got critical funding for work by Yann LeCun to which the
orthodox folks were bitterly opposed -- work which probably would never have reached any degree of visibility without
that intense special effort by us, and which orthodox folks even threatened to sue me personally for allowing to get through.
(Did I mention "sensitivities"?...) There were many orthodoxies opposed to the work... but perhaps the strongest
was the "flappers," who noted that LeCun wanted to use straightforward, clean neural network designs to move in on
territory which orthodox classical feature designers viewed as their own. If that orthodoxy remains in power, then, in my view,
we will never get so far as mouse-level artificial general intelligence, let alone human-level.

Once LeCun and his fellow PI Andrew Ng of Stanford had a sufficient level of funding ($2 million), and a mandate, it didn't
take them long to achieve tangible breakthrough results, disproving a whole lot of the ancient folklore. They broke the world's record
on tough benchmarks of object recognition, in images of many thousands of pixels, using clean neural networks that input
those complex images directly. Since it was a general-purpose learning approach, they found it relatively easy to just try other
benchmarks, in phoneme recognition and even natural language processing. A year or two ago, there was a list of seven world's records...
but it has grown... and of course, there have been a few groups replicating the basic tools, and a few groups doing tweaks
of what produced the breakthrough.
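For readers who haven't seen what "inputting complex images directly" means: the core operation in LeCun-style convolutional networks is a small filter slid across the raw pixels, with the filter weights learned from data rather than hand-designed. A toy sketch (my own illustrative code and names, not LeCun's; a real network learns the kernel by backpropagation instead of fixing it by hand):

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Slide a small filter across raw pixels ('valid' cross-correlation),
    the basic building block of a convolutional network layer."""
    kh, kw = kernel.shape
    h, w = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A hand-set vertical-edge filter; in a trained net such filters EMERGE from data.
image = np.array([[0., 0., 1., 1.],
                  [0., 0., 1., 1.]])
kernel = np.array([[-1., 1.]])
response = conv2d_valid(image, kernel)   # strongest response at the 0 -> 1 edge
```

The point of the "flapper" dispute was exactly this: whether such filters should be engineered by human feature designers or learned end-to-end from pixels.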

The story gets more complicated from there, but a few key highlights:
(1) There is a small but important "deep learning" group which has drawn energy and respect from that success, but is certainly NOT
assured continued support in the future, now that COPN has expired, and massive reactionary forces are quietly licking their chops;
(2) Seeing that success, DARPA immediately funded a whole new program, in which they tell me the bulk of the money
went to two awards -- one under LeCun and one under Ng, with an evaluation component led by Isabelle Guyon, whom we
previously funded to run some important competitions -- and then Google got into it, did further follow-on funding,
and seems to have effectively integrated the new technology into an interesting and promising
larger plan.
(3) Important as this was, it was just the first of about a dozen new steps we would need in order to get
artificial general intelligence (AGI, to use Ben Goertzel's expression) as powerful as that of the mouse.
See for a discussion of further required steps, on the "cognitive prediction" half of the roadmap.
(I also have a couple of papers in Neural Networks, mainly the one in 2009 which probably led to my Hebb award, on this roadmap.)


Ignoring the hardware side for now, and considering only the NECESSARY architecture/design/algorithm aspects...

If I look at the actual rate of progress of human society in this field over the past 40 years, and compare that with what the roadmap requires,
I'd say that the probability we get to mouse-level artificial general intelligence by the year 2100 is somewhat less than 50%.

In fact... COPN was unique, and simply would not have happened without me. Likewise, NSF's earlier Learning and Intelligent Systems initiative,
which helped resurrect the field of machine learning (which orthodox AI was then strongly opposed to, with a huge amount of Adam-and-Eve folklore),
would not have happened without me. (I still remember the exact conversation between me, Joe Young, Howard Moraff and Andy Molnar where
we worked this out -- and what led up to that conversation. Long story. I have not claimed that it was ONLY me, of course; other players, like
Joe Bordogna at NSF, also played special roles.)

Since I get to the usual retirement age in about two weeks... it is not at all obvious to me that anyone else is really about to push people ahead
on this kind of roadmap. There are certain obvious forces of inertia, stakeholder politics and conservative thinking which have caused just about every other funding
effort out there to gravitate towards a different course. Even if COPN itself were to be re-run at the special 2008 level of funding or more,
not many people would have funded LeCun and Ng under those circumstances. Most government funding people are simply not so deep into
things. On balance, I see hope that the rate of progress on the roadmap MIGHT be made faster... but the trends now in play look more
like a slowdown than a speedup.

More on the larger implications, but then...


Mouse-level intelligence (let alone human level) requires a COMBINATION of algorithm/design/architecture technology (above)
plus the sheer horsepower of "having enough neurons."

As Brian mentioned, there is a lot of rapid progress on the hardware front.

An old friend of mine, Jim Albus (whose memorial/funeral conference session I attended a few months ago) happened to be close to
Tony Tether, director of DARPA under Bush. At the funeral, they played a video where he laid out the real history and rationale behind the
big SyNAPSE program, under Todd Hylton, which was DARPA's first big plunge into this specific area. The program funded
three big projects, at Hughes, HP and IBM. (Tether had a philosophy of not funding universities... "That kind of high-risk basic research is for NSF.
We do translational research." I have the impression he made it much harder to do joint DARPA-NSF things than it was in the past.)
IBM certainly put out press releases saying things like "we have already built a cat-brain level, and human is coming soon." But so far as I
can tell, what they MEANT was that they were building hardware to implement that number of neurons, FOR A SPECIFIC model
of how neurons work; many of us believe that these early simplistic neuron models have almost no hope of providing general intelligence
at the level of a mouse, no matter how many units get wired into them. What's more, a lot of the hardware designs were very specific
to those specific models. Many of us were reminded of the many early unsuccessful (technically unsuccessful but politically very successful,
like the popular image of Solyndra, but actually worse) efforts to go "from wet neuroscience straight to VLSI, without math or functional analysis in between."
In the end, HP learned a lot from this effort, and from other things they were doing independently, and chose to depart from the SyNAPSE program,
so as to get more freedom to develop more flexible hardware, more relevant in my view to the goal of mouse-level AGI.
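To make concrete what "a specific simplistic neuron model" means here: much of that neuromorphic hardware was built around spiking models of roughly the leaky integrate-and-fire variety. A minimal sketch (parameter values and the function name are my own illustrative choices, not any vendor's actual design):

```python
def lif_spike_count(I, dt=1e-4, tau=0.02, v_th=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: tau * dv/dt = -v + I(t).
    Fires (and resets) whenever the membrane potential v crosses v_th."""
    v = 0.0
    spikes = 0
    for I_t in I:
        v += (dt / tau) * (-v + I_t)   # Euler step: leaky integration of input
        if v >= v_th:
            spikes += 1
            v = v_reset                # instantaneous reset after a spike
    return spikes

# Sub-threshold drive never fires; stronger drive fires repeatedly.
quiet = lif_spike_count([0.5] * 10000)   # v settles at 0.5 < v_th: no spikes
busy = lif_spike_count([2.0] * 10000)    # crosses threshold again and again
```

The worry in the text is that no amount of replicating units this simple, in silicon, gets you the learning and prediction machinery that mouse-level general intelligence requires.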

The other key development was HP's discovery of memristors out there in the real world, inspired by their understanding that decades-old papers (in this case, Chua's 1971 paper)
can still be relevant today. The nearly forty years from Chua's paper to HP's discovery is another example of how fast science is moving these days.
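Chua's insight was that a fourth basic circuit element should exist whose resistance depends on the charge that has flowed through it; HP's titanium-dioxide device behaves roughly like the "linear ion drift" model sketched below. A quick simulation (parameter values are illustrative round numbers of my own, not HP's published device figures):

```python
import numpy as np

# Linear ion-drift memristor model. Illustrative parameters only.
R_on, R_off = 100.0, 16000.0   # fully-doped / undoped resistance (ohms)
D = 10e-9                      # device thickness (m)
mu = 1e-14                     # dopant mobility (m^2 / (V*s))

def simulate(v_of_t, t):
    """Euler-integrate the memristor state under a driving voltage v_of_t."""
    w = 0.5 * D                # width of the doped region: the memory state
    M_hist = []
    for k in range(len(t) - 1):
        dt = t[k + 1] - t[k]
        M = R_on * (w / D) + R_off * (1.0 - w / D)   # state-dependent resistance
        i = v_of_t(t[k]) / M
        w += mu * (R_on / D) * i * dt                # state drifts with charge flow
        w = min(max(w, 0.0), D)                      # hard physical bounds
        M_hist.append(M)
    return np.array(M_hist)

t = np.linspace(0.0, 1.0, 2000)
M = simulate(lambda tt: np.sin(2.0 * np.pi * tt), t)
# The resistance drifts with accumulated charge -- the "memory" in memristor --
# but always stays between R_on and R_off.
```

This charge-dependent resistance is what makes the device attractive as a compact artificial synapse, which is why it shows up in the neuromorphic hardware story at all.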

As I understand it, there are two new seminal books on the memristor (and neural-capable hardware) story out there. I think of them
as "the HP book" and "the Chua book," though both are edited collections, the former edited by Robert Kozma et al. (and certainly now out,
from Springer) and the latter by Adamatzky (past the galley stage, but I am not sure if it's out quite yet). Maybe the progress will
be faster on the hardware part of the roadmap, but I wouldn't take it for granted. They really need the architecture side to get the large-scale
apps needed to keep these things moving forward. Also, there are questions about the overall rate of progress in science...


In predicting the future of artificial general intelligence (AGI), as
in predicting other technologies, it's also important to think hard
about the larger cultural
and social context. I've seen that over and over again, in 24 years at
NSF (not representing NSF), across a wide variety of fields.

The US is going through a pretty tough time right now, in many ways.
(The US, EU and China are not even out of the woods yet
on the threat of Great Depression II happening within a couple of years.)

When I do think about what's going on, I do sometimes think about
Atlas Shrugged or the Book of Revelation (as do many people),
but I also think about Spengler's Decline of the West, which was a
bigger part of my own childhood upbringing. (I was lucky
to go to a school where lots of people were into Toynbee, but being
proud of German roots, I did have to read the German version of Spengler --
just as I much preferred Goethe's Faust over Marlowe's Faustus, who
reminds me more of the Irish side of the family and Paul Ryan.)
The real history, from Toynbee or Spengler or other sources, makes it
abundantly clear that technology does not always move forwards.
I have seen the same myself, right down in the trenches, even in very
hard core science and technology.

Spengler argued that the core living culture of a society, like the
green wood of a tree, tends to die off much sooner than the hard
technology which that culture initially produced. And so... the
EXISTING successes of folks like LeCun and Ng seem well-poised to
migrate into
Google. Google and its best competitors seem well-poised to thrive for
a very long time... but what about the next big things?

A lot of the next big things do come to NSF. Again, not speaking for
NSF, I do see some things here. The little division where I work
has about ten technical people, responsible for the advanced, next
generation basic research all across computational intelligence
(including COPN), electric power, chips, photonics, wireless
communications, nanoelectronics (and the biggest single chunk of
nanotechnology in general),
and a lot of advanced sensors and so on. It used to be that NSF only
accounted for about 30% of university basic research in such areas,
because of basic research funding from places like DARPA and NASA and the
triplets of ONR/AFOSR/ARO. (Four of the six of us are now
within just a few blocks of each other in Virginia.) But Tether made a
massive shift at DARPA, Griffin at NASA, and 6.1/6.2 research came under
heavy pressure in a lot of places elsewhere, which in any case had a
very focused agenda. As it happens, both Tether and Griffin
shifted more of their agencies' money to Big Stakeholder corporations.

This was one of the reasons why I saw funding rates in my area drop
practically overnight in the "Halloween massacre" of 2003, down from
30% to 10%. If the number of people working in a field has a
connection to the amount of money... one might expect the underlying
communities to shrink by a factor of three, as discouragement and such
work their way through the system. But other factors were at work.
Also, folks in Congress concerned about the erosion of America's
technological strength (which depends not only on new ideas
but on students produced by university faculty in these fields)
promised to fill in the vacuum, bit by bit.

For myself, I had to zero out at least two emerging areas as a result
of the Halloween massacre -- advanced control for
plasma hypersonics, and quantum learning systems (which we had
pioneered for a bit). We learned a lot from those efforts,
but who knows what will survive? The latest crash of the X-51A is an
example of what DOD is now doing along uncorrected, politically
correct lines.

Perhaps the longest-lead indicator of new directions is the mix of
what we see in CAREER proposals, which I looked at
just yesterday. They are our future.


But at the end of the day, I keep telling myself that I shouldn't
worry so much about the low rate of progress
(or negative progress) towards hardware AGI. After all, there was that
Terminator risk, which was MUCH closer
to reality than most folks would imagine.

Maybe the most important value of this work is how it can inform our
understanding and use
of our own natural intelligence (as in one of my papers this month in
Neural Networks).

But then again, we could really use more intelligent logistics
systems, power grids and financial market systems
(as partially reflected in one of the other papers), if we can
understand and avoid the risks
more seriously than we seem to yet. My wife sometimes says I should
just go ahead, retire from NSF
and build some of these systems myself -- but there are other things
to think about. Like what to do realistically
about those risks, as in Bernanke's recent question: "How can we
arrange things so that this giant machine we are
building really serves the larger goal of serving human happiness, and
not just producing some new kind of slavery?"

Best of luck,


Wednesday, August 15, 2012

what is at stake in climate policy?

The website,, currently posts five issues with questions
of interest to key decision makers.

One question they ask is:

Despite a comprehensive climate bill passing the House in 2009, the enactment of some policies that may reduce CO2 emissions over the long-term, and continuing Congressional discussion of a clean energy standard, it seems that clear paths forward on climate change are not emerging.

Nevertheless, in spite of some debate as to the phenomenon’s authenticity, climate change does not seem to be going away. June 2012 was the 328th straight month above 20th-century temperature averages, and Greenland’s glaciers are melting more than ever...
How would you characterize the state of the climate discourse? What are the stakes of acting, or not acting, to mitigate climate change? Is there a politically viable, effective path forward on this issue?


My response (one of the comments you can see on that website):

NOT representing NSF — I had a whole lot of contact with both sides of the usual debate when I worked for an office attached to the EPW committee of the Senate in 2009.
Regarding the stakes — I am worried that the polarization of the debate is blinding us to the really biggest risks, where we need more information but our very survival might be at stake.
People urging inaction have often said “The world has done perfectly well for a billion years with CO2 levels much higher than what we see today.” That’s true … but there were about a dozen times when “doing well” has meant mass extinctions. In one of them,
90% of the species on the land of the earth were totally wiped out; all but one of the protomammals went extinct. H2S and radiation (from ozone depletion) reached levels so high that every single human on earth would be killed if it happened again. One of the world’s top experts on this event, Peter Ward, believes that we are well on course to repeating that same kind of event.
I agree with some critics that Ward’s book, Under a Green Sky, panders a bit too much to some of the usual CO2 politics, but his primary sources do check out, and there is good reason to worry. It all seems to hinge on changes in deep ocean currents, which are a kind of mirror image of the Gulf Stream debate led by Bryden of Southampton a few years ago.
NOAA is beginning to do a few pilot studies on how to use satellite data to keep track of these currents in real time, for the Pacific Ocean, but we need to know what is happening in other areas (like the North Atlantic), and we need new empirical models in order to have any kind of advance warning of whether such an extinction phenomenon could be as close as Ward asserts. It could conceivably be even closer, based on the limited circumstantial evidence at hand — like faster warming of the Arctic than of the tropics, occasional signs of sputtering in the Gulf Stream, and no shortage of nutrient
runoff to cause anoxia. The Black Sea already has a poison-rich zone, which has risen steadily over the past few decades, and I am surprised I do not see more analysis of what might happen when it reaches breakout.
Regarding response — that’s extremely important, but this comment is already a bit long. Certainly I agree with Ward that consideration of the long history of the earth is a key input to trying to understand the ocean currents better… including possibilities for tipping points.

Tuesday, August 14, 2012

misuse of nuclear: still #1 threat of human extinction

At the Lifeboat Foundation, there has been a discussion of the questions: "What is the probability that humans will continue to exist at all for now? What are the biggest threats of human extinction?" I said that misuse of nuclear technology is number one on my list.... and explained a bit more this morning.


The metaphor of "deadwood and spark" can be helpful for folks who are
really trying to understand this kind of complex system,
and not just looking for a convenient hole to bury their head in.

As I mentioned, I recall a major electric power blackout (hitting the
whole Northeast) where top engineers discussed at length
what "the cause" was. It was a very simple blackout, but even the
simplest realistic explanation involved two "causes" --
the "deadwood" of a vulnerable grid, and a "spark" which was basically
just a tree falling on a power line. (Even the spark was not entirely simple:
there were questions about what the rules are for pruning trees near
power lines.)

Likewise, with forest fires, the accumulation of dry deadwood is a
crucial cause. Some forest managers say
we shouldn't just put campers in jail for providing the initial spark,
since even lightning will cause a fire when the deadwood accumulates.
But I personally would prefer that we mandate cars flexible enough
to use the kind of fuel which is easily made by
converting that dry scrub wood into biofuel a bit like wood alcohol;
it's a case where opening up a market could solve the problem
rather nicely.


The future explosive growth of bomb-grade materials and technology and
people willing and able to use them, all over the world...
that growth has begun, but has not YET become a kind of tidal wave,
and there is hope that we could prevent it (e.g. by
doing better to improve and accelerate renewable electricity)...

But no, having tons and tons of deadwood all over the world is not the
same as having a forest fire. For that you also need one more
ingredient -- a spark. And some folks would ask: "where is the precise
location of that spark going to come from?"

I can imagine someone asking that question, standing in an open meadow
pointing towards a huge forest clogged with
dry scrubwood.. and also pointing inadvertently towards a sky just
full of giant thunderclouds about to emit lightning.
Where will a spark come from, with all those clouds and all that
lightning? I cannot say precisely where,
but I can say that there will be more than one spark out there, at the
rate things are moving. And I really wish we could hurry a bit
in clearing out that deadwood... or at least, reducing the speed of its
accumulation.
Where is the first spark going to come? North Korea? Iran and Israel?
Kashmir? DC? New York? Beijing? Moscow?
All of the above? We don't need precision to know we will be a lot
safer if we can reduce the threat....


With regards to DC and New York...

All of us on this list know that even ... would not REALLY
just send a thank you message and a
box of champagne and cigars to Al Qaida if nuclear weapons of any kind
went off in those cities.
Indeed, the Fox news types would be the first to demand instant
retaliation against any possible plausible guilty party
(except of course the owners in Saudi Arabia, aka "our job creators")
with or without clear evidence.

We all know that -- yet, as ... has pointed out, it doesn't help
if right wing rhetoric announces to the whole world that
they would actually be happy in such an event.... It reminds me of
what Saddam Hussein once said, that he actually thought that US
diplomatic signals were saying he should feel free to take over Kuwait.

Best of luck. We really do need it.

P.S. And yes, we do of course also need to pay attention to sparks and such.

Friday, August 10, 2012

US side of the Depression threat: small update

Concerns about the risk of Great Depression II involve three batches of things
about to hit the fan -- stuff in Europe we've talked about before, stuff in China
which is leaking out into the Financial Times, and, in the US, the fear that
abrupt action to reduce debt will be done in a way which raises unemployment
and ends up making the debt problem worse.

I received an interesting piece today about the sequestration threat, which is due to hit in full force in January (with many pink slips to come at about election day). I saw
an estimate somewhere that unemployment may tick up to about 9.5 percent immediately.
What I saw today was a more optimistic piece, citing a bleaker assessment:

The optimistic piece seems to be assuming: "Well, if government spending in
some sector is reduced by X dollars, total demand won't go down by X; there must be balancing effects, such that it goes down by only 0.9X, or even X/2 -- a kind
of natural adaptation."

That's really sad. Economists have their arguments, but empirical studies have verified again and again that there are multiplier effects -- that the reduction in total demand would be more like 3X.
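The disagreement here is just the textbook Keynesian spending multiplier, 1/(1 - MPC), where MPC is the marginal propensity to consume. The numbers below are illustrative round figures of my own, not estimates from either piece:

```python
def spending_multiplier(mpc):
    """Each dollar of lost government demand removes further rounds of
    induced consumer spending: X + mpc*X + mpc^2*X + ... = X / (1 - mpc)."""
    assert 0.0 <= mpc < 1.0
    return 1.0 / (1.0 - mpc)

# With an MPC of 2/3, a cut of X reduces total demand by about 3X.
# The optimistic piece, by contrast, implicitly assumes a multiplier below 1.
effect = spending_multiplier(2.0 / 3.0)   # roughly 3.0
```

The empirical studies mentioned are, in effect, measurements of where that multiplier actually sits; the dispute is over its size, not over whether the mechanism exists.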

That's if we do things the dumb way. However, current news does indeed suggest we are on path to doing things the dumb way, in part because partisan warfare makes it hard for people to act on analytical and creative approaches to minimizing the risk. But... that doesn't justify giving up ...