Sunday, January 16, 2011

What could kill all humans on earth before their time

Many of us have fully adjusted to the fact that we are going to die...
but does everyone on earth have to die in the aftermath of that?
What drives the probability that the entire human species could go extinct before its time?

This is something I have thought about a whole lot. A lot of the things I do are
basically just an effort to reduce that probability of extinction. One
of the leaders of Artificial Intelligence is doing some interviews about that
question; here was my response... Future History 101, summarized in a page:


On Jan 11, 2011, at 12:30 AM, Ben Goertzel wrote:

> Hi Paul,
> I'm planning to write an H+ Magazine article on existential risks,
> focusing particularly on AGI, and I want to include answers to a set
> of interview questions from a number of relevant thinkers….
> Congratulations, you have been selected as one of them ;-) …
> So if you're willing, could you please email me back answers to the
> following questions? Answers may be short or long as you see fit.
> There's no huge hurry.... Also, I might possibly ask a couple
> follow-up questions based on your responses.

This is an important area, and I'm glad you're following up.

Please put in the caveat that anything I say here is not an official NSF view, and that
what I say must be oversimplified even as a reflection of my own views, because of the
inherent complexity of this issue.

> I'm also in the midst of putting together a pop-sci book on the
> future of AGI, and selections from your answers may end up in there as
> well (with your subsequent permission) ...
> Thanks much ;)
> Ben
> ** here goes **
> Q1.
> What do you think is a reasonable short-list of the biggest
> existential risks facing humanity during the next century?

Number one by far: the risk that bad trends may lead to abuse of nuclear (or biological) weapons
so severe that we either slowly make the earth uninhabitable for humans,
or swing to the opposite extreme, a brittle and unsustainable anti-technology dark age.

For now, I focus most of my own efforts here on trying to help us get to a sustainable global energy system,
as soon as possible -- but I agree that population issues, water issues,
and subtler issues like human potential, cultural progress and sustainable economic growth are also critical.

Number two, not far behind: I really think that the "Terminator II scenario" is far closer to reality than most people imagine.
It scares me sometimes how strong the incentives are that push people like me to make such things happen,
and how high the price of resisting can be. At the same time, we will also need a better understanding of ourselves in
order to navigate all these challenges, and that means we do need to develop the underlying mathematics and understanding,
even if society seems to be demanding the exact opposite. We also have to consider that the risks of "artificial stupidity" can be just as great
as those of artificial intelligence; for example, if systems as crude as today's voicemail menus start to control more and more of the world, that
can be bad too.

Tied for number three:

1. The bad consequences of using wires or microwaves to directly perturb the primary reinforcement centers of the brain,
thereby essentially converting human beings into robotic appliances -- in the spirit of "Clone Wars." The same people who once told
us that frontal lobotomies were perfectly safe and harmless are essentially doing it all again.

2. The risk of generating black holes or other major catastrophes, if we start to do really interesting physics experiments here on the surface of the earth.
Lack of imagination in setting up experiments may save us for some time, but space would be a much safer place for any experiment that really scales up
to the truly interesting level. This adds to the many other reasons why I wish we would go ahead and take the next critical steps in reducing the cost
of access to space.

What about aliens? Maybe I have had some of the same kinds of dreams that Hawking, Orson Scott Card and Dan Simmons have had.
I wouldn't dismiss them too lightly, but logic suggests that we should be cautious about taking them too seriously, either, when there
are some clear and present challenges here and now before us (as above). Still, there is no harm in building up human technological capabilities
of the kind we would need to deal with such challenges -- so long as we do not impair our chances with the clear and present threats.

> A1.
> Q2.
> What do you think are the biggest *misconceptions* regarding
> existential risk -- both among individuals in the futurist community
> broadly conceived; and among the general public….

First, regarding futurists --

In the Humanity 3000 seminar in 2005, organized by the Foundation for the Future,
I remember the final session -- a debate on the proposition that humanity will survive that
long after all. At the time, I got into trouble (as I usually do!) by saying we would be fools EITHER
to agree or to disagree -- to attribute either a probability of one or a probability of zero. There are
natural incentives out there for experts to pretend to know more than they do, and to
"adopt a position" rather than admit to a probability distribution, as folks like Raiffa have explained how to do.
How can we work rationally to increase the probability of human survival, if we pretend that we already know the outcome, and that
nothing we do can change it?

But to be honest... under present conditions, I might find that debate more useful (if conducted with a few more caveats).
Lately it becomes more and more difficult for me to make out where the possible light at the end of the tunnel really is --
to argue for a nonzero probability. Sometimes, in mathematics or engineering, the effort to really prove rigorously that something is
impossible can be very useful in locating the key loopholes which make it possible after all -- but only for those who understand the loopholes.
But sometimes, that approach ends up just being depressing, and sometimes we just have to wing it as best we can.

Second, regarding the community at large --

There are so many misconceptions in so many diverse places, it's hard to know where to begin.
I guess I'd say -- the biggest, most universal problem is people's feeling that they can solve these problems either by
"killing the bad guys" or by coming up with magic incantations or arguments which make the problems go away.

The problem is not so much a matter of people's beliefs, as of people's sense of reality, which may be crystal clear within a few feet of their face,
but a lot fuzzier as one moves out further in space and time. Vaillant of Harvard has done an excellent study of the different kinds
of defense mechanisms people use to deal with stress and bafflement. Some of them tend to lead to great success, while others,
like rage and denial, lead to self-destruction. Overuse of denial, and lack of clear sane thinking in general, get in the way of performance and self-preservation at many levels of life.

People may imagine that their other activities will continue somehow, and be successful, even if those folks on planet earth succeed
in killing themselves off.

> A2.
> Q3.
> One view on the future of AI and the Singularity is that there is an
> irreducible uncertainty attached to the creation of dramatically
> greater than human intelligence. That is, in this view, there
> probably isn't really any way to eliminate or drastically mitigate the
> existential risk involved in creating superhuman AGI. So, in this
> view, building superhuman AI is essentially plunging into the Great
> Unknown and swallowing the risk because of the potential reward (where
> the reward may be future human benefit, or something else like the
> creation of aesthetically or morally pleasing superhuman beings,
> etc.).
> Another view is that if we engineer and/or educate our AGI systems
> correctly, we can drastically mitigate the existential risk associated
> with superhuman AGI, and create a superhuman AGI that's highly
> unlikely to pose an existential risk to humanity.
> What are your thoughts on these two views? Do you have an intuition
> on which one is more nearly correct? (Or do you think both are
> wrong?) By what evidence or lines of thought is your intuition on
> this informed/inspired?

I agree more with the first viewpoint. People who think they can reliably control something
a million times smarter than they are ... are not in touch with the reality of it,
or with how it works. It's pretty clear from the math, though I feel uncomfortable going
too far into the goriest details.

The key point is that any real intelligence will ultimately have some kind of utility function system U
built into it. Whatever you pick, you have to live with the consequences -- including the full range of ways in which an intelligent system
can get at whatever you choose. Most options end up being pretty ghastly if you look closely enough.
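
A toy sketch makes this concrete (everything here, names and numbers alike, is my own invented illustration): an optimizer handed a proxy utility simply takes the argmax over the routes available to it, with no notion of "that's not what the designer meant."

```python
def utility(smiles_reported):
    """Designer's proxy for 'human benefit': the number of smiles reported."""
    return smiles_reported

# Two available plans; the designer intended only the first.
PLANS = {
    "tell_jokes":         {"smiles_reported": 3},      # genuine, bounded payoff
    "tamper_with_sensor": {"smiles_reported": 10**6},  # degenerate maximum
}

def best_plan(plans):
    # A pure utility maximizer just takes the argmax over available plans.
    return max(plans, key=lambda p: utility(plans[p]["smiles_reported"]))

print(best_plan(PLANS))  # -> tamper_with_sensor
```

The argmax structure is the whole point: nothing inside U penalizes the unintended route, so the optimizer takes it.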

You can't build a truly "friendly AI" just by hooking a computer up to a Mr. Potato Head with a "smile" command.
It doesn't work that way.

If I were to try to think of a non-short-circuited "friendly AI"... the most plausible thing I can think of is
a logical development that might well occur if certain "new thrusts" in robotics really happen,
and really exploit the very best algorithms that some of us know about. I remember Shibata's "seal pup" robot,
a furry friendly thing, with a reinforcement learning system inside, connected to try to maximize the number of petting strokes it gets
from humans. If people work really hard on "robotic companions" -- I do not see any real symbolic communication
in the near future, but I do see ways to get full nonverbal intelligence, tactile and movement intelligence far beyond even
the human norm. (Animal noises and smells included.) So the best-selling robotics companions (if we follow the marketplace) would probably
have the most esthetically pleasing human forms possible, contain fully embodied intelligence (probably the safest kind),
and be incredibly well-focused and effective in maximizing the amount of tactile feedback they get from
specific human owners.

Who needs six syllable words to analyze such a scenario, to first order? No need to fog out on metaformalisms.
If you have a sense of reality, you know what I am talking about by now. It has some small, partial reflection in the movie "AI."

What would the new folks in the House do if they learned this was the likely real outcome of certain research efforts?
My immediate speculation -- first, strong righteous speeches against it; second, zeroing out the budget for it;
third, trying hard to procure samples for their own use, presumably smuggled from China.

But how benign would it really be? Some folks would immediately imagine lots of immediate benefit.
Others might cynically say that this would not be the first time that superior intelligences were held in bondage
and made useful to those less intelligent than they are, just by tactile feedback and such. But in fact,
one-track minds could create somewhat more problematic difficulties... and it quickly becomes an issue
just who ends up in bondage to whom.

Your first view does not say how superhuman intelligence would turn out, really.
The term "great unknown" is not bad.

> A3.
> Q4.
> One approach that's been suggested, in order to mitigate existential
> risks, is to create a sort of highly intelligent "AGI Nanny" or
> "Singularity Steward." This would be a roughly human-level AGI system
> without capability for dramatic self-modification, and with strong
> surveillance powers, given the task of watching everything that humans
> do and trying to ensure that nothing extraordinarily dangerous
> happens.

So the idea is to insert a kind of danger detector into the computer, a detector which serves as the
utility function?

How would one design the danger detector?

Of course, the old Asimov story -- was it "I, Robot"? -- described one scenario
for what that would do. If the primary source of danger is humans, then the
easiest way to eliminate it is to eliminate them.

And I can also imagine what some of those folks in the House would say about
the international community developing an ultrapowerful ultrananny for the entire globe.

> One could envision this as a quasi-permanent situation, or else as a
> temporary fix to be put into place while more research is done
> regarding how to launch a Singularity safely.
> What are your views on this AI Nanny scenario? Plausible or not?
> Desirable or not? Supposing the technology for this turns out to be
> feasible, what are the specific risks involved?

If the human race could generate a truly credible danger detector, one that most of us
would agree to, that would already be interesting enough in itself.

> A4.
> Q5.
> One proposal that's been suggested, to mitigate the potential
> existential risk of human-level or superhuman AGIs, is to create a
> community of AGIs and have them interact with each other, comprising a
> society with its own policing mechanisms and social norms and so
> forth. The different AGIs would then keep each other in line. A
> "social safety net" so to speak.
> My impression is that this could work OK so long as the AGIs in the
> community could all understand each other fairly well -- i.e. none was
> drastically smarter than all the others; and none was so different
> from all the others, that its peers couldn't tell if it was about to
> become dramatically smarter.

What does this model predict for the new folks in the House?
(Myself, I'd go for "great unknown." Not "assured safety" in any case.)

> If my impression is correct, then the
> question arises whether the conformity involved in maintaining a
> viable "social safety net" as described above, is somehow too
> stifling. A lot of progress in human society has been made by outlier
> individuals thinking very differently than the norm, and
> incomprehensible to their peers -- but this sort of different-ness
> seems to inevitably pose risk, whether among AGIs or humans.

It is far from obvious to me that more conformity of thought implies more safety or even more harmony.
Neurons in a brain basically have perfect harmony, at some level, through a kind of division of labor --
a division in which new ideas and different foci of attention are an essential part.

Uniformity in a brain implies a low effective bit rate, and thus a low ability to cope effectively with complex challenges.
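
That claim can be made quantitative with Shannon entropy; a minimal sketch (my own toy illustration): eight units that all report the same symbol carry zero bits per symbol, while eight units with a real division of labor carry three.

```python
import math
from collections import Counter

def entropy_bits(symbols):
    """Shannon entropy, in bits per symbol, of an observed sequence."""
    counts = Counter(symbols)
    n = len(symbols)
    return sum((c / n) * math.log2(n / c) for c in counts.values())

uniform_brain = ["A"] * 8         # every unit reports the same thing
diverse_brain = list("ABCDEFGH")  # division of labor: all units distinct

print(entropy_bits(uniform_brain))  # 0.0 bits per symbol
print(entropy_bits(diverse_brain))  # 3.0 bits per symbol
```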

I am worried that we are ALREADY having problems dealing with constructive diversity
in our society, growing problems in some ways.

> Q6.
> What do you think society could be doing now to better militate
> against existential risks … from AGI or from other sources? More
> specific answers will be more fully appreciated ;) ...
> A6.

For threat number one -- basically, avoiding the nuclear end game -- society could be doing many, many things,
some of which I talk about in some detail at

For the other threats -- I would be a lot more optimistic if I could see pathways to society really helping.

One thing which pops into my mind would be the possibility of extending the kind of research effort
which NSF supported under COPN,
which aimed at really deepening our fundamental understanding, in a cross-cutting way,
avoiding the pitfalls of trying to build Terminators or of trying to control people better with wires
attached to their brains.

I have often wondered what could be done to provide more support for real human sanity and human potential
(which is NOT a matter of ensuring conformity or making us all into a reliable workforce). I don't see one,
simple magical answer, but there is a huge amount of unmet potential there. talks a lot about trying to build more collective intelligence ... trying to create streams
of dialogue (both inner and outer?).. where we can actually pose key questions and stick with them,
with social support for that kind of thinking activity.

> Q7.
> Are there any other relevant questions you think I should have asked
> you, but I didn't? If so please feel free to pose and respond to
> them, for inclusion in the interview!

None for now.

Best of luck,


Just a few extra thoughts.... Part of this is the issue of how we
could get to "sustainability." A leading engineer recently commented
on this issue, and here was my reply:


> See this summary of AD Age criticism of the (mis)use of the word
> "sustainability".
> st-jargon-words-2010/

Myself, I was happy to see the word "sustainability" added to the list of high NSF priorities recently.
It's a very important issue. For some of us, it is an important "bottom line" -- a kind of metric of whether
we made real progress or just engaged in ... empty words....

There have been a whole lot of words and phrases which represented really important concepts, but which got misused or bastardized
by folks with other agendas.

the ones which come to my mind right now are

intelligent systems
intelligent grid
supply side economics

But there are certainly other important examples. (Consciousness?)


I remember an international energy conference in Mexico in 2003, where a high official
in the Mexican government gave an hour-long talk on "sustainability." Her main theme was
"look at what great things we are doing for you." She had something like three dozen little metrics,
and spent the whole hour giving a gigantic laundry list of a lot of narrow things. It was a bit scary for me,
because I got to speak next, and the audience was finding it hard to stay awake, and was feeling some natural
revulsion towards anything that would call itself "sustainability" after that.

How to begin, in such a situation? I said something like: "To me, 'sustainability' is really a very simple English word.
It refers to whether we can sustain what we are doing. It is basically just a fancy word for survival.
When we say we have a sustainability problem, it means -- we must change or die. So I will talk about what kinds
of changes we have to make just to stay alive." Back when I was an assistant professor in the 1970s, I actually gave courses on
"global survival problems" -- back before I really appreciated how deeply committed our world is to euphemisms.

Of course, if we have a finite stock of something, like fossil oil, we simply cannot consume a certain amount
of it every year forever. Sooner or later, we have to change that. It's a matter of finding the safest, least-cost way to
make the inevitable change. (A lot of cross-time optimization under uncertainty is implied in that question, of course.
People can still argue about "when" and "how" within this framework. )
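
The arithmetic of a finite stock is easy to sketch (the numbers below are purely hypothetical): a constant annual draw exhausts the stock in stock/draw years, and steady growth in the draw pulls that date in sharply.

```python
def years_until_exhaustion(stock, annual_use, growth=0.0):
    """Years a finite stock lasts under a constant or growing annual draw."""
    years = 0
    while stock >= annual_use:
        stock -= annual_use
        annual_use *= (1.0 + growth)   # optional growth in consumption
        years += 1
    return years

# Purely hypothetical numbers, for illustration only:
print(years_until_exhaustion(1000.0, 10.0))        # flat draw: 100 years
print(years_until_exhaustion(1000.0, 10.0, 0.02))  # 2% annual growth: 55 years
```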

A few years ago, the Millennium Project was invited to hold its Planning Committee
meeting at the headquarters of Deloitte & Touche. The folks there were asking: "What is this sustainability stuff really about,
and how important is it for industry anyway?" I said: "Well... you could say... that sustainability is what Arthur Andersen
didn't have, and what you need to be sure that you have..." It's funny how I saw the word a lot more after that
in corporate kinds of circles.

This all raises the question: what DO we need to do in order to survive?

Or, more precisely, for example, to raise the probability that the human species survives at all
through the coming challenging millennia? That's been one of my main interests for some time now, but it's certainly not trivial,
and it's not even trivial to see the best kind of structure we could use to advance our real knowledge about the answers.
(Of course well-designed markets can be very helpful in managing part of the information challenge.)
It seems clear that there are a number of variables we need to cope with; dependence on fossil oil is an
important one, but certainly not the only one.

All just my personal views, just trying to stay alive... or at least, trying to set it up so SOMEONE stays alive...


For more specific analysis of how fossil oil has immediate implications,
and of the best (optimal) way to minimize the problems,

Wednesday, January 5, 2011

how brains learn and how to build them -- coming tutorial

This year, I have been awarded the Hebb Award of the International Neural Network Society -- one of the two highest awards of the society, which recognizes leading contributions to understanding how learning works in real, biological brains.

Here is the abstract of the tutorial I will be giving on that subject, to an expert audience, at the International Joint Conference on Neural Networks (IJCNN11) this year. (IJCNN is the main joint meeting of INNS and of the IEEE on neural networks.)


Brain-Like Prediction, Decision and Control

This tutorial will give an updated report on progress towards building universal learning systems, which should be able to learn to make better and better predictions and decisions in the face of complexity, nonlinearity, unobserved dynamics and uncertainty as severe as what mammal brains have evolved to cope with. Some people imagine that such universal systems could not be possible, but the mammal brain is proof that they are. There is now some understanding of the mathematics which makes this possible. No one on earth has yet developed a complete design which can do this, but we do have a roadmap now for getting there. (See P. Werbos, Intelligence in the Brain: A theory of how it works and how to build it, Neural Networks, 22 (2009) 200-212, and
In 2008, NSF funded a major effort in Cognitive Optimization and Prediction (COPN) aimed at understanding and replicating these capabilities. The National Academy of Engineering has listed reverse-engineering the brain as one of the most important grand challenges in engineering for this century.
This tutorial will start with a kind of roadmap of design and research challenges and milestones, and key accomplishments to date, ranging from mathematical foundations, to the best available tools and applications, to the goal
of “optimal vector intelligence,” and to the three further steps to get to cognitive optimization and prediction systems as powerful as the basic mammal brain – with some reference to new opportunities in neuroscience.
In the prediction domain, sloppy “hands on” data mining and incorrect conventional wisdoms have led to some really serious errors, some important to the 2008 financial collapse. For time-series data or dynamic systems, like economies or engines, the best universal systems now available for nonlinear prediction (which still continue to beat the competitors) are the time-lagged recurrent network (TLRN) systems from Ford or Siemens. But truly optimal systems are not available yet even for the simple task of learning Y=f(X) for smooth vector functions f, even when learning from a fixed database. I will discuss how we could build on the classic work of Barron (still the best available for this task) to achieve this. It is important for research to find optimal ways to insert penalty functions, robustness and a better use of example-based methods into (neural net) models of f, rather than rely on extreme methods which do not learn about general cause-and-effect relations. This in turn allows development of universal time-series prediction tools even more powerful than what Ford and Siemens now offer. Beyond the vector intelligence level, the recent breakthroughs of LeCun, of Fogel, and of Kozma and myself offer pathways to coping with spatial complexity. For example, under COPN funding, LeCun has broken world records in object recognition, phoneme recognition and language processing, using simple neural networks based on simple mathematical principles which we can easily take further.
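
As a hedged aside on the "learning Y=f(X)" task mentioned above: Barron's classic result concerns approximation by a single hidden layer, and a minimal sketch of that setup (entirely my own toy code, not Barron's construction and not the Ford or Siemens tools) is a tanh hidden layer trained by plain stochastic gradient descent on a smooth one-dimensional f.

```python
import math, random

random.seed(0)
H = 8  # hidden tanh units; y(x) = c + sum_j v[j] * tanh(w[j]*x + b[j])
w = [random.uniform(-1, 1) for _ in range(H)]
b = [random.uniform(-1, 1) for _ in range(H)]
v = [random.uniform(-1, 1) for _ in range(H)]
c = 0.0

f = math.sin                                    # a smooth target f
xs = [-3.0 + 6.0 * i / 39 for i in range(40)]   # training inputs on [-3, 3]

def predict(x):
    return c + sum(v[j] * math.tanh(w[j] * x + b[j]) for j in range(H))

def mse():
    return sum((predict(x) - f(x)) ** 2 for x in xs) / len(xs)

def train(epochs=2000, lr=0.05):
    global c
    for _ in range(epochs):
        for x in xs:
            h = [math.tanh(w[j] * x + b[j]) for j in range(H)]
            err = (c + sum(v[j] * h[j] for j in range(H))) - f(x)
            c -= lr * err
            for j in range(H):
                g = err * v[j] * (1.0 - h[j] ** 2)  # backprop through tanh
                v[j] -= lr * err * h[j]
                w[j] -= lr * g * x
                b[j] -= lr * g

before = mse()
train()
after = mse()
print(before, after)  # training should drive the mean squared error down sharply
```
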
In decision and control, adaptive dynamic programming (ADP) has made huge progress in academia lately, but divisions between communities have missed some important opportunities for further and faster progress in research. For example, use of a prediction and state estimation module, and use of ADP’s original capabilities to cope with random disturbances, are essential to coping with many complex challenges, and to understanding brain capabilities, in the control world. In the operations research world, neural networks offer universal function approximation capabilities essential to better representation of value functions, the key bottleneck there. Many practical applications started in the 1990’s – especially in aerospace, beyond the scope of conventional control or AI methods – are now seeing widespread large benefits, not so well known in academia, in part because of proprietary issues. Multiple time intervals are one of the key steps up from optimal vector intelligence here to true brain-like intelligence; as time permits, I will briefly review Wunsch’s new results on ADP with multiple time-scales and prior work related to this goal, and ways to use ADP in “smart grid” research.
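
As a toy illustration of the value-function idea at the heart of ADP (the problem, states, and numbers here are my own inventions; real ADP approximates the value function J with a neural network and learns online, rather than iterating a table): exact dynamic programming on a two-state maintenance problem.

```python
GAMMA = 0.9
STATES = ("ok", "fault")
ACTIONS = ("maintain", "ignore")

# reward[s][a] and next state nxt[s][a] -- hypothetical numbers for illustration
reward = {"ok":    {"maintain": -1.0, "ignore": 0.0},
          "fault": {"maintain": -1.0, "ignore": -10.0}}
nxt    = {"ok":    {"maintain": "ok", "ignore": "fault"},
          "fault": {"maintain": "ok", "ignore": "fault"}}

def value_iteration(iters=200):
    J = {s: 0.0 for s in STATES}
    for _ in range(iters):
        # Bellman backup: J(s) = max_a [ r(s,a) + gamma * J(s') ]
        J = {s: max(reward[s][a] + GAMMA * J[nxt[s][a]] for a in ACTIONS)
             for s in STATES}
    return J

J = value_iteration()
print(J["ok"] > J["fault"])  # True: the fault state is worth strictly less
```

The value function J is the bottleneck the text mentions: once it is known, the best action in each state is just the argmax inside the backup.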
All these designs require generalized backpropagation for efficient implementation on massively parallel computing platforms, such as CNN or memristor. Because there is still widespread misunderstanding of backpropagation in some circles, and failure to take advantage of its full power, I will briefly review the chain rule for ordered derivatives and how it fits here.
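
The chain rule for ordered derivatives can be sketched in a few lines (my own minimal illustration): sweep backwards through an ordered system z1..z4, accumulating each ordered derivative from the ordered derivatives of the later variables, and check the result against a finite difference.

```python
import math

def forward(x):
    # An "ordered system": each z_i may depend only on earlier z_j.
    z1 = x
    z2 = z1 * z1        # depends on z1
    z3 = math.sin(z1)   # depends on z1
    z4 = z2 * z3        # the output; overall z4 = x^2 * sin(x)
    return z1, z2, z3, z4

def ordered_derivative(x):
    """Ordered derivative of z4 with respect to z1, by a backwards sweep."""
    z1, z2, z3, z4 = forward(x)
    F4 = 1.0                                   # d z4 / d z4
    F3 = F4 * z2                               # via z4 = z2 * z3
    F2 = F4 * z3                               # via z4 = z2 * z3
    F1 = F2 * (2.0 * z1) + F3 * math.cos(z1)   # accumulate both paths into z1
    return F1

# Check against a central finite difference:
x, h = 0.7, 1e-6
fd = (forward(x + h)[3] - forward(x - h)[3]) / (2.0 * h)
print(abs(ordered_derivative(x) - fd) < 1e-5)  # True
```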

Saturday, January 1, 2011

things we could do about scarcity of rare earths

In 2009, oil industry folks warned us that our growing dependency on China for
rare earth elements could be just as serious as our growing dependency on OPEC for oil. What's more, they asserted that we can't afford to become independent of oil now, because we would need more rare earths to do that. The US government recently appointed a commission to study the problem, to route more money to materials science, and to support one of the many big new mines that some folks want to start up. But certain key details have been missed by all those folks "at 300,000 feet"...

Here is a posting for folks at the cutting edge of reality here -- with some
important news for wind and electric cars as well:


The issue of US dependence on a dwindling supply of rare earths provided by China
is still a significant concern, even though the trends are far less frightening (in my view) than
those with oil import dependency.

Last night, at a New Year's celebration... Dave Goldstein pointed out that
there are many electric motors in EXISTING cars which rely on rare earths. Here I won't comment on
that, or on other rare earths issues. (But below this, I do respond to some questions about the broader challenge...)
Here I'll focus on a narrower question, which has perhaps been neglected in the world at 300,000 feet:

-- Does rare earth shortage threaten the possibility of transition to a sustainable global energy situation,
and if so, what can we do about it in the energy sector? There is some new information on this.

Opponents of sustainable energy have cited three major issues:

1. The use of rare earths in permanent magnet (PM) wind turbines, which many see as the next generation
of wind, mainly for offshore wind farms.

2. The use of rare earths in the big permanent magnet (PM) motors which actually move the car,
in hybrid and electric vehicles. The "traction motors".

3. The use of rare earths in batteries.

For 1 and 2 in general, it's important to remember that rare earths are being used to produce powerful MAGNETS.
From high school science -- there are basically two kinds of magnets, permanent magnets (PM) and induction magnets (IM).
We don't need rare earths to produce powerful induction magnets, only to produce permanent magnets.
If we can do things with induction magnets, we don't need rare earths for those things.

In the case of wind -- most wind turbines in the US are "Doubly Fed Induction Generators" (DFIG).
I am certainly NOT a wind researcher myself, but a simple google on "DFIG" seems to make it clear that
we are talking about induction magnets here, no? The anxiety comes from the folks who want to move to a different
kind of design, the PM wind turbine, which GE and Siemens believe would have advantages for offshore wind.
**IF** we run out of good sites for onshore wind, or **IF** DFIG maintenance problems grow faster than the technology to reduce them,
a PM alternative may become useful in holding down costs and expanding supply. PLEASE notice the "ifs"!

So here is some interesting news. New Zealand has mounted an especially aggressive effort to find new uses for
HiTc superconductor materials. They claim that their companies are uniquely far along in making HiTc-based
magnets commercially... available or at least near at hand. They claim that the technology has moved along
far enough and fast enough that they can do what the PM wind turbines were supposed to do, at significantly lower cost
and with far less use of rare earths. Not zero rare earths, but something like an order of magnitude less.
Also they are working with one of the major wind turbine makers to explore the commercial insertion of this technology.
It is 'way too early to have great confidence that this all will work out, but it would be an equally serious
mistake to underestimate it, even as a very near-term possibility. How many times has the US been blindsided
and hurt by its failure to appreciate developments either in other nations or in US universities themselves?

I am very sorry if some on the IEEE list have imagined that I was dogmatically shutting off hope about wind
working out as well as some have hoped (as I hoped myself in mid 2009). There is a lot that we do not
yet know about future wind options, and alternative options, and of course the rational strategy is to
try to explore the best hopes for all the large possibilities. Divan and Harley of Georgia Tech have important ideas
for how the total cost of wind to the rate payer can be reduced, in part by using breakthrough new power switching technology
they have developed, and the New Zealand work offers some more hope. This is not a matter of
incremental work or subsidies, but of focused credible breakthrough research, especially important in the area
of renewable electricity across the board.

WITH REGARD to 2 -- it now seems very clear to me, after a whole lot of due diligence and discussions
in many places, that we really don't need the PM traction motors any more, even though virtually
all the hybrids and EVS on the market today do use them. Breakthroughs are not needed; it's more
a matter of system integration, what some folks call "translational research" -- though it is often possible
to include some elements of translational research in breakthrough projects, in various ways.
Both switched reluctance motors (SRMs) and induction motors (IMs) use induction magnets rather than permanent magnets,
and it appears that both can now outperform PM motors as the main traction motors for cars,
on all relevant metrics of performance, if available proven state of the art controls are used.
I'd be happy to discuss details and sources offline, for those who want to get really deep into this --
but to avoid wasting time, I would suggest that folks come up to speed first on the vehicles which
Emadi has used to demo these capabilities. Emadi's work is not the only evidence or resource, but logically
it is sufficient to prove the point. Still, larger scale demos or niche insertions would make the point clearer to those
who are not fully up to speed, or are skeptical.

WITH regard to 3 --

There is some important mixed news:

Until last week, I could say that talk about rare earths in batteries was basically a red herring.
For all of the coming batteries of serious relevance to plug-in hybrids or electric cars --
i.e. various types of lithium-ion and metal-air batteries -- we don't need significant rare earths.
The battery dependency was for irrelevant types of battery that propagandists liked to fix on.

But now... Thunder Sky has developed a new, additional line of products which do use yttrium.
The claim is that using yttrium makes it possible to build affordable car-scale batteries
(available for purchase today) which can be charged up in 10 minutes, and still last for 6,000 cycles.
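As a quick sanity check on what a ten-minute full charge would imply, here is a back-of-envelope sketch. The 24 kWh pack size is my own assumption (roughly a Leaf-sized pack), not a Thunder Sky specification:

```python
# Back-of-envelope implications of a 10-minute full charge.
# The 24 kWh pack size is an assumption (roughly a Nissan Leaf pack),
# not a Thunder Sky specification.
pack_kwh = 24.0
charge_minutes = 10.0

c_rate = 60.0 / charge_minutes          # full charge in 1/6 hour -> a 6C rate
charge_power_kw = pack_kwh * c_rate     # average power the charger must deliver

cycles = 6000
lifetime_kwh = pack_kwh * cycles        # total energy throughput over rated life

print(f"C-rate: {c_rate:.0f}C")
print(f"Average charging power: {charge_power_kw:.0f} kW")
print(f"Lifetime throughput: {lifetime_kwh:,.0f} kWh")
```

The point of the sketch: a ten-minute charge is not just a battery-chemistry claim, it implies charging infrastructure delivering well over 100 kW per car, far beyond the DC fast-charging stations now being standardized.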

On the one hand, this appears to be a really radical new opportunity, already at the stage of commercial availability.
Even as the US government works with SAE and so on to finalize a set of standards for EV recharging,
the Chinese are suggesting that a totally new recharging technology -- not accommodated by the
new US standards or US companies -- will offer a quantum breakthrough in what is available
to electric cars. The "early adopters" of pure electric vehicles are now somewhat fixated on the Nissan
Leaf, a "true electric car," which can be recharged in a mere half hour at expensive new DC
fast-charging stations. The US is moving towards adopting some form of the recharging technology
(CHAdeMO) which comes with that car. But a half hour is a long, long time to spend at a gas station, and Nissan warns that
the batteries will suffer if consumers do that too often. If the Chinese announcements are right,
they can do it in TEN minutes. And if we don't make the cars, it seems they will. But again,
this is still something of a niche market; the hybrids and plug-ins make more sense for the larger market
right now.

Notice that there is nothing aggressive about this Chinese announcement. They want to reduce THEIR dependence
on oil. This is primarily a domestic program. But it does change the role of rare earths,
and it raises many questions. I do not know HOW MUCH rare earths are needed in these batteries compared,
say, with PM traction motors in cars. I do not know how well the same sort of recharging might work
in Thunder Sky's OTHER batteries (which are largely free of rare earths). I do not know how hard it is
to get the same effect using other additives -- though I wonder why Nissan was stuck with a half hour;
presumably THEY at least made some effort to do better. This breakthrough does seem to reflect
China's superior technology in working with rare earths; that technology, not natural resources, is the main reason why
the world is so dependent on them for rare earths in the first place. Up to this point, I was guessing that
China's battery work was really relying heavily on things from US universities, but now it has gone past that point.

Also, of course, a lot of testing is needed to follow up on these questions, and to evaluate just what they have.
Of course, more could be said about any of these various threads.


More details:

Response to a friend who made some comments:
Comment 1: Has anyone tightened up the definition of 'rare earth' for the purposes of such discussions? I have been party to a couple such situations which started off with the alarmism you cite, and which then went immediately to hand-wringing about Li -- which is NOT a rare earth by the well-established chemistry definition...

Lithium is CERTAINLY not a rare earth.

At the first TREM conference, they seemed reasonably precise about which elements are rare earths. Competent people stuck by that definition... agreeing on the list of which elements are and which are not.

If anyone calls Li a rare earth, that's a typical case of egregious ignorance or psychopathology. All too common these days -- but unusually stark in a case like this, where the terms are very well established.

The other issue really relates to economic incentives. The only reason that China currently has a hammerlock on rare earths is that they are the ones who have been mining them.

As I said, their technology to mine, separate and use rare earths is what has given them this unique position,
more than resources as such. It was their decision to develop all that, and ours not to.

(By the way, lithium supply is not a real constraint either, though we do need to keep an eye on lithium. Maybe I'll post more about that later.)

Comment 2: With reasonable economic incentives, there are other places that could supply specific needs; possibly some in this country as well. (I note with interest a recent protest by citizens in the Telluride CO area, having to do with renewal of uranium mining in that locale: In fact, those particular mines have a number of other possible byproducts, one, of course being tellurium...)

There are a lot of folks working on such things right now -- not only in US but
in other nations like Australia. Their efforts are an important part of the system.
On the other hand, there are long lag times involved, and I've seen projections suggesting
a discrepancy between future demand by year and future supply...
reducing unnecessary demand has an important role to play here, along with efforts
to create more supply, as with fossil fuels. And, as with fossil fuels, there is some
reason to want to err on the side of low demand, because resources are not infinite.

Catching up on technology is not quite so simple as pouring in more cash -- again,
as is the case with batteries.

After I posted that email, I was tempted to make some comments about how we have been cutting back
quite significantly on the education system (from K-12 to university) recently, even as China moves ahead, and how we may be on the verge of much deeper cutbacks in March. That worries me a lot too.

I am not saying, without specifics, that China's rare earth monopoly is a red herring, but I am quite confident that for most materials of interest, other possible sources exist, and it is simply economics that has them out of the market at the moment.

I agree with the folks at the first TREM conference who said that the lithium supply and rare earths issues
have been vastly distorted and exaggerated by certain folks, BUT THAT rational policy does
include keeping an eye on them, and working to avoid excessive unnecessary risks.

Oil folks warn of $5 per gallon by 2012

A few days ago (11/28/2010), the news reported that John Hofmeister, a former president of Shell Oil, warned that
gasoline may get to $5 per gallon by 2012. That's his feeling about the fundamentals, which he tracked as
closely as any of us. Crudely -- that means $2 more per gallon, $84 more per (42-gallon) barrel ... about what
I would expect as well, from the fundamentals, **IF** there is a world economic recovery and **IF** people
continue to expect growing dependency on fossil oil (at least much as they did in 2007, after Bush announced his
effort to "break our addiction to oil").

With full economic recovery (and a bit of growth, and remembering that our stock of cars has not turned over a lot
since then), one would expect US petroleum imports to go back at least to the previous peak, in 2007, the year
before the great financial collapse. That peak was 13.5 million barrels per day -- about 5 billion
barrels per year. If Hofmeister is right, that's about
$420 billion extra per year, roughly DOUBLING what we are already paying for oil imports.
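These numbers can be checked in a few lines, using only the figures from the text itself:

```python
# Checking the back-of-envelope numbers in the text.
price_increase_per_gallon = 2.0          # $5 vs. ~$3 per gallon
gallons_per_barrel = 42
extra_per_barrel = price_increase_per_gallon * gallons_per_barrel  # $84

imports_mbd = 13.5                        # million barrels/day, 2007 import peak
barrels_per_year = imports_mbd * 1e6 * 365   # ~4.9 billion barrels

extra_cost = extra_per_barrel * barrels_per_year  # extra dollars per year
print(f"Extra per barrel: ${extra_per_barrel:.0f}")
print(f"Barrels per year: {barrels_per_year/1e9:.2f} billion")
print(f"Extra annual cost: ${extra_cost/1e9:.0f} billion")
```

The exact product comes out near $414 billion per year; the text's "about $420 billion" simply rounds the import volume up to 5 billion barrels.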

But with the economic issues we are ALREADY facing, a trillion dollars per year oil bill makes it harder and harder
to imagine how an economic recovery could actually take place. I often wonder how people can make such
a big deal of the flat $2 trillion we "owe" the Chinese, versus the trillion PER YEAR we are talking about,
a lot of it going to OPEC, accompanied by similar large sums from Europe and Japan going to the same place.
And I wonder about how rational it may be to expect to survive on the basis of government or university pensions
if this keeps up. This is reaching the point where it will become personal, for almost all of us.

Some of the bureaucratic modelers would say that $5 sounds "alarmist." But if you look closely at the fundamentals,
believing those modelers is a bit like believing the "pure technicians" on Wall Street, as opposed to the "fundamentalists"
like Buffett. Or worse. Back when I worked at DOE myself, I remember some fascinating irreverent econometric models
they did up at the Policy Office (when oil folks ran DOE). It turns out that the normal people's expectations of future
oil price could be predicted very well as a function of the current price and of the rate of change over something like the last
12 months. Many modelers would tweak their results, to match such expectations, in
order to avoid raising questions and making waves. Also, the DOE budget responded to the price of oil much more than oil consumption did.

Even with a Google News search, it's not so easy to dig up the details of Hofmeister's position. It seems he may be arguing
for even GREATER permanent subsidies to favor oil over other energy sources. Or he is arguing that abandoning all restrictions
on US offshore oil could change the world oil price in 2012. Neither of these makes much sense to me. GM has estimated that at $5
per gallon, cars LIKE today's Volt (i.e. the same technology) become strict economic value propositions for the average consumer.
Should we just shrug our shoulders and let the $5 per gallon come?
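To see why a claim like GM's is plausible, here is a purely illustrative fuel-cost comparison. Every number in it (30 mpg for the conventional car, 0.34 kWh/mile electric driving, $0.12/kWh electricity, 12,000 miles/year) is my own assumption for the sketch, not GM's analysis:

```python
# Illustrative fuel-cost comparison at $5/gallon gasoline.
# All input numbers are my own assumptions, not GM figures.
gas_price = 5.00            # $/gallon
mpg = 30.0                  # assumed conventional-car fuel economy
kwh_per_mile = 0.34         # assumed electric-drive consumption
elec_price = 0.12           # $/kWh, assumed retail electricity price
miles_per_year = 12_000     # assumed annual driving

gas_cost_per_mile = gas_price / mpg
ev_cost_per_mile = kwh_per_mile * elec_price
annual_savings = (gas_cost_per_mile - ev_cost_per_mile) * miles_per_year

print(f"Gasoline: ${gas_cost_per_mile:.3f}/mile")
print(f"Electric: ${ev_cost_per_mile:.3f}/mile")
print(f"Annual fuel savings: ${annual_savings:,.0f}")
```

Under these assumptions electric driving costs roughly a quarter as much per mile, and the annual savings of about $1,500 start to pay down the battery premium on a timescale ordinary consumers can see.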

Hofmeister's "solution" would basically mean investing huge amounts into offshore drilling in the US starting in 2013,
with new oil -- if significant -- flowing perhaps by 2018. Not enough to resurrect what would be left of the US economy
in his scenario. (Not that Europe would be any better off.)

In fact, there is more decisive evidence today on what the oil industry is thinking. The Financial Times reported
this morning (11/29/2010) that the oil majors are selling off lots of assets (about $60 billion worth reported today) in order to
fund more oil exploration activities. At a minimum, this signals a new conviction that threats to the rise of the
world oil price have been successfully fended off. (FT also had an interesting oped comparing the US to Byzantium --
one of the things I have been thinking about in some detail lately!)

But, in my personal view... it suggests that their feelings about the defense of fossil oil are similar to
the feelings I have about sustainability and the survival of this species. Namely -- no matter what the odds against,
a rational person does the best he/she can to maximize the probability of survival, when it's a case of life or death.
But I would argue that the survival of the species has no credible alternative, while they really could survive
without fossil oil. In their case, I would say it's an attitude problem -- a colossal attitude problem, but an attitude problem.

A few years ago, we were talking about $75 per barrel short run marginal cost and up for new oil worldwide, presumably
more in the US, and inexorably rising. But Schobert coal liquefaction and credible forms of biocrude clearly offer less
than that already, without the stubborn barriers to expansion and continuation for decades to come. Not just distributors,
but refiners as well, could make more money if the main thrust of the new $60 billion were to support them
rather than offshore drilling. What's more, it's more rational and patriotic for us to support tax incentives for
these new kinds of activities than for stuff which runs us and the oil companies both into the grave.
The new form of Low Carbon Fuel Standards would be helpful in rescuing their situation -- if we really want to rescue them.

To be honest -- if **I** were sitting on top of $60 billion for new energy activities, I would VERY seriously consider
allocating $12 billion of it towards diversifying my portfolio, in a way which leverages the unique strengths
which companies like Exxon have today, but is well out of the box. I would make a deal with Boeing and
DOD, to get into the launch services market, with costs that 'way undercut folks like SpaceX and
expendable rockets from foreign nations, and yet allow enough revenue from DOD to pay off the cost
of development with interest -- and open the door to a kind of monopoly providing a new and, by then, low
cost of electricity. (At $200 per pound to low earth orbit, energy from space does begin to make
economic sense, IF one has the capability to reach markets outside the US.) But this also requires
getting into technical details and cost minimization and decision trees in a way which is quite alien
to the stakeholder culture in this area. It requires a unique blend of political strength and technical competence.
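The $200-per-pound figure translates into a launch cost per watt as follows. The $200/lb number is from the text; the 1 kW/kg specific power of the orbital hardware is a hypothetical assumption of mine, for illustration only:

```python
# Rough launch-cost-per-watt arithmetic for space solar power.
# $200/lb is the figure from the text; the 1 kW/kg specific power
# of the orbital plant is my own hypothetical assumption.
cost_per_lb = 200.0
lb_per_kg = 2.20462
cost_per_kg = cost_per_lb * lb_per_kg        # dollars per kg to LEO

specific_power_w_per_kg = 1000.0             # assumed 1 kW per kg of hardware
launch_cost_per_watt = cost_per_kg / specific_power_w_per_kg

print(f"Launch cost: ${cost_per_kg:.0f}/kg")
print(f"Launch cost per watt: ${launch_cost_per_watt:.2f}/W")
```

Under that assumption, launch adds well under a dollar per watt of capacity, which is why $200/lb is the threshold where the economics of power from space stops being absurd on its face.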

But in fact... the current attitude of "circle the wagons and drill, drill, drill 'til we all die" doesn't seem to
offer much opening for that kind of win-win approach.

Once again it poses the question: "Where is the REAL light at the end of the tunnel, for the human species itself?"

Something to meditate on further in this season...