Tuesday, May 28, 2013

Could geoengineering save us from mass extinction sooner than people expect?

First, let me review the problem. There is serious reason to worry that the Arctic will melt far enough, in just a decade or two, to create something like mass eutrophication of the northern oceans of the world, resulting in the emission of H2S and related molecules which eat up the stratospheric ozone layer, and kill us all by radiation within just a few decades....

Since I want to discuss a possible way out, let me just summarize by pointing to a few recent reviews with important data embedded in them:




Perhaps I should also post a more concise summary, adding material from the Arctic folks...
but not now.

In general -- this situation reminds me of a class I took many, many years ago
under Howard Raiffa, the pioneer of decision analysis, in which he reported how the elites of our world systematically underestimate both worst-case and best-case realities. Both our extreme hopes and our extreme fears logically deserve more consideration than we usually give them.
I don't see any way to GUARANTEE human survival in the face of these new climate issues --
but if HOPE is all we have, we should not underestimate that either.

It is quite clear that 10-20 years are not enough to turn around the CO2 and methane emissions
which drive warming in many parts of the earth. However, there is one important caveat:


It may indeed be that carbon black explains a lot of why the Arctic has been melting much faster than all the models predict. If so, a new priority to reduce carbon black quickly might be one thing which could buy us time. But how much, how fast?


But what about geoengineering?

What about trying to get the capability to  intervene, to prevent our mass extinction by direct action
on the climate even before we reduce CO2 and methane emissions?

I was really delighted to hear that the role of ideology is less implacable and monolithic than it seemed a month ago:

It is quite odd that some liberals are now attacking conservatives for being too interested in geoengineering. "It lets oil companies off the hook!" Well, we have lots of reasons to want to be able to develop nonfossil energy technology, for reasons of national security, with or without global warming. But letting the human species just die, in order to make the oil companies feel bad about what they are doing to us, is a bit too harsh a punishment in my view -- especially since it would punish you and me and everyone on earth.

But -- if we decide that we are serious about geoengineering, and really want to develop the quick-response capability to be able to save our lives if the "unexpected" starts to happen...

HOW? What can we and should we do? The New York Times review (above) is really quite fuzzy,
though it's a good starting point.

They point out that a lot of geoengineering schemes would fertilize the ocean in a way which makes the eutrophication danger even worse...  And the sulfate particles would have a direct effect of making
the ultimate problem, ozone depletion, even worse. Yet if Arctic melting could be avoided that way, that benefit might be enough to offset those direct losses, maybe. Or maybe not.

But what about mirrors in space, which would not eutrophy the oceans or screw up the ozone layer?

(And also, what about geoengineering to restore the ozone layer DIRECTLY? Sadly, I have seen nothing much on that option... maybe we need to explore it further.)

The New York Times ruled it out on the basis of "it sounds like science fiction." Yeah, and so did airplanes in 1900. That's not what I'd call serious analysis.

So let's run some numbers...

The feasibility is largely a matter of cost, and this past week I had discussions at the International Space Development Conference with two of the world's most important experts on the feasibility of reducing launch costs.

Here is how the rough numbers look to me now...


Lowell Wood (not a fan of space mirrors) says about a million square kilometers of lightweight,
inflatable mirror could stabilize the climate. The solar sail article cites a "sigma" of 5 grams per square meter,
or 0.07 for more aggressive new technology.

This past week, I ran into a very serious proposal for a new space access vehicle estimated by
serious folks to offer about $50/kg launch cost, for payloads of about a million pounds.
(It checked out on my first questions, which most such things do not, but of course it is not
a fait accompli.) At that rate, the JPL estimate of "sigma" would cost about $50 billion,
with the new vehicle... lots of launches. But the advanced solar sail material would
imply more like $1 billion (and fewer launches.) So it's within the pale. I don't know the
issues on getting to the lower weight material.
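For readers who want to check the arithmetic, here is the straight multiplication of the figures just quoted (a million square kilometers, sigma of 5 or 0.07 grams per square meter, $50/kg to LEO). It lands in the same rough ballpark -- a few billion to a few hundred billion dollars -- though the exact totals depend on things like packaging, deployment hardware, and launch overhead that I am not modeling at all here:

```python
def mirror_launch_cost(area_km2, sigma_g_per_m2, dollars_per_kg):
    """Mass (kg) and launch cost ($) of a thin-film mirror of given areal density."""
    area_m2 = area_km2 * 1e6               # 1 km^2 = 1e6 m^2
    mass_kg = area_m2 * sigma_g_per_m2 / 1000.0
    return mass_kg, mass_kg * dollars_per_kg

# Lowell Wood's ~1,000,000 km^2 figure, at the $50/kg launch cost discussed above.
mass_hi, cost_hi = mirror_launch_cost(1e6, 5.0, 50.0)    # solar-sail sigma of 5 g/m^2
mass_lo, cost_lo = mirror_launch_cost(1e6, 0.07, 50.0)   # aggressive 0.07 g/m^2 material
```

With sigma of 5 g/m^2 the film alone masses about 5 billion kg (hundreds of billions of dollars of launches); the advanced material cuts that to about 70 million kg, a few billion dollars. Either way it is launch cost, not physics, that dominates the feasibility question.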

I was also intrigued by a new report of someone who flew over the Arctic, to get data on why it is melting
and warming faster than the models predict. It sounds as if carbon black (absorbing light/heat)
is a major factor. Lots of folks think "hey, carbon black only lasts a couple of decades or so in the atmosphere,
so why pay so much attention to it?" Well, they may be rather important decades, as it turns out.

It suggests to me that the hope of getting to the $50/kilogram option (actually, two competing credible ideas)
is worthy of more priority than I thought... when I consider how quickly the northern ocean transformation may occur,
and the importance of being ready for it.


At www.werbos.com/space.htm, I have posted a link to a summary in Aerospace America of
a more extensive design by Ray Chase to get to $200/pound-LEO, for which we had a 90% chance of success in about 5 years, using the off-the-shelf technology we had at the time.

That's still on the table.

But for $50/kg-LEO... I have had interesting discussions with Dr. Abdul Kalam, former President of India, and with the Policy Committee of the National Space Society.  We really need to study his
key launch ideas more seriously, through more serious funded work... but it LOOKS AS IF his
"Avatar" vision has a good chance of working, and getting us to $50/kg-LEO, perhaps as soon as 10 years from now, **IF** the best US technology can be mobilized and brought to bear, for the
scramjet engines and the hull structure. That's one hope, and NSS is committed to making that hope as real as possible. More precisely, this month the NSS Policy Committee unanimously passed the motion:

This committee strongly supports the effort to build new collaboration between US and India and relevant US allies in the areas of affordable and clean energy for use in space and on Earth from space, and low cost access to space, within the larger context of widening the door to human settlement of space and a more prosperous and peaceful world.

But we need all the hope we can get. I was very excited last week to hear of a new design, like Chase's, but estimated by extremely competent folks to offer
$27/pound-LEO. "How?" As in Chase's design, it would be based on a rocket engine -- but it would exploit economies of scale, to be able to lift million-pound payloads. "What about the total vehicle weight?" I asked.
"Chase tells me he sized it to about 10 tons payload, because when the vehicle weight gets to be more than a million and a half pounds, it's
hard to use airport-style operations, because you crush the runway."
Their approach: water takeoff and landing, like the old Spruce Goose.
That HAS worked before...  and they had more detailed notions of vehicle structure, well-grounded in the old McAir Space Works (now owned by Boeing).

So I really hope one or both of these lines works out... it may indeed be a matter of life or death, yours and mine.. (at least if we hope to live for more than a couple of decades. I wouldn't want the whole species to die of old
age at about the same time I do...). But, you out there: how much do you care whether you live or die? What would you be willing to do to increase your chances?

Best of luck..


Sunday, May 19, 2013

when and how could humans build things as smart as a mouse?

Click here to see a new paper on this subject

I sometimes hear people say: "Don't worry... yes, the human race is on track to kill itself off, but
before that happens, we can build artificial (general) intelligence which will  carry on. We can even download ourselves to those computers and live forever." And I sometimes hear major
corporations say "We have already built the equivalent of a cat brain, in hardware.."

In all fairness, those folks are not at the real forefront of the neural network field, and don't
really know what kind of engineering it takes to make such visions real.  The paper above
was written this past week for an invitation-only workshop at Yale, aimed at the kind of people
who are most serious about being able to make real progress in building such systems.

But a warning -- I tried to give a very clear overview of the reality, and the schedule (about a century if people work harder and faster than they seem to be doing now, with funding support beyond what
I really see out there now).  As clear and as earthy as possible... but what's clear to one person may not be easy for another.


What really is easy for different types of people?

At one time, back in the 1970s, I woke up one day to learn that I had actually become a tenure track faculty member in political science at a major university. (How could something like
that happen by accident? The story of my life... But, in brief, the department location was the accident.) And lots of people exhorted me: "Say it in plain English. Don't use equations.
Use the simplest plainest English you can."

Many years later, I came to understand how bad that advice was. People would go back to
some of those articles and say : "Why didn't you just say the same thing in equations? It would have been so much easier to see what you are really saying." Or flow charts.

In 2008, I had a paper published (and paid for open access) in the International Journal of
Theoretical Physics, on the subject of temporal physics, which I worked EXTREMELY hard to write in the most basic possible English, using only a sprinkling of very crucial basic equations. A FEW people said "yes, this is very clear and very decisive," but it seems a lot of folks simply found it hard to understand. That was a bit surprising to me.

How could they have problems with something so simple and straightforward? Later,
I got some inkling of the problem, when a famous physicist (whom some regard as the
champion of backwards time physics, though he came to the subject much later) explained how
the method of backwards time physics allows one to predict experiments we could not predict before.
"It is exactly the same theory as standard quantum field theory, but it gives us different predictions."
In my paper, I had made some tacit assumptions, like that a theory gives predictions, and that systems which give different predictions are different theories. I guess that kind of simple idea was
so alien to lots of folks they found it hard to imagine what it is like to reason about
theory and experiment ... from... the viewpoint of the scientific method.
It reminds me of the joke one can sometimes hear at NSF: "The gap between theorists and experimental people has become so great that the scientific method itself has become a
rare exercise in crossdisciplinary cooperation.  At least we do try to encourage such
cooperation.. sometimes."

I also remember giving a kind of flow  chart description of simple brains or intelligent systems
at a workshop led by eminent neuroscientist and psychologist Karl Pribram. I was really happy
when the dean commented, in his introduction to the workshop, "hey, here is a paper I can actually understand, which makes some kind of sense." (It is in one of the edited books by Karl Pribram, under Erlbaum and INNS.) But Karl later said to me: "Why is your model of the brain so complicated? Can't it be said in simpler terms?"

My response: "It really is simple... in the same way that general relativity is simple. But it does require some prerequisites.
    "One can write a simple description, or 'poem to the brain,' which is true and useful but does
not fully answer the question. By analogy, one can describe a factory or a robot in a useful simple way... but you can't actually build one, or understand how it really works, without knowing certain basic principles."

So this paper is written to be as simple as possible, but for folks who are demanding about knowing what the basic principles really are...

And yet, it fits other things which I have posted on this blog, which are much less demanding, but express some of the same ideas. But since there were no equations, did you truly understand?


Monday, May 13, 2013

thoughts about time tracks and quantum computing

First --- this is not science. I know what science is, I work
very hard at it at times, and I respect it deeply. But there is room in life
for some parallel thoughts, which may be informed and enriched by science but are
not science at all, in any sense. Still, I will talk about some science here today.

Tonight, in meditation, a thought emanated from me:

"Universes are like rabbits. Once you have two of them, beware,
a billion more can't be far behind. It's hard enough to cope with
one little world of a billion people or so, let alone billions of parallel universes.."

It reminds me of the poet who stared at lots of little pebbles on the ground and thought "If just one of these pebbles should suddenly rise up from the ground, without being pushed or pulled, our whole universe of thought would be totally wrenched out of place." A similar thought...

There were reasons why I thought about multiple universes this morning, and thoughts came back to me about this stream of thought.

But first: this was not motivated by the strong, mainstream thought about parallel universes
and "multiverse" as that thought is expressed in today's quantum mechanics. Still, it's close enough that I should say something about the connection.

Lately, when I get deep into the theories of the universe that people actually use in a practical way,
to make predictions about advanced electronics and the technology of light, and to design new
systems, I focus on just three core theories of physics which have become popular: one is called
"Feynman path," one is called "canonical" and a third is called "cQED," which stands for cavity or circuit quantum electrodynamics.  One of these theories, the oldest version, canonical quantum mechanics, summarizes "the law of everything" as a single strange mathematical operator, H,
called "the normal form Hamiltonian." More and more, as I review proposals from different parts of engineering and physics, I tend to be convinced that very few people truly understand all three of these theories. They form a lot of beliefs about the three (such as the convenient belief that they are equivalent) based on what is socially convenient, based on the same kind of mass psychology effects
which also brought us beliefs in epicycles, superstrings, creation science and suicide bombing.
(To think that scientists once used the expression "Holy Cow!"....) Based on the humble criterion of what works and what fits empirical reality, I certainly take cQED more seriously than the other two.

But... this  morning I was reminded again that it's not just those three theories, or the radical revision I propose to them. WITHIN the world of canonical QED, there are still different views or interpretations of what's going on.  One of the most important views is called "The Many Worlds Interpretation of Quantum Mechanics," expounded in a classic book from Princeton University Press edited by DeWitt. According to that view, we do not live in a mere three dimensions of space. We live in a "multiverse" which is infinite dimensional. The whole vast three-dimensional universe of space which we see around us is not actually the whole cosmos; it is just one strand or thread or track WITHIN that larger infinite dimensional cosmos or multiverse. What's more, the strands all interact with each other according to the laws of quantum mechanics.

This is very much a mainstream theory of physics (unlike the heresy which I have
developed and promoted over time).  It is this theory of physics which led to the
field of "quantum computing" or "quantum information science."

It is really sad, and tragic, and an example of behavior which seriously degrades our progress in science, that people do not give more credit to David Deutsch of Oxford, the inventor of quantum computing. Deutsch proved, decades ago, that a "universal quantum Turing machine" provides a whole new level of power in computing, in a very basic way, beyond what classical Turing machine computers are capable of. The whole idea was based on the many worlds version of quantum theory.
One could say: "Neural networks tell us how to harness the power of thousands or millions of processors all working in parallel, which is inherently more powerful than what you can do with just one processor chip. Quantum computing extends that still further, by letting us harness the power of millions of universes all coordinated to work in parallel, to divide up a computing task."
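Deutsch's original toy problem gives a concrete feel for that "parallel universes dividing up a task" claim. The sketch below is my own illustration with plain matrices (not anything specified in the text): a single quantum query decides whether a one-bit function f is constant or balanced, where any classical strategy needs two evaluations of f.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I = np.eye(2)

def oracle(f):
    """Unitary U_f |x, y> = |x, y XOR f(x)> on two qubits (basis index = 2*x + y)."""
    U = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1.0
    return U

def deutsch(f):
    """One oracle call decides whether f: {0,1} -> {0,1} is constant or balanced."""
    state = np.kron([1.0, 0.0], [0.0, 1.0])     # start in |0>|1>
    state = np.kron(H, H) @ state               # superpose both inputs at once
    state = oracle(f) @ state                   # ONE evaluation of f, "in parallel"
    state = np.kron(H, I) @ state               # interfere the branches
    p_one = state[2] ** 2 + state[3] ** 2       # P(first qubit measures 1)
    return "balanced" if p_one > 0.5 else "constant"
```

The point is not this toy problem itself, but Deutsch's proof that the interference trick scales: branches of the superposition (in the many-worlds reading, parallel universes) each evaluate f, and are then recombined into a single answer.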

So here are my zingers.

First, I do not believe that this original, first generation version of quantum computing
is the most powerful one. Almost everyone working on quantum computing in a serious way is taking that approach now, but they are encountering very fundamental problems in going very far with it.
One can accomplish MUCH more by shifting to a kind of second generation quantum computing, which exploits the symmetry between forwards time and backwards time. cQED is a modest step in that direction. It was exciting for me to learn of the paper by Blais (and the 900 people who cite it!)
showing how cQED can point the way to solving the most intractable problems people have been having lately in quantum computing, in a fundamental way. But we can go much further in the same direction by a full-fledged understanding and use of backwards time physics, as described in my open access article published in IJTP about 5 years ago.  A lot of initial work has been done, both on the theory side, and on the experiment side...


The first step in that new process, on the theoretical side, is to get back to a new formulation
of the "laws of everything" which does what Feynman path tried to do and does not succeed in doing: get us back to a viable formulation of physics as a theory over nice, symmetric four dimensional space time. No more multiverse. Just one vast space-time continuum.  Maybe with "dice" included, but I see no real need for them right now. The effects of chaos and such do seem to be quite enough to explain what seems like random phenomena to us. (Lots of people say that kind of thing, but here it is what we see directly in the math. It is shown.)

Even as we need to strive for a clearer and simpler understanding of how the universe of physics actually works...

Well, not having forever to live, I still think ahead on my own beyond that next step.

The great task of physics in the coming decades (or centuries?) is to get back to
three dimensions of space and one of time (or rather, four-dimensional space-time). Back to reality.

But in actuality -- the IDEA of multiverse or multiple dimensions might well turn out to be right in the end, even if all the present mathematics and theories embodying it are understood to be completely wrong.  It may turn out to be like what they say in Voyage to Arcturus:
"The way out of this world of illusions is not by running away, but going through it, all the way from here to the end, and out the other side."

Realistic time-symmetric physics  (the heresy I have fought for) seems to be essential to letting us build hardware which actually lets us TEST the idea that there is only one universe.

There is a wonderful pair of science fiction novels by Connie Willis, "Blackout" and "All Clear,"
which explores some of the key concepts about time. In a clear-thinking concrete way it raises the question: "Is it possible to change the past?" We cannot even do the experiment until we develop
the basic technology of backwards time telegraphs (sketched out in some of my posted papers).
Realistic time-symmetric physics points the way to how to do this. And then what?


Well, I like to do some experiments outside the physics lab, to try to get some feeling about how
things work.

This past year -- ironically, in the week when I was speaking at Singularity University, housed at NASA Ames -- I did one little experiment (if you can call it that) which altered my feelings about
the probability that it is possible to change the past. I am certainly not convinced as yet..
there are lots of grounds for being skeptical about that kind of thing... but bit by bit evidence
is beginning to accumulate in my mind that it may in fact be possible to  change the past.
(Oops: I hear some folks saying I should cite another sci fi, the saga of oversoul seven. 
OK, done.)

If that is true, it would immediately imply that the cosmos is more than just four dimensions.
It sounds at first like "hypertime" (a concept I published in Nuovo Cimento in 1973 or 1974,
which like many things I published, was "later reinvented," though it's so basic I wouldn't want to waste time on clearing up the histories). In the "hypertime" idea, the cosmos consists of the entire space-time continuum "as it now exists," at the present moment in hypertime, but it also includes the earlier versions of the space-time continuum as it existed before. The hypertime model is basically the simplest possible way to express the idea that we can change the past.

But is it so simple?

In 2009, I was intrigued by the last Star Trek movie, in which Spock in one universe, as an old man, communicates back in time to another universe, where there is a younger version of himself.
Actually, I thought a lot about that movie that year... and later wished I had taken it more seriously;
as the science officer of a certain office in the Senate, I should have worked harder to help the tough guy who ran the place. He really could have used more help, and the entire earth might be in better shape if I had pushed harder to give it to him.

And yet I can't help thinking... if I HAD done that, a lot of crucial real-world problems in energy and space and economy might be in much better shape today.. but I myself probably would NOT have learned the many really important things I have learned as a result of taking a different path.
I wondered: which is really the better path, in the end?

And then I remember the spirit of David Deutsch's ideas, which remain valid even if we just chuck out the old Hamiltonian version of the dynamics of the multiverse.  Could it be that NO SINGLE
choice of path or thread or time track is best? Could it be that things really get worked out only
with parallel processing, with two universes benefitting from each other?


And that's what led me to react logically as I started out here: once you admit two universes like that,
it suddenly implies a billion. I have enough troubles with a billion people at once, let alone a billion
universes. "Time for me to check out. No way can I handle anything like that!"

But then the voice of reason started to assert itself...

"Hey, think about neural networks, think about parallel processing, think about David Deutsch."
"With neural networks, no one expects a single neuron to synapse with every other neuron in the entire brain.  You can even build decent CNN chips, perhaps, in which it is good enough for
neurons to connect with just a couple of their neighbors, if they do it well. But if they DON'T connect with their neighbors, at all, you don't have a brain, you have a frothing soup."

Oh God, do we really have to?

And what about those clearly visible time tracks in which all humans die an unspeakable death?

Whatever. Back to doing my job... paperwork to catch up on. And human conundrums which are already complex enough to stretch (or break?) my ability to get it right...

Best of luck,



By the way, for those interested in the three theories I mentioned...
my impression is that Feynman path is largely equivalent to something which might be called "raw form canonical" as opposed to "normal canonical."

Here's how it works. Feynman path and both types of canonical quantum field theory (QFT) begin by mapping a classical Lagrangian or Hamiltonian into the "corresponding quantum field theory."
In principle for canonical field theory, this is really just a kind of heuristic device for
trying to come up with an interesting version of the Hamiltonian operator, which needs to be tested empirically in any case.

In raw canonical QFT, we do the mapping by replacing each classical field, like phi or psi,
by the "corresponding field operator." And we map multiplication into ordinary matrix multiplication.
But in normal canonical QFT, as we map classical fields into different objects (field operators),
we also map classical multiplication into something called "the normal product," which is really fundamental to canonical field theory.
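The simplest illustration of what that normal product buys is the standard single-mode textbook example (my illustration, not full QED): for one bosonic mode of frequency $\omega$, using $[a, a^\dagger] = 1$,

```latex
H_{\text{raw}} \;=\; \frac{\hbar\omega}{2}\left(a\,a^\dagger + a^\dagger a\right)
\;=\; \hbar\omega\left(a^\dagger a + \tfrac{1}{2}\right),
\qquad
{:}H{:} \;=\; \hbar\omega\, a^\dagger a .
```

Summed over the continuum of modes of a field, those $\hbar\omega/2$ zero-point terms are exactly the infinite energy density of "raw empty space"; the normal form Hamiltonian simply never contains them in the first place.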

And so, for basic QED, some folks would look at raw canonical QED, note some weird terms which are mathematically intractable (assuming infinite energy density in raw empty space), wave their hands vigorously, and say "we all know that infinity minus infinity equals zero, so we can assume all this stuff cancels out." That's their analysis of the fuzzy stuff which is sort of equivalent to Feynman path (to the extent that either is mathematically well-defined). The alternative, more traditional approach (see Mandl and Shaw), is just to START from the normal form Hamiltonian,
which does not contain the most egregious terms to start with. WHY assume that the egregious terms are there?

There are some very well-placed but ill-informed people who will tell you "but wait, we need those vacuum terms to explain basic stuff like the Lamb shift." But, sadly, lots of folks get confused by
the differences between vacuum energy, vacuum terms and vacuum polarization. ("Vacuum" is a popular term, as is the letter mu, which I have at times seen used for three different things in the
same equation.) Mandl and Shaw show perfectly clearly how we can replicate the Lamb shift and plenty of other things, using the normal form Hamiltonian.

There are other differences. For example, Rajaraman shows how Feynman path requires that we add "quantum corrections" to the classical mass of a soliton in a bosonic field theory, to get the full path mass. But the "P" mapping (see Glauber, etc...) shows that the quantum mechanical mass is exactly the same as the classical mass in these cases, if mass is defined by the NORMAL FORM HAMILTONIAN, rather than the raw form.

Also, Feynman path predicts certain very interesting transitions called instantons
beyond what canonical would predict. Bezryadin, in thorough empirical studies of superconducting nanowires, has shown definitively that those instantons are not there. As for cQED... it is basically just a minimal variation of normal canonical QED to accommodate empirical reality (very important classes of devices such as VCSELs).  Again, see my IJTP paper, 2008 as I recall.

Wednesday, May 1, 2013

fire and brimstone, green sky and Black Ocean

Today it is just one week since I returned from Singapore, ran the numbers for the ocean currents which breathe oxygen into the northern oceans of the world, and became quite upset about the problem.

The problem is clearly much worse than I thought before, but I (or someone else?) need to run a few more numbers to get a reasonable sense of how close we may be to fatal consequences, and what the rational response would be. There is of course a weird human instinct to bury one's head in the sand immediately, and use any one-in-a-thousand hope as an excuse for ignoring the problem altogether.
That places a special burden of responsibility on those few who have some idea of what it means to be rational, to juggle probabilities and uncertainties, and such.

Strictly speaking, my shock from a week ago does not DIRECTLY imply fire and brimstone or even green sky (Peter Ward's apt term for the H2S and radiation which caused the "PT" mass extinction of life about 250 million years ago). It implies black ocean -- a condition where a huge swath of the northern oceans develop a poisonous deep layer, similar to what we see in the Black Sea today.

If you WANT to be calm (ah, a place in the sand where you can bury your head..)...

do think of the Black Sea, a wonderful and beautiful vacation spot for many thousands of people.
I have been there myself, and seen people sunning themselves in bikinis on black gravelly sand.
And I have even swum in its waters where they are reasonably deep, next to the craggy cliffs which the Tatars called something like the "devil's for." Meanwhile, just about 50 meters below, is the greatest reservoir on the earth of a poison (H2S) as powerful as hydrogen cyanide.

People get used to being next to a huge reservoir of poison. People in Pompeii got used to living next to a huge volcano. Until one day, it was a bad day, and they all were quickly killed. There are stories
of people in Africa who lived next to a lake which had a similar H2S zone; they were comfortable for centuries, but then a wind blew in a different direction or such, and the next morning there were dead
bodies to clear out for a distance larger than the radius of the lake. I have wondered why the people living next to the Black Sea don't pay a bit more attention to what lies near them; when last I checked, there was a President of Romania who was starting to focus real attention on the threat, but the siloviki
decided he was too friendly to the EU and was rocking the boat too much, and had him removed and replaced by people restricted to more traditional pathways. I just hope we do not wake up some morning to read headlines of Russia being in a state of extreme anger and violence towards everyone on earth, when a wrong wind blows over Sevastopol in the night and kills all their people in the area.
And I hope we do not end up clearing out dead bodies for a distance equal to the radius of THAT lake,
when the sulfur line reaches the surface! (The barrier between the poison and the normal water has been rising quickly over the past few decades.)

So the well-defined curves of maximum density as a function of salinity, and the prevailing salinity
in the oceans, suggest that the oxygenating currents -- "the lungs of the Northern oceans" -- shut down when the water surface temperature gets to zero degrees C and the ice melts
(about the same time, so far as I can tell.) Current reports, based on actual data, suggest that might be only a decade or two away.  So we may all be "living next to the Black Sea." Black ocean.
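The density argument can be sketched numerically. The linear coefficients below are my own assumption (common textbook fits for seawater at atmospheric pressure, good to a few tenths of a degree), not figures from this post:

```python
# Temperature of maximum density and freezing point of seawater vs. salinity S (psu).
def t_max_density(S):
    return 3.98 - 0.2229 * S   # deg C; fresh water peaks near 4 C

def t_freezing(S):
    return -0.0575 * S         # deg C

# Salinity at which the two curves cross. Below it, water hits maximum density
# BEFORE it freezes, so further surface cooling leaves a stable light lid on top;
# above it, cooling keeps making surface water denser all the way down to the
# freezing point, driving the overturning "lungs" until ice actually forms.
S_cross = 3.98 / (0.2229 - 0.0575)
```

On these fits the crossover sits near 24 psu; at the ~35 psu typical of the open ocean, overturning continues until the surface reaches the freezing point, consistent with the picture that the shutdown arrives together with the melt/freeze transition, and that freshening the surface (ice melt) pushes toward the stable-lid regime.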

That raises three questions: (1) when does the H2S (and methane) come up out of the Black ocean, to start poisoning us?; (2) what is the stream of consequences when it does, to poison levels in the atmosphere and to the stability of the stratospheric ozone layer (e.g. how soon does death by ultra sunburn occur?);
(3) what could we do to be prepared to change the outcome in case the worst starts to unroll quickly?
(Of course, we could take stronger action to deploy technologies to reduce the global warming, in principle, but certain political forces just wouldn't want to be deprived of their entertainment.)

This morning I located an impressive 300-page study of aerosol dynamics which begins to give me a fix on some aspects of question 2.  Like a lot of the current literature on ocean and atmosphere dynamics, it focuses on empirical data on present conditions, but these are an important clue.

Before that... I was really starting to worry, in my ignorance.  I re-read Kump's paper on historical H2S emissions on the earth, reminding me of the hard empirical data from biomarkers showing at least 100 ppm (more I think, but maybe 100 ppm in early phases of mass extinction events) of H2S in the atmosphere in the past.  I had the impression, from a few years past, that Black Ocean would essentially push H2S into the atmosphere at a relatively slow rate (due to slow mixing between the huge new reservoir of poison and the atmosphere), such that it would only take a decade or two for the whole earth to smell like the worst cesspool but a thousand years or two to reach directly fatal levels. But what about the killer radiation, due to destruction of stratospheric ozone? How fast would that be?

I remembered that H2S is a light molecule, and that light molecules have a tendency to rise.
Because the mass of stratospheric ozone is about 2 million times less than that of the atmosphere,
and because a molecule of H2S aggressively eats a molecule of ozone, I figured that 1 ppm of H2S at the surface might be enough to eat the whole ozone layer -- and literally fry us. We would get to 1 ppm much sooner than we get to poisonous levels -- so it seemed that this would be the first real killer.
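That estimate can be made explicit with rough mole bookkeeping. The sketch below uses round published order-of-magnitude figures (total atmosphere mass about 5.1e18 kg, total stratospheric ozone about 3e12 kg; both standard values, not from this post) and assumes the pessimistic limiting case of one-to-one destruction, one H2S molecule per O3 molecule:

```python
# Rough mole bookkeeping: how does 1 ppm of H2S compare with the
# total stratospheric ozone inventory?  Assumes 1:1 destruction,
# the pessimistic limiting case discussed in the text.

M_ATMOSPHERE_KG = 5.1e18   # total mass of the atmosphere (standard figure)
M_OZONE_KG      = 3.0e12   # total stratospheric ozone (order of magnitude)
MOLAR_AIR = 0.029          # kg/mol, mean molar mass of air
MOLAR_O3  = 0.048          # kg/mol, molar mass of ozone

mol_air = M_ATMOSPHERE_KG / MOLAR_AIR   # total moles of air
mol_o3  = M_OZONE_KG / MOLAR_O3         # total moles of ozone

mol_h2s_at_1ppm = 1e-6 * mol_air        # 1 ppm by volume ~ 1 ppm by mole

ratio = mol_h2s_at_1ppm / mol_o3
print(f"ozone mass / atmosphere mass ~ 1 / {M_ATMOSPHERE_KG / M_OZONE_KG:.1e}")
print(f"H2S at 1 ppm ~ {ratio:.1f}x the total ozone inventory (in moles)")
```

On these round numbers, 1 ppm of H2S corresponds to roughly three times the entire ozone inventory, mole for mole -- consistent with the "2 million times less" ratio, and with the conclusion that 1 ppm would be more than enough if every molecule actually reached the stratosphere and reacted.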

But cheer up -- the aerosol book says that H2S does not even get to the stratosphere now.
Yes, it is a light molecule, but it reacts with OH radicals in the atmosphere, and with pollution,
so that it is "eaten" itself before it can get to the stratosphere. "Don't worry about the H2S.
It doesn't last long. It just turns into sulfuric acid, and SO2 and SO4." And I remembered what
Peter Ward said about all the sulfuric acid which also rained onto the earth at the time of the PT "green sky" event.
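The "it doesn't last long" claim can be quantified with a standard atmospheric-lifetime estimate. The rate constant for OH + H2S (about 4.7e-12 cm^3 molecule^-1 s^-1 near room temperature) and the typical global-mean OH concentration (about 1e6 molecules/cm^3) are approximate textbook atmospheric-chemistry values, not figures from this post:

```python
# Lifetime of H2S against attack by OH radicals: tau = 1 / (k * [OH]).
# Both numbers below are approximate standard literature values.

K_OH_H2S = 4.7e-12   # cm^3 molecule^-1 s^-1, OH + H2S rate constant (~298 K)
OH_CONC  = 1.0e6     # molecules/cm^3, typical global-mean OH concentration

tau_seconds = 1.0 / (K_OH_H2S * OH_CONC)
tau_days = tau_seconds / 86400.0
print(f"H2S lifetime against OH ~ {tau_days:.1f} days")
```

A lifetime on the order of a couple of days is far too short for surface H2S to mix all the way up to the stratosphere under today's conditions -- consistent with the aerosol book's reassurance, though a massive flux could of course change the ratios.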

However -- the aerosol report also said that the SO2 and SO4 themselves do go up to the stratosphere, and have corrosive effects. Oh, well. We're not really off the hook after all.
Also -- after a certain amount of flux, ratios do change. Light molecules do rise.

By the way, the reason why the poison is rising steadily in the Black Sea ... is due in great part to nutrient loading, to stuff like agricultural runoff. There is plenty of that all over the Northern hemisphere these days. Zones of anoxia are already a problem, even with the huge influx of oxygen we have today to counter it. And there are the upwelling currents.

In short, it is just a matter of time. How much, we do not know, but I don't see much reason for complacency.

Since ozone depletion is the first plausible source of sudden death, I googled on "geoengineering ozone
hole" and such. Only found one hit, from a well-meaning Afro-American science student.
He proposed lifting tons of oxygen all the way to the ozone layer, using a neat new superblimp DOD is working on right now. Maybe. But there might be other options, exploiting a bit more chemistry and even physics. It would be nice to know what we could do to save our lives, in case we have to in a hurry, sooner than we think.

Best of luck to us all...


Some further details in response to questions:

(1) Don't the equatorial solar heating and consequent equatorial-polar heat exchange and currents provide enough mixing to avoid such catastrophe?

(2) Don't coriolis currents, perhaps combined with the equatorial currents of (1), do likewise?

There are certainly other sources of current besides the convective currents which send oxygenated cold water down to the depths of the ocean near the poles. However, those other currents are basically pushing in the wrong two dimensions -- latitude and longitude -- and only incidentally to depth. In fact, those other currents have existed for at least a billion years or so, but the deep overturning currents powered by the events at the north and south poles are relatively recent -- starting somewhere between 3 and 22 million years ago.
Before that, the earth generally DID have stratified oceans, resulting in the dozen or so episodes of mass extinction described in Ward's book. The coriolis effects and such were not enough to stop that. In fact, from Kump's account, they helped to make things MORE precarious: while they did not prevent anoxia in the deep ocean, they did cause enough upwelling (localized UPWARDS motion, not the downwards energy of convection) to slowly leak enough H2S into the atmosphere to destroy the ozone layer and poison the earth.

 (3) Mightn't underwater topography in conjunction with the currents of (1) and (2) provide additional upwellings and mixings sufficient to do likewise?

And (4), and most intriguingly, if I understand correctly, the Earth had a single land-mass at the time of the PT junction, the supercontinent called Pangaea: Could that single-continent conformation have played a role, and perhaps the critical one, in the PT extinction?


Ward's book describes about a dozen mass extinctions. As I recall, the most recent was long after Pangaea broke up.
There was and is upwelling, in addition to the big downwards-convective oxygenating currents we have today,
but it only made things worse.