Tuesday, October 28, 2014

new quantum physics and technology with some observations on political things



Context: a four-step roadmap

At this point, I have a four-step roadmap for new quantum physics and technology, analogous in a way to my four-step roadmap to get to Mouse Level Computational Intelligence (MLCI), from WCCI2014, posted at arxiv.org. And, as with that roadmap, I can also see a bit beyond.

Step one: new model and technology for spin gates

Step one --- not vector intelligence, but quantum polarizer modeling, use and technology. Both of my arxiv papers on Bell’s Theorem, and my vixra paper, focus on this step one. The vixra paper is basically all but the appendix of my new IJBC paper scheduled for March 2015. The paper notes that today’s mainstream quantum computing and communication work is all driven by David Deutsch’s worked-out vision for Turing-like universal computing with qubits, with entangled |0> and |1>. But my step one would extend that from digital -- |0> and |1> -- to the full power of analog quantum computing with spins, more general |q>. It cites two SPIE papers showing how analog quantum computing allows codes more unbreakable than the codes of digital quantum computing. To execute step one, and get the full benefit of analog quantum computing and communication, it is absolutely essential to have a correct model of the analog spin gate, the polarizer (with some slight extension) – and that is what I have developed in the new papers. The new triphoton experiment is essential to establishing the correct model, and, in the world we live in, will inevitably require some sorting out. Of course, other elements used in digital quantum computing also need to be carried forward to the new context, and many implementations… one could spend an entire lifetime (even if one starts at age 20) on just that one step. By analogy, it sobers me to realize how even vector intelligence has not been fully nailed down yet – yet it reassures me to remember how the Terminator movies were a factor in that, which should not apply here. Unbreakable codes and more powerful quantum computing should not terrify the world.
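For concreteness (my own gloss here, not a formula from those papers): the Deutsch digital case uses only the two basis states, while the general analog state |q> of a single spin or polarization is the whole continuum

\[ |q\rangle \;=\; \cos\theta\,|0\rangle \;+\; e^{i\phi}\sin\theta\,|1\rangle, \qquad 0 \le \theta \le \tfrac{\pi}{2}, \;\; 0 \le \phi < 2\pi , \]

so a correct analog gate model has to be right over that entire continuum, not just at the two digital points.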

Actually, the image of the photon which must choose to align, to align 90 degrees opposite, or to down-select the whole situation somewhat retroactively, is also of value simply for human understanding – especially in the Bell experiment, where we see it is not just one photon.

Step two: free energy from heat

Sound crazy? Please look at my previous post on the "solar cell which works in the dark" for the basic explanation. Here I will just go ahead to the current status.

Step two: free energy from heat. My new IJBC paper does not mention those scary, heretical words – but the appendix does specify the two modeling details which open that door. On my obscure personal blog (drpauljohn…), I give an updated overview connecting that somewhat with previous work (and explaining why it is doable even though most people were educated to believe it is not). The key is that a new kind of power source, the symmetric high-density-of-states source WITHOUT net voltage, can empower the type of circuit I even tried to patent before, the quantum separator. Thus we now know more precisely what we need from photonics to power such circuits, and can link to many active elements in photonics offering several ways to get to the first milestone, the shocking physical proof of principle. If I were still able to fund things from NSF, and empowered to concentrate on “QMHP” issues despite the smaller number of proposals compared to routine power hack work, I would expect to see such a proof of principle within maybe 5 years. Things have advanced that far in the different aspects, and in my ability to integrate. But as of July 14, I do not have the mandate I had before at NSF, and would not really expect such a free hand elsewhere either. That’s part of why I just posted this on the blog. It does require laboratory work to get to the working (replicable, NNIN-style) physical prototype, and after the great cutoff I shouldn’t expect that.

What would the larger impacts be of step two? As in the blog post, I would lead with “a solar cell which works in the dark.” Of course, it is premature to assume that the real numbers, after tuning of the technology, would be better or worse than the estimate from Kim and Trew. The physical proof of principle would already be a big enough deal in science, and then finding the most cost-effective implementation could be as complex as looking for the best battery or solar cell. In essence, the world really could use a new all-renewable baseload electricity source, totally distributed (or at least easier to scale down than scale up), with side benefits to HVAC and cooling. When it gets applied to cars, oil folks might really be upset at its disruptive implications – but disrupting the Moslem Brotherhood, Koch Industries and Halliburton would actually help remove the most urgent threats of extinction of the human species, even above and beyond the essential greenhouse benefits. However, the sheer political power of those forces (even here at NSF now) suggests it would be better to talk about the electric grid benefits.

On balance – it is totally benign to develop this totally benign technology, but realism suggests a need to do it on a back burner, as a scientific issue, not raising expectations of immediate deployment. Thus for world greenhouse issues, I would not want to get in the way of the four-fold emphasis in my present paper on renewables for ECCS: central solar farms with Stirling dishes, space solar power ala Mankins, pluggable cars and alcohol fuels. It is best to include step two in “Team B,” and not fight TOO hard the belief that this is a high risk kind of thing.   

But step two is actually more than just a new source of electricity. While I would not say much more, for cultural and political reasons, it has two further technological implications (and cultural ones). There is backwards time free energy, for communication and computing. Mouse level computational intelligence already would pose “Terminator” risks which I now take more seriously (especially after the severe “hint” of being put into a state of investigation the day I returned from WCCI, despite my having gotten State Department approval for a package including my abstract). If you hook up backwards time free energy with MLCI, it becomes a far more powerful system of the same family – maybe more dangerous, maybe safer in a spiritual sort of way. But I attribute those risks to the MLCI account, and do not assume that mere possession of step two gives people the ability to do that other stuff. That would be like refusing to put an electric socket in one’s house for fear that someone could plug in an electric weapon; the socket is OK, if that type of weapon doesn't even exist yet on earth, and ethical concerns should focus on the weapon, not the socket. In sum, I would be happy to pursue step two quite openly, though without excessive short-term promises, depending on what opportunities arise, as a new source of distributed ordinary electricity.

Step three: rewriting CQED

Step three: move on to rewriting CQED (cavity or circuit QED). In many ways, this is what attracts me most at this time, in the physics area.  After I wrote my blog entry about step two, “taking care of that cleanup,” I began moving on to step three, for several reasons.

First, cleaning up CQED is a job for theorists like me – not a matter of funding laboratory-type people, as step two now requires. It is something I can do in retirement, or in the office cut off from the world.

Second, some EAGER ideas have come floating into ECCS which build firmly on CQED. They are nontrivial ideas, and a really serious analysis would be very worthwhile for ECCS purposes as well as my own.

Third, the EMPIRICAL story of CQED has long been an important part of my thinking. It is true that nuclear exchange reactions (the kind which convert protons to neutrons when they have very slight glancing contact with neutrons) played a bigger role, years ago, in stimulating my discovery of backwards time physics. Yet, by the time of my IJTP paper (2008 or 2009, also posted at arxiv), CQED was a major part of the story. The type of canonical QFT I was first taught in graduate school was very clear about the role of the HI term, psi-bar-A-gamma-psi, in determining when atoms emit light, for example. HI was hard-wired into the most fundamental laws governing the universe. Thus I agreed with the folks who warned Bell Labs (Eli Yablonovitch and his Chinese coworker who served a stint at ECCS) that the most fundamental laws of canonical QFT say it is impossible to prevent that kind of spontaneous emission. How could they? But spontaneous emission was suppressed, empirically, on a huge scale, with enormous value. To me, this was obviously similar to the nuclear exchange reaction: just as protons somehow emit a virtual charged pion only when they “know” a neutron is coming to be able to absorb it, atoms emit photons only when there is a place ready to absorb them. It is a causal effect from the future availability of an absorber to an emission in the present, which is not at all a surprise when one learns to get rid of what Huw Price has called “that old double standard about time.” Yanhua Shih’s “Popper” experiment involves the same kind of paradox, and I would have studied it more years ago… if other things had not crowded the agenda.
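For the record, the interaction term I am referring to, in standard canonical QED notation, is

\[ H_I \;=\; e \int d^3x \;\, \overline{\psi}(x)\, \gamma^\mu \psi(x)\, A_\mu(x) , \]

and nothing in that term, taken by itself, knows whether an absorber will be available in the future; that is exactly the tension with the empirical suppression results.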

Fourth – but then I learned about how this great empirical result, contradicting the canonical QED picture, is being modeled now. Carmichael’s text briefly mentions how it is modeled as an interaction of the atom with a vacuum field or reservoir – which caught my attention about a year ago, though I didn’t read further what Carmichael had to say about that.

UPDATE: this was a PRELUDE to really looking intensely into an area I hadn't gone so deeply into before. The words above are all still valid... but at the end of this post I make some corrections to the part below.

Last Friday, I finally began a really serious read of Yamamoto’s book, Semiconductor Cavity QED, which I had bought years ago (but never had motive to get so deep into).  His chapter 1, divided into four sections, gives a mercifully compact review of all of CQED in the weak coupling regime, which was all anyone knew until maybe 5-10 years ago.
I read the prospectus for the later chapters, but did not study them, because chapter 1 already presented the key issues, and I did not (yet) have any need to get into the particular further developments, most of which involved trying to develop an exciton laser. Yamamoto had been widely cited as “the” leader in CQED, so of course I used his book as a point of departure. I noticed that he cites his prior longer reviews of CQED (focusing on the weak coupling regime) as the definitive sources for more complete detail there, but the two papers he cited were chapters in books I do not have (one edited by Eberly, Mandel and Wolf, an interesting combination). Maybe I will get them at some point…?

(ACTION: FOLLOW UP SOMEHOW.)

Yamamoto, right in sections 1.1 and 1.2, gives us a model of spontaneous emission (gross of reabsorption, discussed in 1.3) which MULTIPLIES the usual matrix element WITH the “density of mode omega in the vacuum field.” In other words, he gives a formula which clearly embodies the coupling of the excited atom with the vacuum field. And that was just for step one, “the vacuum Rabi frequency,” which gets squared and multiplied by another term for the density of that mode in the vacuum. In 1.2, he is not so quantitative, but clearly describes how this gets modified for cavities where modes are altered, either by being verboten (to suppress spontaneous emission) or enhanced.
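Schematically (my paraphrase of the standard weak-coupling result, not Yamamoto's exact notation), the spontaneous emission rate has the golden-rule form

\[ \Gamma \;=\; 2\pi\, g^2\, \rho(\omega_0), \qquad g \;\propto\; |\langle e|\hat{d}|g\rangle| , \]

where g is the vacuum Rabi frequency built from the dipole matrix element, and \(\rho(\omega_0)\) is the density of modes at the transition frequency; a cavity changes \(\Gamma\) purely by reshaping \(\rho(\omega)\), suppressing it where the mode is verboten and enhancing it on resonance.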

In many ways, the challenge here is to figure out another way to do this, without assuming that there really is a vacuum field there, which I very strongly doubt, for many reasons.

One reason: consider the energy of the vacuum field in any small region of space, V. Yamamoto cites the usual formula from folks like Puthoff and Huerta, energy density as a function of omega. But if you integrate over all omega, it implies an infinite energy density. Yes, he has ways to avoid doing that integration, similar in a way to regularization tricks used to deduce that infinity minus infinity equals whatever you want the prediction to be, but that is not so reassuring to me. Also I am well aware of the long history of folks who calculate Casimir predictions assuming such a term, and the empirical reality that the actual measured Casimir force between two plates is ZERO after netting out the well-known van der Waals terms (Landau/Lifshitz). My discussions of KQFT versus FQFT also get into some of this.
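To spell that out with the standard numbers (not mine): the zero-point spectral energy density usually cited is

\[ \rho_0(\omega)\, d\omega \;=\; \frac{\hbar\, \omega^3}{2\pi^2 c^3}\, d\omega, \qquad \int_0^\infty \rho_0(\omega)\, d\omega \;=\; \infty , \]

so taking the vacuum field literally, with no cutoff, assigns an infinite energy to every finite volume V.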

Intuitively, then, I would say that the formula Yamamoto cites looks suspiciously like the P-*P+ convolution term in my new vixra/IJBC paper. The obvious model is to use that term, to interpret the square of the matrix element as P+, and to model P- as the effect of the external world of absorbers, propagating backwards in time to the emission decision. Clearly that does fit, in a beautiful way. But I need to get more mathematical in discussing it, and look to see whether any way to distinguish the models might be found. (One obvious way would be to introduce some dynamics into the experiment, so that the future is not identical to the present.) ACTION: I need to go back to more detailed sources on CQED, and on how the supposed “vacuum field” is modeled mathematically in different circumstances, to link that to my alternative P- model – to work on Rabi oscillations (a major thread in ion or atom based varieties of QC, e.g. in Kimble’s lab, which I once toured), to Yamamoto’s earlier papers (the book basically just stated an intermediate key result), to relevant papers by Eberly, and to another book I happen to have on hand, by Dutra.
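Schematically, in my own rough notation (the precise form is what the follow-up work has to pin down), the emission probability would look like

\[ \Pr(\text{emission}) \;\propto\; (P_+ * P_-)(\omega_0) \;=\; \int P_+(\omega)\, P_-(\omega_0 - \omega)\, d\omega , \]

with P+ carrying the squared matrix element and P- carrying the backwards-time effect of the available absorbers. In a static equilibrium this is numerically indistinguishable from multiplying by a “vacuum field” density, which is exactly why a dynamic experiment is needed to tell the two models apart.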

Discussing this with Luda (yesterday?), I had the impression that this is very important to theory, but does not lead to new technologies like the other steps. But then I thought twice. Even first-generation quantum computing, based on Deutschian qubits, is basically getting nowhere now due to issues of decoherence or, more precisely, disentanglement. (Back to O’Connell’s papers, which seem to underlie much of Eberly’s new analysis!) Quantum error correction and nondemolition operators don’t really seem to be on track right now to solve the problem enough to matter. But about a year ago I saw a paper with a more promising approach: arXiv:cond-mat/0402216 (7 Feb 2004), “Cavity quantum electrodynamics for superconducting electrical circuits: an architecture for quantum computation,” by Alexandre Blais, Ren-Shou Huang, Andreas Wallraff, S. M. Girvin, and R. J. Schoelkopf. It seems that the CIRCUIT extension of CQED may be crucial even to first generation quantum computing – if it can be understood and modeled more correctly, without fictitious types of quantum phase transitions (which empirical work has put into question, as I have at times cited recently in questioning FQFT).

Step four: hacking the proton (CCME)


4.1. Starting Points

Based on new thoughts this morning, and various developments in the world around us, I was thinking of doing a post on my VERY obscure blog on this really extreme topic. But I am just doing a journal entry. Even if I am wild enough to post this long, long journal entry… I doubt it will go very far any time soon. Who will even wade through all the complexity?

I do not want to push this topic too hard because it is much more than merely controversial. It also has rather extreme technological potential. Basically, any well-trained nuclear physicist knows why the complete conversion of ordinary matter to energy (CCME) is supposed to be impossible – and also knows how it would increase the strength of nuclear energy by a factor of something like a thousand and, more important, allow the use of ANY ordinary matter as fuel. Imagine the potential energy output of the earth (or moon) if it were one big hunk of U-235, multiplied by a thousand. So this is about a LOT of energy, but also, of course, about the ability to build bombs which might well take out the entire earth, by several mechanisms. And other hazards. It presents a lot of moral dilemmas in what to say, to whom.
But now, perhaps I need to be a little less inhibited than before, for various reasons – not least of them the fact that a lot of people seem dedicated already to wiping out their own species by easier and more direct means (such as forcible inaction in the face of severe and urgent climate problems).  Still, I am inhibited enough not to engage in snappy marketing or optimal communications, as you will see.

Of course, step four is also much less certain than the earlier steps. I have ordered the steps by difficulty and by risk. This is the least definite and the most risky by far. I do not claim that I have worked everything out, or that the possibility of success is definite. But every year, it seems, more of the pieces do come together.

For me, the first piece came in the 1960’s, back when I learned what “baryon conservation” is and why it seems to rule out any possibility of CCME. That raised an obvious question: can we probe the limits of baryon conservation? If it is not 100 percent absolute in the underlying laws of physics, could we exploit that somehow? VERY interesting follow-ons, but a bit sensitive, so I will fast forward. (If I ever scan all the amazing stuff in my basement, some will pop up…) Sakharov in Russia began some major investigations of this topic.

4.2. Learning about Topological Solitons, Part 1: MRS

Fast forward … I wish more theoretical physicists knew about two really important books, The Skyrme Model (by Makhankov, Rybakov and Sanyuk, MRS) and Topological Solitons by Manton and Sutcliffe. I had taken a course in nuclear physics as such in graduate school at Harvard, and lots more on theoretical physics covering nuclear stuff – and even my first graduate course in quantum theory was taught by Richard Wilson, a well-known nuclear guy. But despite all of that, when I started reading MRS in the mid-1990’s, I was very surprised at what I learned, in many many ways.

When I started reading MRS, nuclear technology was not on my mind at all that I can recall. (Or did I think this was at least some spinoff benefit of making the effort to understand that book?) My motive was to learn more to help me answer the very basic question: why do particles exist at all in our universe? In the recent festschrift to Leon Chua, CNN Memristors and Beyond (edited by Adamatzky), I have a chapter on the question “what is an electron?”, arguing that we should model the electron either as a stable vortex of force, or as a bound state of such vortices, and not as some kind of perfect Grecian particle. MRS explain a lot about what we know about such stable vortices of force, which they call “solitons.” Lots of folks in grand unification physics have been interested in trying to model elementary particles as bound states of magnetic monopoles, one type of soliton. (Erick Weinberg has a seminal paper on such stuff.)

My new paper in print at IJBC (see steps one and two) does NOT take the position that elementary particles must be solitons. The IJBC paper, especially, focuses on a new way to model quantum physics, the stochastic path approach, which is similar in many ways to the Feynman path approach. Like the Feynman path approach, it gives people several choices. One can model particles as perfect singularities, or as “ghosts,” or one can try to model them as solitons. Probably people got confused when I combined several new things all together in one paper. It is really important NOT to equate the stochastic path approach with solitons (or Feynman with ghosts?). However, in any important version of quantum theory – Feynman or canonical or stochastic – one has that choice, and I find the soliton choice much more plausible, for reasons beyond the scope of the IJBC paper. (The paper in the festschrift gets into the reasons.)

MRS made a very radical statement about solitons in their book. (Actually, they made many radical statements, but here I focus on just one of them. By the way, Makhankov was a leader, a manager, in Russia’s equivalent of Sandia or Livermore – a leading nuclear lab, with an understandable lack of reverence for our nonempirical theorists.)
 
MRS assert that it is impossible for any normal physical theory to allow solitons to exist, unless it is “topologically nontrivial.” They call this the “Generalized Hobart-Derrick” (GHD) theorem. But to be topologically interesting, they required really weird kinds of fields, not like anything we can justify directly in basic physics, and it seemed quite odd to me. The supposed proof was very far from rigorous. I spoke to a professor at Maryland, Pego, who had an award-winning PhD student, Lev, who expressed interest in a PhD thesis to ascertain whether the GHD “theorem” was actually true or false, through a true proof or through counterexample. I had met Lev through Adelphi Friends’ Meeting, where both of us were quite active at the time. Lev had previously won the all-Soviet Olympiad in mathematics. (One of about six winners.)
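For readers who want the rigorous kernel that such claims generalize: the original Hobart-Derrick argument is a one-line scaling computation. For a static scalar field in D space dimensions with energy E = E_grad + E_pot, both terms nonnegative, the rescaled field φ(λx) has energy

\[ E(\lambda) \;=\; \lambda^{2-D} E_{\mathrm{grad}} \;+\; \lambda^{-D} E_{\mathrm{pot}} , \]

which for D = 3 decreases monotonically as λ grows: any candidate soliton can lower its energy by shrinking, so no stable static solution exists. The GHD claim of MRS extends this far beyond simple scalar theories, and it is that extension whose proof was never made rigorous.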

After initial discussions, it was my job to give Lev the two-page statement and “proof” of the GHD theorem by MRS.
I should have just Xeroxed those pages, and given them to him. I feel very bad that I didn’t. At meeting, I said to him: “Here is the book. Could you please Xerox just those pages, and give the book back to me next week? PLEASE do not try to read the rest of the book. I promise you, if you are a good mathematician, the rest of the book would just drive you crazy. I know. It was a struggle for me to disentangle it, even though I had a lot of background in physics.”

Later that week (I forget how much later), I was informed that he had been hospitalized. I went to his apartment, and his roommates said: “He seemed normal and happy when he returned. But then he sat on that couch and started looking at that yellow book. He looked like he was not going to open it, but he was curious, and couldn’t help himself. As he turned the pages, his eyes became fuzzier and fuzzier and wilder and wilder, and we had to call the mental hospital.” After some time in the hospital… he was sent home to Siberia, where he went back to playing music in his home town, with no more math or physics again.

Soon after I first met my wife, Ludmilla, when she was still based in Moscow and in France, she visited Rybakov and Sanyuk, to see whether they might have more than just the cryptic two pages. They had more pages, but not one bit closer to a real proof. Luda has some strengths of character ’way beyond what Lev had, but that’s not a subject for here. It would be so tempting to say more about Protvino and the Golden Horde and stories from Dubna…

From MRS, I learned that there are actually two types of field theory which predict the existence of “topological solitons.” There is the Skyrme kind of model, which involves “nontrivial” fields, like angles varying as a function of space and time. But there is also a Higgs kind of model, where there is a term in the energy function which locks in the field around a particle at a distance away from the particle, without locking it into the definition of the field itself. A year or so ago, I did a google scholar search on “skyrmion” and “BPS” (a type of soliton based on a Higgs model), and learned that both have thousands of citations across physics.
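As a concrete textbook example of the Higgs kind (the 't Hooft-Polyakov monopole setup, included here just for flavor, not anything from MRS): the static energy of an SU(2) gauge field coupled to an adjoint Higgs field φ^a is

\[ E \;=\; \int d^3x \left[ \tfrac{1}{4} F^a_{ij} F^a_{ij} \;+\; \tfrac{1}{2} (D_i\phi)^a (D_i\phi)^a \;+\; \tfrac{\lambda}{4} \left( \phi^a \phi^a - v^2 \right)^2 \right] . \]

The potential term forces |φ| → v at spatial infinity, and the winding of the direction of φ over the sphere at infinity is the conserved topological (magnetic) charge; nothing angular is hardwired into the fields themselves, which is exactly the distinction I mean.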

At first, I was interested to try to find a third kind of stable vortex of force, without topology, like what MRS say is impossible. Maybe it is possible, but it wasn’t easy. But then I realized: it is really amazing how electric charge is such a precise universal quantity, for so many radically different kinds of particles and compound states. It is easy to explain that either in a Skyrme model or in a Higgs model, but it wouldn’t be easy without that core concept of topological charge. Also, the Higgs types of model do not require that the topology be hardwired into the fields themselves.

So I started asking myself:  how could we model the electron, proton and other elementary particles as Higgs types of soliton?

4.3. Manton and Sutcliffe and the Standard Model of Physics

As I was looking into that question, and making better use of google scholar, I encountered the seminal new book Topological Solitons by Nicholas Manton of Cambridge University and coauthor Sutcliffe. I do not know whether Manton’s book would be easier to learn from than MRS for a reader new to the subject. It seems calmer, but sometimes a reader needs a shock or two to face up to the realities of a new subject. I do not know whether any of Manton’s readers have ended up in mental hospitals – but there are other kinds of straitjackets evident enough in a lot of physics today.

The first part of the book was an easy scan for me, since I knew most of it already, but then came chapter 11, the real zinger. (Did he bury the wild stuff deep at the end, for reasons similar to why I am doing the same here?) At the end of his web page, Manton clearly referred to CCME… in an ultratentative way, but enough to alert anyone really interested in the subject. Chapter 11 said more. It briefly described some of the ongoing work in Russia, and its relation to the standard model of physics.

Some of you may remember some loud but cryptic remarks from Putin a few years back, when he said that Russia was actively developing a new form of nuclear power which would make everything in the West look tiny by comparison. There was a lot of speculation by US folks at the time (some sounding more like Yeltsin than Putin) – but it seems to me that this is the likely explanation, the only thing big enough to fit the strong things he said. Manton was a bit cautious, but he pointed to the lack of perfect conservation of baryon number in the electroweak model (EWT, half of the standard model of physics) and to folks who seem to be saying “the one thing we need to make this a real technology is a way to harness coherence effects somehow to organize it…”.

EWT also has a Higgs field in it, a very important part of the model, not well known but not just a superstring fantasy without any empirical basis at all. Manton showed that the Higgs field in EWT is not able to explain or support topological solitons, but it is close.

An obvious question came to my mind: what is the SMALLEST change we could make to the standard model of physics, leading to the simplest workable model, able to predict topological solitons with properties that fit the electron and the proton? That is a very important question in my view, which needs to be explored in an open-minded way, open to many possible versions.

Why should we look beyond the standard model we already have? First, the scientific method demands that we try to develop competing plausible models, and let empirical data discriminate between them, not religious style orthodoxy. That should be obvious to anyone trained as a scientist, yet in today’s world other forces seem to have caused many people to forget that. Second, in order to explain elementary particles as whirlpools of force, and have some hope of predicting their spectrum of mass and lifetime, we need to make at least SOME changes to the model. And third, Occam’s Razor – another key part of the scientific method – encourages us to look for possible simplifications, and such possibilities are present here.

4.4. Soliton Alternatives to the Standard Model

OK, having already shocked most people enough that they wouldn’t get this far, I will follow logic even further beyond the dogmas of today.

As I get deeper into this issue, the real problem lies in modeling the proton (or neutron), not the electron. It lies in what to do with Quantum Chromodynamics (QCD), the OTHER half of today’s standard model. QCD is actually the second version of Gell-Mann’s “quark model,” modeling the proton and neutron as compound particles made up of three “quarks” bound together by gluons (glue-ons).

And so – if you accept the new stochastic path physics, you can still model most elementary particles as ghosts, as in Feynman path models. You don’t have to go on to try to model them as solitons… but that is the path I prefer. And then, if you prefer to model them as solitons, you can model quarks themselves as topological solitons, with several TYPES of conserved charge, enough to replicate QCD exactly in the limit as the radius of the soliton becomes small enough. (That would be a lot more serious as a mathematical limit than the idea that “the skyrme model is just the limit of QCD, in the limit as three equals infinity” – i.e., treating the number of colors, three, as if it were large – which MRS discuss, with a bit of humor.)

And yet,  do we really need so many topological charges? Are we sure?  Why not explore the possibility of simplifying, by reducing it perhaps to just two topological charges, electric and magnetic? That’s the next step in my advance towards heresy (and reality).

Following Occam’s Razor I did put in some time checking whether one could go even further, and try to model the electron with just one topological charge (one simple scalar Higgs field), but that didn’t work. Others had also tried that and failed. In any case, one would have to do more to model the proton anyway.

4.5. Getting Down and Dirty – the Empirical Record and a Wall of Tabu

Then it starts to get weirder and weirder. Though I was fully aware of QCD long ago, I was equally aware of the “Magnetic Model of Matter” of Julian Schwinger, whose most advanced courses I took in graduate school at Harvard. For as long as I knew him, he continued to believe (as in his classic paper in Science) that protons and neutrons could be modeled as bound states of “dyons,” of particles carrying both electric and magnetic charge. As I started asking myself about whether we could get down to just two topological charges, I immediately asked: “What does the empirical data say? What has been done to do true discriminating experiments, to compare Schwinger versus QCD?” Since Schwinger was one of the three people who won the Nobel Prize for discovering quantum field theory in the first place, I found it hard to believe that people would disregard the scientific method to such an extreme as simply to ignore his theory altogether.

But that is basically what I found. I looked very hard through all the papers which begin to address this choice. There was no really definitive evidence; there was a lot of irrelevant hot air (e.g. praise for QCD with no comparisons); and there were several highly suggestive experiments, which favored Schwinger over QCD. I noted that the most relevant work came from a leading Japanese scientist, Sawada, with whom I was delighted to be in contact for about a year. I took his inputs and drafted a joint paper, and decided at least to submit to arxiv.

Then I was stunned by what happened. I did not understand precisely what was going on here. It turns out that Sawada, as respected and serious as he was, also wrote some papers at some point on cold fusion. That was enough to get him blacklisted in some places… and me with him, as soon as I became a coauthor. The paper did NOT argue “Schwinger was right and QCD is wrong”; rather, it followed the scientific method, reviewing the past work, and arguing above all that people should perform the definitive types of experiments which Sawada proposed. These would cost much less money, for example, than building the Large Hadron Collider did, and the information value would be immense, regardless of which theory would be confirmed – Schwinger or QCD – or even if both were disconfirmed. In the end, I did post it on vixra as best I recall, and had it published in Russia, where the empirical issues are of greater interest to many.

4.6.  The Saga of Cold Fusion (aka Low Energy Nuclear Reactions, LENR)

So now, to go further, I need to make a partial confession. More than half my life, I have been struggling hard to pursue the truth, to understand what is really going on as deeply as possible, without regard to social inhibitions and tabus. But for another half my life, I have been thinking hard about another question: “What can we do to minimize the probability that the human species goes extinct?”  I try to be realistic about both of these.

I still remember the strange morning when I was in bed in College Park and the alarm radio came on with National Public Radio, broadcasting the announcement of Pons and Fleischmann that they had discovered cold fusion. I listened to them say “It is so easy any well-equipped high school lab could do it.” And to the report of astronomical neutron emission counts in one of the experiments, and maybe something about small local explosions.

My immediate reaction was “Holy shit! What do these people think they are doing?” Knowing something about the ways of Washington, and having friends in Cambridge Massachusetts, I immediately got a message to Markey’s office about the significance of this for nuclear proliferation. “Do we really want to create a world where any dissident high school student could blow up the entire city he lives in?” (Today I might extend this: can we imagine a world where every metropolitan area which has experienced a single school or terrorist shooting would now have the whole city blown off the map in a radioactive fireball?) A little later, I mentioned my concern to some folks in the intelligence community. One said: “OK, a guy will be contacting you. Here is the protocol…”

I remember some aspects of that meeting quite well. The guy asked: “OK, so should we just shoot these guys?”
I shivered and said “No, no that’s not it. They would just become martyrs. No, there are some folks I know about in the scientific community, who would be happy to work with you on this, and take funding to press the case. Perhaps by the time they were done with them, they would WISH they had been shot instead, but that would be much better.” I am not wildly proud about this, but I wouldn’t be proud about millions of unnecessary deaths in US cities either. By the way, as I talked with him, a group of guys wandered by in the woods, and called out loudly to us, “Hey Paul, we came to save you from the CIA. Don’t you know who that guy is?” Obviously I wouldn’t name names of any of these folks. I do not even remember whether this was before or after another major development.

Another major development – I joined NSF “permanently” in 1989, to run two research programs: (1) neuroengineering; and (2) emerging technologies initiation (ETI). Early on, we were reorganized into the Electrical and Communication Systems (ECS) Division of NSF, run by Dr. Frank Huband. The Electric Power Research Institute (EPRI) had an onsite representative at NSF, and they were deeply into cold fusion, even funding some major work, taking it quite seriously after their due diligence. We agreed to hold a joint NSF-EPRI workshop on cold fusion, chaired by Paul Chu of Houston and John Appleby of Texas A&M, held in Washington DC, under me, under Huband.
This was about the same time as a huge widely publicized blue ribbon conference on the subject led by DOE in Santa Fe.

I was a bit shocked soon after, when ostensibly reputable groups argued in the SAME piece that: (1) NSF shamelessly publicized its workshop far and wide beforehand, attracting press that should not have come; AND (2) the workshop was held secretly in the dead of night. When folks assert both things at the same time, it should be clear to anyone that this was not entirely honest on their part. But we did not complain, for reasons I will get into.

By the middle of the workshop, it became reasonably clear that there were three types of cold fusion under discussion, all quite real. (Some folks had had problems replicating the Appleby version, when they flagrantly ignored some of the parameters Appleby stressed as essential.) There was a “Jones type,” intellectually significant but not relevant to energy. There was a “Pons/Fleischmann/Schwinger” type, a lot like what I had heard about on the radio, and scary. But I became more and more hopeful that we might do something with the Appleby/Bockris version, which seemed to produce the energy we could use without the nasty neutrons.

One of the speakers was Ed Teller, who agreed it was real, but asked for a private meeting of just me and him and Huband, in Huband’s office. We hadn’t told him our impressions, but he smiled and said, “I bet I can guess what you guys are thinking. There are three versions here, one which you would leave to the pure physicists, one to us in the security world, and one safe for you to pursue. Well, let me tell you exactly how to weaponize the Appleby version.” Oops. But we didn’t think of that, we said, and we’re not so stupid. “No, but you weren’t asking the question I was asking about how to weaponize things. I promise you there are a lot of other folks who would start with that same question, and wouldn’t miss something like this.” And so we agreed we would not defend ourselves at all in public (though we did send out a memo or two internal to NSF dispelling the most blatant falsehoods some had asserted within NSF itself), would do whatever we could (within reasonable bounds of honesty) to discourage the subject, not fund it, and leave it to the security community both to study and to hold very close (and discourage).

So maybe the episode with Sawada was my just karma for all that?

But things do change. I have wondered at times whether key folks have simply forgotten the history.

I remember a group job interview at NSF where a guy from one of the national labs said, “And now I want to tell you about an amazing technology that would be so helpful for our energy problems, which our silly lab director insists on holding as highly classified… it is… but you guys won’t tell, will you?” It was cold fusion. I also remember the guy here from a foreign country who then spoke up: “Well, maybe it WAS a US secret, but I promise you it’s not a secret any more when anyone from my country has heard about it.” (And yes, they had reports.)

Many smart people I know tried to pull this out of obscurity, sensing an opportunity. I remember one of them, to whom I said: “Hey, I can see that glimmer of excitement in your eyes, straight out of Goethe’s Faust. But would you really want to be the great hero responsible for the deaths of every human on earth, because that is what this can come down to if it goes far enough? What kind of hero would you be after the first death of millions of people?” Later, when Bushnell of NASA pushed this harder, he challenged me to post exactly what Teller said, to an international discussion list. Roughly: “It’s not enough to tell just me, as a government employee. If you don’t tell everyone, we are justified to just go ahead, gangbusters.” So now US News and World Report has published on it (with the new alias Low-Energy Nuclear Reactions, changing nothing), and even Griffin (of Ares rocket fame) has come out. So I guess that’s yet another reason why I need to regroup. (First was where I started, folks on course to getting us all killed anyway by another path – justifying taking risks which otherwise would not compute.)

And then there is a third reason – the immense long-term importance of CCME and related technology, to strengthen humanity for a future when we may need deeper understanding and technology to go further than we need to go today. If we lose the knowledge now, we may not have it in the future when we need it.  I’m not saying people should be putting this in the New York Times or teaching it in schools, but this diary/blog is rather far from any of that right now.

So next let me go a step beyond, from today’s technology to Schwinger stuff and CCME, stuff which even the Russians may not know quite yet.

4.7. Foundations: From Schwinger, Cold Fusion, and NEGF to CCME

In the middle of the NSF-EPRI workshop on cold fusion, held in a hotel at George Washington Circle, a massive earthquake hit San Francisco – the biggest there ever since, and much bigger than any for decades before. It hit EPRI (and the homes of many working for EPRI). As a result, we had a bit of a recess, as the EPRI folks and most others went to the big hotel bar, to look at the TV and see what was happening to their homes and their office. That gave me a chance to have a very quiet one-on-one conversation with Pons at the bar.

And yes, Pons knew what was going on. He smiled and said, “Yes, we’ve been in contact with Schwinger on this. And we’ve had a lot of laughs watching those pathetic chemists try to understand a problem in physics. You don’t have to worry about them ever catching on or doing anything really dangerous. But yes, I do fully appreciate now, especially after talking to Schwinger, just how dangerous our version could be, and how important it is not to disseminate it as widely as we initially intended. Basically, all it really takes to get our kinds of effects is a basic understanding of how electromagnetic impulses can interact with matter.” (As I type this, I am reminded of the VERY extensive work of T.J. Tarn on exactly that general subject, probably a lot more optimized than any of this, used in quantum computing. Tarn runs the quantum technology center at Tsinghua, but was previously funded by ECCS/NSF.)

I also remember a session in a smallish room where a big stakeholder type guy spoke, and said, “Of course this is theoretically impossible, whatever any senile old folks might imagine. Let me show you why.” And he proceeded to do a first order perturbation calculation. And Schwinger himself got up and said: “Before you start asserting that something is theoretically impossible, first you should consider WHO DEVELOPED that theory you are relying on. And maybe you should understand that in the nuclear domain, first order perturbation theory is not quite as reliable as you imagine it is, especially in condensed matter.” (I wonder whether he, like the later Pons, was TRYING to hold back…)

Later I learned that his reference to condensed matter was not so casual. Schwinger was a pioneer in many things, not only theory, but practical things like waveguides. Nonequilibrium Green’s Functions (NEGF) are basically the main gold standard for calculations in device electronics today – and if you trace back, you will see that they date back to important seminal work by Schwinger developing exactly that subject. Combine NEGF insights, and nuclear insights… of course he saw things that other folks might not.

And so, what does this do for fusion? We know that the protons of deuterium and tritium (for example) repel each other at large distances, but that they attract each other a huge amount when they get closer – causing an enormous release of energy if you can get them to explore the possibility of being closer. Since the repulsion is an electromagnetic effect, it is no wonder that the right kind of focused and coherent electromagnetic energy in a solid state matrix can overcome it.
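To put a rough number on that barrier (standard figures, nothing exotic): the electrostatic repulsion between two protons at separation r is

\[ V(r) \;=\; \frac{e^2}{4\pi \epsilon_0\, r} \;\approx\; \frac{1.44\ \mathrm{MeV \cdot fm}}{r} , \]

so at r of about 3 fm, roughly where the strong attraction takes over, the barrier is about half an MeV. Hot fusion climbs that barrier with brute temperature; the claim here is that coherent electromagnetic effects in a solid-state matrix can do the equivalent work.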

But then… if Schwinger’s general picture is correct, and the dyons which make up the proton and neutron also interact via electromagnetism… but have neutral overall magnetic charge… then topology says that the knot could be untied, if the dyons could be brought closer, in much the same way. Crudely… it basically involves the same kind of stuff as Pons/Schwinger cold fusion, but scaled down a few more orders of magnitude. Folks in electronics are already working hard to scale down ordinary coherent light by orders of magnitude, for mundane applications such as on-chip communication; lots of new types of lasers are coming online, whether excitonic or plasmonic or polaritonic. Is it enough orders of magnitude? Maybe.

Of course, none of this could work if baryon number is conserved absolutely, but I tend to doubt that (and perhaps some Russians may know more). Lots of things… beyond Occam’s Razor, there is also the balance of matter and antimatter in the universe. And some other circumstantial things in my notes. And the stuff in the paper with Sawada.

On the other hand, can we get enough orders of magnitude in the coherent source(s)? (I say plural, as I remember work by John Perkins of Livermore on coordinated beams for fusion.) Maybe. And it is not trivial to design and model the exact geometry.

Two or three kinds of activity could help us get to this option.

One is theoretical work. In Russia, and in some other places like Adamatzky and vixra, I did publish a model field theory, as close as possible to EWT, which I felt could do the job. No need for QCD; just an extension of the Higgs model to generate solitons with two topological charges, so that the modified EWT can do the work by itself. (You can imagine how open minded the folks are who have dedicated their lives to QCD!) But I have studied those options further, and now believe that a slight variation is necessary, which I have yet to publish. It is in my gdrive, in a paper uypgrade_v12, coauthored by Luda, which she says can only be published when she approves the status of the journal. Maybe after steps one and two we might be ready for that; maybe not. In any case, PDE simulations and explicit numerical analysis, in the spirit of Manton and Sutcliffe, would be essential. The key point here is that these PDEs do require simulation of three times as many solitons, and some multiscale methods, but in return offer a hope of exact simulation of small nuclei (e.g. those in Livermore’s experiments), which direct skyrme models of the proton cannot ever hope to offer (because of the issue of high energy collisions probing structure).

Just as important is experiment, as in the joint paper with Sawada.

But a bleak thought occurs to me. Since CCME works on ANY protons or neutrons, new types of high powered coherent beams might possibly surprise us, in an unpleasant kind of way. I really wish the world would start getting serious about low-cost access to earth orbit (for which low risk technology options certainly exist). And I wish that some of the bolder experiments might be done in orbit rather than on the surface of the earth.

Beyond step four, there are interesting things people might try to do with gravity, for example, and other ways to probe beyond the assumption that we all live in Minkowski space. Bend space to maybe even reach the stars? If that is possible, I suspect it would require enormous energy densities, somehow, somewhere…  and so step four might be a prerequisite.


I meant to send this out as an attachment yesterday, but forgot. At home (not official NSF email) I found new orders, to drop everything and focus on the internet of things. Various folks here were interested in the step three, CQED, kind of stuff, but in the new world (where NSF engineering is more like DOE and less like a university than before) priorities do trickle down from above. So indeed all of the above is like cleaning out old boxes, and sending microfilms to the archives, as part of a liquidation exercise. I do not take it personally, since many other things and people are also being liquidated at this time. We were talking this morning about curious parallels in changes in Congress (most visible on the Republican side), at NSF, at IBM, and one other important place… though DOE and NASA have already gone through many changes through the years, and DOD is a larger, more complex system. The internet of things certainly is an important topic, but limited when so many security issues come into it.



======================
=====================

Revisit of CQED:

I have pored through about 30 sources on various aspects of CQED since the above.

For cavity QED, the original CQED, my next step was to study Dutra's book, and branch out in many ways from there.

In essence, there are just two parts to it. For spontaneous emission, they do assume a vacuum field, and my P+P- convolution yields the same prediction WITHOUT a vacuum field. That's really simple. In a way, that tells us that the Planck's constant we know is basically a property of the vast boundary conditions we see on earth, the existence of potential photon absorbers in all directions in free space. Was Dirac then right that some things we think of as universal constants are actually boundary conditions? Actually, today's CQED model says the same; the "vacuum field" in the imperfect vacuum of our solar system might be epsilon different, in principle, from that somewhere else in vastly different regions of the galaxy. Maybe it could even be tested somehow in astronomy. But the key point is that today's CQED and my alternative basically predict the same thing here: the probability of spontaneous emission equals P+ convolved with P-; for them, P- is the vacuum field, and for me it is the back-propagating impact of all the absorbers in the greater universe, but in equilibrium they predict the same.

What of fast action? As I type... what of a cavity defined by Kerr cells, mirrors which get turned on and off very quickly? Like CQED, I do predict changes in spontaneous emission under such conditions, but I would have different predictions as a function of the time cycle. However, it would require really fast switching... might be worth thinking about.

More seriously... the ACTUAL CQED used in practical calculations essentially has nothing to do with that picture of spontaneous emission. There is no background universe field, and there is no Feynman/Casimir equivalent of it either! The bulk of it, and perhaps all of the "strong regime" stuff, is the three-step calculation I already saw long ago in Carmichael. Start from a simple QM model of a two-level atom and photon modes in a box, all a model of atom and box, no vacuum field here! And then couple to dissipation, period. MAYBE a couple of ways to model dissipation (not anything broader than what Carmichael cites), like bulk dissipation (!!!) in the oft-used Gardiner-Collett model, and maybe some absorption at walls. No universe; just the same old solid reservoir of the dissipative solid object. Dutra was an interesting window into an important minority modeling approach: "modes of the universe," which DOES imply a window to the larger universe, and is more rigorous in principle. I spent a bit of time on that -- but it seems as if the few practical applications merely translate it into different photon modes inside the cavity, and model tunnelling out of it in a way quite similar in practice to the more usual three-step approach. (** more below) Dutra cited some empirical work by Siegman which could be important in establishing some empirical support for the open modes modeling versus the three-step, but it isn't my core goal here (much as it has its place, and fits in eventually).

I AM DISAPPOINTED THAT THE PAPER OF BLAIS is nothing like what I had heard from NSF sources. It is just one more standard attempted embodiment of standard Deutsch QC. There was a 2010 Nature paper by about 20 authors (Latta? of Stanford... and Yamamoto...) giving a very important update. But at the end of the day, I don't see how better modeling of CQED (cavity or circuit) would solve the deep problems of decoherence or disentanglement. PERHAPS my level one ideas might help with quantum nondemolition measurement modeling... but on the whole...

I see no way to "break codes" or do much else, until and unless level 2 is materially accomplished. And, given the New Order and its limitations, I don't see that happening soon. Koch's minions wouldn't let me fund the key experimentalists any more, and that's it. Finis.

As I type this, the Republicans have captured 7 Senate seats. Before Citizens United and the DOE-ification of NSF Engineering, I might have viewed the news differently. Now... well, I was planning to retire February 15, and that looks ever more likely. Will keep trying to look for constructive paths... and pay more attention to the spiritual side of life.

==========

** Re CQED test for stochastic path versus Feynman path: there is an obvious and simple prediction. If an excited atom is in a cavity, where the walls are Kerr cells rather than mirrors, I predict an increase in stimulated emission BEFORE the new "vacuum field" equilibrium is reached in the cavity -- and even before the Kerr cells become transparent! HOWEVER: the time of anticipation equals the time of motion at the speed of light from atom to cavity wall; that's very small compared to today's Kerr cells. Complex circuits can multiply this... but in the end, it can all be done more easily in electronics than in photonics today, unless one uses totally different components. Level two experiments would seem easier... and as a theorist... I will not be doing much with CQED for now. Too much else higher priority.
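To put a number on "very small" (simple arithmetic, with an illustrative geometry I am choosing here): for an atom-to-wall distance L, the anticipation time is

\[ t \;=\; \frac{L}{c} \;\approx\; \frac{3\ \mathrm{cm}}{3 \times 10^{10}\ \mathrm{cm/s}} \;=\; 100\ \mathrm{ps} \]

for a 3 cm cavity, versus switching times of nanoseconds or slower for ordinary electro-optic Kerr cells; hence the need for complex circuits to multiply the effect, or totally different components.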
