In physics, as in AI, a combination of ego, curiosity, and social pressure encourages communities to pretend they know more than they really do. I think back to the 1960s, when they showed me Minsky's "Mars robot" and told me of his promise to NASA to have human-level intelligence in a robot within 20 years or so. It's easy to make such promises when you don't even have a specific plan to execute, which would require time estimates for the stages. (Contrast that with my "hundred year" plan to get to the mouse level, presented at WCCI 2014 and posted on arXiv in the computer science area.)
And so... we really need to straighten out the "realm of QED" (electric fields, magnetism, and the "point particles" which serve as sources and sinks of those fields) before we can get the other stuff right. In my view. Just as my WCCI paper offered four steps to the mouse level of intelligence, I now envision a three-step possibility to straighten out, yes, the whole range covered by today's standard model of physics: (1) more systematic application of time-symmetric physics to model quantum optics, and anything else which can be described by models like those used in quantum optics, and to build the new devices which the new understanding allows; (2) MQED, Markov QED, which extends that to the entire realm of things described by QED; (3) models based on topological solitons, with the special kinds of Higgs terms which predict topological solitons, which let us "see deeper than 3 femtometers and faster than 3 femtoseconds" and explain the existence of elementary particles without any need for regularization or renormalization in defining the models, based on a small-looking but far-reaching tweak to electroweak theory.
My recent papers on analog quantum computing, polarizers and such have all been focused on level one. Only recently have I understood the need and opportunity to do level two. Level three is more interesting to me (I too have some curiosity), and it is useful like the sight of a distant mountain peak in showing us the way forward.
I understand the political/social human need to work on level one for now, just as the politics make it even more compelling to focus on "vector intelligence" (even lower than "convolutional networks"!) for the very nasty world of today's computer technology and would-be abusers. However, as the shape of MQED becomes ever clearer in my mind, I am more drawn to that... and if the politics are hopeless anyway, why not push on to what **I** can understand that I did not before?
The community ALREADY uses many versions of QED, often using fuzziness and sleight of hand to avoid or even repress the embarrassing ambiguities and barnacles which occur even at the level of quantum optics, before electrons and nuclei and atoms are also modeled as internal to the system.
Most prominent are KQED (Copenhagen QED, canonical QED as in texts like the OLD Mandl, the new Greiner, and the start of Weinberg), FQED ("Feynman path" QED), and CQED (circuit or cavity QED), all of which have variations, such as the many-worlds version of KQED which inspired David Deutsch to develop the foundations of modern digital quantum computing. MQED would be yet another version, fully integrating the implications of the triphoton experiment which I proposed, whose outcome turned out to be too horrifying for the establishment to permit publication.
(Yes, people can work on the complex social dimensions of that, but I have other things to do.)
MQED would be similar in a way to the "Quantum Trajectory Simulation" (QTS) of Howard Carmichael, described in volume II of his classic text on statistical quantum optics. It would model its realm as a kind of hybrid system, similar in spirit to the hybrid systems so familiar in control theory,
EXCEPT that we are still talking about a Markov random field across space-time, as in my elementary models published on Bell's Theorem experiments. The hybrid description is a combination of stochastic EVENTS at points in space-time and deterministic, continuous PDE flows between those events. Predictions are based on the probability of SCENARIOS across space-time, where each event is a node and simple propagators convey information from one event to another. For practical experiments involving solid-state objects in the system, we can use embedded propagators (like Schwinger's Nonequilibrium Green Functions, NEGF, or simplified versions of them) to simplify the calculations.
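Just to make the scenario bookkeeping concrete, here is a toy sketch of a Markov random field over two event nodes: enumerate the joint scenarios, weight each one by a product of local factors at the events and a factor carried by the propagator link between them, and normalize. All of the names and numerical weights below are hypothetical illustrations, not physics and not any worked-out piece of MQED.

```python
# Toy sketch of "probability of scenarios" bookkeeping, in the spirit of a
# Markov random field over space-time events.  Everything here is a
# hypothetical illustration: the factor values are made up, not physics.
from itertools import product

# Each event node takes one of a few discrete outcomes (e.g. which path was
# taken, which detector fired).  A scenario is one joint assignment.
event_outcomes = {
    "emission": ["path_A", "path_B"],
    "detection": ["D1", "D2"],
}

def local_factor(event, outcome):
    """Weight attached to a single event outcome (hypothetical numbers)."""
    table = {("emission", "path_A"): 0.5, ("emission", "path_B"): 0.5,
             ("detection", "D1"): 1.0, ("detection", "D2"): 1.0}
    return table[(event, outcome)]

def link_factor(emission_outcome, detection_outcome):
    """Weight carried by the propagator link between two events (hypothetical)."""
    table = {("path_A", "D1"): 0.9, ("path_A", "D2"): 0.1,
             ("path_B", "D1"): 0.2, ("path_B", "D2"): 0.8}
    return table[(emission_outcome, detection_outcome)]

# Unnormalized weight of each scenario = product of its local and link factors.
weights = {}
for e_out, d_out in product(event_outcomes["emission"], event_outcomes["detection"]):
    weights[(e_out, d_out)] = (local_factor("emission", e_out)
                               * local_factor("detection", d_out)
                               * link_factor(e_out, d_out))

z = sum(weights.values())                       # normalization
probabilities = {s: w / z for s, w in weights.items()}
print(probabilities)                            # probability of each scenario
```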
A week or two ago, as I started to think about how to formalize and write up MQED, I first remembered what I learned earlier about the electron when working on stage 3.
To keep my cognitive map simple here, I even began to give names to the three stages.
As I wrote my paper on photons going through polarizers, in IJBC, I felt a lot like Alice chasing a white rabbit and ending up in a weird wonderland, still three-dimensional but a lot closer to David Deutsch's weird world than I expected. So I have files at home labeled "white rabbit" (and "other rabbit"), chasing photons. The white rabbit is always checking his watch, a fitting activity for a carrier of time symmetry! But then, reaching the electron (or ion) is like... following the rabbit to a conversation with the Mad Hatter (the most credible expert I know on applied QED?) sitting next to a March Hare -- better called a hedgehog.
I was so happy to realize I had already been looking at a symbol of the hedgehog (a topological soliton of charge one) for years: the little moldable plastic figure of Sonic the Hedgehog on top of our coffee machine at home! For MQED the trick is not to get bogged down in all the details of three-stage modeling, but to exploit what we know about this fearsome little creature to come up with a workable model at the approximate level used in all forms of QED. The electron (and nucleus) as a perfect point particle -- good enough at the >3 femto level, even if not the full truth.
I was thinking... today the hedgehog, and later finally back to an audience with the scary Red Queen (nuclear force in all its awesome terrible potential, which there are many political reasons to be wary of).
But as I got a bit further... near the hedgehog..., in Alice... what about that cat? Whose cat is it anyway? And Tweedledee and Tweedledum? OK, interference, the Schrödinger cat, the two entangled photons... all must be accounted for. My Bell papers already dealt with the twins, but I knew that a more rational political strategy (even with social support) would have me next publish time-symmetric models of the delayed-choice quantum eraser experiment, Shih's Popper experiment, and the methods used by Zeilinger and by Shih to produce triple entanglement.
I started worrying more about the cat. The essential difference between MQED and the other versions of QED is the use of probabilities, rather than probability amplitudes, to structure everything. I had no problems with ordinary, local interference, as with a photon interfering with itself, because of the wave-like propagation between events. (Basically, Maxwell's Laws with boundary conditions yield a nice, familiar embedded propagator for light going through a double slit.) But what about GLOBAL interference, the kind of thing one gets only with global probability amplitudes for entire configurations?
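For the local case, here is a minimal numerical sketch of that familiar embedded propagator at work: ordinary scalar-wave, two-slit optics, with a free-space propagator from each slit to each screen point. This is textbook wave optics, not anything specific to MQED, and all the parameter values are arbitrary.

```python
# Minimal double-slit fringe calculation with a scalar wave propagator.
# Ordinary textbook optics, used here only to illustrate "local interference
# from wave-like propagation between events"; the numbers are arbitrary.
import numpy as np

wavelength = 500e-9            # 500 nm light
k = 2 * np.pi / wavelength
slit_sep = 50e-6               # 50 micron slit separation
screen_dist = 1.0              # 1 m from slits to screen

x = np.linspace(-0.02, 0.02, 2001)   # points on the screen

def propagator(x_screen, x_slit):
    """Free-space scalar propagator e^{ikr}/r from a slit to a screen point."""
    r = np.sqrt(screen_dist**2 + (x_screen - x_slit)**2)
    return np.exp(1j * k * r) / r

field = propagator(x, +slit_sep / 2) + propagator(x, -slit_sep / 2)
intensity = np.abs(field)**2          # fringes: local interference, no global
                                      # configuration amplitudes needed
print(intensity[:5])
```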
Suddenly I worried -- could it be that MQED and CQED already disagree, at the core, beyond just the collapse of the wave function (an unnecessary barnacle which could be removed from CQED as an appendix can be removed from a human body), already in quantum optics?
And so, a week or two ago, I decided: "Time to stop chasing hedgehogs and full MQED. Time to get back to stage one after all..."
And so, I have today completed a first-pass review of the delayed-choice quantum eraser, Zeilinger's paper on how he constructed GHZ states, and the important work by Scully leading up to a lot of this. (Really, just Scully and Druhl and Scully and Zubairy.) Not so bad. Both the eraser and Scully's earlier picture of two interfering two-level atoms are easily handled with the MQED approach, using the obvious traditional embedded propagators. The one new thing is that beamsplitters can be modeled as simple classical objects. Even the GHZ paper uses polarizers and beamsplitters (and one lambda/2 plate, which is also unitary, like a beamsplitter and unlike a polarizer); to get three entangled photons, the key is to use a nonlinear crystal to generate TWO pairs of entangled photons, and just filter from there. In another world, I would write this all up immediately...
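The unitary-versus-polarizer distinction can be checked with standard Jones matrices and mode operators -- textbook optics again, nothing specific to the GHZ construction itself; the matrices below are the usual conventions, and the angle is an arbitrary example.

```python
# Standard check that a half-wave plate and a 50/50 beamsplitter are unitary,
# while an ideal polarizer is a (non-unitary) projector.
import numpy as np

def half_wave_plate(theta):
    """Jones matrix of a half-wave plate with fast axis at angle theta."""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return np.array([[c, s], [s, -c]], dtype=complex)

# Symmetric 50/50 beamsplitter acting on two spatial mode amplitudes.
beamsplitter = (1 / np.sqrt(2)) * np.array([[1, 1j], [1j, 1]], dtype=complex)

# Ideal linear polarizer transmitting horizontal polarization.
polarizer_H = np.array([[1, 0], [0, 0]], dtype=complex)

def is_unitary(m):
    return np.allclose(m.conj().T @ m, np.eye(m.shape[0]))

print(is_unitary(half_wave_plate(np.pi / 8)))   # True
print(is_unitary(beamsplitter))                 # True
print(is_unitary(polarizer_H))                  # False: it discards a mode,
                                                # which is why it pairs
                                                # naturally with a detector
```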
But as I am more on my own, gratifying my own curiosity without so much delay or leverage
of collaborations (sigh), I went back to asking: given that, what of the global versus local interference issue? (Footnote: no entanglement puzzle; the Bell stuff already took care of that.) Should I plan for an all-optical experiment to test THAT distinction?
My conclusion for now is that I need not try, because there is no inconsistency within the realm of quantum optics. All the interesting interference/entanglement experiments basically go back to a source like a nonlinear crystal or a single laser stimulating multiple atoms, and ultimately resolve into simple detectors at the end of the line. Integration over scenarios with propagators resolves itself into the same predictions as probability-amplitude calculations similarly resolved to ultimate detectors. The analysis is just like the old von Neumann regression of observers-within-observers familiar to folks in the foundations of quantum mechanics.
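One way to see the simplest instance of this (my own shorthand, not the actual MQED bookkeeping: K_p(d) is the propagator amplitude for path or scenario p ending at detector d):

```latex
P(d) \;=\; \Bigl|\sum_{p} K_p(d)\Bigr|^{2}
      \;=\; \sum_{p}\bigl|K_p(d)\bigr|^{2}
      \;+\; 2\sum_{p<p'} \mathrm{Re}\,\bigl[K_p(d)\,K_{p'}^{*}(d)\bigr],
```

a sum over single scenarios plus pairs of scenarios, which is bookkeeping that a probability-of-scenarios formulation can also carry once everything is anchored to ultimate detectors.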
And so... yes, it would ALSO be nice to write that up more completely (about five such items "TBD")... but now I finally feel ready to go back to that hedgehog. The hedgehog is a very tricky creature, but at the level of MQED that may actually simplify things. No BBO here.
All events involve a photon/light, already familiar. NEGF propagators for electrons are already rather familiar. (Do I need to reread Keldysh at some point? And even Supriyo Datta a fourth time?)
The challenge may be more one of proof -- of convincing at least me -- rather than formalizing MQED as such.
Perhaps Scully's NONINTERFERENCE of light from three-level atoms stimulated by a common laser may also be a step towards "the hedgehog," towards incorporating things beyond optics proper
into experiments which require that. (Endogenizing the atom.) But... there are lots of ways to approach the hedgehog, and maybe it's too early to say more.
===============================
Next morning:
I put the basic calculations/analyses for the two-level atom interference and the delayed-choice quantum eraser in a notebook, X2015. Hard to do equations in blog posts -- but these were very simple. As with the Bell experiments, one can model at varying levels of detail, but the simplest fits the more complex and shows what is going on. Basically, the laser source, Sl, a complex number whose phase is the only uncertainty that matters, goes through propagators to the two atoms or the two BBO sites. (The paper by Kim et al., PRL, year 2000, doesn't mention that the slits in figure 2 are NOT interference producers; they are just a way to make sure that the laser light only gets to two regions on the nonlinear crystal, A and B.) The obvious A and B terms just get added together.
The scenarios X (in the notation of my Bell MRF papers) are defined by detectors, and nothing weird happens. The "weirdness" of the quantum eraser is really just a matter of picking out which scenarios match which... no real paradox, and an easy prediction in both cases.
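In the notation above (Sl for the laser amplitude; K_A(x) and K_B(x) are my own shorthand for the propagators from regions A and B to a detector point x, not symbols from the Kim et al. paper), the reason the unknown laser phase does no harm is simply

```latex
P(x) \;\propto\; \bigl|K_A(x)\,S_l + K_B(x)\,S_l\bigr|^{2}
      \;=\; |S_l|^{2}\,\bigl|K_A(x) + K_B(x)\bigr|^{2},
```

so the common phase of Sl cancels, while the relative phase of K_A and K_B carries the fringe/no-fringe structure that the eraser analysis then sorts by scenario.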
As for the result on global interference... "OK, it's not all detectors." But polarizers can be modeled equivalently (for circuit/outcome purposes) as calcite polarizers, which are like birefringent (unitary) operators followed by detectors on one side. Other objects with complicated master equations do need to be time-symmetrized, but that's all. Time symmetrization is certainly not inconsistent with the Schrödinger equation of many-worlds quantum mechanics. Thus nothing in quantum optics proper allows one to distinguish between a probabilistic theory like MQED and that kind of quantum theory proper.
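The calcite picture can be written out explicitly (standard textbook decomposition; the notation here is mine):

```latex
U_{\text{calcite}}\,\bigl(\alpha\,|H\rangle + \beta\,|V\rangle\bigr)\otimes|\text{in}\rangle
  \;=\; \alpha\,|H\rangle\otimes|\text{path 1}\rangle \;+\; \beta\,|V\rangle\otimes|\text{path 2}\rangle ,
```

followed by a detector on path 2. For the statistics of the surviving beam this reproduces the usual projector onto |H>, with transmission probability |alpha|^2, which is why the polarizer can be treated as "unitary plus detector" for circuit/outcome purposes.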
Moving on to the electron...
I have a paper on the F transform. A Next Task would simply be to show that it yields essentially the same predictions for atomic spectra as ordinary QED, with a bit of precision about how to handle the self-energy for the Lamb shift -- all envisioned in my arXiv paper on extended Glauber-Sudarshan.
It basically resolves into now-familiar psi delta (propagator) psi terms. Given what happened to my previous, really Herculean efforts to explain much simpler things -- my own curiosity is satisfied,
and the desire to write it up for hypothetical others (none of whom got through the quantum optics stage) is reduced. Even my X2015 notebook entries are compromised by the state of my eyes, which gives me less visual feedback for maintaining legibility.
Will I write up more of why the F transform gives the right spectra here? Maybe. Or will I jump ahead, building on the unpublished collaborative work of Schwinger and Pons, posted on my gdrive, to think about how to reduce feature size and create nuclear effects of a somewhat more pervasive and dangerous sort? Maybe not. Not a safe area for home experimentation, after all. Not that the mind is without risks either.
There are a few other smaller things to catch up with first in any case.
Friday, July 31, 2015