At http://www.werbos.com/physics.htm, I have posted a basic explanation of new experiments which in my view show that we need to adjust physics, to treat time as "just another dimension," more fully than we do in the patchwork of models and heuristics used in physics today. Today I have had some new insight on the key question: what is the minimal possible change to today's physics needed in order to accommodate this kind of time-symmetry and to accommodate rigorous probability theory?
Back when I went to graduate school at Harvard, everyone I met said that "Electricity and magnetism and their interactions are a done deal. There is nothing really fundamental to learn from them. We have a complete, precise and rigorous theory, Quantum Electrodynamics (QED), which explains everything." Oh how I wish I had known then what I know now! "Applied QED" is the basis of the electronics and photonics and wireless industries, and more; you could even say that QED is the most completely tested theory of physics ever in history. But for many years, it was my job at the National Science Foundation to create panels of the leaders in applied QED (for electronics, photonics, etc.), and to lead discussions to evaluate what we know and what research is needed. We really don't know everything. There are actually several different VARIATIONS of QED (different theories of how the universe works!) which are crucial to practical empirical results. There are lots of new experiments which need to be performed to get it straight which version is right.
In my view, none of the versions well-known today is right, and we need new experiments to really nail down the true story. All that high-powered theory about nuclear and gravity stuff can only be nailed down precisely AFTER we at least get QED straightened out.
A crucial experiment was performed just this past year - the continuous triphoton experiment, a remarkably simple experiment; the figure describing it has been posted in several of my papers at arxiv.org from this past year. Perhaps I should scan and post the two key figures of experimental results given to me in April of this year, and discuss the technicalities of the work. Or perhaps I should say nothing until the experimenters themselves decide how to break the news. Whatever.
As of today, I am interested in a simple alternative version of QED, which I will call MQED0, defined as minimally as possible (with some choices left open). The "M" in "MQED" stands for "Markov"; this is a class of theory which extends and explains the lumpier "MRF" models I describe in my recent papers at arxiv.org and elsewhere.
MQED0 is an instance of a type of theory which may be called "stochastic realism."
In stochastic realism, there are many possible scenarios or paths X for the entire history of the space-time continuum. (So far, this is like the "Feynman path" approach.) The laws of the universe are specified by specifying the function Pr(X), the probability of a scenario actually happening.
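The idea above can be caricatured in a few lines of code. In this toy sketch (my own illustration, not any model from the papers), a "scenario" X is an entire three-step binary history, and the theory is nothing more than an assignment of a weight to each whole history; the weight function here is purely hypothetical, standing in for whatever Pr(X) a real theory would dictate:

```python
import random

# Toy "stochastic realism": each scenario X is an ENTIRE history,
# here caricatured as a 3-step binary string.  The "laws" are just
# an unnormalized weight w(X); the probability law is Pr(X) = w(X)/Z.

scenarios = [a + b + c for a in "01" for b in "01" for c in "01"]

def weight(x):
    # Hypothetical weight favoring histories with fewer transitions;
    # a real theory would supply this function from physics.
    transitions = sum(1 for i in range(len(x) - 1) if x[i] != x[i + 1])
    return 2.0 ** (-transitions)

Z = sum(weight(x) for x in scenarios)          # normalization over ALL histories
prob = {x: weight(x) / Z for x in scenarios}   # a genuine probability law

# One whole history "actually happens", drawn from Pr(X):
sample = random.choices(scenarios, weights=[prob[x] for x in scenarios])[0]
```

The point of the caricature is only that probabilities attach to whole histories at once, not to time-slices chained forward in time.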
(My simple MRF models were also based on scenarios X, but for a fundamental theory of quantum electrodynamics, we want to consider the underlying scenarios.)
For MQED0, a scenario consists of two pieces of information: (1) the wave function PSI(t) or the density matrix rho(t) across all times t, for the electromagnetic field A and for particle fields psi;
(2) a "list" of the "discrete events" -- the location of each event in space-time, and the attributes of the event. As with ordinary QED, we treat time in a special way, but we treat it so that things like Lorentz transforms don't change the results. (I prefer more transparently relativistic versions, but for MQED0 we are looking for the minimal change to today's canonical QED, still the form most common in electronics and photonics.) At times between events, we assume that rho(t) or PSI(t) unrolls in a deterministic way, exactly following Maxwell's Laws (for A) and the usual Dirac dynamics (for psi, small psi, the field of the electron or other charged particles), linear dynamics with no interaction term.
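This "deterministic flow between discrete events" structure can be illustrated with a toy analogue (my own construction, not MQED0 itself): a single qubit stands in for the state of the universe, free evolution between events is exactly unitary and interaction-free, and the event list holds (time, operator) pairs applied at the listed moments:

```python
import numpy as np

# Toy piecewise-deterministic evolution: between listed events the
# density matrix follows the free (linear, no-interaction) dynamics
# exactly; at each event a discrete operation is applied.

H0 = np.array([[1.0, 0.0], [0.0, -1.0]])      # free Hamiltonian (hbar = 1)
sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])  # stand-in event operator

def evolve_free(rho, dt):
    """Deterministic interaction-free evolution: rho -> U rho U-dagger."""
    w, V = np.linalg.eigh(H0)
    U = V @ np.diag(np.exp(-1j * w * dt)) @ V.conj().T
    return U @ rho @ U.conj().T

def run_scenario(rho, events, t_final):
    """Stitch free-evolution segments together around discrete events."""
    t = 0.0
    for t_ev, op in sorted(events, key=lambda e: e[0]):
        rho = evolve_free(rho, t_ev - t)      # flow deterministically up to the event
        rho = op @ rho @ op.conj().T          # the event acts on the state
        t = t_ev
    return evolve_free(rho, t_final - t)      # flow on to the end of the scenario

rho0 = np.array([[1.0, 0.0], [0.0, 0.0]], dtype=complex)   # start in |0><0|
rho_T = run_scenario(rho0, [(0.7, sigma_x)], t_final=2.0)
```

In the real theory the state is a field configuration and the events are the photon/electron creation and absorption events listed below; the toy only shows the bookkeeping.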
To formalize MQED0, we do things in two steps -- first we formulate the usual time-forwards version, and then we symmetrize.
For the time-forwards version, the attributes of an event at any location (x,t) in space-time consist of an event TYPE and a vector of TRANSFER. In ordinary QED, the events described by the interaction Hamiltonian density (e psi-bar gamma A psi) get decomposed into just a few basic possibilities like "create photon, absorb electron, re-emit electron." There is also a vector defining how much momentum and energy (and also angular momentum) get transferred, on net, to the photon field. A full specification of time-forwards MQED0 (MQED0-sub-+) consists of a specification of what an event DOES to the wave function PSI or density matrix rho of the universe, and of the probability of that event actually happening as a function of the attributes and the incoming wave function.
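For reference, the interaction Hamiltonian density mentioned in passing is the standard QED one; written out with indices (standard notation, not specific to MQED0):

```latex
\mathcal{H}_I(x) \;=\; e\,\bar{\psi}(x)\,\gamma^{\mu}\,A_{\mu}(x)\,\psi(x)
```

The event types of MQED0 are, in effect, the elementary terms one gets on expanding this density in creation and annihilation pieces.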
The symmetrized version would simply double the number of possible events, by allowing the same events to occur in reverse time, and would require specification of "p*" rather than a probability of outgoing states as a function of incoming ones. This "p*" would simply be the endogenous time-symmetric probability, as in my earlier, simpler MRF and CMRF models.
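In the spirit of the earlier MRF models, the symmetrized law might be written schematically as follows (my own notation, assuming a Markov-random-field-style global normalization rather than a chain of forward conditional probabilities; the arguments of p* are only suggestive):

```latex
\Pr(X) \;=\; \frac{1}{Z}\,\prod_{k \,\in\, \mathrm{events}(X)} p^{*}\!\bigl(a_k,\; \Psi \text{ near } (x_k, t_k)\bigr),
\qquad
Z \;=\; \sum_{X'} \prod_{k} p^{*}(\cdot)
```

The key structural feature is that each factor depends only on the scenario locally, while the normalization Z couples the whole history, exactly as in a Markov random field.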
And that's basically it.
No, the work is not all done yet. It will never ALL be done, for this or any other variation of QED.
The description above leaves room for several variations, which need to be tested for mathematical consistency and fit to experiment. Ultimately, ONLY experiment can say which of the possible variations is true. It is a strange and silly exercise to insist on choosing based on pure reason alone, prior to experiment.
The most obvious variation is to use density matrices rho(t) rather than wave functions, to assume that photon creation etc. do to the density matrix what we normally expect them to, and to assume an event probability like (HI)**k, where k is probably 2, though we can try to work out the math more generally to be sure. That is already enough to nail down a basic variant, though the momentum and energy transfer are also important, as in ordinary varieties of QED.
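One natural way to write out the k = 2 case, reading "(HI)**2" as the squared matrix element of the interaction Hamiltonian between the incoming and outgoing states (this reading is my assumption, to be checked against the general-k math):

```latex
\Pr(\text{event with attributes } a)
\;\propto\;
\bigl|\,\langle \Psi_{\mathrm{out}}(a) \,|\, H_I \,|\, \Psi_{\mathrm{in}} \rangle\,\bigr|^{2}
```

This is the familiar golden-rule-like square, which is one reason k = 2 is the leading candidate.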
What is special about this variant of QED over all previous ones is that it naturally fits the kinds of choices of scenarios crucial to explaining the Bell's Theorem and triphoton experiments, in a way which fits probability theory and underlying locality. (Locality is expressed mainly by our choice of Maxwell's Laws for A -- the laws which govern the P map of a classical field, under my extended P mapping described in a paper of mine at arxiv.org; these are the same in ordinary QED and here.)
The ability to fit experiment is a crucial advance -- and beyond that, this still represents ordinary QED reasonably well.
There are still some interesting related questions, like: (1) is there a nonzero probability of primary "vacuum events"? Different versions of QED today already disagree on that. In MQED0, there is formally the same choice, to include vacuum events in the set of event types or not, as part of choosing a specification of the theory. Both in MQED0 and in other types of QED, I wish the discussion so far had been more explicit about that important choice and how to nail it down. An important topic, for another day. Assumptions about the characteristics of the vacuum of space also come into play, with or without the possibility of vacuum events, in MQED0 as in today's QED.
We CAN jettison the macroscopic "collapse of the wave function," as described in my recent papers; but, as in the simpler MRF models, we do need to insert macroscopic assumptions about the sources of forwards-time free energy that we use, as a practical matter, in most experiments.
But not all.
A major next step after that is simply to show how MQED0 predicts ordinary atomic spectra (stable equilibrium energy levels).
In trying to answer the question which this blog entry starts from, I earlier considered a different kind of model for MQED0, in which a scenario X would consist of a graph (a Feynman diagram, or a Feynman diagram just for charged particles) plus linear propagators used to calculate p* at each node of the graph. But the Feynman diagrams I saw at Harvard did not have lines for 2-photon or 3-photon propagation! To add those would be a bit of a mess. We know that n-photon lines are predicted by the usual HI mathematics, and are essential to even a crude understanding of what we see with lasers and in other parts of quantum optics. It is conceivable that a different kind of model, more graph-oriented at least for the fermions, would work as well or better... but for now, MQED0 seems like the right place to start from, and maybe to end with, until such time as we are ready to model a deeper level of reality in which the electron is not a perfect point particle. (That time may or may not come for us.)
MQED0 is an instance of a large family of dynamical systems worthy of mathematical study. But for this instance, as with any other form of QED, we will actually need to do regularization and renormalization, at some point. Probably not for predicting atomic spectra, to meet the basic laugh test, but for the sake of proving various things beyond that.
All for now. If I should die randomly tomorrow (always a consideration as we age), I do hope others will do justice to this extremely important next step in science. Or even if I don't!
And, oh yes... I have considered all the old obvious things, as described in part in the various papers. For example, propagators and P dynamics take care of the interference effects actually observed, from the old slit experiments to the time-delayed quantum eraser. A neat thing about this framework is that one CAN use propagators or master equations for media, not just the vacuum, as in ordinary uses of Maxwell's Laws or quantum optics a la Carmichael -- an effective path for approximation.