Yesterday I gave a talk at SPIE (www.spie.org) proposing a new experiment, probably to be performed soon, where standard quantum mechanics does seem to predict a way to do faster-than-light (FTL) communication. This was my second talk on this subject; about a month ago, at a conference in Princeton for leaders in quantum optics and communication, I ran across a team of experimenters who can do the experiment, using a new low-cost technology to produce entangled photons.
The talk last month went very well, and it went well enough yesterday -- but, thanks to feedback, I see how to explain some key points more clearly. (My SPIE paper and two related recent papers are easy to find at http://arxiv.org, searching on Werbos.)
The new experiment is basically a straightforward enhancement of the famous "Bell's Theorem" experiments, which helped give rise to all the huge new efforts in quantum communication, quantum computing, quantum imaging and so on. My paper begins by explaining the classic review article by Clauser and Shimony, which described the first decade of Bell's Theorem experiments, predictions and analysis. But it does more than just explain all that. When people say that "quantum mechanics correctly predicted those experiments," where did those predictions come from? They come from standard assumptions made by Clauser and Shimony -- but that paper did not include all the algebra! My paper used more elegant mathematics to show how their assumptions about quantum mechanics led to their famous prediction formula, R2/R0 = (1/2)cos^2(theta_a - theta_b), in just a few pages.
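For concreteness, that two-photon prediction is easy to check numerically. The sketch below is my own illustration (not code from the paper), assuming the usual polarization-entangled state (|xx> + |yy>)/sqrt(2) and ideal projective polarizers:

```python
# Minimal sketch: recover R2/R0 = (1/2) cos^2(theta_a - theta_b)
# from the projection postulate, for the entangled state (|xx> + |yy>)/sqrt(2).
import numpy as np

def r2_over_r0(theta_a, theta_b):
    """Joint passage probability for polarizers at angles theta_a, theta_b (radians)."""
    # Polarizer transmission axes as unit vectors in the (x, y) polarization basis.
    pa = np.array([np.cos(theta_a), np.sin(theta_a)])
    pb = np.array([np.cos(theta_b), np.sin(theta_b)])
    # The state (|xx> + |yy>)/sqrt(2) written as a 2x2 amplitude matrix.
    psi = np.eye(2) / np.sqrt(2)
    # Amplitude for both photons to pass: <a| psi |b>; probability is its square.
    amp = pa @ psi @ pb
    return amp ** 2

# Agrees with the closed form (1/2) cos^2(theta_a - theta_b):
print(r2_over_r0(0.3, 1.1), 0.5 * np.cos(0.3 - 1.1) ** 2)
```

The inner product collapses to cos(theta_a - theta_b)/sqrt(2), so squaring gives exactly the Clauser-Shimony formula.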
The most important assumption was that polarizers produce an effect called "collapse of the wave function" or "operator projection." Using that exact same assumption, I calculate the predicted THREE-PHOTON counting rate for a certain set of polarizer angles. This assumption, from traditional quantum mechanics, predicts that the counting rate will be different depending on which polarizer/counter the light reaches first. My paper also mentions three different models, based on simple probability theory, which also make correct predictions for the Bell's Theorem experiments -- but a different prediction for the triphoton experiment.
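To make the triphoton baseline concrete, here is a hedged toy calculation of mine, assuming a hypothetical GHZ-type state (|xxx> + |yyy>)/sqrt(2) and naive simultaneous projection onto all three polarizer axes. This simple version, by construction, cannot depend on which polarizer is reached first; the order-sensitive superoperator treatment of real polarizers is what my paper works out, and it is not reproduced here:

```python
# Toy sketch: triple-coincidence rate for a hypothetical GHZ-type state
# (|xxx> + |yyy>)/sqrt(2) under naive simultaneous projection.
# NOT the paper's superoperator calculation, which tracks polarizer order.
import numpy as np

def r3_over_r0(theta_a, theta_b, theta_c):
    """Probability that all three photons pass polarizers at the given angles (radians)."""
    ca, cb, cc = np.cos([theta_a, theta_b, theta_c])
    sa, sb, sc = np.sin([theta_a, theta_b, theta_c])
    # Amplitude <abc|psi> = (cos a cos b cos c + sin a sin b sin c)/sqrt(2).
    amp = (ca * cb * cc + sa * sb * sc) / np.sqrt(2)
    return amp ** 2

# All polarizers aligned with x: half the triples pass.
print(r3_over_r0(0.0, 0.0, 0.0))
```

The symmetry of this amplitude under permuting the three angles is exactly why a naive projection picture is blind to ordering -- so any ordering dependence must come from a more careful model of what each polarizer does to the state.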
Thus: when the experiment is done, there are only three possibilities. Maybe it will agree with the version of quantum mechanics which Clauser and Shimony used. Maybe it will agree with my new MRF models (especially MRF3). Maybe it will disagree with both. All three possibilities would have huge implications.
If Clauser's way of computing the quantum mechanical prediction does not work here, does that invalidate quantum mechanics AS SUCH? No. Neither did Bell's Theorem experiments invalidate classical field theory.
Clauser's prediction of the Bell experiments relied on "the collapse of the wave function," the model of the polarizer as a projection operator (or, really, a superoperator, which is different -- as explained in my paper).
You can still believe in the Schrodinger equation in Fock space without believing in the collapse of the wave function. Likewise, the Bell's Theorem papers use the powerful loaded word "causality" for a specific assumption about statistics, coming from a kind of untutored common sense, which certainly cannot be derived as a consequence of classical field theory (PDE).
HOWEVER... if the methods which Clauser used break down, what can we do to actually predict the crucial function here, R3/R0(theta_a, theta_b, theta_c, p) (where p is one of the six possibilities for which polarizer/counter pair is reached first)? The only alternatives NOW on the table are the three MRF models I have given (and the modified nonlinear superoperator I propose in my paper as an alternative to the usual one). For now, if the old way fails, my four alternatives are the only game in town, and I do hope they work. (By the way, Clauser is a great guy, and I hope I don't make it sound as if I identify him with the calculations he used in this one paper in the past.)
Of course, many people would be most excited if R3/R0 DOES depend on p. My first concern is to help make sure we have DATA on the whole function R3/R0, for the type of entangled photon source I discuss in the paper. Then we can say more about what predicts it.
If R3/R0 doesn't depend on p, what then? No FTL. But maybe some backwards-time communication of information, new possibilities for quantum computing and other such things. Maybe even more than FTL, ONCE we understand enough to correctly model such things.
At the conference, I heard a talk from Leuenberger, who has ANOTHER way to generate multiple entanglement. (So far, everyone still says there are only three groups -- one in Maryland, which has the new method, one in Austria, and one in China, which gets lots of money in Sichuan province, where people are directly aware of major national security applications.) In essence, he wants to use a big lattice of wires of light and cavities, and has a new way. He recommends that I read a book by Joos, for a different way to model the kinds of polarizers which are a crucial part of "spintronics" and this kind of massive quantum computing, which may indeed be enough to break the codes now in use. I do not know how solid the empirical data is yet on the ways of modeling those more complicated systems. A talk today from U. Nanjing discussed work they do there on this kind of optical neural network, using refractoriness as a way to add tunable weights and get universal computing; however, they still think they are stuck with linear quantum computing. If the triphoton experiment works, we are not.
All for now -- but lots more to say.