Quantum Time and Energy 101
My response:
In fact… it has been a long story, and much has never been pulled together in one place, except perhaps for specialists. I have been lucky to be physically present at many steps of this history, so I have some duty to give that kind of overview myself. It is very hard not to give a lot of details and explanations, but I will try hard to simplify. I will focus on the core issues, not the huge body of spinoffs from philosophy to engineering, except as they feed back to the core. Just a little humor for mnemonic purposes – but do not underestimate how amusing the true story has been.
I. The Classical Era.
Classical v1. The Lorentz picture (circa 1900). All of objective reality consists of atoms, electrons, and the electric and magnetic fields, existing in three-dimensional space. To specify the state of reality, specify: (1) where in space each atom or electron is located, along with its velocity and its angular momentum; (2) a single real number V(x) at each point x in space, where V is just the voltage at that point; (3) THREE numbers (B1(x), B2(x), B3(x)) at each point in space, to specify the state of the magnetic field. Knowing the state of reality, and applying Newton’s Laws and Maxwell’s Laws, we can in principle predict the entire future history of the universe, starting from that known state. They thought.
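In symbols (my shorthand, not anything Lorentz actually wrote), the whole program amounts to saying the state of the universe at time t is a finite list of particle data plus two fields defined everywhere in ordinary space:

```latex
% Classical v1: the complete state of the universe at time t (schematic).
S(t) \;=\; \Big\{ \big(\mathbf{x}_i(t),\ \mathbf{v}_i(t),\ \mathbf{L}_i(t)\big)_{i=1,\dots,N},\ \
        V(\mathbf{x},t),\ \ \mathbf{B}(\mathbf{x},t)
        \ \text{for all}\ \mathbf{x}\in\mathbb{R}^3 \Big\}
```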
Classical v2. The Lagrange/Einstein picture (circa 1920). Get rid of the Greek-style point particles, and do it all as fields. The electron is just a vortex or wave or pattern or soliton in an additional force field psi(x), governed by Schrodinger’s original equation. Gravity does not obey action at a distance; it’s mediated by another field g(x), which is just a four-by-four matrix of numbers at each point x. Atoms and other stuff are just patterns in some other set of fields which I will call phi(x) right now. All of objective reality consists of continuous fields. If we define PHI as the set of numbers V, B, g and phi, then we can specify the state of objective reality simply by specifying PHI(x), the set of these numbers, at every point x. To predict the future state of the universe – we use a set of dynamical equations called the “Lagrange-Euler” equations. No more Newton-style action at a distance. As the stock market rose, so too did their hopes of filling in this program. (They hoped to learn just what numbers we need to specify phi, and to learn the complete Lagrange-Euler equations to cover all the fields.)
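For readers who want to see what the name refers to, here are the generic "Lagrange-Euler" (more often called Euler-Lagrange) field equations, written schematically for a single field; the hard part of the program was finding the right Lagrangian density L to plug into them for all the fields at once:

```latex
% Euler-Lagrange ("Lagrange-Euler") equations for a generic field \Phi(x),
% given a Lagrangian density \mathcal{L} depending on \Phi and its derivatives:
\partial_\mu \!\left( \frac{\partial \mathcal{L}}{\partial(\partial_\mu \Phi)} \right)
  \;-\; \frac{\partial \mathcal{L}}{\partial \Phi} \;=\; 0
```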
II. The Copenhagen Era
1. THE GREAT CRASH. Schrodinger’s equation works brilliantly to predict the colors of hydrogen, an atom with just one electron. But it fails completely to predict the colors of helium, an atom with two electrons. In the mid 20’s -- someone REINTERPRETS Schrodinger’s equation by solving for psi(x1,x2), where x1 represents the location of the first electron and x2 the location of the second electron. This makes no sense in the Lagrange-Einstein picture… but it works, creating a shock that physics has yet to fully recover from. (DeBroglie was sending me letters about it in the 1960’s.) Also, it’s a real problem even today for electrical engineers, trying to predict where a million electrons are likely to go, when quantum mechanics wants them to solve for a function psi of three million variables.
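To make the engineers' headache concrete, here is a rough back-of-the-envelope sketch (my own illustration, with a deliberately crude, made-up grid size) of how the storage needed for psi explodes with the number of electrons, compared with a classical field on the same grid:

```python
# Rough storage estimate for psi(x1, ..., xN) on a grid with G points per
# spatial axis: psi lives on a 3N-dimensional grid, so it needs G**(3*N)
# complex numbers, versus only G**3 values for a classical field.
# (Illustrative numbers only; G = 10 is absurdly coarse.)

G = 10           # grid points per axis (hypothetical)
BYTES = 16       # one complex double

for n_electrons in (1, 2, 3, 10):
    values = G ** (3 * n_electrons)
    print(f"{n_electrons} electron(s): {values:.3e} complex values "
          f"(~{values * BYTES:.3e} bytes)")
```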
2. THE REICH: Heisenberg appears. While Schrodinger, Einstein and DeBroglie all reel in shock, Heisenberg and followers point out that this fits what he has been saying all along. He proposes that the complex number psi be reinterpreted, not as a field, but as a kind of “wave function of information,” representing our KNOWLEDGE of the universe, and NOT the objective state of reality. Sic transit objective reality. Many German existentialists and yogins join the bandwagon.
2a. People often say that the “first quantum mechanics” was this new use of the Schrodinger equation to describe electrons (and protons and other such particles), in accord with Heisenberg’s recipe. The “second quantum mechanics,” or “quantum field theory” (QFT), was invented in the 1950’s, and extends quantum mechanics to account for “everything” – not only particles, but electricity and magnetism and the newly discovered nuclear forces. (Oops – what about gravity? Not for today.) But that’s not the whole truth. Heisenberg was writing about QFT from day one. In the 1950’s, people figured out how to actually make it work, more or less – to give well-defined predictions (probably well-defined) for the case of charged particles, electricity and magnetism. The four people were Julian Schwinger, Richard Feynman, Tomonaga and Dyson. (I was a student of Schwinger…)
2b. Heisenberg’s picture, aka “the Copenhagen picture,” still taught as dogma in many places today (especially in introductory courses):
2b.1. Get rid of those fuzzy fields, and go back to particles, at the foundation level.
A possible “configuration” X of the universe is defined simply by specifying the location and a few discrete state variables (like “spin up” and “spin down”) for all the particles in the universe. Let X be a possible configuration of the universe…
2b.2. But there is no real universe. There is only our mind, our consciousness. The rest is illusion. There is only a recipe for how to make predictions. It is a three-step recipe: (1) follow our encoding or setup rules to translate your knowledge about how you set up the experiment into psi(X(t-)), your knowledge about the time t- when your experiment starts; (2) use the NEW SCHRODINGER EQUATION, psi-dot = i H psi, to calculate psi(X(t)) for later times t, mapping your knowledge about time t- into knowledge about later times; (3) use our “observer formalism” or “measurement” rule to predict the PROBABILITIES of POSSIBLE outcomes of the experiment. The rule can be written as E(m) = (psi-dagger) M psi, where psi-dagger is the conjugate transpose of psi, E(m) represents the expected value of the quantity m which you measure at the end of the experiment, the wave function psi is interpreted as a kind of vector in an infinite-dimensional space, and M and H are matrices over that space. Anyway, you can see that it’s a mess. The recipe IS the theory; it’s all there is – or rather, all that isn’t. By the way, “psi-dot” simply refers to the derivative of psi with respect to time. To actually use this recipe, we have to add some kind of theory about what the matrices M and H are; a key achievement in the 1950’s was to find a matrix H which works for electricity and magnetism and charged particles.
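For concreteness, here is a toy numerical version of the three-step recipe for a two-state system. The matrices H and M below are made up purely for illustration (the real achievement was finding an H for electrons plus electromagnetism), and I follow the sign convention psi-dot = i H psi used above:

```python
# Toy numerical Copenhagen recipe for a 2-state system.
import numpy as np
from scipy.linalg import expm

H = np.array([[1.0, 0.3], [0.3, -1.0]])   # hypothetical Hamiltonian matrix
M = np.array([[0.0, 1.0], [1.0,  0.0]])   # hypothetical measured quantity

# Step 1: encode what we know at the start time t- into psi.
psi0 = np.array([1.0, 0.0], dtype=complex)

# Step 2: evolve psi with psi-dot = i H psi, i.e. psi(t) = exp(i H t) psi(0);
# either choice of sign gives unitary evolution for Hermitian H.
t = 2.0
psi_t = expm(1j * H * t) @ psi0

# Step 3: predicted expectation value E(m) = psi-dagger M psi
# (conjugate transpose, not plain transpose).
E_m = np.real(np.conj(psi_t) @ M @ psi_t)
print("E(m) =", E_m)
```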
2c. I later met Heisenberg’s boss by accident, on a metro train to the DC zoo, as we both went to visit some pretty hairy people. He said that Heisenberg didn’t really believe in that measurement rule, or in the three-dimensional universe it predicts, but such crutches are needed to get the attention (and funding) of the deluded souls who think that there is a three-dimensional universe in which to make measurements. In his view, the Copenhagen folks were just deluded popularizers. One could actually characterize my new stuff as a way to implement his true viewpoint (i.e. just getting rid of the observer formalism), but that’s not how I got there.
3. TRIUMPH OF THE NEW ORDER. Is it possible to make sense of this mess somehow? No it’s not, said Heisenberg; that’s the whole point.
Even the “first quantum mechanics” was a mess, but it was the only thing which worked in predicting the colors of helium, and many other things which followed.
Thus in 1927 there was the great Solvay Congress, as important to physics as the Continental Congress was to the US. Instead of a declaration of independence from Britain, it came up with a declaration of independence from that old classical idea of objective reality. Since this was the only recipe which worked… it became dominant across almost all of physics for a long time to come.
People have sometimes asked me: ”Does any of this make any difference? Isn’t this just the same old thing – if a bird sings in the forest, but no one hears it, did it really sing? People can look at it either way, big deal.” No, it’s not the same old thing. The idea that the bird was really there is not tenable, says Heisenberg. The notion of objective reality simply does not work, empirically. We have to give it up. We cannot explain why the recipe works, in terms of objective reality, because there is no objective reality there to explain it in. It cannot be reduced to such a three-dimensional way of thinking.
III. The Free French and Other Resistance Movements – Up to Von Neumann
Albert Einstein, Ayn Rand and VI Lenin all wrote passionate manifestoes objecting to the new order here, and calling for a return to the concept of objective reality.
Schrodinger was truly aghast at what had been done to his beautiful equation, and expressed concern for the damage that might be done to human sanity as a result of the new order. (One of his best students gave up physics, became a monk, and taught at Georgetown – where I have heard him give his lectures.) DeBroglie also led a center of resistance for a long time.
Einstein’s initial response had two parts:
1. The philosophy itself was objectionable. After centuries of making progress in trying to understand objective reality and how it works, why give up and drown in solipsism?
2. More positively – we COULD explain why the recipe works, and come up with a more realistic understanding, if we knew enough and tried harder… The “psi(X)” looks a lot like Pr(X), a probability distribution (see the formula below), and maybe this recipe could be explained as nothing but an emergent statistical outcome of a “Classical” type of theory.
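The resemblance they had in mind is essentially the Born rule, which ties the wave function to an ordinary probability distribution:

```latex
% Born rule: observed probabilities are the squared magnitude of psi,
\Pr(X) \;=\; |\psi(X)|^2
% which is what made it tempting to read psi as mere statistical bookkeeping
% for some underlying classical process.
```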
Thus for many decades, many top people worked on trying to find such a realistic explanation. De Broglie and Vigier, Bohm and Einstein, Norbert Wiener, Wigner and others all did important work, and achieved real insights, some useful in other areas. (Certainly engineers still use mathematics developed by Wiener and Wigner for this purpose. I tend to view Glauber’s P and Q mappings, used in quantum optics, as a kind of byproduct of Wigner’s work.) But it began to seem more and more like a hopeless quest for a Holy Grail, or like making war on a cloud.
Until Von Neumann. Many people (including me) still view Einstein’s friend and colleague, John Von Neumann, as the number one mathematician of the twentieth century – even though he did not live up to some folks‘ standards for purity and chastity and other symptoms of autism. He was also perhaps one of the saner public figures of that century. (Not that Feynman was chaste in any respect. Von Neumann did not go to THAT extreme.)
Von Neumann made two really essential contributions to understanding this stuff.
First, he analyzed the whole idea of an observer formalism in a more logical way than others had before him. He asked “obvious” questions like – who observes the observers? What if there is a chain of experiments within experiments? Do cats or birds qualify as observers? Do humans? His insights were quite important, and directly related to real experiments today, but in a quick overview I’ll have to skip that part.
Second, he used a crucial mental skill of mathematicians which the larger world really needs to understand better. Mathematicians use the term “reductio ad absurdum”… but I’ll try to give a feeling for it in simpler terms. It often happens, when you want to do something really hard, that your best chance comes from trying to really rigorously prove that it’s impossible, under the broadest possible assumptions covering everything people usually try – and then USE THE LOOPHOLES, the limits of the assumptions, to figure out how to actually do it!
(And if that seems to be hard, broaden the assumptions to cover the first set of new things.)
In a classic book (which I cite in some of my published papers), Von Neumann proved that it would be impossible to exactly reproduce all the predictions of (Copenhagen) quantum theory starting from any reasonably behaved realistic model. The key problem, he said, is with the usual form of the CAUSALITY assumption. That’s what we need to work on. It’s a shame he never really had a chance to do this.
IV. Princeton’s Revenge: Many Worlds Physics, The First Really Major Reformulation of Quantum Field Theory (QFT)
Shortly after the deaths of Einstein and Von Neumann, their colleagues at Princeton published seemingly easy, “already solved” ways to handle problems which had disturbed them right to the end. John Wheeler won wide acclaim for developing consistent Lagrange-Euler equations to combine gravity, charged particles, electricity and magnetism. (That’s part of Classical v2, back in Section I.) Hugh Everett III,
a graduate student working under Wheeler, developed a new formulation of QFT
which has become ever more popular through the years.
Everett’s idea was basically very simple. If you look back at section II.2b, the Copenhagen recipe, why not simply throw out the setup and observer formalisms,
and just keep the new Schrodinger equation: psi-dot = i H psi? Instead of trying to explain the WHOLE recipe as a kind of outcome of statistics, why not bite the bullet and declare that psi is a real, true field? Why not say that the universe we actually live in is the multidimensional space of possible vectors X, which Von Neumann called “configuration space”? And then derive just the observer part as a statistical outcome of that new theory of the dynamics of the universe. That derivation was the main part of his PhD thesis. His thesis became widely available in a book edited by DeWitt on Many Worlds Physics, from Princeton University Press.
Everett argued that we can definitely go back to the idea of objective reality – but only at a price. The price is that we have to accept the idea that the cosmos we live in is much larger than the small three-dimensional slice of it that we see every day.
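Here is a minimal numerical cartoon of the branching that Everett’s derivation formalizes (my own toy example, not his notation): a two-state system interacts with a two-state “observer,” and the single wave function in configuration space ends up carrying both outcomes at once.

```python
# Toy "branching" in configuration space: a qubit-like system plus a
# two-state apparatus. The joint state lives in a 4-dimensional space
# (the tensor product), not in ordinary 3D space.
import numpy as np

a, b = 0.6, 0.8                          # made-up amplitudes, |a|^2+|b|^2 = 1
system = np.array([a, b])                # a*|0> + b*|1>
apparatus_ready = np.array([1.0, 0.0])   # |ready>

# Joint state before the measurement interaction:
psi_before = np.kron(system, apparatus_ready)

# A CNOT-like interaction: the apparatus flips only if the system is |1>.
U = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)

psi_after = U @ psi_before
# psi_after = a*|0, saw 0> + b*|1, saw 1>: two "branches".
print(psi_after)                                       # [0.6 0.  0.  0.8]
print("branch weights:", psi_after[0]**2, psi_after[3]**2)
```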
For many years, most people assumed that the difference between Everett’s theory and the Copenhagen theory was just too small to measure. DeWitt showed that there is SOME difference – but if it can’t be measured, it’s really just a matter of interpretation. “If they both lead to the same predictions of nature, who cares?”
Many philosophers have questioned whether Everett really proved what he claimed to have proved, and tried to do better. In my view, none of them really proved much.
Years later, David Deutsch of Oxford showed how the way of thinking in the many-worlds model could actually be used as a way to develop new technology. Parallel or multicore processing lets us compute things that old-style serial computers could not do, plodding along one instruction at a time in a single thread of computation. Why not do still better by mobilizing large numbers of parallel universes within the larger cosmos, to perform a computation? Deutsch was the real father of the modern approach to quantum computing, which derived from his papers exploiting this approach.
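A minimal sketch of the parallelism intuition (my toy illustration, not Deutsch’s actual algorithm): putting each of n qubits through a Hadamard gate yields a single state vector carrying 2^n amplitudes, one for every classical input at once, so a subsequent unitary acts on all of them in one pass.

```python
# n qubits after a Hadamard on each: a uniform superposition over all
# 2**n basis states, i.e. 2**n amplitudes carried by one state vector.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # single-qubit Hadamard gate

def uniform_superposition(n):
    """State vector for |0...0> after applying H to each of n qubits."""
    state = np.array([1.0])                    # empty register
    for _ in range(n):
        state = np.kron(state, H @ np.array([1.0, 0.0]))
    return state

psi = uniform_superposition(3)
print(len(psi), "amplitudes, each =", psi[0])  # 8 amplitudes, each 1/sqrt(8)
```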
In another strand of work – some people re-examined the older ideas of DeBroglie and Bohm, and appreciated that they could not work without adding functions of more dimensions. That provided a backdoor way to reinvent the many-worlds approach. So far as I know, Everett’s version is the simplest and most viable, but there is recent work by Hiley which I have yet to study, in the same general category.
V. Bell’s Book and Candle: Back to an Experimental Approach
Many people viewed Von Neumann’s work as a proof that quantum mechanics never could be explained the way Einstein wanted to. But diehard realists pointed to a certain gap in the logic here. Von Neumann proved that a traditional classical model could never replicate ALL the predictions of quantum theory, but no one has ever tested ALL the predictions of quantum theory. They asked: can we come up with a SPECIFIC experiment, where we can prove that quantum mechanics predicts one thing but all traditional classical theories would have to predict something else?
This challenge was finally met by Clauser, Holt, Shimony and Horne (CHSH), who also performed the first experiments to follow up on this. In the spirit of modern physics, their theorem is usually called “Bell’s Theorem” and the experiments were commonly termed “Aspect” experiments, after Alain Aspect. In his seminal book, Speakable and Unspeakable in Quantum Mechanics, J.S. Bell cites the original papers of Clauser, Holt, Shimony and Horne. He provides some important new insights, but also puts his own spin on this subject. I was lucky enough to be at the same place as Holt while we were both getting our PhDs, and I saw the original papers long before I saw Bell’s book.
The CHSH theorem proper defines a class of experiments. If quantum mechanics correctly predicts these experiments, one can rule out ALL theories of the universe which are “local, causal, hidden variable theories.” Those are their words.
If you go back to the original source, their papers, you will see that that’s what’s actually proven, not any of the garbled versions that sometimes appear in popularized accounts. The theorem and the experiments only ruled out theories which have ALL THREE properties. Thus to understand what’s really going on here, it’s crucial to know what these three properties are.
By “hidden variable,” they basically just mean a theory which assumes SOME kind of objective reality. The Copenhagen theory doesn’t, so it’s automatically OK by this theorem. But you can’t have a theory of physics which fits these experiments
and talks about objective reality unless it violates one of the other properties – “locality” or “causality” or both.
By locality, they mean “no action at a distance” in ordinary three-dimensional space. The many-worlds people and their allies say that their theories are still allowed, because those theories do allow action at a distance. There are many, many papers on “nonlocality” as an alternative to Copenhagen.
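For the curious, here is a small numerical sketch (standard textbook angles, not the CHSH authors’ own notation) of the quantity their class of experiments measures: any local, causal hidden variable theory bounds the combination S by 2, while quantum mechanics predicts values up to 2 times the square root of 2.

```python
# CHSH combination S for the singlet state, where quantum mechanics
# predicts the correlation E(a, b) = -cos(a - b) for detector angles a, b.
import numpy as np

def E(a, b):
    return -np.cos(a - b)

# Standard angle choices that maximize the quantum prediction:
a, a2 = 0.0, np.pi / 2
b, b2 = np.pi / 4, 3 * np.pi / 4

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S), "vs classical bound 2, quantum bound", 2 * np.sqrt(2))
```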
Back in 1973, I pointed out that the orthodox formulation of “causality” can also be revisited. I was not aware of Von Neumann’s discussion of the same point, but in any case I took the point further.
Years ago, there was some reasonable debate about how these Bell’s Theorem experiments actually turned out. In some cases (like Holt’s original experiment), it’s not entirely clear to me that the results are consistent with Copenhagen. But there are certainly some modern set-ups, like the experiments of Yanhua Shih, where it is very clear that local, causal hidden variable theories ARE ruled out. It is good enough to have ONE decisive, replicable experiment to rule them out.
Therefore – to come up with a theory of physics which fits experiment, and is based on an idea of objective reality, the theory MUST violate locality or conventional time-forwards causality or both. Those are the only possibilities.
In particular, to resurrect the Einstein program, it is necessary to violate traditional ideas about time-forwards causality.
VI. Backwards-Time Physics: The Latest Chapter
The latest installment of this story, in 2008, is spelled out in my paper in the International Journal of Theoretical Physics, posted at http://arxiv.org/abs/0801.1234. There are several key points it makes and argues in detail:
1. The world of optics and electronics, including quantum computing, has already performed decisive experiments which rule out orthodox Copenhagen physics. It is not just a matter of interpretation.
2. In the many worlds theory, the new Schrodinger equation is basically symmetric with respect to time. We cannot correctly deduce an observer formalism which is grossly asymmetric with respect to time, if we start from assumptions which are time-symmetric! Thus we must logically give up Everett’s attempt to do this. The only way to explain the practical success of the observer formalism, in most cases, is to invoke BOUNDARY conditions – to invoke the forwards flow of free energy used in all of our experiments so far.
3. If we adopt the view that the laws of physics are completely symmetric with respect to time, except for these boundary conditions, it is easy to reconcile the CHSH experiment with the Einstein/Lagrange program. In fact, the highly precise experiments performed by Shih were actually designed based on ideas from Klyshko, who used a backwards-time approach to optics, fully accounting for entanglement effects in three spatial dimensions.
4. None of this proves or disproves the possibility of building a bidirectional power supply, as described in http://arxiv.org/abs/1007.0146. But if we do build a system which should not be able to generate electricity from ambient infrared heat radiation according to Copenhagen or time-forwards physics, then my new circuit, the quantum separator (QS), would provide a very graphic and decisive experiment to decide between classical time-forwards causality and backwards-time physics, as well as a new energy source. The logic in this paper makes it very clear how that experiment should be expected to come out, but a strong experiment would still be very helpful in making clear where we are today.
5. The paper did not really take sides on the issue of many worlds versus the Einstein picture. It mainly argues that either version of backwards time physics dominates over older versions of quantum theory.