Thursday, August 8, 2013

Replace Schrodinger with Boltzmann -- the next stage in understanding quantum theory

I have no plans as yet to submit these new insights for publication, since they
require some heavy prerequisites, some of which are part of a long queue of things
I need to try to get out there. I will review a few of the prerequisites, but not with the complete detail, background and citation which people generally insist on.

Earlier this year, one of the world's top mathematical physicists drew my attention to the "Haag's Theorem" problem. Searching the web, I found a nice tutorial:

John Earman and Doreen Fraser, "Haag's Theorem and Its Implications for the Foundations of Quantum Field Theory," Erkenntnis, Vol. 64, No. 3 (2006), 305-344, DOI: 10.1007/s10670-005-5814-y

Quantum theory is all about mathematical groups (in the more rigorous versions), but
it's important to know what a group of operators is actually operating ON. The same abstract groups can lead to different predictions, depending on what the group is assumed to operate ON (the "representation"). This is a fundamental problem. Without a better understanding of what the group operates ON, key predictions like scattering probabilities are mathematically undefined.

This is of special interest to me, as I consider the implications of the physics of time:

http://arxiv.org/abs/0801.1234 (a paper published in the International Journal of Theoretical Physics)
(That also explains why we use density matrices, not wave functions.. for solid empirical reasons!)

It turns out that there exists a mapping, commonly called the "P" mapping, from classical field states S to "density matrices" rho over the usual Fock-Hilbert space of quantum field theory (QFT). Classical field states S are just pairs of functions phi(x) and pi(x) over three-dimensional space, governed by Hamilton's classical field equations. It turns out that for all S, the classical energy of S equals Tr(Hn rho(S)), where Hn is the usual normal-form Hamiltonian of canonical QFT.
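
To make that energy property concrete, here is a minimal single-mode sketch in Python -- a toy check of my own, with made-up numbers, using a coherent state as the single-mode stand-in for rho(S) and a-dagger*a as the normal-form Hamiltonian (normal ordering is what drops the zero-point term):

    import numpy as np

    N = 40                       # Fock-space truncation (assumed large enough here)
    alpha = 1.5 + 0.5j           # classical amplitude of one field mode (made up)

    # annihilation operator a on the truncated Fock space
    a = np.diag(np.sqrt(np.arange(1, N)), k=1)
    Hn = a.conj().T @ a          # normal-form Hamiltonian: a-dagger a, no zero point

    # coherent state |alpha> = exp(-|alpha|^2/2) sum_n (alpha^n / sqrt(n!)) |n>
    c = np.zeros(N, dtype=complex)
    c[0] = 1.0
    for k in range(1, N):
        c[k] = c[k-1] * alpha / np.sqrt(k)
    psi = np.exp(-abs(alpha)**2 / 2) * c
    rho = np.outer(psi, psi.conj())    # rho(S): rank one for a single sharp S

    print(np.trace(Hn @ rho).real)     # ~ 2.5
    print(abs(alpha)**2)               # classical energy |alpha|^2 = 2.5

For a distribution over many S, the same P-mapping idea gives a genuinely mixed rho, a weighted integral of such coherent-state projectors.
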
This leads me to wonder: what are the implications for scattering if the elementary states are taken to be just the states S? How does that change things? Luda and I have a much longer paper in draft which suggests some new specific Lagrangians (beyond anything I have posted) and a lot of other details... but not for now. (Well.. one detail. The rhos from the P mapping are all bosonic, but one can derive fermionic systems as bound states of bosonic fields. This was once considered forbidden by God a priori, but substantial work in recent decades has provided many examples of "bosonization." For example, see the many citations in Makhankov, Rybakov and Sanyuk.)

For now -- that's just a motivator for the general question: how can we describe what we know in a more general way, beyond today's 57 assorted varieties of QFT, while confronting the representation problem concretely?

So here is one way to go.

Instead of starting from the Schrodinger equation (or any one of the 57 varieties), let's start from the more general, more basic assumptions: (1) that we CAN formulate QFT based on density matrices rho; and (2) that "energy and momentum are conserved." More precisely, let's assume that Hn, Pk and a few other operators O have the property that Tr(O rho) does not change with time, for any matrix rho. And then -- OK -- one more specialized assumption, still much more general than the usual Schrodinger equation. If we view the matrix (an infinite two-tensor) rho as a vector in a higher-order space, like "Fock-Hilbert squared," we can think about a gain operator A which operates on rho.
We can assume dynamics where rho-dot = A rho. That includes the Schrodinger equation, the Liouville equation and other possibilities as special cases. A could in principle be any linear operator (an infinite four-tensor) over objects like rho.
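
For readers who want the "matrix as vector" picture spelled out, here is a minimal finite-dimensional sketch -- small random matrices standing in for operators over Fock-Hilbert space; the vectorization identity, not the physics, is the point. For the Schrodinger/von Neumann special case, rho-dot = -i[H, rho], the four-tensor A is -i(H kron I - I kron H-transpose) acting on the flattened rho:

    import numpy as np

    d = 4
    rng = np.random.default_rng(0)
    M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    H = (M + M.conj().T) / 2          # a random Hermitian stand-in for Hn
    I = np.eye(d)

    # row-major flattening: vec(X Y Z) = (X kron Z^T) vec(Y), so
    # rho-dot = -i[H, rho] becomes rho-dot = A vec(rho) with
    A = -1j * (np.kron(H, I) - np.kron(I, H.T))

    # sanity check that A really reproduces -i[H, rho]
    R = rng.normal(size=(d, d))
    rho = R @ R.T
    rho = rho / np.trace(rho)
    lhs = (A @ rho.flatten()).reshape(d, d)
    rhs = -1j * (H @ rho - rho @ H)
    print(np.allclose(lhs, rhs))      # True
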

In that representation, the conserved operators O all correspond to "vectors" o in two-tensor space which have the property that oA = 0; they are left eigenvectors of A with eigenvalue zero. Equilibrium "states" (ENSEMBLES of states), by contrast, have density matrices rho such that A rho = 0; they are the corresponding right eigenvectors.
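
Continuing the same toy sketch (mine, not the field-theoretic case): the dual vector o of a Hermitian operator O, defined so that Tr(O rho) = o . vec(rho), is just the conjugated flattening of O. Conservation of O is then exactly the left-null-vector condition, and any rho commuting with H gives a right null vector:

    import numpy as np
    from scipy.linalg import expm

    d = 4
    rng = np.random.default_rng(0)
    M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    H = (M + M.conj().T) / 2
    I = np.eye(d)
    A = -1j * (np.kron(H, I) - np.kron(I, H.T))   # same A as in the sketch above

    # left null vector: the dual o of O = H, since Tr(H rho) = o . vec(rho)
    o = H.conj().flatten()
    print(np.allclose(o @ A, 0))                  # True: energy is conserved

    # right null vector: any rho commuting with H, e.g. a Gibbs-like matrix
    rho_eq = expm(-H)
    rho_eq = rho_eq / np.trace(rho_eq)
    print(np.allclose(A @ rho_eq.flatten(), 0))   # True: an equilibrium ensemble
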

Now here comes an important trick. Conventional quantum statistical mechanics often tries to solve for a universal equilibrium rho, a "grand canonical ensemble." BUT INSTEAD let us
try to solve for an equilibrium PROBABILITY distribution, an equilibrium FUNCTION over the
possible UNDERLYING STATES: Pr(rho) = (1/Z) Tr(B rho), where B is an operator and Z
is a normalization constant inserted for convenience. (Z must be chosen so that the probabilities
add up to one, for whatever the underlying states might be.) Note that B, like Hn and the O in general,
is DUAL to rho.

And then, the claim is that virtually any B of the form f(Hn, Pk, O...), with finite nonzero Z, yields a valid equilibrium probability distribution, if we understand this as follows. Let f be any algebraic function which can be represented as a polynomial or as a Taylor series (like exp(-kx)); let B be the operator defined by the corresponding normal products of the conserved operators; and restrict attention to the case where the resulting operator function is itself a conserved operator. Let b be B expressed as a two-tensor. Since bA = 0, the ensemble Pr(rho) = (1/Z) Tr(B rho) is also an equilibrium.
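
In the same toy setting, one can check this directly for the Boltzmann-type choice f(x) = exp(-beta*x), with a made-up beta. B = exp(-beta*H) is itself conserved, its two-tensor b satisfies bA = 0, and Tr(B rho) then stays exactly constant along rho-dot = A rho -- even under a crude Euler integration, since each step adds dt*(bA).rho = 0:

    import numpy as np
    from scipy.linalg import expm

    d = 4
    rng = np.random.default_rng(1)
    M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    H = (M + M.conj().T) / 2
    I = np.eye(d)
    A = -1j * (np.kron(H, I) - np.kron(I, H.T))

    beta = 0.7                         # made-up inverse temperature
    B = expm(-beta * H)                # B = f(Hn) with f(x) = exp(-beta x)
    b = B.conj().flatten()             # B expressed as a two-tensor dual vector
    print(np.allclose(b @ A, 0))       # True: B is itself a conserved operator

    # evolve an arbitrary rho and watch the ensemble weight Tr(B rho) sit still
    R = rng.normal(size=(d, d))
    rho = R @ R.T
    rho = rho / np.trace(rho)
    v = rho.flatten().astype(complex)
    before = b @ v
    for _ in range(1000):
        v = v + 1e-3 * (A @ v)         # crude Euler step, for illustration only
    print(np.allclose(before, b @ v))  # True: Pr(rho) = (1/Z) Tr(B rho) is an equilibrium
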

The Boltzmann distribution is a special case of this.

One caution: imagine a simple universe, defined over a periodic lattice or grid or torus --
a rectangular finite universe where, when you get to some coordinate x = x0, you "bend back" to x = 0.
It is easy to see that any mix Pr(energy, momentum, ...) is allowable, so long as the energy level is
classically consistent with the momentum. Boltzmann's law is just one possibility; the system is underdetermined. We tend to see a Boltzmann distribution a lot in real objects because of the
interaction with the external world beyond the object, beyond the x0 box.

Now... from the general Boltzmann condition here... we can easily ask what happens when
energy or momentum is ever more narrowly focused on a particular set of numerical values for Hn,
Pk and whatever others. What happens is that B tends to project ANY choice of allowable rho states
into an eigenspace of Hn -- a joint eigenspace of Hn and the other operators O we choose here.
(And yes, we can choose.)
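
To see the focusing claim in miniature, take one hypothetical choice of f of my own: f(x) = exp(-beta*(x - E0)^2), which weights states ever more narrowly around energy E0 as beta grows. On a toy Hamiltonian with a two-fold degenerate level, B converges to the projector onto the WHOLE eigenspace; nothing in B picks out a preferred eigenvector inside it:

    import numpy as np
    from scipy.linalg import expm

    # toy Hamiltonian with a two-fold degenerate level at E0 = 1 (made-up spectrum)
    H = np.diag([0.0, 1.0, 1.0, 2.0])
    E0 = 1.0
    D = H - E0 * np.eye(4)

    for beta in [1.0, 10.0, 100.0]:
        B = expm(-beta * (D @ D))      # B = f(Hn), f(x) = exp(-beta (x - E0)^2)
        print(beta, np.round(np.diag(B), 4))
    # tends to diag(0, 1, 1, 0): the projector onto the degenerate eigenspace
    # of Hn at E0, not onto any particular eigenvector within that eigenspace
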

IN ESSENCE: THE EIGENSPACES ARE REAL, BUT THE EIGENVECTORS NEED NOT BE.

Whatever the allowable underlying states (rho) may be, they must lie within these eigenspaces, but the underlying states need not be individual eigenvectors or rank-one density matrices. We have the freedom to make different assumptions about what the underlying states are. FOR SOME EXPERIMENTS, the choice will have no empirical implications. But for others, it will. Just as the Bell's Theorem experiments and theorems were crucial to appreciating some earlier theoretical work by von Neumann.. perhaps we need that kind of empirical work here, to probe and test what the choices are for the "representation."

I have seen an experiment by Yanhua Shih, which he called his "Popper" experiment, which might
possibly be relevant here. But I have seen other systems under design which may be more central to these issues.

Again, I view the choice of elementary states as the P-mapping of classical states S as especially
interesting and exciting.

There is some connection here to the tradeoff between "pure" versus "coherent" states
in quantum optics. In a way, I am proposing that the underlying states of nature may be "coherent" rather than "pure" -- but we need experiments probing Haag's Theorem to be able to resolve this.

Best of luck,

     Paul

P.S. From the reasoning above, you can also see that the equilibrium spectra predicted for classical states S would fall within those eigenspaces; thus their spectral predictions would be identical to those of canonical QFT, or at least a subset of them. From Rajaraman, however, we know that the
energy levels predicted by Feynman path versions of QFT are different. I interpret this as the use
of the raw H, not in normal form, in those Feynman path calculations. Regarding the difference between canonical QFT and Feynman path QFT -- I view the empirical work of Bezryadin as evidence that "the instantons are not there" and that canonical wins over Feynman path on empirical grounds.** In his book The Quantum Theory of Fields, Weinberg gives a brief review
of how most of the elementary particle people shifted from canonical to Feynman path years ago -- based on mathematical convenience rather than empirical evidence. "Applied QED," the massive enterprise which underpins the electronics and photonics industries, relies much more heavily on
canonical QED, approximations to canonical QED, and cavity or circuit QED (cQED), which
is basically halfway between normal canonical QED and the more radical ideas in the URL above.

(** OK: more details on this. See Belkin, Brenner, Aref, Ku and Bezryadin, "Little-Parks...",
Applied Physics Letters 98, 242504 (2011); arXiv:0805.2118v4 (2009); and
http://www.npl.co.uk/electromagnetics/electrical-quantum-standards/research/quantum-phase-slip. It does need to be nailed down -- are the QPS or instantons really there? A nice clean flat-out comparison of canonical versus Feynman path does seem to be available here... though yes, it does need some cleaning up.)

Also -- a lot of scattering problems in the real world, as in circuit analysis, can be analyzed by applying the Boltzmann function (or convolving multiple probability distributions) for the statistical equilibrium flow of events. No Schrodinger equation needed; only eigenSPACES. But the choice of representation (underlying states) probably does affect circuit predictions in some important cases -- cases which probe the limits of what kind of technology is possible.
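
A trivial illustration of the "convolve the distributions" point (entirely a toy, with made-up numbers): when two independent stages each contribute a random quantity -- a delay, an energy -- to an event, the distribution of the total is the convolution of the stage distributions, and no wave equation enters anywhere:

    import numpy as np

    p1 = np.array([0.2, 0.5, 0.3])    # stage 1: distribution over values 0, 1, 2
    p2 = np.array([0.6, 0.4])         # stage 2: distribution over values 0, 1

    p_total = np.convolve(p1, p2)     # distribution of the sum, over values 0..3
    print(p_total, p_total.sum())     # [0.12 0.38 0.38 0.12] 1.0
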

For the P-mapping case, many of us have derived the "master equations" for rho implied by the classical Hamiltonian dynamics; those dynamics imply a four-tensor A which is different from the usual Schrodinger equation, but which of course still conserves energy and momentum and such.
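
A hedged sketch of where such an A comes from -- my reconstruction of the logic, not the derivation in the draft paper. Let P(S, t) be a probability density over classical states S, evolving by the classical Liouville equation; the mixed rho it induces through the P mapping then inherits linear dynamics:

    \partial_t P(S,t) = \{H_{\mathrm{cl}}, P\}, \qquad
    \rho(t) = \int \mathcal{D}S \; P(S,t)\,\rho(S)
    \quad\Longrightarrow\quad
    \dot\rho(t) = \int \mathcal{D}S \; \{H_{\mathrm{cl}}, P\}\,\rho(S) \;=\; A\,\rho(t),

with A linear because every step in the chain is linear in P. (Whether A is well-defined directly on rho, rather than on P, is exactly the kind of representation question at issue here.)
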

=========
=======

Additional Comment:

Some might propose: "Why don't we propose that the pure states -- the eigenstates of
Hn, Pk and the other conserved operators -- are the underlying states?" Problem: the other operators include rotation operators, and so we do not get a single well-defined set of elementary states. But if we define the elementary states as coherent states -- rho(S) by the P mapping -- we get a well-defined, invariant set of elementary states. ALL of the relevant eigenSPACES are still in play... and are valid computational tools, but the theory itself is invariant and well-defined.


Where does the new framework actually change things, other than the obvious issues about time we have discussed before? For electrons and ions, not much; the kind of "2**N" picture described by Datta still applies, because the eigenspaces themselves are limited. (I view this as something like strong basins of attraction in a nonlinear dynamical system.) But for light and phonons, it may well be a different story. After all, the "Popper" experiment is in quantum optics, and the interaction of light and surface phonon polaritons (and the work published by Bondarev) is at the core of some of the crucial new work. With very low energy levels, the quantization of emitters and absorbers of light and phonons trumps the rest, de facto; but the merging of sources and sinks allows the coherence effects (let alone entanglement) which grossly change the fundamental rules. (I see nuclear connections as well, but am not so sure it is socially proper to mention them here, even in an obscure blog.)

================
================

So maybe I need to clarify what the new framework is saying here a bit.

First, the new framework does not require us to decide whether "pure states" (single photon)
or "coherent states" are the underlying states of light. In fact, the fundamental reality here is that we have a formula to compute probabilities of the ENTIRE STATE, THE TOTAL CONFIGURATION
of "a periodic lattice" (or of an object or of the universe itself).  That's basically what the modern
Boltzmann operator distribution is all about; my only change here is to emphasize that it is
a dual operator, not a density matrix in itself. I do that because, to calculate things like scattering,
we need to ask about the probability of "states" -- states like "scattering states," entire histories.

I mentioned an important, clear little paper by Supriyo Datta, which I left at work. (I wish I
could just type in the citation and URL right now.) He was discussing what it takes to explain and model a very basic phenomenon in transistors called "Coulomb blockade." Datta is the
great leader today of electronic calculations by use of "Nonequilibrium Green's Functions,"
NEGF, one of the two main methods used for such challenges. But in this paper, he described a situation where, he said, NEGF does not work... where we need an extension. In NEGF,
we may think of a simple transistor as something like an atom, with N energy levels, connected to
contacts on either side. If we calculate Pr(i), where i = 1 to N, we solve the problem, in essence.
(Of course, it's a little more complicated... current flowing in and out in a kind of dynamic equilibrium.) BUT... when correlations or entanglements between electrons play a crucial role,
as in Coulomb blockade, we need to represent the problem differently. Instead of describing the state of an electron by "i" (and tracking the movement of "an" electron), we describe the state
of the transistor by saying whether each level is occupied or not; with N levels, we get 2**N
possible states of the transistor as a whole.

The usual quantum Boltzmann distribution is a distribution over the whole 2**N set of system states.
That much I am proposing that we retain, when we describe the underlying laws of physics WITHOUT using the Schrodinger equation as the foundation. This should not be confused with the older Gibbs distribution, which, like simple NEGF, does not track the correlations. (As I type this... I realize that some folks might have REDEFINED NEGF to include extensions like what Datta wanted to call by a new name... but that's not central to my point here.)
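
A minimal sketch of the 2**N bookkeeping -- my own toy numbers, not Datta's paper: three levels and a made-up charging energy U. Enumerate every occupation pattern of the N levels and give the WHOLE pattern a Boltzmann weight. The interaction term U*n*(n-1)/2 depends jointly on all the occupations, which is exactly the correlation information a single-electron Pr(i) cannot carry:

    import itertools
    import numpy as np

    eps = np.array([0.0, 0.5, 1.0])   # single-particle level energies (made up)
    U = 2.0                           # charging energy per electron pair (made up)
    kT = 0.5

    states = list(itertools.product([0, 1], repeat=len(eps)))   # all 2**N patterns
    def energy(occ):
        n = sum(occ)
        return np.dot(occ, eps) + U * n * (n - 1) / 2

    w = np.array([np.exp(-energy(s) / kT) for s in states])
    p = w / w.sum()                   # Boltzmann distribution over whole patterns
    for s, pr in zip(states, p):
        print(s, round(pr, 3))
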

This distinction is especially important for light... where single "photons" are in some sense inseparable from the emitter and absorber which they connect. For rigor in analyzing various experiments, we need to remember to focus on probabilities for overall states... states which
in some sense sweep across time and space, like the legendary "scattering states" of Heisenberg and Dyson... and rho(S) can take care of making that well-defined.

===========
=========

August 12:

The framework above (P-Boltzmann) is sufficient and complete as a basis for analyzing experiments such as the Trew/Kim/Kong system attached to a quantum separator circuit inside a large
periodic lattice, or variations of the Bondarev or spiral nanorectenna work...

But what of ordinary VCSELs as in cavity QED, the Holt experiment, or the Popper experiment of Yanhua Shih? For them, I need to broaden it into more of an open-system thermodynamics.

But with many practical goals, it is not necessary to consider all cases. Old QFT argued that
"all physics is embedded in scattering experiments."  OK, how about all physics being embedded
into large "passive systems" (described by P-Boltzmann), a source of voltage and ground, and
an infinite surrounding "space" which simply absorbs outgoing light and has a random
probability distribution of what it absorbs? Note that a laser, like a VCSEL, is in itself a
passive system... but needs to be plugged in. In this framework, including the laser as part of the passive system is tractable and reasonable enough.

That should do it. Am gearing up to APPLY this, perhaps to the Holt experiment in more detail..
seeing if I can find his thesis online, for example...
ETSP for the polarizer, more or less as in IJTP: incoming angle theta-minus,
outgoing angle theta-plus, internal angle theta-a... probability
(1/Z) * (sum over theta-a of f(theta-minus - theta-a) * f(theta-plus - theta-a)),
where f(x) = Kronecker delta(x) + a*cos**2(x),
and where absorption is also a possibility, with a probability scale of a.
The constant "a" is an indication of the degree of perfection of the polarizer.
The Kronecker delta might be a little fuzzy. This may be good enough...
but data are needed.
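
For concreteness, a sketch of that recipe in Python -- my own reading of the formula above; the grid over theta-a, the narrow Gaussian standing in for the fuzzy delta, and the omission of the absorption channel are all my choices, not settled parts of the model:

    import numpy as np

    def f(x, a, eps=0.05):
        # f(x) = (slightly fuzzy) Kronecker delta(x) + a*cos**2(x);
        # a narrow Gaussian of width eps stands in for the fuzzy delta
        return np.exp(-(x / eps)**2) + a * np.cos(x)**2

    def pr_out(theta_minus, a=0.1, n=180):
        # weight for each outgoing theta_plus: (1/Z) * sum over the internal
        # angle theta_a of f(theta_minus - theta_a) * f(theta_plus - theta_a)
        grid = np.linspace(0.0, np.pi, n, endpoint=False)
        w = np.array([np.sum(f(theta_minus - grid, a) * f(tp - grid, a))
                      for tp in grid])
        return grid, w / w.sum()       # Z chosen so the outcomes sum to one

    grid, p = pr_out(theta_minus=0.3)
    print(grid[np.argmax(p)])          # most likely theta_plus sits near theta_minus
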

=====

It is interesting to consider a different case... like life in an ocean...
where the outputs are chemicals and the inputs either chemicals or sunlight.
In fact, even P-Boltzmann itself (the "closed system" case) is a nonequilibrium formalism,
giving probability distributions for systems which do not settle down
(e.g. as in "classical chaos"). My recent excursions into currents drawing energy from convection from sunlight... well, that is another way to go with such analysis.

That reminds me... for most electronics/photonics uses of P-Boltzmann, the other conserved
quantities of real importance would be things like the numbers of atoms of different types...
as in chemistry. This does incorporate all the stuff Prigogine talked about that was useful,
yea unto chemical potentials. But not his version of entropy production, which even he admitted did not work.
