that it looks as if it will be secret forever for this planet... just as chimpanzees may never develop a high technology, large scale societies of humans may also have limits (and badly designed web pages or voicemails which pretend to be AI may not solve the problem either).
But... here is some attempt to spill just a few beans.
Two weeks ago or so, I did a blog post on "FTL" possibilities, referring to a more establishment paper I
posted at arxiv.org, in the quant-ph section.
That paper addressed a well-established real world problem in mainstream electronics and photonics.
The problem is -- how can we simulate chips or other circuits, so that we can start to design them before we actually have to build them?
More and more industries have learned to design things in computer simulation before they pay the high cost of actually building things that don't quite work as intended, which then need to be scrapped, eating up lots of time and money, and never really fully exploring the space of possible designs. People have pretty much caught on to what this requires, and they know that it is becoming ever more essential as designs become more complicated.
Most industries know how to do that kind of simulation. The system is a collection of objects in three dimensions, and they can build a reflection of the "same thing" in some kind of three-dimensional array
inside the computer. But there is a real problem here, as electronics and photonics try to continue Moore's Law and push towards more and more devices on a chip, with smaller and smaller devices. At the nanoscale, you can't ignore quantum mechanics. But standard quantum mechanics says that the universe is actually 3N-dimensional at the quantum level, where N is the number of particles, larger than the number of electrons in the entire system. How do you do a correct simulation of a system governed by equations in more than 3 million dimensions? How do you fit that into your computer?
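Just to spell out the scale of the problem (a rough illustration of mine, not a number taken from the paper): the wavefunction of N particles is a function on a 3N-dimensional configuration space,

\Psi = \Psi(\mathbf{r}_1, \mathbf{r}_2, \dots, \mathbf{r}_N, t) ,

so a naive grid with M points per coordinate needs M^{3N} complex numbers. Even a toy case of N = 10 particles on a coarse grid of M = 10 points per axis already calls for 10^{30} numbers, hopelessly beyond any three-dimensional array you could ever fit in a computer.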
On the advice of a leading physicist, I decided to start putting out "little pieces" of a more general system,
one by one, so that people might understand. The arxiv paper asked: how far down can we go
if we have to use a computer model which is "lumped parameter" -- just a finite number of bits,
and a really simple digital type model like what devotees of finite state machines can understand?
That did OK. And the FTL posting here showed that the finite-state approach can do important new things that no one has ever done before.
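Just to make "lumped parameter, finite number of bits" concrete, here is a toy sketch of mine (only the flavor, not the model in the arxiv paper): a digital model of this kind is nothing but a finite bit-state plus an update rule iterated in discrete time, so every trajectory must eventually cycle.

def step(state: int) -> int:
    # Arbitrary 8-bit update rule standing in for one clock tick of a lumped model:
    # rotate the bits left by one position, then flip bits 0 and 2.
    return (((state << 1) | (state >> 7)) & 0xFF) ^ 0b00000101

state = 0b00010011
for t in range(10):
    print(t, format(state, "08b"))   # the whole "universe" is eight bits
    state = step(state)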
But it can go only so far. At some point, we need to exploit more powerful computer simulation methods, which represent continuous fields, like partial differential equation (PDE) simulators. (PDE simulations have
long been the mainstay of many other fields; I get ads all the time from COMSOL, which has such a diversity of users -- and mathematicians scrutinizing when the simulations are reliable and when not, like some friends in Memphis, where a famous guy named Erdos often hung out.) In my last posting, for example, I mentioned a classic new experiment by a guy named Zeilinger... which offered a nice clean digital picture... but the experimentalist warned that it only works when the timing is exactly right, because of the continuous variable involved in quantum interference.
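By contrast, here is an equally toy sketch (mine, with nothing to do with COMSOL internals or with the quantum case) of what a PDE simulator does: it samples a continuous field on a grid and marches it forward by finite differences.

import numpy as np

# Leapfrog finite-difference march for the 1-D wave equation u_tt = c^2 u_xx,
# the simplest example of "representing a continuous field" on a computer.
nx, c = 200, 1.0
dx = 1.0 / nx
dt = 0.5 * dx / c                           # Courant number 0.5 keeps the scheme stable
x = np.linspace(0.0, 1.0, nx)

u_prev = np.exp(-200.0 * (x - 0.5) ** 2)    # initial pulse
u = u_prev.copy()                           # crude zero-initial-velocity start

for _ in range(500):
    lap = np.roll(u, -1) - 2.0 * u + np.roll(u, 1)        # discrete u_xx, periodic ends
    u_next = 2.0 * u - u_prev + (c * dt / dx) ** 2 * lap
    u_prev, u = u, u_next

print("crude energy proxy:", float(np.sum(u ** 2)))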
So: how can we push on to cases where we really need PDE simulations to capture quantum effects?
For example, how could we build the kind of simulators that would let Livermore give exact predictions, in advance, of the energy levels in their experiments on laser fusion?
Well, I have some new results on that. Maybe I'll have a chance to write them up enough that someone else will know before I die of old age -- but maybe not. It's not as if I don't have other things I am responsible for worrying about, and things I have to deal with.
In the meantime... just FYI... here is the text (names removed for obvious reasons) of an email
I recently sent that world-class physicist I mentioned above, with a little more detail at the end about the context...
============================================
==================================================
First, I should apologize for being too glib about the P mapping in that draft paper I sent you earlier.
The truth is that my wife Luda and I did reinvent almost exactly the standard Glauber-Sudarshan P mapping in the 1990s.
More precisely, that 1990s work
contains the basic P mapping (reinvented, in only slightly different notation),
a dual "B-sub-W" mapping probably equivalent to the Q mapping, and a "reification" version which shares some features with Wigner-Weyl but does not really work out
due to subtle problems with operator norms. There are actually some useful new results there, in my view.
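Just to fix notation: by the standard P mapping I mean the usual diagonal coherent-state expansion of the density operator,

\rho = \int P(\alpha)\, |\alpha\rangle\langle\alpha| \, d^{2}\alpha ,

under which averages of normally ordered products of a and a-dagger reduce to ordinary integrals against P(alpha); the Q mapping, for comparison, is the smoothed dual, Q(\alpha) = \langle\alpha|\rho|\alpha\rangle / \pi.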
It was a great disappointment to me to learn soon afterwards, from Walls and Milburn, that it was just a reinvention.
That kind of thing can be painful, but we need to accept it… and perhaps I accepted it more energetically than I should have.
I learned as well that a complete correspondence with bosonic field theories required more than just that original version.
Or, to put it another way, to develop mappings from ANY Hamiltonian field theory with a first-order Lagrangian to bosonic quantum field theory, one needs to
be more explicit about "phi" and "pi." Thus in 2003, we published a kind of extended version of the P mapping, complete with a proof of the basic
operator average theorem, in the International Journal of Bifurcation and Chaos, also posted at:
Yes, that is not just the traditional form of the P mapping. I did briefly notice that folks like Agarwal claimed to be addressing general bosonic theories
in their papers on the P mapping, and so I overestimated what is already out there in the general literature. It does seem that
our 2003 results are a significant extension. The results really are quite straightforward -- for anyone who knows the algebra of creation and annihilation field operators.
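For concreteness about what being more explicit about "phi" and "pi" involves, recall the standard single-mode relations (a textbook reminder, not the new part):

\hat{\phi} = \tfrac{1}{\sqrt{2}}\,(a + a^{\dagger}), \qquad \hat{\pi} = \tfrac{i}{\sqrt{2}}\,(a^{\dagger} - a),

in units where hbar = m = omega = 1; a field theory just does this mode by mode, and the 2003 extension is about carrying the P-type mapping through when the Hamiltonian is written as a general functional of phi and pi.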
But this poses a bit of a challenge. I have gone a bit further this past month, to nail down the issue of equivalence in spectra
between classical PDE and bosonic field theories, but it really does require the 2003 paper as a prerequisite. Given the psychology and politics of these things...
well, I remember Bob O'Connell's suggestion about the New York Annals.
=====
This past month, the one book I reviewed in great detail here was Carmichael's Statistical Methods in Quantum Optics, volume 1.
I notice that he uses the term "super operator" for the dynamic operator in the master equation. (I had been using
some other term, and called it A… the operator from rho to rho dot, with rho considered as a "vector in (N*(N+1)/2)
dimensional space." ) For the case of classical Hamiltonian field theories, for the 2003 paper, we derived general master equations
quite different from the usual Schrodinger equation… but still conserving the same normal form Hamiltonian. Thus any function of the Hamiltonian (and of the momentum P operators)
would be a left eigenvector of A, EITHER for standard bosonic quantum field theory OR for classical statistics, even though the two sets of dynamics are different.
Neither is dissipative, unless modified to reflect some reservoir or boundary condition assumptions.
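To spell out the standard quantum half of that claim (the classical half is exactly what needs the 2003 master equations): with the usual dynamics A rho = -i[H, rho], for any reasonable function f,

\mathrm{Tr}\big(f(H)\, A\rho\big) \;=\; -i\,\mathrm{Tr}\big(f(H)\,[H,\rho]\big) \;=\; -i\,\mathrm{Tr}\big([f(H),H]\,\rho\big) \;=\; 0 ,

since f(H) commutes with H; so f(H) is a left eigenvector of A with eigenvalue zero, and the same goes for functions of the momentum operators when momentum is conserved.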
The beauty of this is that equilibrium ensembles computed classically give correct quantum mechanical spectra. Indeed, the operator
average theorem can be applied to the operators
delta(H - E0) and delta (P - P0), telling us that the usual energy eigenspace
projections (for zero momentum states) give us correct probes (duals or adjoints) for the density of classical states with the exact same energy.
Indeed, those functions of energy are all left-eigenvectors of A, representing stable equilibria.
Thus, for example, if the Skyrme model of the nucleon/nucleus turned out to be correct, as a bosonic field theory, then the classical PDE simulations of Manton and Sutcliffe would be
exact, valid canonical QFT calculations for the energy levels of those systems -- even for nuclei made up of bound solitons.
As you know, I think I can come up with bosonic models more likely to match the real data there, but the equivalence of spectra is general across all possible models of this type.
=====
All of this is pretty exciting to me, but the prospect of writing it up is overwhelming, since it requires
not only knowledge of operator field theory (which seems to be growing weaker by the year in high energy physics, due to the popularity
of Feynman path integral formulations), but also the 2003 paper as a prerequisite.
======
Still, I would guess that you are one of the people on earth who can fully understand all this…
and this week, I have two panels to set up and a keynote talk to give at an IEEE meeting in Baltimore
(on a different subject, space solar power, which is a whole lot easier).
Best regards,
================
==================
The context is that Luda and I wrote a draft, upgrade_v11, of a paper proposing a new PDE type model
(much more complete than the draft version I had when I wrote a chapter for the festschrift of Leon Chua last year, Adamatzky ed.) suitable for the kind of nuclear simulations I mentioned above, which, among other things, might well lead to new nuclear technologies (where "shake and bake" design would be too risky on planet earth). But to get it really right, a lot of math is needed... and our paper was very dense, with enough ideas for about ten papers. "Why not just publish a series of ten papers, with more detail on each idea?"
People who send proposals to NSF often get the same advice, when they cram so much into a proposal that
reviewers don't have enough detail to figure out any one of them. Paper one was easy
(see quant-ph and the previous blog). But paper number two, on equilibrium equivalence,
may already strain the bit rate of processing of our society. Maybe it's possible to get so far as step two
of ten. Maybe someone else out there will follow through on what step one already opens up.
Or maybe not.
Time to get back to bed...