This past week, and a month ago, I attended two conferences
(SPIE and Princeton) which represented the best current work anywhere, by
certain metrics, in the areas of quantum computing and secure communications. I
learned a lot about where things are really going now – both from the talks,
and from things which were said by people who attend “all the conferences.”
There is a lot of grumbling about funding now. That was not
true about five to ten years ago, when I went to bigger conferences on the
subject, and quantum computing was really unusual for the amount of money
pouring in, in the US, the EU, China, and other places. Here, people said “Since DARPA and NSF
cut back, a lot of us have been squeezed very badly, and have changed what we
are working on.”
But of course, there is still NSA and IARPA, and at least
one talk gave them great credit for their ongoing support – though others said
they have become a bit short-term and product-oriented, in a way which makes it hard
to achieve major breakthroughs. But funding in China is a different story.
In talks people said – “It comes down to how many qubits you
have. For really solid secure communications, you only need about 8 or 10
qubits. But for quantum computing, to factor large numbers and break codes, you
need more like 100. People have done 8 to 10, but with 100 not in sight there
is not so much concern.” Thus quantum communications has become a practical,
industrial-strength area, but computing is another matter.
8 qubits or 8 entangled photons? That needs pinning
down. They are certainly related,
but yes, a more precise story is possible. Sorry.
People have persisted with the story that only three groups
in the world have ever produced more than TWO entangled photons – the group of
Yanhua Shih at Maryland, the group of Anton Zeilinger in Vienna, and the group
of a former student of Zeilinger who now has far more funding than the other two, now that he has moved to Sichuan province in China and helped them develop an
edge in space communications, among other things. Shih and Zeilinger have
gotten up to three entangled photons (“GHZ states”), but a guy who visited the
Sichuan lab reports that they are up to 8.
But 100 (and code breaking) is not so far out of reach at this point.
Shih has announced a new way to produce entangled photons – but there is an issue about
funding to move it to the next stage. Gerald Gilbert of MITRE said we need to pay attention to the
work of Pfister of UVA, who has a way to generate 1000 qubits, but as yet can
address only about 60 of them, apparently with photonic lattices. Michael
Leuenberger, a former NSF grantee, described his new approach to generate
hundreds of entangled photons in lattices of cavities connected by “optical
wires.”
Someone else has been paying attention. A major group at
Nanjing University has had experimental results with optical lattices -- cavities
connected by a lattice of waveguides; this was reported by their
collaborator, Xiaodong Li, of CUNY Brooklyn. They can adjust or adapt the refractive coefficients in the
coupling of these lattices. As with adapting weights in a neural network, they say they have proven this gives
them universal computing power. But still they refer to this as a kind of
“linear quantum computing,” echoing Dowling’s talk a few years ago about linear
quantum computing with optics.
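To make the analogy concrete for myself, here is a small sketch of my own (a generic coupled-mode model in Python; this is not the Nanjing group's code, and the parameter names and numbers are invented for illustration): the tunable couplings enter the evolution exactly the way weights enter a linear network layer.

```python
# Toy model: light amplitudes in a lattice of coupled cavities/waveguides evolve
# linearly under a tight-binding Hamiltonian whose coupling coefficients are the
# adjustable "weights".  My own illustration, not the Nanjing group's code.
import numpy as np

def lattice_unitary(couplings, detunings, t=1.0):
    """Unitary U = exp(-iHt) for a Hermitian coupled-mode Hamiltonian H.

    couplings: real symmetric matrix of site-to-site coupling coefficients.
    detunings: on-site resonance shifts (set, e.g., by the local refractive index).
    """
    H = np.array(couplings, dtype=complex) + np.diag(detunings)
    vals, vecs = np.linalg.eigh(H)                       # H is Hermitian
    return vecs @ np.diag(np.exp(-1j * vals * t)) @ vecs.conj().T

# Three cavities in a chain; a single photon injected into cavity 0.
J = 0.3                                                  # made-up coupling strength
couplings = np.array([[0, J, 0],
                      [J, 0, J],
                      [0, J, 0]], dtype=float)
U = lattice_unitary(couplings, detunings=np.zeros(3), t=2.0)
amp_in = np.array([1.0, 0.0, 0.0], dtype=complex)        # input amplitudes
amp_out = U @ amp_in
print(np.abs(amp_out) ** 2)                              # detection probabilities, sum to 1
```

Everything above is strictly linear; whether adapting such couplings really buys universal computing power, as they say they have proven, is exactly the interesting question.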
My own paper (previous post) raised questions about just how
linear optical computing has to be. Are polarizers best represented by the
usual superoperator assumed in previous theory, or will new experiments force us
to change the model to a nonlinear superoperator, or to something completely
different (MRF)? The nonlinear superoperator model I propose would only
involve a “small nonlinearity” – but a small nonlinearity like the small
nonlinearity in neurons in neural networks, enough to give truly universal
nonlinear computing power. I asked Leuenberger: “With all this spin and
spintronic kind of stuff in these lattices, don’t you have polarizers in here
too? How do you model them, for purposes of systems design?”
He referred to a book by Joos, which I clearly need to
follow up on, though I have no idea as yet how that affects things. It seems
really important here that the underlying physics is not well enough understood to
justify high confidence in what comes out of theory when it is not checked
empirically.
Yesterday (Friday, the last day of the conference) also
included a tribute to Howard Brandt, who set up the SPIE series of conferences
in quantum computing and information, but died suddenly just a few weeks ago.
This particular track of SPIE was initially only attended by about 15 people,
but grew to 35 to 50 under him. More important, they said, it is truly unique
in reaching out to a cross-cutting interdisciplinary perspective, bringing
together the diversity of backgrounds necessary to look beyond short-term
things to larger directions and possibilities in the field. Howard also played a key role in much
larger meetings, such as the earlier DARPA QUIST meetings or interagency review
meetings which I went to in earlier years. (I stopped going when they moved
them far from Washington, and travel constraints limited me. This one was in Baltimore.)
Howard was clearly a great guy in many ways, but… on to details of this one.
It is clear that the old idea that we can encode our
knowledge into wave functions, as opposed to density matrices, causes confusion
even now, even among mathematicians and quantum theorists oriented towards
practical systems who are doing serious important work. My IJTP paper a few
years back was pretty explicit about this basic issue, but a hundred years of
misleading introductory courses have not been overcome yet. Many theoreticians still simply assume
that macroscopic objects like polarizers can be adequately modeled by
operators, even unitary operators, when solid state empirical reality shows
that we need to encode information into density matrices, whose transformations
are superoperators, not operators. My SPIE paper and my other recent arxiv
paper on Bell’s Theorem experiments spell out the simple algebra involved. One important source on this subject is
Carmichael’s two-volume book on quantum optics.
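For the flavor of that simple algebra, here is my own shorthand of the standard treatment (not a quote from either paper). An ideal polarizer at angle \(\theta\) acts on the polarization density matrix \(\rho\) through the Kraus operator \(K = |\theta\rangle\langle\theta|\):

\[
\rho \;\longrightarrow\; \frac{K\rho K^{\dagger}}{\mathrm{Tr}(K\rho K^{\dagger})}
\qquad \text{with probability } p = \mathrm{Tr}(K\rho K^{\dagger}),
\]

and, if one does not record whether the photon got through, the ensemble is described by the non-selective map \(\rho \rightarrow K\rho K^{\dagger} + (I-K)\rho(I-K)\). Either way, the object being transformed is a density matrix and the transformation is a superoperator, not a unitary operator; the non-selective map generically turns a pure state into a mixture, which no single wave function can encode.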
Years ago, I was really sad when a guy I knew named Kak fell
into that same intellectual trap. He read lots of stuff about quantum
computing, from quantum theorists who were imagining the world in terms of wave
functions. At the time, he argued people needed to pay attention to the
question “How do we know what the starting phase really is?” From the viewpoint
of density matrices, this is a
silly red herring, and working quantum computing systems (like seminal early
stuff by Gershenfeld) did require translating the early concepts into real
stuff with density matrices. I strongly respected Kak’s interest in the Upanishads, but…
But at this conference, a woman named Darunkar from Oklahoma
presented some recent work by Kak which I was not aware of, on secure
communications, and presented an extension which was very exciting to some of
the folks in secure communications. While most of us in the US have been almost
mesmerized by the beautiful vision of quantum ones and zeros, and excited by
our ability to work out all the many things which can be done in that mental
space, she quietly suggested a way to get beyond the limits of that space.
These photons are not really just ones and zeroes, after all; they may be at
any linear polarization angle from zero to pi. If an eavesdropper Eve doesn’t
know what ANGLE is used to define zeroes and ones, she may be badly confused
unless she happens to use the same angle.
“And so,” she said, “why not exchange theta along an
expensive triple channel like the one Kak devised, which is totally secure, and
then simply encode using theta for some stretch of time, to achieve absolute
security at lower cost?”
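To see why a secret angle hurts Eve, here is a toy calculation of my own (plain Malus's-law arithmetic in Python, not Kak's or Darunkar's actual protocol): if Alice encodes 0 and 1 as linear polarizations theta and theta + 90 degrees, while Eve measures in the conventional 0/90 basis, Eve's per-bit error probability is sin^2(theta).

```python
# Toy illustration (my own, not the protocol from the talk): Eve measures in the
# 0/90-degree basis while Alice's bits are encoded at theta and theta+90 degrees.
# By Malus's law, Eve reads each bit wrongly with probability sin^2(theta).
import numpy as np

def eve_error_probability(theta_rad):
    """Per-bit error probability for an eavesdropper who guesses the basis angle as 0."""
    return np.sin(theta_rad) ** 2

for deg in (0, 15, 30, 45):
    p = eve_error_probability(np.radians(deg))
    print(f"theta = {deg:2d} deg  ->  Eve's per-bit error = {p:.3f}")
# At theta = 45 degrees Eve's readout is a coin flip: zero information about the bits.
```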
That was a good lead-in to the next talk, from a
collaboration of Beijing and Australia, which was far less lucid and
photogenic. (I halfway wondered
whether I was the only one to catch the basic drift of it, and that only
because I was so fully engrossed in the previous presentation.) My guess at the
thoughts I think I heard, amidst many, many slides of small print, equations, and
lemmas, delivered in a thick, soft-spoken Chinese accent: “Why just do one theta
rotation? Why not break the data into blocks, and try out random unitary
transformations – rotations – in the higher dimensional space which defines a
block of data? Why do Kak stuff? Bob can give Alice a unitary transformation when
he first meets her. Then, after each transmission, they can end with an encoded
definition of a new randomly generated unitary transformation to be used next
time. As it keeps changing, there
is no way that Eve can keep up.” (Especially if Eve is thinking all in 1’s and
0’s!)
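Here is a rough sketch, again my own reconstruction rather than the speakers' actual scheme, of what "a fresh random unitary per block" might look like; the block size, the seeding, and the QR construction of a Haar-random unitary are my choices, not theirs.

```python
# Sketch of the block idea as I understood it (my reconstruction, not the speakers'
# scheme): Alice and Bob share a random unitary, use it to scramble each block,
# and agree on a fresh random unitary after every transmission.
import numpy as np

rng = np.random.default_rng(seed=1)

def haar_random_unitary(d):
    """Approximately Haar-random d x d unitary via QR of a complex Gaussian matrix."""
    z = (rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))     # fix column phases

d = 8                                   # one block = 8 amplitudes (3 qubits' worth)
U = haar_random_unitary(d)              # shared secret when Bob first meets Alice

block = np.zeros(d, dtype=complex)
block[5] = 1.0                          # some data block, as a state vector
sent = U @ block                        # Alice transmits the scrambled block
recovered = U.conj().T @ sent           # Bob undoes the rotation
assert np.allclose(recovered, block)

U = haar_random_unitary(d)              # both sides then roll a new unitary for next time
```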
In general, both here and in Princeton, I notice that a lot
of research is focusing on how to keep out Eve – in Europe and in China. I wonder how the influence of Snowden
has affected research in those countries. At Princeton, I heard of very
sophisticated and rigorous German research, in the land of 1’s and 0’s, where people seemed to say
“We used to worry about simple kinds of eavesdroppers, but now, after Snowden,
we are doing research on what we can do to keep from being overheard by the
very most intelligent and diabolical eavesdroppers which could possibly exist.”
At times, I was reminded of a brief passage in Clancy’s
one-sided but interesting book Threat Vector, where the US is nearly destroyed
by a cyberattack from China – which was possible, he said, because of all the
bright young Chinese students whom we educate but refuse to hire here, who are
given almost no choice but to return home. (Soon after reading Clancy, I grabbed an
antidote, Stapledon’s novel Last and First Men, where muddling through to a
breakthrough in US-China relations was crucial to creating a prosperous world
civilization which endured for 4,000 years after that. And then died, due to
overdependence on fossil fuels.)
Lots of other things here. There was a fascinating, lucid
and unique talk by Michael Frey of Bucknell which I need to follow up on –
about extracting energy from lattices through global or local quantum
operations, relevant both to decoherence and to energy as such. There were practical talks by Alan Mink
of NIST, on how to do ordinary error correction together with QKD,
aiming to move from the megabit rates now possible to gigabit rates, with a search for
polar codes to make that possible.
(He had practical comments on GPU (SIMD) versus FPGA of interest to me
for other reasons.) The MITRE talk
on remote sensing discussed the need to search for suitable diffraction-free
light patterns to make the power of quantum imaging really work in remote
sensing (the 1/N Heisenberg scaling versus the classical 1/sqrt(N) scaling means about
3 times the resolution if you get to 10 entangled photons). I wondered why they did not use more automated search
technology to find a good pattern.
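For what it's worth, the arithmetic behind that factor of three:

\[
\frac{1/\sqrt{N}}{\,1/N\,} \;=\; \sqrt{N} \;\approx\; 3.16 \qquad \text{for } N = 10,
\]

i.e. moving from the classical \(1/\sqrt{N}\) scaling to the \(1/N\) scaling with ten entangled photons buys roughly a threefold improvement in resolution.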
John Myers (retired from Harvard) commented, among other
things, on how it was proved in 2005 that you can’t just encode your
information into a wave function. I have the impression that this is another
side of the wave function versus density matrix story. He also argued for the
need for adaptive clocks, when clocks are crucially embedded into computer
systems and need to respond to what is actually happening in the computer
system. Would this extend even to
clock cells like those in the nonspecific thalamus? I emphatically said I don’t know,
and haven’t seen discussion of that before.
One guy asked: “How real ARE those Bell states that Yanhua
Shih’s group generated with thermal light?” I began to hope that the new
triphoton experiment can be done IN PARALLEL in Maryland and in other places
with SPDC apparatus doing multiphoton states, so that results can be announced
jointly, so as to prevent nasty “antibodies” forming, as in the movie Inception,
making it hard to sustain the new direction even after it is confirmed LATER in
another place. Life has taught me a lot about how important the “mental
antibody” phenomenon can be. Will
our entire civilization be destroyed by a kind of mental autoimmune disease? It
seems that way at times.
Best of luck,
Paul
=============
Added: of course, in quantum computing, "decoherence" (really disentanglement) has been a crucial, huge barrier. Quantum nondemolition approaches haven't been the breakthrough many hoped for, and quantum error correction seems to have exponential costs offsetting quantum improvements in performance. Quantum modeling is crucial with or without quantum computing at nanoscale, simply because things become quantum mechanical there, like it or not.
One guy asked me offline: "What about the hope of a cavity or circuit QED breakthrough on decoherence?"
Yes, that's one of perhaps one to three such hopes. (For reasons of time, I won't check my notes on the others.)
Yes, I said, from VCSELs we know we can use cavity effects to suppress the usual "unavoidable background noise." But to do it, we need proper models for systems design. The usual infinite-reservoir-of-noise model, which Carmichael refers to, won't do it; it's basically MORE noise. That's why new models are crucial in these new breakthrough quantum lattice semi-optical computers. (I have to review Rabi-Hubbard as one small part of this, as Princeton folks taught me. And of course work by Glauber and Scully on noise, the theme at Princeton.)
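For my own later review, one common form of the Rabi-Hubbard Hamiltonian for a lattice of cavities, each coupled to a two-level system (my paraphrase of the standard model, not of anything said in the talks), is

\[
H \;=\; \sum_i \Big[\, \omega_c\, a_i^{\dagger} a_i
\;+\; \tfrac{\omega_q}{2}\,\sigma_i^{z}
\;+\; g\,\sigma_i^{x}\big(a_i + a_i^{\dagger}\big) \Big]
\;-\; J \sum_{\langle i,j\rangle} \big( a_i^{\dagger} a_j + a_j^{\dagger} a_i \big),
\]

where the Rabi coupling keeps the counter-rotating terms that the rotating-wave (Jaynes-Cummings-Hubbard) version throws away – which strikes me as exactly the kind of modeling choice that decides whether cavity effects really suppress the "unavoidable" noise or merely reshuffle it.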
Many of the folks we used to fund in electronic quantum modeling at NSF disappeared to the EU, because of the great funding there, albeit funding that is more near-term and industry/market oriented.
There were other interesting offline conversations in both places, but I must run.
At some point, I may need to write a paper on the type of NP-hard quantum Boltzmann machines which easily become available if my MRF models are vindicated. Also more on the x, y, z aspects of some types of triphoton experiments. For later.