Recently I developed a new approach to Quantum Annealing, which I call true Quantum Annealing (tQuA).
[The link on Sustainable Intelligent Internet (SII) at the top of this post goes to a list of links. At the bottom of the list is a section on Quantum Information Science and Technology (QuIST), a term used in the past by the interagency Quantum Information Science Coordinating Group (QISCOG), on which I was once the NSF representative before my retirement. The US funding efforts were broader in scope until Howard Brandt (the real thought leader for QISCOG) died suddenly, two weeks before his planned announcement of a new direction.]
=====================
WHAT IS true Quantum Annealing (tQuA), versus DWave (DQuA)?
MY CONCEPT of true Quantum Annealing (tQuA) turns out to be very different from the RELATED but weaker concept in the present literature on Quantum Annealing.
More precisely: a Basic Quantum Annealing (BQuA) box would include: (1) an array of N "flux qubits", N binary processing elements, call them u(j) for j = 1 to N; (2) a simple standard Ising-type Hamiltonian, H, such that the wave function psi(u) of the vector u evolves according to a term linear in the u(j) and a term of the form Hjk*u(j)*u(k), where the USER gets to specify the coupling constants Hjk; (3) a way to initialize the u(j) at a start time t0; (4) readout of and coupling to the external world.
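To make the energy being minimized concrete, here is a minimal sketch in Python, with illustrative names and values (the real box would hold u in hardware, not in an array):

```python
import numpy as np

def ising_energy(u, h, H):
    """Energy of a spin configuration u (entries +1/-1).

    h[j]    : linear (bias) term for bit j
    H[j][k] : user-specified coupling constant between bits j and k
    """
    u = np.asarray(u, dtype=float)
    return float(h @ u + u @ H @ u)

# Example with N = 3 bits (all values are illustrative)
h = np.array([0.5, -1.0, 0.0])
H = np.array([[0.0, 1.0,  0.0],
              [0.0, 0.0, -2.0],
              [0.0, 0.0,  0.0]])
print(ising_energy([1, -1, 1], h, H))  # prints 2.5
```

The user's whole problem statement, in this picture, is the pair (h, H).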
BUT THIS concept of BQuA includes BOTH the version of quantum annealing which is now common in the literature (e.g. the vast literature available from the company D-Wave, and the early paper "A numerical implementation of quantum annealing" by B. Apolloni, D. de Falco and N. Cesa-Bianchi, 1988, cds.cern.ch), AND a useful special case of what I call true Quantum Annealing, which is what I assumed in my review.
I was shocked to learn that THEIR variant of BQuA is not a true quantum computer at all, but it is straightforward to modify that kind of design to produce a BQuA box which IS a true quantum computer, and which permits a wide range of important capabilities. For example, one could hook it up so as to probe for backdoors in chips one buys from risky sources, like the hardware backdoors which recently led to the biggest, most harmful compromise of US government data systems like the OPM personnel database.
I was indeed stunned to learn that the DWave community and its followers have not yet figured out the simple things needed to make it work as I expected it to work.
In essence, the DQuA (what they do) starts from a user-chosen vector u, and orbits AROUND the space of possible values for the vector u, hoping to use quantum tunneling to get from regions of low energy H to regions where some states have lower energy. (Yes, they have lots of details.) They use "annealing" by assuming a COUPLING constant multiplying the term for interactions across bits, and dialing the coupling up or down. It sounds like an electron IN ONE state, tunneling at times to get to a better state.
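A crude classical caricature of that single-state picture, with one configuration wandering the space while the coupling is dialed up, might look like this (the names, the ramp schedule, and the 10% "tunneling" acceptance rate are all illustrative stand-ins, not DWave's actual algorithm):

```python
import random

def dqua_style_search(h, H, steps=2000, seed=0):
    """Caricature of the DQuA picture: ONE configuration u orbits the
    space of possible values while a global coupling dial s is ramped
    from 0 to 1.  Random uphill acceptance is a classical stand-in
    for tunneling."""
    rng = random.Random(seed)
    n = len(h)
    u = [rng.choice([-1, 1]) for _ in range(n)]

    def energy(u, s):
        lin = sum(h[j] * u[j] for j in range(n))
        quad = sum(H[j][k] * u[j] * u[k]
                   for j in range(n) for k in range(n))
        return lin + s * quad          # the dial s scales the coupling term

    for t in range(steps):
        s = t / steps                  # dial the coupling up over time
        j = rng.randrange(n)
        old = energy(u, s)
        u[j] = -u[j]                   # propose flipping one bit
        if energy(u, s) > old and rng.random() > 0.1:
            u[j] = -u[j]               # mostly reject uphill moves
    return u
```

The point of the caricature: at every instant there is exactly one vector u, so the search explores one possibility at a time.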
But true quantum computers, like Deutsch's universal quantum Turing machine, aren't like that. A true quantum computer is based on QUANTUM SUPERPOSITION, like the original Schrodinger cat. Deutsch, a major developer of the multiverse idea, described it as a kind of parallel computer, harnessing the power of parallel universes to perform different computations IN PARALLEL, and pick out what you choose in the end. How many parallel universes? Well, with 20 qubits (Q=20), it is 2^20 universes in parallel, slightly more than a million. BOTTOM LINE: there are lots of working Deutsch-style quantum computers out there now, some with many more than 20 qubits. This is not science fiction. This is today's working technology. (You may ask what good it is, but that's another subject.)
A true QuA box should be able to manage Q real qubits, not just "flux qubits" which may or may not be real qubits, in parallel. A classical simulated annealing or stochastic search box can find good values of a vector u to minimize some function H, but a true QuA box should be able to search through a million times as many possibilities when Q is 20.
How could such a thing be possible?
To begin with, the hardware to implement u should be the kind of hardware which is used to hold qubits in Deutsch style quantum computing. I can think of four good candidates. DWave uses something SIMILAR to one of the four, the SQUID, first used in Maryland for quantum computing and widely used in China under Pan Jianwei. But DWave's is only similar; they set it up in a way which leads to far more noise than necessary, but that is not the main problem.
The main problem is that they don't seem to understand what ANNEALING is, what any real condensed matter physicist would know. (I have cited Chaikin and Lubensky many times.)
One major ADVANTAGE of tQuA over quantum Turing machines is that decoherence is not a problem, in principle; losing information through noise is not a problem, when we don't want to perpetuate initial condition information anyway.
BUT: in true annealing, COUPLING TO THE ENVIRONMENT TO SHED HEAT is the essence of how the system state u goes to minimum energy. It is not about adjusting coupling coefficients, but about shedding energy to the environment. If one can do that, thermodynamics is what does the selection.
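A toy calculation of that thermodynamic selection, assuming an illustrative 3-bit energy function: by exact enumeration, the Boltzmann distribution concentrates its probability on the minimum-energy state as the temperature drops.

```python
import itertools
import math

def boltzmann_ground_prob(energy, n, T):
    """Probability that a system in thermal equilibrium at temperature T
    sits in its ground state, by exact enumeration over all 2**n states.
    Toy illustration of 'shedding heat does the selection'."""
    states = list(itertools.product([-1, 1], repeat=n))
    energies = [energy(u) for u in states]
    e0 = min(energies)
    weights = [math.exp(-(e - e0) / T) for e in energies]
    ground = sum(w for w, e in zip(weights, energies) if e == e0)
    return ground / sum(weights)

# An illustrative 3-bit Ising-style energy (couplings chosen arbitrarily)
def E(u):
    return -u[0] * u[1] - u[1] * u[2] + 0.5 * u[0]

for T in (5.0, 1.0, 0.2):
    print(T, round(boltzmann_ground_prob(E, 3, T), 3))
```

No coupling coefficient is being adjusted here; only the temperature of the environment changes, and equilibrium thermodynamics does the rest.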
But this discussion leaves out one more crucial aspect of any optimization system. How does one define what is to be optimized, what the user WANTS the system to accomplish?
==============================
THREE LEVELS of tQuA (LEVELS OF USER NEEDS TO MEET):
There are three possibilities.
Level 1: Endogenous scoring:
The easiest to implement is ENDOGENOUS SCORING, where the user's goal is simply encoded into the coupling coefficients Hjk, and that's it. A quadratic optimization system. A fancy quadratic program?
Of course, that can be done ITERATIVELY, where an optimum is calculated, evaluated against more complete nonlinear standards, and the optimization problem revised, until the user is happy. DWave certainly has worked hard to develop that kind of interface.
Level 2: Internal Coupled Scoring:
The second choice is INTERNAL COUPLED SCORING, where a process inside the big computer system scores each possible u, and the overall score is fed back to the QuA box, perhaps to a set of power utility input bits. In truth, new papers are needed spelling out how the thermodynamics of this works.
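A hypothetical sketch of the Level 2 interface: an arbitrary program score(u), not necessarily quadratic, judges each candidate, and its verdict is fed back to steer the search. The greedy feedback loop below is only a classical stand-in; the thermodynamic feedback path those needed papers would have to spell out is the open question.

```python
import random

def coupled_scoring_search(score, n, iters=1000, seed=0):
    """Level-2 sketch: an EXTERNAL program `score(u)` (not necessarily
    quadratic) judges each candidate u, and its verdict is fed back to
    steer the search.  Hypothetical interface, greedy classical loop."""
    rng = random.Random(seed)
    u = [rng.choice([-1, 1]) for _ in range(n)]
    best, best_score = list(u), score(u)
    for _ in range(iters):
        j = rng.randrange(n)
        u[j] = -u[j]                   # propose flipping one bit
        s = score(u)                   # score fed back from outside
        if s <= best_score:
            best, best_score = list(u), s
        else:
            u[j] = -u[j]               # revert if the verdict got worse
    return best, best_score

# A deliberately NON-quadratic scorer: count bits not matching a pattern
target = [1, -1, 1, -1]
u, s = coupled_scoring_search(
    lambda u: sum(a != b for a, b in zip(u, target)), 4)
print(u, s)
```

Note that the scorer here is an opaque callable; nothing about it needs to be expressible as coupling coefficients, which is exactly what separates Level 2 from Level 1.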
Level 3: Quantum Annealing of Things (QuAth):
Third is Quantum Annealing of Things (QuAth), which is essentially the same except that the scoring system need not be a computer program but may be an external object, like a chip to be assessed. A macroscopic Schrodinger cat.
================
Hardware Choices Which Should be Pursued In Parallel: