Thursday, May 13, 2021

tQuA: Explanation and Capabilities of a Radically New Type of Quantum Computing

 Recently I developed a new approach to Quantum Annealing, which I call true Quantum Annealing (tQuA).

Tomorrow, we will have a Zoom discussion with IT people on the details. 
=================================
THE BIG PICTURE (GENERAL CONTEXT):

Quantum Information Science and Technology (QuIST) is one of the four core areas calling for further R&D to address the risks and possibilities of the massive changes coming to the internet in general. This work would fall under the new internet effort which www.millennium-project.org has been pushing, if that effort ever becomes a real S&T program.

The United States government is putting massive investment into ONE TYPE of quantum computing, the Quantum Turing Machine (QTM), originated by the great British physicist David Deutsch. He appears as a major figure in history in my previous talk to this group about the biggest big picture (https://www.youtube.com/watch?v=jfHBO_uuRyE). But there are FIVE types or levels of quantum computing open for development.

The QTM gives us capabilities similar to an ordinary Turing machine, like a sequential computer, but with thousands or trillions of parallel computing streams to do the job. True Quantum Annealing (tQuA) harnesses the same power to solve ANY optimization problem, which may even include optimization of a physical plant like a chip, a radio telescope, or an airplane, or drug discovery. For example, "quantum bromium" (tQuA applied to detecting backdoors in chips) may be able to find backdoors a thousand or a trillion times as fast as ordinary "sandbox" bromium security systems. 

Crudely speaking, the QTM is a kind of parallel computing. Instead of just building a thousand "cores" on a chip, to do a thousand jobs in parallel, Deutsch showed us how to do thousands or trillions of computing jobs in parallel, on ONE chip, by creating thousands or trillions of copies of the chip in quantum superposition with each other. If a QTM has ten "qubits" or forty "qubits", that means that it can harness 2^10 (about 1000) or 2^40 (about a trillion) copies of the same chip in parallel. A true quantum annealing system (tQuA) could achieve the same power, for solving optimization problems of all kinds. 
Unfortunately, Canada's famous DWave QuA system does not really do that. However, the underlying physics says that we CAN do that, and here is how.
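
To make the arithmetic concrete, here is a trivial sketch (plain Python, nothing quantum about it) showing how the number of superposed copies grows with the number of qubits Q, using only the qubit counts already mentioned above:

# Number of basis configurations a register of Q qubits can hold in superposition
for Q in (10, 20, 40):
    print(f"Q = {Q:2d} qubits -> 2**Q = {2**Q:,} copies in parallel")
# Q = 10 -> 1,024 (about a thousand)
# Q = 20 -> 1,048,576 (about a million)
# Q = 40 -> 1,099,511,627,776 (about a trillion)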

[The link on Sustainable Intelligent Internet (SII) at the top of this post goes to a list of links. At the bottom of the list is a section on Quantum Information Science and Technology (QuIST), a term used in the past by the interagency Quantum Information Science Coordinating Group (QISCOG), on which I was once the NSF representative before my retirement. The US funding efforts were broader in scope until Howard Brandt (the real thought leader for QISCOG) died suddenly, two weeks before his planned announcement of a new direction.]

=====================

WHAT IS true Quantum Annealing (tQuA), versus DWave (DQuA)?


MY CONCEPT of true Quantum Annealing (tQuA) turns out to be very different from the RELATED but weaker concept in the present literature on Quantum Annealing. 


More precisely: a Basic Quantum Annealing (BQuA) box would include: (1) an array of N "flux qubits", N binary processing elements, call them u(j) for j = 1 to N; (2) a simple standard "Ising type" Hamiltonian H, such that the wave function psi(u) of the vector u evolves according to a term linear in u and a term of the form H_jk*u_j*u_k, where the USER gets to specify the coupling constants H_jk; (3) a way to initialize the u at a start time t0; (4) readout and coupling to the external world.
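
As a concrete stand-in for items (1) through (4), here is a minimal numerical sketch (illustrative names and random numbers, not a hardware model) of the objective such a box is asked to minimize, assuming the standard Ising-type form with a user-chosen linear term and couplings H_jk:

import numpy as np

def ising_energy(u, h, J):
    # Ising-type objective: sum_j h[j]*u[j] + sum_{j<k} J[j,k]*u[j]*u[k],
    # evaluated for one binary configuration u with entries +1 or -1.
    u = np.asarray(u, dtype=float)
    return float(h @ u + u @ np.triu(J, k=1) @ u)

N = 4                                                # (1) N binary processing elements
rng = np.random.default_rng(0)
h, J = rng.normal(size=N), rng.normal(size=(N, N))   # (2) user-chosen couplings
u0 = np.ones(N)                                      # (3) initialize the u at start time t0
print(ising_energy(u0, h, J))                        # (4) read out a score for this configuration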


BUT THIS concept of BQuA includes BOTH the version of quantum annealing which is now common in the literature (e.g., the vast literature available from the company DWave, and the early paper "A numerical implementation of quantum annealing" by B. Apolloni, D. De Falco, and N. Cesa-Bianchi, 1988, cds.cern.ch), AND a useful special case of what I call true Quantum Annealing, which is what I assumed in my review.


I was shocked to learn that THEIR variant of BQuA is not a true quantum computer at all, but also that it is very straightforward to modify that kind of design to produce a BQuA box which IS a true quantum computer, and which permits a wide range of important capabilities. For example, one can hook it up so as to probe for backdoors in chips bought from risky sources, like the hardware backdoors which recently led to the biggest, most harmful compromise of US government data systems such as the OPM personnel database. 


I was indeed stunned to learn that the DWave community and its followers have not yet figured out the simple things needed to make it work as I expected it to work. 


In essence, DQuA (what they actually do) starts from a user-chosen vector u and moves AROUND the space of possible values for u, hoping to use quantum tunneling to get from regions of low energy H to regions where some states have lower energy still. (Yes, they have lots of details.) They implement "annealing" by assuming a COUPLING constant multiplying the term for interactions across bits, and dialing that coupling up or down. It behaves like an electron IN ONE state, tunneling at times to get to a better state.
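
To make "dialing the coupling up or down" concrete, here is a small sketch of that kind of objective with a schedule on the interaction term; the linear schedule and the random numbers are purely illustrative assumptions on my part, not DWave's actual schedule:

import numpy as np

def dqua_energy(u, h, J, coupling):
    # Linear term plus a COUPLING constant, dialed during the anneal,
    # multiplying the term for interactions across bits.
    u = np.asarray(u, dtype=float)
    return float(h @ u + coupling * (u @ np.triu(J, k=1) @ u))

rng = np.random.default_rng(1)
N = 4
h, J = rng.normal(size=N), rng.normal(size=(N, N))
u = rng.choice([-1.0, 1.0], size=N)
for coupling in np.linspace(0.0, 1.0, 5):            # the "dialing" schedule
    print(f"coupling = {coupling:.2f}: E = {dqua_energy(u, h, J, coupling):+.3f}")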


But a true quantum computer, like Deutsch's universal quantum Turing machine, isn't like that. A true quantum computer is based on QUANTUM SUPERPOSITION, like the original Schrodinger cat. Deutsch, a major developer of the multiverse idea, described it as a kind of parallel computer, harnessing the power of parallel universes to perform different computations IN PARALLEL, and then pick out what you choose in the end. How many parallel universes? Well, with 20 qubits (Q = 20), it is 2^20 universes in parallel, slightly more than a million. BOTTOM LINE: there are lots of working Deutsch-style quantum computers out there now, some with many more than 20 qubits. This is not science fiction. This is today's working technology. (You may ask what good it is, but that is another subject.)


A true QuA box should be able to manage Q real qubits in parallel, not just "flux qubits" which may or may not be real qubits. Thus a classical simulated annealing or stochastic search box should be able to find good values of a vector u to minimize some function H, but a true QuA box should be able to search through a million times as many possibilities when Q is 20. 
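
For comparison, here is a minimal classical simulated annealing baseline over the same Ising-type objective as in the BQuA sketch above (all parameters illustrative). This is the single-configuration stochastic search which, in the argument above, a true QuA box with Q = 20 would in effect multiply by roughly a million:

import numpy as np

def energy(u, h, J):
    # Same Ising-type objective as in the BQuA sketch above
    return float(h @ u + u @ np.triu(J, k=1) @ u)

def simulated_annealing(h, J, steps=5000, T0=2.0, seed=0):
    # Classical baseline: one configuration at a time, accepted or rejected
    # by the Metropolis rule under a slowly falling temperature.
    rng = np.random.default_rng(seed)
    N = len(h)
    u = rng.choice([-1.0, 1.0], size=N)
    E = energy(u, h, J)
    for t in range(steps):
        T = T0 * (1.0 - t / steps) + 1e-9            # simple cooling schedule
        j = rng.integers(N)
        u[j] *= -1.0                                 # propose flipping one bit
        E_new = energy(u, h, J)
        if E_new <= E or rng.random() < np.exp(-(E_new - E) / T):
            E = E_new                                # accept the move
        else:
            u[j] *= -1.0                             # reject: flip the bit back
    return u, E

rng = np.random.default_rng(2)
h, J = rng.normal(size=6), rng.normal(size=(6, 6))
print(simulated_annealing(h, J))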


How could such a thing be possible?


To begin with, the hardware to implement u should be the kind of hardware which is used to hold qubits in Deutsch style quantum computing. I can think of four good candidates. DWave uses something SIMILAR to one of the four, the SQUID, first used in Maryland for quantum computing and widely used in China under Pan Jianwei. But DWave's is only similar; they set it up in a way which leads to far more noise than necessary, but that is not the main problem.


The main problem is that they don't seem to understand what ANNEALING is, something any real condensed matter physicist would know. (I have cited Chaikin and Lubensky many times.) 


One major ADVANTAGE of tQuA over quantum Turing machines is that decoherence is not a problem, in principle; losing information through noise is not a problem, when we don't want to perpetuate initial condition information anyway. 

BUT: in true annealing, COUPLING TO THE ENVIRONMENT TO SHED HEAT is the essence of how the system state u goes to minimum energy. It is not about adjusting coupling coefficients, but about shedding energy to the environment. If one can do that, thermodynamics is what does the selection.
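
For reference, the standard statistical-mechanics statement of "thermodynamics does the selection" (the kind of result laid out in texts like Chaikin and Lubensky) is that a system which can shed energy to a bath at temperature T relaxes toward the Boltzmann distribution,

P(u) = exp(-H(u)/kT) / sum over all u' of exp(-H(u')/kT),

so that as the effective temperature is driven down, the probability piles up on the minimum-energy configurations of u.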


But this discussion leaves out one more crucial aspect of any optimization system. How does one define what is to be optimized, what the user WANTS the system to accomplish?

=======================================


THREE LEVELS of tQuA (LEVELS OF USER NEEDS TO MEET): 


There are three possibilities. 


Level 1: Endogenous scoring:


The easiest to implement is ENDOGENOUS SCORING, where the user's goal is simply encoded into the coupling coefficients H_jk, and that's it. A quadratic optimization system. A fancy quadratic program? 

Of course, that can be done ITERATIVELY, where an optimum is calculated, evaluated against more complete nonlinear standards, and the optimization problem revised, until the user is happy. DWave certainly has worked hard to develop that kind of interface.
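
One possible skeleton for that iterative interface is sketched below; solver, evaluate, and revise are hypothetical placeholders for the QuA box (or any stand-in optimizer), the user's full nonlinear evaluation, and whatever re-encoding rule the user prefers:

def endogenous_scoring_loop(h, J, solver, evaluate, revise, max_rounds=10):
    # Level 1 loop: encode the goal in (h, J), let the solver find a candidate u,
    # check it against the full nonlinear criterion, re-encode, and repeat.
    u, score = None, None
    for _ in range(max_rounds):
        u = solver(h, J)                 # e.g. the QuA box, or a classical annealer
        score, satisfied = evaluate(u)   # full nonlinear evaluation, outside the box
        if satisfied:
            break
        h, J = revise(h, J, u, score)    # revise the quadratic encoding and retry
    return u, score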


Level 2: Internal Coupled Scoring:


The second choice is INTERNAL COUPLED SCORING, where a process inside the larger computer system scores each possible u, and the overall score is fed back to the QuA box, perhaps to a set of power/utility input bits. In truth, new papers are needed spelling out how the thermodynamics of this works. 
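
As one possible reading of that feedback path (purely an assumed interface on my part, since, as just noted, the thermodynamics still needs to be spelled out), the externally computed score could be quantized into a few binary utility inputs:

import numpy as np

def encode_score_into_bits(score, n_bits=8, lo=0.0, hi=1.0):
    # Level 2 sketch (assumed interface): quantize an externally computed score
    # into n_bits binary "utility" inputs that the QuA box could couple into H.
    level = int(round((np.clip(score, lo, hi) - lo) / (hi - lo) * (2**n_bits - 1)))
    return [(level >> k) & 1 for k in range(n_bits)]

print(encode_score_into_bits(0.73))      # feedback bits for a score of 0.73, low bit first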


Level 3: Quantum Annealing of Things (QuAth):


Third is Quantum Annealing of Things (QuAth), which is essentially the same except that the scoring system need not be a computer program but may be an external object, like a chip to be assessed. A macroscopic Schrodinger cat. 

================

Hardware Choices Which Should be Pursued In Parallel:


I NOW see the importance of the three-level approach to developing and building real QuA technology. But even level 1 requires considering WHICH ACTUAL hardware platforms to use. I would claim that a rational program would not choose just one, but should juggle what can be learned from FOUR parallel coordinated efforts learning from each other. These would be:

(1) true QuA based on quantum optics platforms (also used in some QTM work);

(2) the SQUID platform, which DWave uses, but set up more like what the QTM groups use;

(3) massive quantum dot arrays, like Samsung TVs with billions of optical processing elements in a square meter or less;

(4) high-frequency electronic systems, like what is used in GHz electronics for cell phones. 


Developing better component models is crucial to better design capability. Yeshua (Howard Carmichael's group in New Zealand) is closer to some of the experiments needed to get better component models than I am, but the link to Werbos and Fleury at ResearchGate also has crucial information. 

How could you and I be macroscopic Schrodinger cats if the cosmos might be only 4D underneath? It seems our states are like the photon states in that paper. But for now, the Deutsch physics is good enough in most respects. The thermodynamics of the interface is a tricky issue, to be resolved by empirical work in the end. 

==================================
===================================
What are OTHERS already doing about quantum computing for optimization?


Just after I typed this yesterday, I received an email from Berkeley inviting me to THEIR two-hour Zoom on QuIST for optimization! This was supported by the Simons Foundation, which many view as the US's best hope for real math in any computer topics. One of the panelists was in a familiar office in Maryland. Here is the invitation and my reactions to what I saw:
===============

11:00 am – 12:00 pm: Quantum Algorithms for Optimization. Ronald de Wolf (QuSoft, CWI and University of Amsterdam) | Abstract

12:00 pm – 12:30 pm: Panel Discussion on Quantum Algorithms for Optimization. Andrew Childs (UMD), Eddie Farhi (MIT/Google), Ashley Montanaro (U. Bristol), Umesh Vazirani (UC Berkeley; moderator)

This colloquium series will feature talks by some of the foremost experts in quantum computation in the form of "an invitation to research in area X". With the explosion of interest in quantum computation, there is a dizzying flurry of results, as well as a diverse group of researchers who are drawn to this field. This colloquium series aims to target three audiences: 

  1. Experts in quantum computation: It is increasingly difficult for even experts to keep up with the results in adjacent areas. These colloquia will aim to identify the key results and techniques in the area, as well as the most important directions for further research. 
  2. Interested researchers in (classical) theoretical computer science: There are deep connections between ideas in quantum computation and classical complexity, algorithms, etc. These colloquia will make these connections more accessible to the broader TCS community. 
  3. Interested mathematical and physical science (MPS) researchers: These colloquia will enable MPS researchers to cut through the clutter to make connections to CS style results in quantum computation. 
Public Zoom webinar link: https://berkeley.zoom.us/j/95040632440
==================================

These folks were certainly selected to be the world's best in their area. (And it was truly international.)
HOWEVER, it was mainly the world's best on how to use a QTM to solve optimization problems. They spent a fair amount of time on backpropagation, the new gold standard, but they do not understand the basic math of backpropagation, and they seriously missed what can be done even with a QTM. Their bottom line: they are far away from even linear or quadratic speedups in optimization by using quantum methods, with one narrow exception, which the lead speaker himself was far removed from. 
There is an Arora-Kale SDP optimization method which was designed in COLLABORATION with real physicists, using tricks which the usual QTM computer scientists do not understand, outside their paradigm. I suspect it is a special case of QuA.

The lead speaker said only three real net benefits from quantum computing look real now: cryptography, quantum simulation, and optimization. The Maryland panelist said no, not optimization yet, and the lead speaker apologized and agreed.

And so: tQuA is just as solid, and just as far beyond the US mainstream, as I suggested earlier today. Actually, there are also ways to use it to improve cryptography, beyond any QTM system I have seen, but maybe we should save THAT discussion for later, if there is a later. The key barrier has been the need to really understand the underlying physics IN computation.

Why do they even screw up something as basic as how to get gradients? My guess: the unwritten rule that they must cite their friends, and the friends don't know the math. I am tempted to say more, but... there is no real need.
