Friday, July 23, 2010

fly stories and understanding the brain and mind

A professor of artificial intelligence recently posted his thoughts about what we could learn from a story on the fly brain. My response:
Like Don, I agree strongly with the main points John is raising here.

Achieving a better understanding of intelligence and the mind is a central goal of this community,
and it's good that he has raised these points.

Is it good to get into fine points and refinements? Maybe... at least for some of us....

John said:

> ...I'd like to cite the following item that came up recently:
> Fly's Brain -- A High-Speed Computer: Neurobiologists Use State-of-the-Art Methods to Decode the Basics of Motion Detection
> This research illustrates some interesting points:
> 1. Many theories and models that were formulated a half century or more ago could be and have been quite accurate:
> "Back in 1956, a mathematical model was developed that predicts how movements in the brain of the fly are recognized and processed. Countless experiments have since endorsed all of the assumptions of this model."

There have been many examples in the past few centuries of mathematics
and mathematical insights developed many decades before they were used,
understood or appreciated by domain experts. This is certainly an important point
these days...
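The 1956 model in the quote is generally identified as the Hassenstein-Reichardt correlation detector. As a rough illustration only (the toy stimulus, delay value, and function names below are mine, not from the 1956 paper), the core delay-and-correlate idea can be sketched in a few lines:

```python
# Minimal sketch of a Hassenstein-Reichardt motion detector (1956 model).
# Two photoreceptors A and B; each arm delays one input and correlates
# (multiplies) it with the undelayed neighbor; the two mirror-image arms
# are subtracted, so the SIGN of the output encodes motion direction.

def reichardt_output(a, b, delay=3):
    """a, b: equal-length lists of photoreceptor signals over time."""
    out = []
    for t in range(delay, len(a)):
        right = a[t - delay] * b[t]   # right-preferring arm
        left = b[t - delay] * a[t]    # left-preferring arm
        out.append(right - left)
    return sum(out)

# Toy stimulus: a bright pulse passes receptor A, then receptor B
# `delay` steps later, i.e. rightward motion at the preferred speed.
a = [0, 0, 1, 0, 0, 0, 0, 0]
b = [0, 0, 0, 0, 0, 1, 0, 0]

assert reichardt_output(a, b, delay=3) > 0   # rightward motion: positive
assert reichardt_output(b, a, delay=3) < 0   # leftward motion: negative
```

The point of the toy version is only that a model this simple makes sharp, testable predictions about direction selectivity, which is exactly what the Martinsried cell-level measurements could finally check.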


> 2. But many features of such models could not be tested against the available neural evidence: "We simply did not have the technical tools to examine the responses of each and every cell in the fly's tiny, but high-powered brain."



The specific example of the fly is interesting in many ways.
It is hard to refrain from telling several very amusing old stories...

But more relevant: in 2008, when we had discussions all over the Engineering Directorate
at NSF on the new initiative on Cognitive Optimization and Prediction (COPN)... we decided to
focus on... learning... in the VERTEBRATE brain. The real target was to understand how
the brains of the smallest mammal, the mouse, could be as powerful as they are in learning to predict
and to make effective decisions (or control). But lower vertebrates were very much included, because they offer a kind
of logical progression of design important to understanding or replicating the mouse eventually. Understanding
the progression of design is really important to understanding the design principles.

So why didn't we include invertebrates?

This is an important question which has been debated at VERY great length.

At a big workshop across ALL NSF, people studying really simple organisms like aplysia and nematode
argued that we should master them first before even trying to do anything at all with vertebrates.
There were a few typical appeals in the spirit of "give all the money to us and none to them."
There were especially interesting appeals to study taste and digestion in the context of lobsters...
(Am glad that Senator Grassley wasn't there...)

The aplysia people made the good and serious point that a lot of the biochemical mechanisms used
in vertebrates can be found in aplysia as well, and that aplysia might be a good place to learn about them.
This is an important and reasonable point. For the biology directorate, which has a responsibility to
understand these biochemical mechanisms, it is certainly justified to invest some money in these areas,
as one part of their portfolio, in order to develop understanding which could be used later in the more
important task of understanding the biochemical aspects of what we see in vertebrate brains. But for
us... the full challenge or end target of understanding the mouse is in sight clearly enough now that there
is no excuse for ignoring it. The sad fact is that there is a good amount of money out there today (as there should
be) for aplysia and nematodes, but, now that there is no money for NEW COPN projects, there is essentially no funding for directly addressing the bigger target. Aplysia simply aren't intelligent enough to be good testbeds for
the fundamental issues in prediction and optimization that we could and should be pursuing more directly and effectively.
We are in a situation of gross unmet opportunity.

As we are in many other areas, like energy.

Flies, however, are not aplysia. They present a very different set of questions.

Those of us who really want to achieve a crossdisciplinary understanding of mind and intelligence
should include the classic work of Bitterman on the list of must-read foundations. At least, his
Scientific American article from the 60's, which is easy enough. Bitterman did a great first cut at
explicating the qualitative progression in types of intelligence from fish (and lamprey?)
up to mammal. Curiously enough -- in his later work (some in Science in the 70's), Bitterman found that honeybees
scored as high as mammals in some experiments that reptiles couldn't quite handle. Whatever we make of this...
it raises the question: "Is there a SECOND whole progression here that we could learn from?"

The fly brain was discussed in some detail in a workshop sponsored by Microsoft at the University of Washington,
around 1998, run by Chris Diorio.

But -- here is the problem. There are some very large neurons in the fly (and other arthropods) which are easy
to access, but do not provide the kind of throughput and complexity that could explain the kind
of things Bitterman was talking about. Whatever high-level intelligence is there seems to depend on things
called "mushroom bodies" which are much HARDER to access than neurons in the mouse. Huge numbers
of very tiny neurons....

Given a VERY limited budget for COPN, we felt it was better to "put all our eggs into the one basket,"
and focus on vertebrates. We said people could submit proposals using insects as testbeds, BUT ONLY
if they could convince reviewers interested in vertebrates that the work would really help more
than the competing proposals.

**IF** there had been NIH-style hundreds of millions available, it would have been rational to diversify the portfolio more,
and maybe even throw in an octopus or two...

(Stories I refrained from telling: the beautiful blonde and the fly brain; the fly as an attack and evasive military vehicle; Microsoft meets the fly.)


> 3. New technology can test the assumptions about previously unobservable features:
> "Although it seems almost impossible to single out the reaction of a certain cell to any particular movement stimulus, this is precisely what the neurobiologists in Martinsried have now succeeded in doing."



This was certainly part of COPN as well. More could be said, but maybe not here and now.


> 4. But every new discovery opens up even more questions: "Just how much remains to be discovered was realized during the very first application of the new methods."


Again - agree strongly.

There is a tendency for some folks to despair at some level, because it seems there
will be no end point... and hence no pot at the end of this rainbow.

But I would claim (and have various papers out there...)... the principles are in hand to
make it possible to understand and replicate that level of general purpose intelligence
we see in the brain of the smallest mouse. At a certain kind of philosophical level,
we are "already there"... but we certainly haven't proven we know enough to actually
provide a working model of this kind of general intelligence. But we can. How far we are from that target depends on us.
As it does with certain energy technologies
and space technologies, which we might have in 5 years or never, depending on what we do.
It's sad that the present trends are towards never. And, as Don has pointed out, one of the forces
pushing us towards "never" is the people who pretend we are already there...
who overhype and oversell the widget they have plugged in this week...


> 5. And the size of the problem remains immense: For the "fly's motion detection... one sixth of a cubic millimetre of brain matter contains more than 100,000 nerve cells -- each of which has multiple connections to its neighbouring cells."


To physically build the equivalent of a mouse brain learning system requires both the
physical hardware and the architecture/algorithms. IBM (from work under Todd Hilton's area at DARPA and their Blue Brain stuff)
has claimed they are already up to the cat brain in terms of hardware. Even throwing some broad error bars
around that... I would claim that the understanding for the algorithm/architecture side is the real bottleneck.

What's more... from a VERY large viewpoint... many of us believe that the understanding is what's most important here
(along with some applications that could be done with brains much smaller than a mouse).
That's a value judgment. It's the kind of judgment which depends on us being able to act smarter than a mouse.

I certainly believe that such research is important. But note that the older theories and models enabled neuroscientists to focus on the critical issues and ask appropriate questions that could be answered with newer technology.
Older work other than mathematics -- we still have a lot to learn from
folks like Pribram and Freud, and there are a whole lot more.



What I want to emphasize is that the problems of cognitive science are so complex that no single branch can solve them by itself. The combined insights from different branches -- psychology, linguistics, philosophy, artificial intelligence, and neuroscience -- are all important, and they must be reconciled with one another.


I would add neural networks, engineering, mathematics, statistics and operations research to the list as well. (And of course "psychology" is not just one thing...)

Best of luck,


Wednesday, July 7, 2010

quantum time and energy 101


On discussion lists of energy, I recently mentioned a patent application I filed a few days ago about a possible new energy source, linked to new developments in quantum theory.

One of my friends on the list asked: "I heard of Schrodinger’s Equation, but not Schrodinger’s Cat. Where can I get the whole story, from textbooks to the latest issues?"

My response:

In fact… it has been a long story, and much has never been pulled together in one place, except perhaps for specialists. I have been lucky to be physically present at many steps of this history, so I have some duty to give that kind of overview myself. It is very hard not to give a lot of details and explanations, but I will try hard to simplify. I will focus on the core issues, not the huge body of spinoffs from philosophy to engineering, except as they feed back to the core. Just a little humor for mnemonic purposes – but do not underestimate how amusing the true story has been.

I. The Classical Era.

Classical v1. The Lorentz picture (circa 1900). All of objective reality consists of atoms, electrons, and the electric and magnetic fields, existing in three-dimensional space. To specify the state of reality, specify: (1) where in space each atom or electron is located, along with its velocity and angular momentum; (2) a single real number V(x) at each point x in space, where V is just the voltage at that point in space; (3) THREE numbers (B1(x), B2(x), B3(x)) at each point in space, to specify the state of the magnetic field. Knowing the state of reality, and applying Newton’s Laws and Maxwell’s Laws, we can in principle predict the entire future history of the universe, starting from that known state. They thought.

Classical v2. The Lagrange/Einstein picture (circa 1920). Get rid of the Greek-style point particles, and do it all as fields. The electron is just a vortex or wave or pattern or soliton in an additional force field psi(x), governed by Schrodinger’s original equation. Gravity does not obey action at a distance; it’s mediated by another field g(x), which is just a four-by-four matrix of numbers at each point x. Atoms and other stuff are just patterns in some other set of fields which I will call phi(x) right now. All of objective reality consists of continuous fields. If we define PHI as the set of numbers V, B, g and phi, then we can specify the state of objective reality simply by specifying PHI(x), the set of these numbers, at every point x. To predict the future state of the universe – we use a set of dynamical equations called the “Lagrange-Euler” equations. No more Newton-style action at a distance. As the stock market rose, so too did their hopes of filling in this program. (They hoped to learn just what numbers we need to specify phi, and to learn the complete Lagrange-Euler equations to cover all the fields.)
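For readers who want the "Lagrange-Euler" equations spelled out: in standard modern notation, for a generic field component varphi of PHI with Lagrangian density L (which fields and which Lagrangian being exactly the open question of that program), the dynamical law takes the textbook form:

```latex
% Euler-Lagrange equation for a field \varphi(x), Lagrangian density \mathcal{L}:
\partial_\mu \left( \frac{\partial \mathcal{L}}{\partial(\partial_\mu \varphi)} \right)
  - \frac{\partial \mathcal{L}}{\partial \varphi} = 0
```

One such equation per field component replaces all the Newton-style particle equations and action at a distance.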

II. The Copenhagen Era

1. THE GREAT CRASH. Schrodinger’s equation works brilliantly to predict the colors of hydrogen, an atom with just one electron. But it fails completely to predict the colors of helium, an atom with two electrons. In the mid-1920s, someone REINTERPRETED Schrodinger’s equation by solving for psi(x1,x2), where x1 represents the location of the first electron and x2 the location of the second electron. This makes no sense in the Lagrange-Einstein picture… but it works, creating a shock that physics has yet to fully recover from. (DeBroglie was sending me letters about it in the 1960’s.) Also, it’s a real problem even today for electrical engineers, trying to predict where a million electrons are likely to go, when quantum mechanics wants them to solve for a function psi of three million variables.
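The engineers' complaint can be made concrete with a back-of-the-envelope count (the grid resolution of 10 points per axis is an arbitrary choice of mine, just for illustration):

```python
# A wavefunction of n electrons is a function of 3n coordinates, not a
# field over ordinary 3-dimensional space.  Storing it on even a crude
# grid of k points per coordinate takes k**(3*n) complex numbers.

def grid_points(n_electrons, k=10):
    """Grid values needed to tabulate an n-electron wavefunction."""
    return k ** (3 * n_electrons)

print(grid_points(1))   # one electron: 10**3 = 1000 values
print(grid_points(2))   # helium:       10**6 = 1000000 values
# For a million electrons the exponent is 3,000,000; no conceivable
# computer can tabulate psi directly, which is why engineers need
# approximations or a different formulation altogether.
```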

2. THE REICH: Heisenberg appears. While Schrodinger, Einstein and DeBroglie all reel in shock, Heisenberg and followers point out that this fits what he has been saying all along. He proposes that the complex number psi be reinterpreted, not as a field, but as a kind of “wave function of information,” representing our KNOWLEDGE of the universe, and NOT the objective state of reality. Sic transit objective reality. Many German existentialists and yogins join the bandwagon.

2a. People often say that the “first quantum mechanics” was this new use of the Schrodinger equation to describe electrons (and protons and other such particles), in accord with Heisenberg’s recipe. The “second quantum mechanics,” or “quantum field theory” (QFT), was invented in the 1950’s, and extends quantum mechanics to account for “everything” – not only particles, but electricity and magnetism and the newly discovered nuclear forces. (Ooops – what about gravity? Not for today.) But that’s not the whole truth. Heisenberg was writing about QFT from day one. In the 1950’s, people figured out how to actually make it work, more or less – to give well-defined predictions (probably well-defined) for the case of charged particles, electricity and magnetism. The four people were Julian Schwinger, Richard Feynman, Tomonaga and Dyson. (I was a student of Schwinger…)

2b. Heisenberg’s picture, aka “the Copenhagen picture,” still taught as dogma in many places today (especially in introductory courses):

2b.1. Get rid of those fuzzy fields, and go back to particles, at the foundation level.

A possible “configuration” X of the universe is defined simply by specifying the location and a few discrete state variables (like “spin up” and “spin down”) for all the particles in the universe. Let X be a possible configuration of the universe…

2b.2. But there is no real universe. There is only our mind, our consciousness. The rest is illusion. There is only a recipe for how to make predictions. It is a three-step recipe: (1) Follow our encoding or setup rules to translate your knowledge about how you set up the experiment into psi(X(t-)), your knowledge about the time t- when your experiment starts; (2) use the NEW SCHRODINGER EQUATION, psi-dot = i H psi, to calculate psi(X(t)) for later times t, to map your knowledge about time t- into knowledge about later times; (3) use our “observer formalism” or “measurement” rule to predict the PROBABILITIES of POSSIBLE outcomes of the experiment. The rule can be written as

E(m) = (psi-dagger) M psi,

where E(m) represents the expected value of the quantity m which you measure at the end of the experiment, psi-dagger is the conjugate transpose of psi, the wave function psi is interpreted as a kind of vector in an infinite-dimensional space, and M and H are matrices over that space. Anyway, you can see that it’s a mess. The recipe IS the theory; it’s all there is – or rather, all that isn’t. By the way, “psi-dot” simply refers to the derivative of psi with respect to time. To actually use this recipe, we have to add some kind of theory about what the matrices M and H are; a key achievement in the 1950’s was to find a matrix H which works for electricity and magnetism and charged particles.
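Restating steps (2) and (3) in cleaner notation (taking psi to be a normalized state vector, with the dagger denoting the conjugate transpose; note that sign and unit conventions for the dynamical equation vary across textbooks):

```latex
% Step (2): dynamics, in the convention used in this post
\dot{\psi} = i H \psi
% Step (3): measurement rule for an observable represented by M
E(m) = \psi^{\dagger} M \, \psi = \langle \psi | M | \psi \rangle
```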

2c. I later met Heisenberg’s boss by accident, on a metro train to the DC zoo, as we both went to visit some pretty hairy people. He said that Heisenberg didn’t really believe in that measurement rule, or in the three-dimensional universe it predicts, but such crutches are needed to get the attention (and funding) of the deluded souls who think that there is a three-dimensional universe in which to make measurements. In his view, the Copenhagen folks were just deluded popularizers. One could actually characterize my new stuff as a way to implement his true viewpoint (i.e. just getting rid of the observer formalism), but that’s not how I got there.

3. TRIUMPH OF THE NEW ORDER. Is it possible to make sense of this mess somehow? No it’s not, said Heisenberg; that’s the whole point.

Even the “first quantum mechanics” was a mess, but it was the only thing which worked in predicting the colors of helium, and many other things which followed.

Thus in 1927 there was the great Solvay Congress, as important to physics as the Continental Congress was to the US. Instead of a declaration of independence from Britain, it came up with a declaration of independence from that old classical idea of objective reality. Since this was the only recipe which worked… it became dominant across almost all of physics for a long time to come.

People have sometimes asked me: ”Does any of this make any difference? Isn’t this just the same old thing – if a bird sings in the forest, but no one hears it, did it really sing? People can look at it either way, big deal.” No, it’s not the same old thing. The idea that the bird was really there is not tenable, says Heisenberg. The notion of objective reality simply does not work, empirically. We have to give it up. We cannot explain why the recipe works, in terms of objective reality, because there is no objective reality there to explain it in. It cannot be reduced to such a three-dimensional way of thinking.

III. The Free French and Other Resistance Movements – Up to Von Neumann

Albert Einstein, Ayn Rand and VI Lenin all wrote passionate manifestoes objecting to the new order here, and calling for a return to the concept of objective reality.

Schrodinger was truly aghast at what had been done to his beautiful equation, and expressed concern for the damage that might be done to human sanity as a result of the new order. (One of his best students gave up physics, became a monk, and taught at Georgetown – where I have heard him give his lectures.) DeBroglie also led a center of resistance for a long time.

Einstein’s initial response had two parts:

1. The philosophy itself was objectionable. After centuries of making progress in trying to understand objective reality and how it works, why give up and drown in solipsism?

2. More positively – we COULD explain why the recipe works, and come up with a more realistic understanding, if we knew enough and tried harder…. The “psi(X)” looks a lot like Pr(X), a probability distribution, and maybe this recipe could be explained as nothing but an emergent statistical outcome of a “Classical” type of theory.

Thus for many decades, many top people worked on trying to find such a realistic explanation. De Broglie and Vigier, Bohm and Einstein, Norbert Wiener, Wigner and others, all did important work. They all achieved important insights, some useful in other areas. (Certainly engineers still use mathematics developed by Wiener and Wigner for this purpose. I tend to view Glauber’s P and Q mappings, used in quantum optics, as a kind of byproduct of Wigner’s work.) But it began to seem more and more like a hopeless quest for a Holy Grail, or like making war with a cloud.

Until Von Neumann. Many people (including me) still view Einstein’s friend and colleague, John Von Neumann, as the number one mathematician of the twentieth century – even though he did not live up to some folks’ standards for purity and chastity and other symptoms of autism. He was also perhaps one of the saner public figures of that century. (Not that Feynman was chaste in any respect. Von Neumann did not go to THAT extreme.)

Von Neumann made two really essential contributions to understanding this stuff.

First, he analyzed the whole idea of an observer formalism in a more logical way than others had before him. He asked “obvious” questions like – who observes the observers? What if there is a chain of experiments within experiments? Do cats or birds qualify as observers? Do humans? His insights were quite important, and directly related to real experiments today, but in a quick overview I’ll have to skip that part.

Second, he used a crucial mental skill of mathematicians which the larger world really needs to understand better. Mathematicians use the term “reductio ad absurdum”… but I’ll try to give a feeling for it in simpler terms. It often happens, when you want to do something really hard, that your best chance comes from trying to really rigorously prove that it’s impossible, under the broadest possible assumptions covering everything people usually try – and then USE THE LOOPHOLES, the limits of the assumptions, to figure out how to actually do it!

(And if that seems to be hard, broaden the assumptions to cover the first set of new things.)

In a classic book (which I cite in some of my published papers), Von Neumann proved that it would be impossible to exactly reproduce all the predictions of (Copenhagen) quantum theory starting from any reasonably behaved realistic model. The key problem, he said, is with the usual form of the CAUSALITY assumption. That’s what we need to work on. It’s a shame he never really had a chance to do this.

IV. Princeton’s Revenge: Many Worlds Physics, The First Really Major Reformulation of Quantum Field Theory (QFT)

Shortly after the deaths of Einstein and Von Neumann, their colleagues at Princeton published ways to solve problems which had disturbed the two of them right to the end. John Wheeler developed consistent Lagrange-Euler equations to combine gravity, charged particles, electricity and magnetism. (That’s part of Classical v2, back in Section I.) Hugh Everett III, a graduate student working under Wheeler, developed a new formulation of QFT which has become ever more popular through the years.

Everett’s idea was basically very simple. If you look back at section II.2b, the Copenhagen recipe, why not simply throw out the setup and observer formalisms, and just keep the new Schrodinger equation, psi-dot = i H psi? Instead of trying to explain the WHOLE recipe as a kind of outcome of statistics, why not bite the bullet and declare that psi is a real, true field? Why not say that the universe we actually live in is the multidimensional space of possible vectors X, which Von Neumann called “configuration space”? And then derive just the observer part as a statistical outcome of that new theory of the dynamics of the universe. That derivation was the main part of his PhD thesis. His thesis became widely available in a book edited by DeWitt on Many Worlds Physics, from Princeton University Press.

Everett argued that we can definitely go back to the idea of objective reality – but only at a price. The price is that we have to accept the idea that the cosmos we live in is much larger than the small three-dimensional slice of it that we see every day.

For many years, most people assumed that the difference between Everett’s theory and the Copenhagen theory was just too small to measure. DeWitt showed that there is SOME difference – but if it can’t be measured, it’s really just a matter of interpretation. “If they both lead to the same predictions of nature, who cares?”

Many philosophers have questioned whether Everett really proved what he claimed to have proved, and tried to do better. In my view, none of them really proved much.

Years later, David Deutsch of Oxford showed how the way of thinking in the many-worlds model could actually be used as a way to develop new technology. Parallel or multicore processing lets us compute things that old-style serial computers could not do, plodding along one instruction at a time in a single thread of computation. Why not do still better by mobilizing large numbers of parallel universes within the larger cosmos, to perform a computation? Deutsch was the real father of the modern approach to quantum computing, which derived from his papers exploiting this approach.
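Deutsch's earliest toy problem gives the flavor of this: decide whether a one-bit function f is constant or balanced using only a single quantum query to f, where any classical computer needs two evaluations. A minimal state-vector simulation, as a sketch (the qubit ordering, matrix encoding, and helper names here are my own, not Deutsch's notation):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I = np.eye(2)

def oracle(f):
    """U_f |x, y> = |x, y XOR f(x)>, as a 4x4 matrix (basis index 2x + y)."""
    U = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1
    return U

def deutsch(f):
    """Classify f as 'constant' or 'balanced' with ONE oracle call."""
    state = np.kron([1, 0], [0, 1]).astype(float)  # |0>|1>
    state = np.kron(H, H) @ state                   # Hadamard on both qubits
    state = oracle(f) @ state                       # the single query
    state = np.kron(H, I) @ state                   # Hadamard on first qubit
    p0 = state[0] ** 2 + state[1] ** 2              # P(first qubit reads 0)
    return 'constant' if p0 > 0.5 else 'balanced'

assert deutsch(lambda x: 0) == 'constant'
assert deutsch(lambda x: 1) == 'constant'
assert deutsch(lambda x: x) == 'balanced'
assert deutsch(lambda x: 1 - x) == 'balanced'
```

In the many-worlds way of talking, the Hadamard gates put the query into a superposition that "evaluates f in both branches at once," and interference then makes the answer readable from a single measurement.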

In another strand of work – some people re-examined the older ideas of DeBroglie and Bohm, and appreciated that they could not work without adding functions of more dimensions. That provided a backdoor way to reinvent the many-worlds approach. So far as I know, Everett’s version is the simplest and most viable, but there is recent work by Hiley which I have yet to study, in the same general category.

V. Bell’s Book and Candle: Back to an Experimental Approach

Many people viewed Von Neumann’s work as a proof that quantum mechanics never could be explained the way Einstein wanted to. But diehard realists pointed to a certain gap in the logic here. Von Neumann proved that a traditional classical model could never replicate ALL the predictions of quantum theory, but no one has ever tested ALL the predictions of quantum theory. They asked: can we come up with a SPECIFIC experiment, where we can prove that quantum mechanics predicts one thing but all traditional classical theories would have to predict something else?

This challenge was finally met by Clauser, Holt, Shimony and Horne (CHSH), who also performed the first experiments to follow up on this. In the spirit of modern physics, their theorem is usually called “Bell’s Theorem” and the experiments were commonly termed “Aspect” experiments, after Alain Aspect. In his seminal book, Speakable and Unspeakable in Quantum Mechanics, J.S. Bell cites the original papers of Clauser, Holt, Shimony and Horne. He provides some important new insights, but also puts his own spin on this subject. I was lucky enough to be at the same place as Holt as we were both getting our PhDs, and I saw the original papers long before I saw Bell’s book.

The CHSH theorem proper defines a class of experiments. If quantum mechanics correctly predicts these experiments, one can rule out ALL theories of the universe which are “local, causal, hidden variable theories.” Those are their words.

If you go back to the original source, their papers, you will see that that’s what’s actually proven, not any of the garbled versions that sometimes appear in popularized accounts. The theorem and the experiments only ruled out theories which have ALL THREE properties. Thus to understand what’s really going on here, it’s crucial to know what these three properties are.

By “hidden variable,” they basically just mean a theory which assumes SOME kind of objective reality. The Copenhagen theory doesn’t, so it’s automatically OK by this theorem. But you can’t have a theory of physics which fits these experiments and talks about objective reality unless it violates one of the other properties – “locality” or “causality” or both.

By locality, they mean “no action at a distance” in ordinary three-dimensional space. The many-worlds people and their allies say that their theories are still allowed, because those theories do permit action at a distance. There are many, many papers on “nonlocality” as an alternative to Copenhagen.

Back in 1973, I pointed out that the orthodox formulation of “causality” can also be revisited. I was not aware of Von Neumann’s discussion of the same point, but in any case I took the point further.

Years ago, there was some reasonable debate about how these Bell’s Theorem experiments actually turned out. In some cases (like Holt’s original experiment), it’s not entirely clear to me that the results are consistent with Copenhagen. But there are certainly some modern set-ups, like the experiments of Yanhua Shih, where it is very clear that local, causal hidden variable theories ARE ruled out. It is good enough to have ONE decisive, replicable experiment to rule them out.

Therefore – to come up with a theory of physics which fits experiment, and is based on an idea of objective reality, the theory MUST violate locality or conventional time-forwards causality or both. Those are the only possibilities.
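The numbers behind that conclusion can be checked directly for the spin-singlet version of the experiment: quantum mechanics predicts pair correlations E(a,b) = -cos(a - b), and for the standard choice of measurement angles the CHSH combination S reaches magnitude 2*sqrt(2), beyond the bound of 2 that every local, causal hidden-variable theory must obey. A small sketch (angle choices follow the standard textbook setup):

```python
import math

def E(a, b):
    """Quantum prediction for the spin correlation when a singlet
    pair is measured along directions at angles a and b (radians)."""
    return -math.cos(a - b)

# Standard CHSH measurement settings.
a1, a2 = 0.0, math.pi / 2               # Alice's two settings
b1, b2 = math.pi / 4, 3 * math.pi / 4   # Bob's two settings

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)

# Every local, causal hidden-variable theory obeys |S| <= 2;
# the quantum prediction reaches 2*sqrt(2), about 2.828.
assert abs(S) > 2
assert abs(abs(S) - 2 * math.sqrt(2)) < 1e-9
```

When an experiment like Shih's measures |S| reliably above 2, all three properties cannot hold at once, which is exactly the fork in the road described here.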

In particular, to resurrect the Einstein program, it is necessary to violate traditional ideas about time-forwards causality.

VI. Backwards-Time Physics: The Latest Chapter

The latest installment of this story, in 2008, is spelled out in my paper in the International Journal of Theoretical Physics. There are several key points it makes and argues in detail:

1. The world of optics and electronics, including quantum computing, has already performed decisive experiments which rule out orthodox Copenhagen physics. It is not just a matter of interpretation.

2. In the many worlds theory, the new Schrodinger equation is basically symmetric with respect to time. We cannot correctly deduce an observer formalism which is grossly asymmetric with respect to time, if we start from assumptions which are time-symmetric! Thus we must logically give up Everett’s attempt to do this. The only way to explain the practical success of the observer formalism, in most cases, is to invoke BOUNDARY conditions – to invoke the forwards flow of free energy used in all of our experiments so far.
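The time-symmetry claim can be made explicit under the dynamical law psi-dot = i H psi used above, assuming (a simplifying assumption of mine for this sketch) a real Hamiltonian H:

```latex
% If \dot{\psi} = iH\psi with H = H^{*}, define the conjugated,
% time-reversed wave function
\varphi(t) := \psi^{*}(-t).
% Then
\dot{\varphi}(t) = -\dot{\psi}^{*}(-t) = iH^{*}\psi^{*}(-t) = iH\,\varphi(t),
% so \varphi obeys the SAME equation as \psi: the dynamics alone cannot
% single out a preferred direction of time.
```

Any gross time-asymmetry, such as the observer formalism, therefore has to come from somewhere else, and boundary conditions are the natural candidate.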

3. If we adopt the view that the laws of physics are completely symmetric with respect to time, except for these boundary conditions, it is easy to reconcile the CHSH experiment with the Einstein/Lagrange program. In fact, the highly precise experiments performed by Shih were actually designed based on ideas from Klyshko, who used a backwards-time approach to optics, fully accounting for entanglement effects in three spatial dimensions.

4. None of this proves or disproves the possibility of building a bidirectional power supply, as described in the patent application mentioned above. But if we do build a system which, according to Copenhagen or time-forwards physics, should not be able to generate electricity from ambient infrared heat radiation, then my new circuit, the quantum separator (QS), would provide a very graphic and decisive experiment to decide between classical time-forward causality and backwards-time physics – as well as a new energy source. The logic in this paper makes it very clear how that experiment should be expected to come out, but a strong experiment would still be very helpful in making it clear where we now are today.

5. The paper did not really take sides on the issue of many worlds versus the Einstein picture. It mainly argues that either version of backwards time physics dominates over older versions of quantum theory.