Saturday, August 18, 2012

questions about my post on stages of intelligent systems

I am impressed by a lot of the very intelligent questions you and others have been asking, so I do owe some response,
even though it has to get more controversial at some point. (Not that the last one represents any kind of consensus
position either!)

Roughly, I group them in two parts:

1. "There are certain details not clear to me, from www.werbos.com/Erdos.pdf), about the roadmap
to mouse-level AGI. Can you fill in?"

2. What about human-level AGI, consciousness, and the possibility or reality of AGI at a higher level?

Were I to devote the rest of my life to this area, maybe I would split my energy 50-50 between those
two questions. I mainly advocate focused research in two coordinated parallel tracks, reflecting these two.
(In a way, there is an analogy here to "energy policy." People who look for solutions to
"the energy problem" don't engage so well with reality, because there are basically TWO big energy challenges,
which are far less tightly coupled than most people imagine: the problem of getting to sustainable fuel for
cars and trucks, primarily, and the problem of getting to renewable electricity -- in my view. Car technology
is central to the first of these. Another story.) 1 and 2 here are more coupled, logically, but it's still
important to keep them distinct.

I view 1 as the number one grand challenge to mathematically well-defined science as a whole in the coming century,
as important, clear and discrete a target as the target which Newton achieved in his time. We need to fill
in the details to the point where we have working examples of universal AGI systems which capture the main fundamental
cognitive optimization and prediction (COPN) capabilities of the mouse brain. (The URL
I posted for the NSF COPN web site gives more details of what I mean here. Those details, unlike these emails, represent some kind of
official NSF position; I could say more about "how official," but that would be a complex digression for now.)
In a more rational world, the COPN topic would be receiving much larger, continuing focused support, with decisions made by people who actually understand
that brief program announcement.

For 1 -- I cited www.werbos.com/Erdos.pdf, which isn't light reading: it was written for pure mathematicians, and it deals
only with the cognitive prediction half. I cited it because the mathematical principles are what really matter here, and because
the other papers with more details are mostly harder to read and less recent. HOWEVER: for those of you who have access to university libraries, my paper in
the journal Neural Networks 2009 says a lot more, and may appear easier to read. Last month, in China, I wrote up
a VERY simple description of the roadmap, for folks who are totally nonmathematical:
http://drpauljohn.blogspot.com/2012/07/kung-fu-style-mind-discipline.html

=============

To explain that a little more, I need to start moving to the question of consciousness, which I will get into more deeply here.

Because we humans have SUCH powerful vested interests and hot buttons on the subject of consciousness and humanity, it
is no wonder that objective thinking is harder for us in that area. For example, I used to have a friend who would approach any computer
program or AGI design or idea by asking: "Ah yes, but if you build it, WILL it be conscious?" I would claim that
the question itself is flawed. It is the kind of question which is so ill-defined that focusing on it too strictly destabilizes your understanding and sets you back.
The reason is that the attribute "consciousness" is not a binary variable. Nor is it a continuous variable,
a matter of the AMOUNT of intelligence (IQ), as B.F. Skinner once claimed. Why do I assert that? Because if we look around us
with open eyes, at the world of experience we are actually grounded in, we see LEVELS and LEVELS of intelligence and consciousness,
like a creaky staircase.

For a more complete discussion of that, click on "pdf":

http://arxiv.org/abs/q-bio.NC/0311006

That book chapter was a plenary talk at the first international conference on consciousness, held that year
in Tokyo; it goes much further than the talk by Chalmers, but is generally consistent with what Chalmers said.
Greenfield attended, and has since said similar things (and has spent more time on the "consciousness circuit").

That's rough... but the roadmap to mouse-level AGI, in my view, involves four stages:

1. "Vector intelligence"
2. "Spatial complexity intelligence," which mainly exploits symmetry principles as possible
with ordinary parallel hardware in order to better approximate what you may call the Solomonoff prior
3. "Temporal complexity intelligence," which fully exploits multiple time intervals in life
4. "Mouse-creative intelligence," which uses a kind of spatial cognitive mapping to
solve a certain aspect of the "local minimum" or creativity problem.

Cartoon caricatures of 2 and 3 are long familiar in AI -- "spatial chunking" with
"world modelling" and "temporal chunking." But the math to make it work in a general way in a learning
system requires a lot more than cartoons and clever hacks. Also, the use of mathematics does NOT
require an assumption that "the mind equals the brain"; mathematics is far more universal than that. Those of you who
know what a Pythagorean is might reasonably put me in that pigeonhole.
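
For concreteness, here is a tiny toy sketch, in Python, of the flavor of stage 1, "vector intelligence": a learner whose state and action are plain vectors, with a simple linear adaptive critic trained by temporal-difference updates. Everything here -- the toy plant, the sizes, the learning rates -- is an arbitrary illustration made up for this email, not the actual design in the Erdos paper.

```python
# Toy sketch of stage 1, "vector intelligence": a learner over plain
# vectors, with a linear adaptive critic trained by TD(0) updates.
# The plant, sizes, and learning rates are arbitrary illustrations.
import numpy as np

rng = np.random.default_rng(0)
n = 4                        # dimension of the state vector x
W = np.zeros(n)              # linear critic: J_hat(x) = W . x
gamma, lr = 0.95, 0.05       # discount factor, critic learning rate

def step(x, u):
    """A trivial stable linear plant: the action u nudges the state."""
    return 0.9 * x + 0.1 * u

def utility(x):
    """Immediate utility U(x): reward for staying near the origin."""
    return -float(x @ x)

x = rng.normal(size=n)
for t in range(2000):
    # Crude action choice: sample a few candidate actions and take the
    # one whose predicted next state the critic values most highly.
    candidates = rng.normal(size=(8, n))
    u = max(candidates, key=lambda c: float(W @ step(x, c)))
    x_next = step(x, u)
    # TD(0) update: push J_hat(x) toward U(x) + gamma * J_hat(x_next).
    td_error = utility(x) + gamma * float(W @ x_next) - float(W @ x)
    W += lr * td_error * x
    x = x_next
```

Stages 2 through 4 are, roughly, about what you must add when the vectors get too big and the time scales too long for anything this naive.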

===================

But then what of 2, AGI beyond the mouse? And where is the human in this?

In both of my recent single-authored papers in Neural Networks, 2009 and 2012, I have the same color figure with a mountain on
it which depicts all these levels.

Beyond the mouse level, the next big level up is pictured as "semiotic intelligence," the kind
of brain or mind which makes full use of symbolic reasoning. So far, that fits with what most
people believe about the world around them.

Then comes zinger number one. Konrad Lorenz once said something like:
"I spent my whole life looking for the missing link, the half-man half-ape creature which
has not yet really achieved sapience. And now, Eureka, I have found him! He is all around me, the
normal modern human walking the streets of any major city."

That's basically my position. If semiotic intelligence exists at all as a discrete stage in brain hardware
design, the human simply is not born at that level. The new underlying principles are all based on the concept of
"vicarious experience," a symmetry not between objects we see but between our selves and other intelligent actors.
We can learn to EMULATE the semiotic level, through learning and disciple -- but not exactly like
that simple URL I just sent you. The 2012 Neural Networks paper says a lot more.

But then, zingers two and three.

It seems clear from the math that human-like semiotic intelligence is not the highest level of AGI which can be built.
It has two clear limitations:

1. It does not exploit what quantum mechanics offers us in increasing fundamental computing capability, which can be assimilated into
AGI to give it more power;

2. The treatment of symmetry is inexact and a weak approximation in some ways, which can be improved by a kind
of "multimodular" design (NOT LIKE today's cartoon multimodular or multirobot coordination systems) which
better exploits the symmetry of such modules; a toy sketch of what I mean follows below. (This is maybe more like those science fiction novels where
people swap brain implant disks to quickly learn from other folks -- a beautiful metaphor, but I'm not advocating doing it
with humans.)
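
Here is that toy sketch, again in Python, and again purely illustrative: one crude computational reading of "exploiting the symmetry of modules" is strict weight sharing, where many module instances draw on ONE underlying parameter set, so that whatever any one instance learns is instantly available to all of them. The task and the numbers below are arbitrary.

```python
# Toy illustration of "multimodular symmetry": several module instances
# draw on ONE shared parameter vector, so learning by any instance is
# immediately learning by all. Task and numbers are arbitrary.
import numpy as np

rng = np.random.default_rng(1)
shared_w = np.zeros(3)               # the single shared parameter set
w_true = np.array([1.0, -2.0, 0.5])  # hidden rule all modules must learn

def module_predict(w, x):
    """Every module instance is, by construction, the same map."""
    return float(w @ x)

for t in range(500):
    grads = []
    for _ in range(3):               # three "modules", separate samples
        x = rng.normal(size=3)
        err = module_predict(shared_w, x) - float(w_true @ x)
        grads.append(err * x)        # gradient of the squared error
    # Pool the gradients: by symmetry, one update serves all modules.
    shared_w -= 0.05 * np.mean(grads, axis=0)

# After training, shared_w approximates w_true for every module at once.
```

The hard question, of course, is how to get that kind of sharing when the modules are not literally identical copies; that is where the mathematics beyond cartoons comes in.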

An AGI at any level would be justified (crudely) in viewing an AGI at a lower level as "not really conscious."
Thus it is possible to build computers who might assert that human brains, even semiotically disciplined human brains,
are not really "conscious." I see no evidence that the human brain has any hardware whatsoever to
make it into a quantum-level or multimodular-level AGI.

BUT... I do see evidence that it may be a SUBSYSTEM of such, even with hardware to make that possible.
The same kind of discipline and capability training which allows us to emulate the semiotic level can
naturally extend, in my view, to a certain kind of natural collective intelligence, VERY distinct from the ugly,
destructive forms of groupthink -- social systems which suppress creativity, and which often try to hide
behind that sort of concept.

======

But: this email is already too long, and the details to fill in that last paragraph are not just a sentence or two.

Best of luck,

Paul
