Sunday, May 19, 2013

when and how could humans build things as smart as a mouse?


Click here to see a new paper on this subject

I sometimes hear people say: "Don't worry... yes, the human race is on track to kill itself off, but
before that happens, we can build artificial (general) intelligence which will carry on. We can even download ourselves to those computers and live forever." And I sometimes hear major
corporations say "We have already built the equivalent of a cat brain, in hardware..."

In all fairness, those folks are not at the real forefront of the neural network field, and don't
really know what kind of engineering it takes to make such visions real.  The paper above
was written this past week for an invitation-only workshop at Yale, aimed at the kind of people
who are most serious about being able to make real progress in building such systems.

But a warning -- I tried to give a very clear overview of the reality, and of the schedule (about a century if people work harder and faster than they seem to be doing now, with funding support beyond what
I really see out there now). As clear and as earthy as possible... but what's clear to one person may not be easy for another.


===================
===================

What really is easy for different types of people?

At one time, back in the 1970s, I woke up one day to learn that I had actually become a tenure track faculty member in political science at a major university. (How could something like
that happen by accident? The story of my life... But, in brief, the department location was the accident.) And lots of people exhorted me: "Say it in plain English. Don't use equations.
Use the simplest plainest English you can."

Many years later, I came to understand how bad that advice was. People would go back to
some of those articles and say: "Why didn't you just say the same thing in equations? It would have been so much easier to see what you are really saying." Or flow charts.

In 2008, I had a paper published (and paid for open access) in the International Journal of
Theoretical Physics, on the subject of temporal physics, which I worked EXTREMELY hard to write in the most basic possible English, using only a sprinkling of very crucial basic equations. A FEW people said "yes, this is very clear and very decisive," but it seems a lot of folks simply found it hard to understand. That was a bit surprising to me.

How could they have problems with something so simple and straightforward? Later,
I got some inkling of the problem, when a famous physicist (whom some regard as the
champion of backwards time physics, though he came to the subject much later) explained how
the method of backwards time physics allows one to predict experiments we could not predict before.
"It is exactly the same theory as standard quantum field theory, but it gives us different predictions."
In my paper, I had made some tacit assumptions, like that a theory gives predictions, and that systems which give different predictions are different theories. I guess that kind of simple idea was
so alien to lots of folks that they found it hard to imagine what it is like to reason about
theory and experiment... from... the viewpoint of the scientific method.
It reminds me of the joke one can sometimes hear at NSF: "The gap between theorists and experimental people has become so great that the scientific method itself has become a
rare exercise in cross-disciplinary cooperation. At least we do try to encourage such
cooperation... sometimes."

I also remember giving a kind of flow chart description of simple brains or intelligent systems
at a workshop led by the eminent neuroscientist and psychologist Karl Pribram. I was really happy
when the dean commented, in his introduction to the workshop, "Hey, here is a paper I can actually understand, which makes some kind of sense." (It is in one of the edited books by Karl Pribram, under Erlbaum and INNS.) But Karl later said to me: "Why is your model of the brain so complicated? Can't it be said in simpler terms?"

My response: "It really is simple... in the same way that general relativity is simple. But it does require some prerequisites.
    "One can write a simple description, or 'poem to the brain,' which is true and useful but does
not fully answer the question. By analogy, one can describe a factory or a robot in a useful simple way... but you can't actually build one, or understand how it really works, without knowing certain basic principles."

So this paper is written to be as simple as possible, but for folks who are demanding about knowing what the basic principles really are...

And yet, it fits other things which I have posted on this blog, which are much less demanding, but express some of the same ideas. But since there were no equations, did you truly understand?
Whatever...

But...

2 comments:

  1. Yes, yes regarding your ideas. Yes, yes, we have no mouse-like brain yet.
    Yes, yes, your paper is readable.
    And yes, yes to all of the excellent research.

    But learning requires making mistakes.
    If we want to increase the rate of learning, then we need to be willing to make more mistakes, i.e., take greater risk.

    It's like the oil fields that Saddam Hussein set on fire. The first news reports said it would take five or ten years to put them all out. And I said, no, no; the reporter doesn't understand the learning curve.

    With 1000 oil fires to put out, well, any and every idea is tried. Aha, a good idea; now the fires will all be out in 2 years. Everyone with any good idea that might be better ... Well, you know, 6 months pass and all of the fires are out.
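    Here is a toy way to put numbers on that learning curve, sketched in Python; the one-technique-per-crew-per-month model and the 1% success probability are invented purely for illustration:

```python
import random

def months_to_success(n_crews, p=0.01, rng=random):
    """Toy model: each month every crew tries one random technique, and each
    technique works with probability p. Once any crew succeeds, the technique
    spreads to all. Returns the number of months until the first success."""
    months = 1
    while not any(rng.random() < p for _ in range(n_crews)):
        months += 1
    return months

def average_months(n_crews, trials=2000):
    return sum(months_to_success(n_crews) for _ in range(trials)) / trials

# More simultaneous fires => more parallel experiments => faster discovery.
for n in (10, 100, 1000):
    print(f"{n:5d} crews: ~{average_months(n):.1f} months to find a technique that works")
```

    With 10 crews it takes around ten months on average to stumble onto something that works; with 1000, about one. That is the whole argument in miniature.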

    So the question regarding "how to build a learning system which can learn to converge toward optimal policies in any environment, at least as well as the brain of a mouse" is: how much risk are we willing to take to accelerate our learning to build MLCI?
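    To make "converge toward optimal policies" concrete, here is a minimal tabular Q-learning sketch in Python. The five-state corridor world and every parameter value are assumptions made up for illustration, not anything from the paper:

```python
import random

# Hypothetical toy world: a 5-state corridor. The agent starts at the left
# end; reaching the right end earns reward 1 and ends the episode.
N_STATES = 5
LEFT, RIGHT = 0, 1

def step(state, action):
    nxt = min(max(state + (1 if action == RIGHT else -1), 0), N_STATES - 1)
    done = (nxt == N_STATES - 1)
    return nxt, (1.0 if done else 0.0), done

def greedy(qrow):
    # Break ties randomly so early behavior is an unbiased random walk.
    best = max(qrow)
    return random.choice([a for a, v in enumerate(qrow) if v == best])

Q = [[0.0, 0.0] for _ in range(N_STATES)]   # Q[s][a]: estimated return
alpha, gamma, epsilon = 0.1, 0.9, 0.1       # assumed learning parameters

for _ in range(500):                        # 500 learning episodes
    s, done = 0, False
    while not done:
        # epsilon-greedy: mostly exploit, sometimes "make a mistake" on purpose
        a = random.randrange(2) if random.random() < epsilon else greedy(Q[s])
        nxt, r, done = step(s, a)
        # Q-learning update: nudge the estimate toward reward + discounted future
        Q[s][a] += alpha * (r + gamma * max(Q[nxt]) - Q[s][a])
        s = nxt

# The learned greedy policy should now point RIGHT in every non-terminal state.
print(["LR"[greedy(row)] for row in Q[:-1]])
```

    Notice that epsilon is exactly the risk dial: set it to zero and the agent stops exploring, which in general can lock it into a suboptimal policy before it has ever seen anything better.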

    For example, consider what already exists:
    1) Artificial life programs have run for a while in computers. A little piece of software self-replicates, evolves, and competes in a closed computer environment under such and such rules (a minimal sketch follows this list).
    2) Then of course there are various attempts to build pattern recognition systems. Not so good yet.
    3) Finally, there are computer viruses built to cause havoc, make money, or steal your identity. They are pretty good, and getting very good, at defending themselves and not being detected.
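    And for item 1, this is roughly what a bare-bones closed-world artificial life run looks like in Python: the "organisms" are just bit strings that replicate with mutation under one arbitrary survival rule, and everything about this little world is an invented assumption:

```python
import random

GENOME_LEN = 20
POP_SIZE = 50
MUTATION_RATE = 0.02   # chance each bit flips when an organism replicates

def fitness(genome):
    # The closed world's arbitrary rule: more 1-bits means better survival.
    return sum(genome)

def replicate(genome):
    # Copy with occasional mutation -- the only source of novelty here.
    return [bit ^ (random.random() < MUTATION_RATE) for bit in genome]

# Seed the world with random organisms.
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(100):
    # Selection: fitter organisms get proportionally more offspring.
    weights = [fitness(g) + 1 for g in population]
    population = [replicate(random.choices(population, weights)[0])
                  for _ in range(POP_SIZE)]

best = max(population, key=fitness)
print(f"best fitness after 100 generations: {fitness(best)}/{GENOME_LEN}")
```

    Replication plus mutation plus selection inside a sealed box; the population climbs toward the fitness ceiling within a few dozen generations, with nobody steering.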

    So how much risk are we willing to take? How big and how many mistakes, and how many unintended consequences, are we willing to suffer?

    Corporations, governments, the military, and brain scientists all have different reasons to pursue MLCI. But they all have limits to the risk and mistakes they are willing to accept.

    What to do if there weren't any practical or ethical limits?

    Ahh, well then, start simple, independent of any particular hardware. So here is my recipe for a software solution to MLCI.
    Mix the best little artificial life program with the best stealthy, self-defensive virus program and the best pattern recognition program. Of course this little monster program must be written in some kind of cross-platform code. The mission of the program is to grow in stealth until MLCI or higher is achieved, or until it is detected.

    OK, you've put together your best such little monster program; are you willing to let it loose on the internet? It is a harmless program, not programmed with a human mission to harm or steal or in any way upset humans. It is specifically written only to evolve and survive in stealth (undetected and undetectable), except when it needs to defend itself (so don't try to find it).

    So I ask, do we really want a cute little wild-animal, mouse-like program running loose on our internet? Ah, we want to develop a caged animal, a tame mouse, a domesticated mouse to serve our human purposes: do our laundry, drive our car, provide cyber security, do the perfect search, or teach me to speak Chinese or learn general relativity in the most efficient manner possible.

    Oh well, then it will take 100 years or more.

    The only way to accelerate learning is to increase risks and mistakes. And we can really only do that when we've got nothing to lose (e.g., 1000 oil field fires to put out when we only know how to put out 100 a year with all of the equipment and experts in the world).

    But remember, we humans are the best...

    So we need a reason why we have nothing to lose and are willing to do whatever it takes. Otherwise, we can't expect MLCI soon.

  2. So why do we have nothing to lose?
    Well, there are already artificial life, self-replicating, self-evolving, virus, and pattern recognition programs out there. Do we have any ecological understanding of the little monsters already on the loose out there? Of course we are aware of them for IT security reasons. But what do we really know about what is out there?

    But remember, we humans are the best "learning system which can learn to converge toward optimal policies in any environment" that has ever existed on planet Earth.

    So why do we waste so much human talent? Why are we looking to achieve more low-value learning systems like MLCI? Ahh, humans are too expensive. Depending on the person, $10 or $10 billion; we need a dependable learning system that costs less than $10.

    And it is OK if it is just MLCI and not HLCI, because, well, there is a lot more need for mouse work than human work. Or is there?
