Wednesday, May 10, 2017

Reply to Vedanta Society on machine consciousness, possibility and threat

As part of a long discussion, one of the people on the Vedanta discussion list posted: 

The problem with his [Turing] test is that a machine could one day pass it not by becoming more clever, but because humans could regress, due to their growing reliance on it. A bit like a taxi driver who can no longer read a map, due to his reliance on the GPS system ...


This is a very real threat to us today, no longer just a hypothetical possibility. I like the term "Artificial Stupidity" (AS). True strategic awareness would lead us to be very serious BOTH about the dangers of AI, AND the dangers of AS.

The painful tradeoff between AS and AI first hit me back in high school (1962-1964), when I asked: "What happens when we rely more on data -- statistics, analytics or new predictive systems -- to decide what to expect of a student, or of a possible criminal detected by the police? A less accurate system could end up being like an automated version of racial stereotypes, with all the horrible dysfunctional implications that has. A more accurate system might do less of that, but do more bending to the specific goals of the folks who own the system, with risks of things like extreme political abuse or enslaving people to a requirement to focus only on money." But of course, human learners performing the same tasks are subject to the same kinds of risks. The problem already existed, before there were machines.

The next big stimulus to my thinking on this subject came in the year 2000, when I had to skip the intriguing last day of a workshop in Cambridge organized by Brian Josephson (a day I still regret missing), in order to show up in time at an NSF-NASA workshop I had organized, to discuss how machine learning and robotics might be applied to a testbed challenge, the challenge of creating space solar power (SSP) as a useful energy source for the world. (Some of you may know that Abdul Kalam was a great champion of SSP. I was grateful to discuss this with him over dinner a few years ago. Most of the talks at the workshop are posted somewhere at www.werbos.com/space.htm.)

A key issue with SSP was how to assemble a large structure way up at geosynchronous orbit (about 36,000 kilometers above the earth), where radiation hazards and distance make it even more difficult to support a human workforce than at low orbit. With the full support of NSF, we were able to bring leaders in robotics from all over the world to discuss the challenge. Many people were very excited by the hope of reducing cost by reducing the mass of what had to be lifted up from earth, for example by designing a 50 ton "seed crystal" containing robots to be sent to the moon, to reproduce themselves using materials on the moon, and supply lots of material for use all over earth orbit.

As we evaluated that possibility, many images came into focus. Instead of designing superintelligent terminator-type robots, the designers wanted the simplest possible robots capable of exponential population growth on the moon. It began to seem more like an artificial cockroach, and less like an artificial human.
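A back-of-envelope sketch shows why even the simplest self-replicating seed is so powerful, and so hard to contain: if the population doubles each replication cycle, mass grows as seed_mass * 2^n. The million-ton target below is an illustrative assumption of mine, not a figure from the workshop.

```python
def cycles_to_reach(seed_tons: float, target_tons: float) -> int:
    """Smallest number of doubling cycles for total robot mass to reach the target.

    Assumes the whole population doubles once per cycle -- an idealization,
    ignoring breakage, resource limits, and transport.
    """
    n = 0
    mass = seed_tons
    while mass < target_tons:
        mass *= 2
        n += 1
    return n

# From a 50-ton seed to a million tons of lunar-made hardware:
print(cycles_to_reach(50, 1_000_000))  # -> 15 doublings
```

Fifteen doublings is the same arithmetic that makes biological cockroaches hard to eradicate, which is exactly why the image stuck.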

Can you imagine what would happen if humans caused an entire world (the moon) to be infested by exponential population growth of metal artificial cockroaches, intent on reproduction? And what of the risk that a few might ride along somehow, as cockroaches do, and get to earth? And what of the risk that natural selection on such a large population would make them less and less what we intended them to be? People came up with entertaining but unrealistic ideas for how to address these problems. It became more and more clear that these are very serious kinds of challenges. In addition, the technology for simple, less intelligent machines making machines already exists in prototype on earth, especially in Japan, and DOD has experimented with artificial cockroaches already in its efforts to "bug the enemy." (Should I name names? Perhaps not today.) Thus AS is indeed an issue. Human-level intelligence is not needed for a risk to be real.

A third dramatic warning came to me in 2013-2014, when a certain Congressman turned the screws on his idea of remaking NSF, so that instead of serving universities and basic science it would serve certain large stakeholder corporations he liked, more or less like some other bureaucracies which had been less pure for a long, long time. There were four important meetings (none governed by the privacy act) where spokesmen for IBM outlined their new "Watson" reorganization and plan for the future. (There was also a meeting representing priorities of the oil industry of the Middle East.)

At that time, their logic was as follows: "Soon every concrete decision in the world -- from pacemakers to vehicles to generators to factories to security systems to cities -- will be executed by embedded chips or small computers, with internet connection. This is the Internet of Things (IOT). At present, the IOT is a chaotic and inefficient system, as things are not designed to work together as a whole system, or as an intelligent system. Our business plan is to fix this. We will design and build a new control system for the global IOT, which will be highly intelligent and efficient, and will of course make lots of money. The famous Watson system will take over everything." "Yes," said the current local boss (ironically a Brahmin caste guy, a person more caste than Brahmin -- all groups have their "black sheep"), "I want you all to give priority to helping make this real."

At one of the four meetings, a colleague who had spent his life studying rational decisions and industry networks, asked:
"What do you mean 'efficient'? Efficiency is a meaningless concept without a metric, without a measure of the values which the system is supposed to serve. Aggregating values is a difficult technical challenge. How do you plan to address it?"

The IBM guy looked very puzzled by this question, and took some time before he replied: "VALUES? Values, shmalues. Don't worry, we have excellent software engineers, and they know very well how to build a perfectly efficient system." After a bit more silence and reflection he said, "But yes, as I think over what some of our new systems might do, I suppose there could be some hotheads out there who might say we are crushing their (ugh) VALUES. But don't worry, we are building some very effective new security systems to take care of those kinds of hotheads." (In a later meeting, they described how they have been building about twenty large physically secure server farms around the world, two planned to run the US government, emphasizing physical security and armed guards and such.)

Then a woman spoke: "This is quite a vivid picture of the new world you are building. But I have a problem trying to envision where the PEOPLE are in this Internet of Things?" Then the local boss smiled a big alligator smile, announcing how clever he was. "There is a very easy answer to that question. The answer is simply to change people into things. What do you think the purpose is of all this work on Brain-Computer Interface we are now pushing you to prioritize?" And they are further along in that work than most people would expect.
On my last day in that building (guess why I retired?), he said: "A lot of people have complained about what I have done for Lamar Smith, but (smile) that IS where the money comes from, and we have to be realistic." I believe I remember a small press piece where Smith also bragged about how he was directing the FBI people investigating Hillary Clinton, and I suppose that Comey was a lot like our own local boss in bending to certain kinds of pressures (which may have had lots of support from beyond Smith himself). There were also clear links to folks managing computer records.

So yes, there are serious risks here, and my six simple slides at www.werbos.com/IT_big_picture.pdf include links to more of the technical and political problems in play.

IMPORTANT CAVEAT: I have since heard that as word got out, IBM reconsidered its strategy. Some of the real champions of earth as a robot world moved to Accenture (a web site I studied closely just yesterday). I have met great and balanced people from IBM, and I hope that re-reorganization will empower them more to do more constructive things -- but it is not at all a good time to become complacent.
We are at a real crossroads in determining the future of IT, and in making sure that the deepest spiritual needs of humans are not trampled on. 

==========================

But none of this says whether AS or AI systems can actually be "conscious." I still remember a guy from the UK who would look at a piece of computer hardware or software, and ask "But is it conscious or is it not?" I regard that as a preposterous question.
At https://arxiv.org/abs/q-bio/0311006, I discuss how "consciousness" has many legitimate definitions, ALL of which are a matter of level or degree, not a "yes/no" situation. I do not advocate building computers which are more conscious than human brains are, but it certainly looks possible to me. I do hope that groups like the Vedanta society, higher yoga and their cousins can help humans manifest the higher level of consciousness which they are capable of attaining -- as is ever more important even to our survival as a species.

Best of luck,


    Paul   

=====================

I probably should have added... if Turing were reincarnated and alive today, I would guess that she would not be relying on that famous old Turing test in words... but rather on more mathematical metrics of intelligence or consciousness related to the concept of the Turing machine, extended by various complexity measures and priors,
not unlike the ideas of Immanuel Kant.
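To make "complexity measures and priors" a little more concrete: Kolmogorov complexity itself is uncomputable, but a standard computable stand-in (following Li and Vitanyi) is the length of a compressed description, which also yields the Normalized Compression Distance for comparing two behaviors. This is my own illustrative sketch, not anything from the discussion above.

```python
import os
import zlib

def c(x: bytes) -> int:
    """Compressed length: a crude computable upper bound on Kolmogorov complexity."""
    return len(zlib.compress(x, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance: near 0 for similar strings, near 1 for unrelated."""
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

structured = b"ab" * 200       # highly regular: low complexity
random_ish = os.urandom(400)   # essentially incompressible: high complexity

assert c(structured) < c(random_ish)
```

A Solomonoff-style prior weights hypotheses by roughly 2**(-K(x)), so a simplicity prior of this kind is what "extending the Turing machine with priors" amounts to in practice.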

And: to understand how bad AS could be, try to imagine certain voicemail systems you have experienced making all the decisions on earth.

And: I deeply respect some of the people I have met from the Vedanta Society, but I am not a member myself. As a Quaker Universalist... I try to learn what I can from the whole of the human experience, none of which shows perfect knowledge but some of which is more constructive and enlightened than others. I see more variation in the degree of enlightenment WITHIN each of the world's great religions and philosophies than between them. 
