Thursday, July 28, 2016

AI management algorithms and future jobs and terrorism

Many thanks to .... for pointing us to an article which
is relevant to more than just the future of AI:

The article calls for serious study and poses serious questions, which I need to think more about. But I can see right away that it has a very strong resonance, even synchronicity, with the NATO workshop on terrorism and new technology, which ended yesterday and which was in many ways a follow-on to the discussion of future mass unemployment by the Millennium Project at the Woodrow Wilson Center last week, which I mentioned.

When they discussed how future jobs may be enough for only 30% or less of the workforce... well, there are multiple connections to
the future of terrorism. IT and automation are central to both.
We got a bit further into the realities of AI here, but I already posted a URL to that. Kahneman sounds closer to the realities of intelligent systems than a lot of people who talk as if they were experts on the subject, but I don't yet know how up to date he is with the more advanced developments.

A key question: is it possible to design a new kind of internet
algorithm which does for the marketplace of ideas and the IOT what
the Independent System Operator (ISO) algorithms now do for electricity markets, providing a level of reliability, security and longevity far beyond what today's financial markets have, while fully honoring the
nonzero-sum nature of human life? Such an algorithm would push interactions away from the fragmented, dysfunctional communication patterns which have reinforced terrorism and ideological extremism of all kinds in recent years, and which have gotten in the way of real fundamental creativity and human potential. It would push towards a Pareto optimum far more equitable and sustainable than the aging systems now leading to grosser and grosser inequality, where more absolute centralization and short-term energy imbalances have already led to nonsustainable levels of corruption in most of the earth. And it would recognize the reality that DNA is part of the system, and that government attention to DNA in a concrete way can lead to gross mismanagement and collapse as bad as government micromanagement of where resources flow.
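For readers unfamiliar with the ISO analogy: at their core, ISO market-clearing algorithms dispatch generation in merit order (cheapest offers first) and let the marginal unit set a uniform clearing price. A minimal sketch of that idea, with hypothetical offer numbers, ignoring the transmission constraints, reserves, and security checks that real ISOs also handle:

```python
def clear_market(offers, demand_mw):
    """Merit-order dispatch: accept cheapest offers first.

    offers: list of (capacity_mw, price_per_mwh) tuples.
    Returns (schedule, clearing_price); the last accepted (marginal)
    offer sets the uniform clearing price paid to every dispatched unit.
    """
    schedule, price = [], 0.0
    remaining = demand_mw
    for cap, p in sorted(offers, key=lambda o: o[1]):  # cheapest first
        if remaining <= 0:
            break
        take = min(cap, remaining)   # partial dispatch of the marginal unit
        schedule.append((take, p))
        price = p                    # marginal offer sets the clearing price
        remaining -= take
    if remaining > 0:
        raise ValueError("insufficient supply to meet demand")
    return schedule, price

# Hypothetical offers: (MW, $/MWh)
offers = [(100, 20.0), (50, 35.0), (80, 28.0)]
schedule, price = clear_market(offers, 180)
```

The point of the analogy is not the arithmetic, which is trivial, but the institutional design: a neutral operator clears a market against hard physical constraints, in real time, with reliability guarantees that purely bilateral trading cannot provide.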

I will study Kahneman's paper with an eye to this question.
I am tempted to say more, but maybe this is already more than enough for one post. I do not claim to have an answer to these very important questions -- questions which certainly do link to that other nasty question about terminator AI, which has held me back from these questions in the past.


For completeness, I should add that the issues of balance and social contract implicit here do remind me of issues faced by Meng Tzu and by the Tang dynasty, as well as familiar folks like Jefferson, all of which will be part of my thinking if I get further here. None of those people solved the social contract or covenant issues well enough to allow us to cope with the realities of today, though each provided an important starting point, clearly more sustainable than what anyone is relying on lately in the Middle East or in the groups funded by the US fifth column! Also, the group has already begun to discuss how restructuring and strengthening both the ILO and the SEC would be part of what is needed. The concept of "trust" in conventional expert systems and networks,
and the current IBM business plan for the IOT, reflect a kind of programmer's vision of central control which
we really need to get past if we are to have any hope of building a new, more sustainable kind of system -- one reflecting, for example, the reality that different people do not trust the same sources. ETH Zurich touched on some of the related questions at the NATO workshop.


After reading the Kahneman article, I sent them my comments:

I am only halfway through the Kahneman article, but strong pros and cons already appear.

On the positive side... it is interesting to see the common points between his behavioral economics and our emerging understanding of brain intelligence. But their "irrationality" flag can be misleading. For example, is finite information bandwidth really an example of irrationality? When brains try to make sense of a huge influx of information -- millions of variables -- some kind of parsimony mechanism is not only necessary but highly rational, as it is in statistics. I have heard that psilocybin can loosen that bandwidth-filtering mechanism in human brains, but that it causes wild outcomes not unlike plain ordinary heteroscedasticity in statistics. The overconfidence due to efforts to make sense of the world is also less a matter of irrationality than of finite intelligence. But Kahneman is right, as Raiffa was right, that training can help our finite intelligence emerge into greater competence.

So that part was basically very good -- the basis of his Nobel Prize. But then the decision-making "algorithms" (procedures) remind me a lot of procedures I have seen in the most atrociously ineffective government funding mechanisms. I suppose that someday I really should write up what I saw in almost 30 years of funding panels at NSF and in interagency activities, across many disciplines. Asking six standard fuzzy global questions is just very weak, and of narrow bandwidth, compared with really intense substantive questioning based on feedback and iteration, which fully mobilizes the intelligence and foresight abilities of human experts and aims at deeper dialogue. Real creativity in many areas of basic technology, and in deciding what larger projects to fund, demands the higher level of collective intelligence possible with such approaches. The old 6-to-20-variable fuzzy scales simply can't do the job, not well enough for us to survive. (For example, they do not identify huge unmet opportunities in solar farms, in space technology, in quantum computing, and elsewhere.)

I am also intrigued that IBM was once on the path to doing more of the rational management guidance that Kahneman was looking for... until a certain Oprah-level set of financial geniuses decided that intelligence = Watson,
and that the future of humanity should be a unitary expert system without any utility functions at all, which is nevertheless supposed to control all actuators in the world (including wired-up human brains) via the IOT.

But... will the second part of the article get into real AI? We will see...

Second half of the Kahneman article --

He is very reasonable but nonexpert on AI as such. As a business consultant, Kahneman echoes what we have heard from many other business experts, that a huge drop in jobs due to technology is now on the way. 

There are a few important subtleties here. One is that people are sometimes replaced by machines or by outsourcing because of the very irrationalities he is expert on. Internal corporate salespeople, trying to advance their own careers, may sell their bosses on the idea that their voicemail systems or
helpers in rural India are every bit as helpful to customers as
the humans now doing the job -- but customers may actually suffer, and may even be moved to look violently for competing companies if they can find any. In my view, artificial stupidity is every bit as serious a threat as artificial intelligence. Error-prone profiling systems, or systems which throw out creativity as readily as they throw out troublemakers, are part of that threat.

Kahneman also makes interesting points about healthcare. Like anyone well grounded in federal budgets, I see the growth in health care costs, and the drains due to unproductive types of tax loopholes, as the main things we need to attack in order to prevent the coming sequestration from making employment problems much worse than they already are in the US.
But if a rise in productivity and automation really poses a huge employment problem in the coming decades, much worse than today's cyclical problems, maybe we should look twice at the implications for healthcare. The present growth of wasted money in healthcare still needs to be cut short,
but what of allowing growth in things like more human caring of humans for humans, as in alternative medicine and home care? I have seen excess employment in hospitals leading to folks milling around doing more harm than good, even accelerating the opiate addiction problem, but maybe a redirection of healthcare costs (and serious continued growth, albeit slower and with fewer drugs) might make more sense than I thought last week...

Maybe. But then again, it would be better if we could trust market mechanisms somehow more... new market design... to decide how much of future GNP should go into healthcare. 

Best of luck,
