Friday, July 15, 2022

Conflict of interest (COI) as a crucial mathematical challenge

First, I repost what I just sent to a list deeply involved in policies which will affect the future of the internet/AGI/IoT:

Could it be that lack of understanding of COI mathematics for intelligent systems might be the most serious threat to human survival itself at present, because of how it connects to the design of the internet and to the challenge of building a truly human-centered internet?

This question hits me quite forcibly as I prepare for a plenary talk on the future of the internet/AGI/IoT for the next WCCI conference. Since about 1990, WCCI has been the lead conference in the world developing true AGI technologies like the "deep learning" revolution, which has only just begun a radical remake of life on earth. (See the one-page abstract attached for the overview.)

But will it raise the net value of having an internet, or -- following recent trends -- will it set in motion a chain of events which simply kills us all in the end?

There is lots of buzz out there in policy circles, but they still seem almost totally oblivious to the biggest, most implacable-looking modes of instability and collapse, and to the fundamental mathematical principles which could save us, with more work (which no one is doing now).

Of these, I would say that the concrete requirements to prevent a COI-based collapse are the most important. Unmanaged COI effects, through both the human and the app components, can take many forms. (Stability problems tend to work that way.) When lawyers or coders focus too exclusively on special cases, they can easily generate systems which are easily flooded out.

Sadly, this email, written on a small tablet as I travel to WCCI 2022, cannot do full justice to the pieces I already know about risks and solutions, let alone the essential new cross-disciplinary research not being done. Just a few key data points.

As in my sii proposal, new platforms for integrated markets, designed to harmonize many apps and human players, are essential. The experience of the teams designing new integrative markets for electric power transmission systems is one crucial part of what we all need to understand and build on better.
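To give a rough feel for what such an integrative market platform does at its core, here is a minimal sketch of merit-order market clearing, the basic mechanism in electric power transmission markets. All names and numbers are made up for illustration; real power markets layer network constraints, reliability rules, and information-flow restrictions on top of this.

```python
# Minimal sketch of merit-order clearing in a power-style market.
# Offers and demand here are hypothetical toy numbers.

def clear_market(offers, demand):
    """offers: list of (name, quantity_mw, price_per_mwh).
    Returns (accepted dispatch, uniform clearing price)."""
    accepted, remaining, price = [], demand, 0.0
    for name, qty, p in sorted(offers, key=lambda o: o[2]):
        if remaining <= 0:
            break
        take = min(qty, remaining)    # dispatch cheapest offers first
        accepted.append((name, take))
        price = p                     # clearing price = marginal accepted offer
        remaining -= take
    return accepted, price

offers = [("A", 100, 20.0), ("B", 50, 10.0), ("C", 80, 35.0)]
dispatch, price = clear_market(offers, demand=150)
print(dispatch, price)   # B and A are dispatched; C is too expensive
```

The key design point, relevant to COI, is that every player faces the same uniform price computed by the mechanism, rather than prices negotiated through private side channels.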

One small but decisive part of that learning experience was the creation of something called Sarbanes-Oxley. A clear new set of rules constraining information flows was necessary to prevent collapse, due to new modes of instability resulting from new market flexibility (flexibility which is ever more necessary in managing a system of ever-growing complexity, which demands paths to allow in new technologies).

This example is closely related to proving convergence to correct results in DHP, an important RLADP technology discussed, for example, in chapter 13 of the Handbook of Intelligent Control. (That chapter is posted at werbos.com/mind.htm.) Unrestricted inputs to all major components of an otherwise intelligent system really can cause collapse. COI management is a matter of mapping out what the major components are, and what flows of information are allowed.
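To make the DHP point concrete, here is a minimal sketch (not the Handbook's own code) of a DHP-style critic update for a scalar linear-quadratic problem. The system, gains, and discount factor are invented illustrative numbers. The critic's input is deliberately restricted to the declared state variable -- the discipline the text is describing -- and under that restriction the update is a contraction that converges to the analytic fixed point.

```python
# DHP-style critic for a scalar closed-loop system x' = (a - b*k) * x,
# with utility U(x) = 0.5 * q * x**2. The critic estimates
# lambda(x) = dJ/dx, modeled linearly as W * x. Its only input is the
# declared state x; no side-channel inputs are allowed.

q, a, b, k, gamma = 1.0, 0.9, 0.5, 0.4, 0.95   # hypothetical numbers
m = a - b * k                                   # closed-loop slope dx'/dx

W = 0.0
for _ in range(200):
    # DHP target: lambda*(x) = dU/dx + gamma * (dx'/dx) * lambda(x').
    # For lambda(x) = W*x this reduces to W <- q + gamma * m**2 * W,
    # a contraction because gamma * m**2 < 1.
    W = q + gamma * m * (W * m)

W_exact = q / (1 - gamma * m**2)   # analytic fixed point
print(W, W_exact)
```

With unrestricted extra inputs to the critic, no such contraction argument is available -- which is the mathematical face of the collapse risk described above.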

This can be done in principle with computerized markets (whether for goods or for information) [Google asks for a market in gods, not goods. Not in this email!!] in managing apps from a unifying RLADP market system, rather than in managing humans. When training feedback is channelled from human to human directly, the effects of corruption and bias are so overwhelming across all of human history (as I can see even when walking down the street in so many places) that we would all have been dead long ago if previous centuries had had technologies like what we are building now.

I say that on a day when I walk through Ravenna... after many many other pivotal places.

I really hope someone in a more professional position will be willing and able to do justice to what I am too old and limited to lead myself, much as I will try to help if you do.

*****************************

I mentioned just two examples. I hope they ask for some of the crucial details -- like three more important examples.

 

In my talk to the IEEE sustainability conference (which Yeshua helped me with), posted at build-a-world.org, I noted how COI has turned out to be the number one obstacle to the cost-effective deployment of new technology which we would need in order to stop the present drift towards total extinction of humans, coming sooner than most people realize. COI in today's governance systems in all major nations could be prevented by new governance/market integration platforms with the necessary level of COI protection.

Second -- climate issues illustrate information flow issues which were solved, in nature, by the evolution of the basic mammal brain. (See attached abstracts with links.) A key property is that the neocortex evolved based on error measures like accuracy in prediction, quite different from what action choices "are paid." Here in Italy, I am reminded of several great novels by Stendhal (not to mention Dante) about intelligence organizations paid to pacify the boss by telling him what he wants to hear. Simple separation of allowed app types -- networks of truth, like the neocortex, the "university" of the brain, versus final action layers -- already handles about half the problems.
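The separation can be sketched in a few lines. In this toy setup (all numbers hypothetical), a predictor is trained only on squared prediction error, while the action layer merely consults it; the payoff signal never touches the predictor's weights, so the "truth network" cannot drift into telling the action layer what it wants to hear.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical world: outcome = theta_true * action + noise, and the
# payoff equals the outcome. theta_true < 0, so actions actually hurt.
theta_true = -1.5
theta_hat = 0.0    # predictor weight, trained on accuracy ONLY
lr = 0.05

for _ in range(2000):
    action = rng.uniform(-1, 1)
    outcome = theta_true * action + 0.1 * rng.standard_normal()
    pred = theta_hat * action
    # Predictor update: gradient of (pred - outcome)^2 -- pure accuracy.
    theta_hat -= lr * 2 * (pred - outcome) * action
    # Note: no payoff term ever enters this update, even though a
    # flattering predictor (theta_hat > 0) would please the action layer.

# The action layer then chooses using the honest model:
best_action = 1.0 if theta_hat > 0 else -1.0
print(theta_hat, best_action)
```

Because the two modules are scored on different error measures, the unpleasant truth (negative theta) survives, and the action layer acts on it.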

Also a technical detail: even in classical econometrics, as in the text by Johnson, there is a technique called instrumental variables regression, aimed at preventing a type of bias which can happen even in simple linear statistics when inappropriate inputs are included but not sanitized. But proper use of time, as in werbos.com/Erdos.pdf, handles part of the problem more directly and efficiently.
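Here is a minimal sketch of the instrumental-variables idea (a generic simulation, not any textbook's worked example): an unobserved confounder contaminates both the regressor and the outcome, so ordinary least squares is biased, while an instrument -- correlated with the regressor but independent of the confounder -- recovers the true coefficient.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000

# Hypothetical endogeneity setup.
z = rng.standard_normal(n)          # instrument: affects x, not y directly
u = rng.standard_normal(n)          # unobserved confounder
x = z + u                           # regressor, contaminated by u
beta_true = 2.0
y = beta_true * x + u               # u also drives the outcome

beta_ols = (x @ y) / (x @ x)        # biased: absorbs cov(x, u), ~2.5 here
beta_iv = (z @ y) / (z @ x)         # one-instrument IV (2SLS), ~2.0
print(beta_ols, beta_iv)
```

The instrument plays exactly the "sanitized input" role described above: only the part of x that flows through z is trusted when estimating the effect on y.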

 

======

The mathematical equivalences across different measures of value, and of the feedback or prices used to guide actors within a larger system, play a crucial role. In meditation today, I think: what harmony or peace really require is correct credit assignment in the larger system.