Leading IT people have been debating what to do about regulating Facebook. My response to them:
==============================================
First of all, I really hope members of this group can help get us all out of a kind of descent into entropy, as legislators and others treat the Facebook case like a visit to the local ice cream parlor or cocktail party, making faces to show who they like and who they don't like. If the growing interactions between IT and the political systems of the world end up like that, driven by personalities rather than principles, the best-case outcome is that we all end up on the path China is pursuing (which is not going uphill right now by any measure, as its economic growth has already begun to slow).
IT'S NOT ABOUT WHETHER WE LIKE OLD ZUCKIE OR NOT! If we think of it that way, we are doomed. Really doomed. It's about a TESTBED.
Good lawyers understand that there are important test cases which need to be considered in a larger context: i.e., what kind of GENERAL principles can we extract from the case? (Good engineering research leaders also understand the importance of TESTBEDS or initial niche markets on the way to new technologies.)
A major problem in the Facebook testbed is that we don't have clear general principles, really. It is natural for all of us to veer away from things for which we don't have real answers. But to get to a better outcome, SOMEONE -- some group of people, really -- has to have enough determination to face up to the key general questions for which we do not have answers.
And really, there are several very important Facebook testbeds out there which we need to face up to, important to the whole future of IT -- the personal data issue, the fake news issue and the cryptocurrency issue. All three will affect the entire IT industry. If we don't have the first one halfway straight, are we ready for the third? Congress has to deal with all three here and now, by acting or postponing or studying, but let's get to the first one.
We have all read a lot of righteous thumping about privacy and transparency. All that thumping reminds me of the work on personality types which David mentioned a few months ago. One of the really important personality variables is "tolerance of cognitive dissonance." That's the kind of trait which is sometimes very valuable, and sometimes very destructive. I ask myself: "What do I think of those political recruiters who look for people who loyally believe and act on every word in the Old Testament, in Atlas Shrugged, and in the New Testament?" When people righteously thump their chests and rigorously demand mutually contradictory things, or just unattainable things, we get into problems. If people demand absolute privacy ("all information about anyone or anything is totally owned by that person or thing, including derivative statements by others") AND absolute transparency (as in David Brin's proposal that all data should be available to all people), we will not be capable of regulation or design which allows coherent legislation or a rational evolution of the technology. We need to push the question further. What are the concrete, actionable PRINCIPLES which should be applied to the privacy-versus-transparency part of the Facebook testbed, and to the general case?
That is such an obvious question... yet I cannot cite a really credible answer, addressing the basic tradeoffs. I am aware of the two extreme proposals I just mentioned. The IEEEUSA has a policy committee on communications and computing, and they issued a position paper last year which fleshes out the pro-privacy position as explicitly as I have seen anywhere. See https://ieeeusa.org/wp-
Looking for serious substance instead of random thumping, I went to scholar.google.com. Searching on "privacy transparency integrity," I didn't get crisp stuff, so next I searched on "security transparency code." That led to a few important pieces, like:
The key lesson there seems to be that we really need to pay more attention not to the loud issues about facebook users but to the persistent deep issues of surveillance and constraint of all those folks who have jobs in the world. A lot of our growing global dysfunction may be related more to the latter. My wife suggested searching at scholar.google.com on "privacy transparency software," and there is a lot to consider there.
In truth, the underlying dilemmas seem almost impossible, and depressing. There is a huge spectrum of theoretical possibilities between total privacy (shutting out even NSA and Google) and Brin's vision of total transparency, related not only to social networks and communications but to currencies, both crypto and traditional. I did a paper myself for NSF in 2014, suggesting read-only backdoors just for certified law enforcement (also too vague), but can the designers of operating systems, chips and communications links really build such systems and certify that they are what they seem, in an open, transparent way, without other unintended consequences? I see no sign of anyone even attempting that, even as global cyberwars against power grids have started to heat up and the US vulnerability becomes ever more obvious.
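To make the idea concrete, here is a minimal sketch of how a "read-only backdoor" for certified requesters might be structured in software: access is granted only to holders of a credential from a certifying authority, the code path can only read (never write), and every access is appended to a hash-chained, tamper-evident audit log. All names and mechanisms here are hypothetical illustrations, not taken from the 2014 NSF paper, and the hard problems mentioned above (certifying the chips, the OS, and the links underneath) are exactly what a sketch like this cannot address.

```python
# Hypothetical sketch: certified read-only access with a tamper-evident audit log.
import hashlib
import hmac

AUDIT_LOG = []  # append-only record of every successful access
CERT_KEY = b"issued-by-certifying-authority"  # placeholder credential secret

def _append_audit(requester: str, record_id: str) -> None:
    """Chain each entry to the previous one so later tampering is detectable."""
    prev_hash = AUDIT_LOG[-1][1] if AUDIT_LOG else "genesis"
    entry = f"{requester}:{record_id}:{prev_hash}"
    AUDIT_LOG.append((entry, hashlib.sha256(entry.encode()).hexdigest()))

def read_only_access(requester: str, token: bytes, record_id: str, store: dict):
    """Grant READ access only; no code path here can mutate `store`."""
    expected = hmac.new(CERT_KEY, requester.encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(token, expected):
        raise PermissionError("requester not certified")
    _append_audit(requester, record_id)
    return store.get(record_id)  # returns a value, never writes

# Usage: a certified agency reads one record; the access is logged.
store = {"rec42": "subscriber metadata"}
token = hmac.new(CERT_KEY, b"agency-A", hashlib.sha256).digest()
print(read_only_access("agency-A", token, "rec42", store))
```

Even this toy version shows where the trust really sits: in the key-issuing authority and in whoever can inspect the audit log, not in the code itself.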
One possible guideline towards a viable "new social contract" on this: status quo ante. What could employees and employers, voters and politicians, press and people get from each other involuntarily BEFORE the advent of new technology? We should be able to live with that. It's not obvious that we can live with the chaos now just beginning.
Best of luck,
Paul
=============================================================== Addendum: it is hard to wrap one's mind around real possible solutions if one relies mainly on a "machine learning" foundation, as I do. To survive, we need both a brain AND an immune system. It amuses me in a sad way that we are starting so many cyberwars in the spirit of "machine learning gives us an edge," when the people doing that do not understand the powers AND LIMITS of machine learning as much as I do (or even as much as the Chinese do, far beyond the proud local novices). But how much true collective intelligence do we want to require in our new internet of things? And how much damage can happen if gullible buyers buy products to control their lives which have been overhyped and distorted much more than systems like SLS, which Trump has just begun to learn about? Immune systems are more about satisficing than about traditional optimization, and simplicity and clarity are more at a premium there. It's less about creating new means of production, and more about preventing new means of predation and criminality (analogous to cancer). The term "social contract" is not used lightly here.