To begin with, the best changes will be those that are measured, narrowly tailored, and don’t overreach simply because of the volume (both quantity and sound level) of the outraged commentary. Policymakers must be careful to distinguish ordinary business disputes from genuine public policy concerns. In the present Facebook case, it seems clear there was no data breach per se; rather, another company used data gleaned on Facebook in a way that violated Facebook’s own policies. That is not necessarily or automatically a public policy question, but rather a business-practices question, one that companies are already reacting to by reexamining existing policies and contractual relationships. Fraud and breach of contract are, after all, already illegal.
I’ve also been hearing pundits refer to the “weaponization” of social media in recent days (Oregon Sen. Ron Wyden used the term in a recent letter to Facebook’s Mark Zuckerberg). Policymakers must be careful with such fast-and-loose military-style language. It helps no one to apply the term to practices that are merely annoying, politically unpopular, or a private violation of contract. When and if a real threat to civil liberties emerges from the private social media data ecosystem, we need to be able to identify it clearly. That will need to be a different debate from the one politicians likely have the appetite for, because if policymakers intend to say that free speech or persuasion (even “fake” persuasion) is being weaponized, and that something needs to be done about it, then what they are saying is that the federal government needs to control speech and communication. Done right, such a debate would entail more fundamental, constitutional questions about what lines politics, wealth redistribution, and regulation should not cross. Put another way, if we live under limited government, such that some groups cannot use the political system to compel or plunder others, any potential harms from malicious fringe-site denizens and “fake news” via alleged “manipulation” of the democratic process through misuse of data are minimized from the get-go. It is when governments exert great power over our lives that “weaponization” has something to offer (or to take).
Relatedly, those who contend that new government policies will be the source of enhanced privacy have a steep hill to climb, given the alarming record of privacy invasions and breaches coming from the federal government itself rather than from the commercial sector. While the Cambridge Analytica case is a serious one, our problem has long been not whether companies will or can protect our privacy online, but whether the government will allow it. There are more serious concerns from which Cambridge Analytica can distract, such as what access government has to our online data and whether that access is properly disclosed. Electronic communications privacy from government is what we should also be debating; it is precisely this deeper concern that social-media privacy hysteria conveniently obscures.
The question that some, like Sens. Ron Wyden (D-OR) and Ed Markey (D-MA) (who seek to grill Zuckerberg in congressional hearings), appear to be trying to pose with Cambridge Analytica is this: is the Internet bad for democracy? Not likely. Facebook is not a government with the power of compulsion. It is just one private actor on an Internet that is vast with or without Facebook. All the other social and legacy media were, and still are, out there. Democracy thrives on competing biases, not pretended objectivity, and that is more than just fine; it is the safeguard of political and personal liberty.
The Internet can accommodate multiple privacy preferences, from the exhibitionists to the hermits, when it comes to data policy; it is less clear that regulation can accommodate them as well. Misguided or cynical regulation, or regulation motivated by political score-settling, will leave us with less control over our information. It will also damage our ability to be ready for the Cambridge Analyticas of the future. Emergent online services, like augmented reality, driverless cars, and smart cities, will all raise new privacy questions, presenting new opportunities and risks around appropriate levels of information sharing. We do not have the answers to such questions today and cannot predict them. Legislating or regulating now could obstruct the evolution and availability of the next big things.