Congress Impatient for Zuckerberg Privacy Testimony


This week, Facebook CEO Mark Zuckerberg will testify before three congressional committees to answer questions about the privacy scandal that’s engulfed the social media platform since mid-March, when The New York Times and The Observer of London reported that a British voter-profiling company had obtained and analyzed data on 50 million or more Facebook users in an attempt to influence the 2016 U.S. presidential election.

Although this isn’t the first time Facebook has found itself amid controversy regarding its privacy practices, the latest storm has attracted unprecedented attention from lawmakers in Washington. With Facebook in Congress’s crosshairs, America’s leading Internet companies—sometimes known as “big tech”—arguably face a greater risk of regulation than at any time in their relatively brief history. Despite the political appeal of setting new rules for tech companies, however, cracking down on big Internet platforms won’t meaningfully protect our privacy or safeguard U.S. democracy. Instead, it will entrench existing firms at the expense of innovation and dynamism.

How Did Cambridge Analytica Collect Data on 87 Million Facebook Users?

First, what happened with Facebook? Between 2013 and 2015, about 270,000 Facebook users installed a personality quiz app created by Aleksandr Kogan, a psychology researcher affiliated with Cambridge University. Most users were paid a few dollars in exchange for taking the quizzes. Each time a user installed the app, it would request permission to download some of the data that user could access, including the user’s friends list and which pages those friends had “liked.” Given that a typical Facebook user has a few hundred friends, the app managed to glean the likes of roughly 87 million Facebook users, most of them in the United States. A Facebook privacy setting allowed users to prevent their data from being accessed by their friends’ apps, but this setting could be found only in an obscure menu that most users probably never visited.
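To make the mechanics concrete, here is a minimal sketch of the kind of friend-data request a third-party app could make under the pre-2015 version of Facebook’s Graph API. The endpoint, the field-expansion syntax, and the since-removed friends_likes permission reflect the v1.0 API as it was commonly documented at the time; the access token is a placeholder, and the response handling is illustrative.

```python
import requests

# Placeholder token: in practice, the app received a token like this after
# a user installed it and approved permissions such as the (since-removed)
# friends_likes permission.
ACCESS_TOKEN = "app-scoped-user-access-token"

# Under Graph API v1.0, a single call could enumerate the installing user's
# friends AND expand each friend's page likes via field expansion, exposing
# data that those friends never shared with this app directly.
resp = requests.get(
    "https://graph.facebook.com/v1.0/me/friends",
    params={
        "fields": "id,name,likes.limit(100)",
        "access_token": ACCESS_TOKEN,
    },
)

for friend in resp.json().get("data", []):
    pages_liked = [page["name"] for page in friend.get("likes", {}).get("data", [])]
    print(friend.get("name"), pages_liked)
```

With roughly 270,000 installs and a few hundred friends per installer, calls along these lines scale to tens of millions of profiles without any of those friends ever interacting with the app.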

Although the app initially collected user data for academic research, Kogan maintains that he changed the app’s terms in 2014 such that it would collect data for “commercial use.” Regardless of the app’s terms, however, Facebook’s platform policy bars developers from transferring data collected about Facebook users to third parties. In contravention of this policy, Kogan handed over the trove of user information to Cambridge Analytica, a data mining firm that used the data to, among other things, try to help Donald Trump win the 2016 U.S. presidential election. In 2015, when Facebook realized that Kogan had misappropriated user data, it pulled his app and required that both he and Cambridge Analytica certify that they had destroyed all the data they’d collected.

In 2015, Facebook completed an overhaul of its application programming interface (API), requiring all apps to migrate to its “Graph API 2.0,” which significantly limited the types of information that apps could extract about their users’ friends. Although apps can still request permission to obtain a list of their users’ friends, along with friends’ basic profile information—such as each friend’s birthday, workplace, and education—friends’ detailed information is off limits.
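For contrast, here is a sketch of the equivalent call after the migration to Graph API v2.0, again with a placeholder token. Under v2.0 and later, the friends edge returns only friends who have themselves authorized the app, and friend-level permissions such as friends_likes were removed, so the field expansion shown above simply stops working.

```python
import requests

ACCESS_TOKEN = "app-scoped-user-access-token"  # placeholder

# Under Graph API v2.0+, /me/friends lists only friends who have ALSO
# installed and authorized this app; requesting friends' likes is no
# longer possible because the friends_* permissions were removed.
resp = requests.get(
    "https://graph.facebook.com/v2.0/me/friends",
    params={"fields": "id,name", "access_token": ACCESS_TOKEN},
)
print(resp.json())
```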

Where Did Facebook Go Wrong?

With the benefit of hindsight, it’s easy to see the many ways in which Facebook messed up. Its engineers should have thought twice before creating an API that enabled apps to extract so much data not only from their users, but from their users’ friends, too. It should have raised red flags when Kogan changed his app from a research tool to a commercial service. And when Facebook learned that Kogan had shared a ton of user data with Cambridge Analytica, it should have done much more than simply require a “certification” that the data had been destroyed.

Facebook has largely admitted this much. In a Q&A recently posted to the company’s website, Mark Zuckerberg owned up to making several mistakes and pledged to more effectively safeguard user privacy in the future. Facebook has also redesigned its privacy settings and announced plans to rewrite its data policy. Facebook’s critics, however, have noted that this is hardly the first time that Zuckerberg has apologized and promised to do better. 

But this saga is far from over. On April 10 and 11, Mark Zuckerberg is scheduled to testify before a joint Senate Judiciary and Commerce Committee hearing and a House Energy and Commerce Committee hearing. Cambridge Analytica is under investigation by the British government. And Facebook is reportedly under investigation by the governments of Australia and the United Kingdom, by the U.S. Federal Trade Commission (FTC), and by several state attorneys general.

Perhaps the most important of these investigations is the one underway at the FTC, a federal agency empowered to go after U.S. companies that engage in “unfair or deceptive acts or practices.” In 2011, the FTC and Facebook entered into a consent order over various misrepresentations the company allegedly made about user privacy. In the order, Facebook agreed that it would, among other things, maintain a privacy program designed to protect the confidentiality of user information and obtain users’ “affirmative express consent” before sharing their information with third parties in excess of their privacy settings.

Now that we know Cambridge Analytica acquired the data of 87 million Facebook users through the platform’s API, did Facebook violate its consent order with the FTC? It’s far from clear, at least based on reporting to date about what happened. Although Facebook is required to prominently tell users whenever it shares their data in a way that “materially exceeds” their privacy settings, Kogan’s quiz app gleaned information from users who opted to install it—and from friends who had, in turn, opted to share their likes, check-ins, and so forth with those users. If your friends can see something you shared on Facebook, whether it’s a page you liked or a story you posted on your timeline, it follows that your friends’ apps can see that same piece of data. And that’s basically how Facebook’s API functioned at the time.

Yet for many users, this wasn’t their understanding of how information sharing worked on Facebook. Sharing data you’ve chosen to make visible to your friends is one thing; sharing that data with third-party apps your friends have installed is another matter—and should require separate permissions. This presumption may seem counterintuitive to engineers and data scientists, even if it seems perfectly natural to ordinary users.

To the extent that users didn’t understand how Facebook’s API functioned, was it because Facebook misrepresented how its platform worked? Or was it simply another manifestation of the disconnect between typical users and engineers, each of whom has a very different perspective when it comes to information sharing on digital platforms? It’s beyond doubt that Facebook could have done more to educate its users about how apps interacted with the service, especially with respect to data shared by a user’s friends. But failing to explain the Graph API is very different from misrepresenting Facebook’s privacy practices.

What about Facebook’s reliance on Cambridge Analytica’s false statement that it had destroyed the data it collected through Kogan’s app? Facebook could have gone further, perhaps by auditing the company. In hindsight, it’s obvious that Facebook should have done much more to verify the destruction of its users’ information. Without that hindsight, however, Facebook’s trust in the app developers that collected user information looks far less unreasonable.

We routinely rely on contractual agreements in modern commerce, including when we decide to share information about ourselves with Internet services. When companies share information with other companies, it’s commonplace for each side to rely on the other’s representations. Just as businesses sometimes break their promises in the offline world, not every firm that collects information on the Internet is scrupulous. Given that Facebook reportedly had an established relationship with Kogan, perhaps it was not so unreasonable to rely on his certification that neither he nor Cambridge Analytica had retained any user data. And if the data had already ended up in the hands of bad actors, it’s not clear that an audit would even have detected the transfer.

Will the Government Overreact to Facebook’s Mistakes?

Few people dispute that a company should be punished if it materially deceives its users. But it’s a much tougher case when, say, a social media service—acting in good faith through its engineers, designers, lawyers, and so forth—rolls out a data-sharing tool that some users don’t fully understand. Second-guessing Facebook’s decision-making circa 2015 is easy. But striking the right balance between neat new features and intuitive privacy controls is hard.

An FTC fine that could, in theory, exceed a trillion dollars—imposed because Facebook failed to better explain to its users how third-party apps could access their friends’ information—would certainly give tech companies a strong incentive to be more careful about user privacy. But the chilling effects of a massive fine might end up deterring beneficial experimentation with new ways to share and use information online. Occasional mistakes are a natural part of the innovative process, and not every mistake warrants governmental intervention.

Despite the oft-recited refrain that Silicon Valley is in the business of exploiting user data, the world’s most valuable publicly traded company—Apple—goes out of its way to tout its track record on privacy protection. In an interview last week, Apple CEO Tim Cook lambasted Facebook and other tech firms for their data practices. But as Mark Zuckerberg observed, the world is full of consumers who can’t afford to pay for expensive devices and subscriptions. There is no single “right” business model. Instead, the marketplace puts competing business models to the test every day.

Regardless of how the government responds, Facebook deserves to pay a market price for its missteps. And it is: The New York Times recently reported that the hashtag #DeleteFacebook appeared on Twitter over 40,000 times in a single day. But outright departures may not be the biggest downside for Facebook. Many more users might stick with the platform but reduce their engagement, sharing fewer posts, “liking” fewer pages, or tightening their privacy settings.

As members of Congress question Mark Zuckerberg this week, they should keep in mind that he and his Harvard classmates created a platform that has connected not only the vast majority of Americans, but well over 2 billion people around the world. Now that Facebook is a multibillion-dollar company, it might be able to live with more regulation. But what about the next Facebook?

See also my colleague Wayne Crews’ column for Forbes.com yesterday, “Mark Zuckerberg Testimony: Will Washington Cast The First Stone At Facebook?”