The last few weeks have not been good ones for the large internet platforms: Facebook, Google, and Twitter. Facebook founder Mark Zuckerberg asserted after last year's election that it was "crazy" to think that his company had any influence on it. But Sheryl Sandberg, Facebook's Chief Operating Officer, had to spend a week in Washington doing mea culpas after it was recently revealed that the Russians had bought political advertising during the campaign. Twitter had been notified that a handle called @TEN_GOP, which claimed to be the mouthpiece of the Tennessee Republican Party, was actually a Russian troll spewing racist and divisive messages, yet the account stayed up for months after the real party organization alerted the company. More executives from the platforms will be dragged in front of Congressional committees in the coming week and grilled over their responsibilities to American democracy.
The internet and the rise of social media have changed the terms of the free speech debate worldwide. There has always been bad information, propaganda, and disinformation deliberately put out to affect political outcomes. The traditional free speech defense has been the marketplace of ideas: if there is bad information, the solution is not to censor or regulate it, but to put out good information, which will eventually counter the bad. More information is always better. But it's not clear that this strategy works so well in the internet age, when thousands of bots and trolls can amplify the bad messages without anyone knowing. The platforms' business models exacerbate the problem with algorithms that optimize for virality and accelerate the rate at which conspiracy stories and controversial posts are passed along.
The platforms, for their part, argue that they are just that: neutral technology platforms on which their users share information, just as a phone company connects telephone users. The legal regime left over from the 1990s reinforces this view, since it exempts them from liability for materials they host on the grounds that they are conduits and not media companies. But they are not neutral: their business model is built around their knowledge of their users' likes and preferences, which they use to tailor advertising to them. This is precisely what politically driven firms like Cambridge Analytica did deliberately on Trump's behalf during the campaign. Only the platforms have the power to do this on a global basis.
The sudden recognition of the prevalence of fake news, targeted advertising, and manipulation of these systems by a hostile foreign power has naturally led to a reaction in the form of calls, and in some cases action, to regulate the internet. The most notable case is the German law passed by the Bundestag over the summer to criminalize fake news, setting huge penalties of up to €50 million for platforms that allow such content to appear. In the United States, Mark Warner, John McCain, and Amy Klobuchar have introduced a bill that would require platforms to disclose information about purchasers of political advertising on the internet; others have suggested banning foreigners from doing so altogether. Such measures would simply bring internet rules in line with those already set for television, though enforcing them would be considerably more difficult.
In confronting the social media challenge to democracy, a longstanding political divide has appeared between Europe and the United States. Among developed democracies, the American First Amendment stance on free speech has always been exceptional, putting few if any limits on political expression. Most European countries, by contrast, have been more willing to criminalize certain forms of hate speech such as Holocaust denial. In general, Europeans are more willing to use state power to regulate behavior, based on their more benign view of the state as a neutral protector of the public interest. State-sponsored public broadcasting, one obvious way of combating fake news, is far more prevalent in Europe than in the United States, and indeed is a condition for membership in the Council of Europe. Americans, by contrast, are much more ready to see the state as a threat to individual freedom. The Public Broadcasting Service has never been seen as a neutral purveyor of the public interest. It has been attacked from the start by conservatives, with some justice, as a captive of the Left.
It is not clear at the present moment whether state regulation is even possible in the United States, given the country's underlying degree of polarization. Banning foreigners from buying political ads might work, but any effort to control content will run afoul both of First Amendment protections and of political disagreement. It is hard to imagine government regulation of fake news when the President himself is one of the biggest purveyors of the genre, and has turned the very words "fake news" into an epithet he uses against his critics.
This means that in the United States the burden of any move toward controlling bad information will have to rest on the platforms themselves. They are coming under huge pressure from users, advertisers, and their own employees to accept that they are not just neutral platforms but media companies with a responsibility for curating the content they provide. They have already been forced to play such a role with regard to terrorist content, child pornography, and cyber-bullying through changes to their terms of service. They need to go further than this, however, by changing the algorithms that promote certain kinds of sensational stories that have harmful political effects. This is not a free speech issue: the First Amendment does not, as far as I am aware, protect the rights of bots to replicate messages on a global scale at a speed limited only by network latency.
There is a further problem, however, that self-regulation will not solve: the problem of scale. In a healthy democratic political system, media companies compete with one another to provide alternative points of view, subject to certain baseline journalistic standards. Such companies take particular political slants, but there is enough diversity to ensure some form of overall balance: if you don't like the New York Times, you can always turn to the Wall Street Journal.
This is not the situation that prevails in today's internet world. There are not a variety of competing platforms with differing points of view; rather, there is Facebook, which has become a sort of global utility. Facebook does not have a clear political agenda, and is motivated by profit maximization, which probably ensures that it will not want to annoy any large group of users by appearing biased. On the other hand, it de facto exercises a huge amount of control over what its users see, as a virtual monopoly. There are entire countries where Facebook Messenger has replaced email as the primary channel by which people communicate. Power wielded on such a scale is unprecedented in human experience, and we need to think carefully about whether American democracy can coexist with such concentrated power over the longer run.