It’s been just another month in the relentless mess of online manipulation. The European elections served up the by-now habitual smorgasbord of bullshit, so easy to churn out that it makes a mockery of our outdated electoral laws.
Research by the London-based Institute for Strategic Dialogue spotted, among other delights: bot-nets in Spain pushing anti-Islam messages in support of the new right-nationalist Vox party; fake Facebook accounts in Poland posing as pensioners and helping out the government by attacking striking teachers; and 24 Facebook pages in Italy spreading anti-Semitic and anti-vax content to 2.46 million followers while also backing the ruling populist Five Star and Lega parties. Disinformation is nothing new, but it is now easier to amplify to the masses, easier for the official parties to deny any connection to these campaigns, and easier still for them to claim the campaigns are merely the expression of concerned citizens’ freedom of speech.
The tension between freedom of speech and disinformation also hit new heights in the United States last month, where a bun-fight broke out after Donald Trump shared a video that seemed to show leading Democrat Nancy Pelosi slurring her words like a near-incapacitated drunk. When it turned out the video had been purposefully slowed down to make her look sloshed, Facebook marked it as manipulated. This didn’t satisfy many punters, who called for the video to be taken down entirely. But, countered others, weren’t calls for such takedowns an attack on freedom of speech?
The continuing failure of technology companies to deal with disinformation means that more regulation is now inevitable, whether it comes from government, as in Europe, or through public pressure, as in the United States. Get the regulatory approach right and it will help us formulate what rights and democracy mean in a digital age; get it wrong and it will exacerbate the very problems it is trying to solve, and play into the hands of authoritarian regimes all agog to impose censorship and curb the free flow of accurate information across borders.
In Europe we are seeing three approaches to regulation.
At one extreme is the German approach, which holds companies accountable for every piece of content that goes up on their platforms. If any content contravenes existing German law on “manifestly unlawful” speech (including hate speech and defamatory content), and the platforms don’t take it down tout de suite, they face whopping fines. It’s a whack-a-mole approach that tries to police the internet post-by-post and tweet-by-tweet. It equates something foolish or nasty blurted out on Facebook by your mother-in-law with a coordinated campaign by political actors, and it encourages tech companies to play it safe and take down masses of content whenever they are unsure whether it is illegal.
The British approach is more in tune with the reality of the online world, where trying to police every comment is both impossible and pointless. Instead of making platforms liable for every piece of content, the UK’s Online Harms White Paper proposes a regulatory system in which tech companies must have systems in place to mitigate the “harm” of online behavior—a bit like a building owner being obliged to install sprinklers and fire exits in case of fire. The White Paper treats “disinformation” as what it calls “harmful but legal” content, which tech companies would probably not be expected to take down entirely but would need to, for example, down-rank and mark as inaccurate.
The “harmful but legal” category has appalled freedom of speech groups, who argue there is no such concept as “disinformation” in human rights law. They look more kindly on the French proposal, the lightest touch of the three, which stresses the need for tech companies to provide more transparency about how content is amplified and distributed on their platforms. The idea is that if tech companies were more transparent, discrete decisions about problems could be made between the companies and a government regulator as those problems emerge.
By fixating on “disinformation” as primarily a matter of content, however, the European proposals and the various debates in the United States risk fundamentally misunderstanding the nature of online manipulation, missing its real dangers while setting themselves on an unavoidable collision course with the need to uphold freedom of speech. Consider the now well-documented Russian social media campaign in the United States. Much of the content the Kremlin pushed was neither true nor false; it simply expressed support for one cause or another. The “deception” was not the content, but the behavior that promoted it and the actor behind it.
As a new Transatlantic Working Group on Content Moderation I am part of has been discussing, regulation (whether government- or industry-led) needs to veer away from a focus on content toward a broader concept of “viral deception.” This would mean thinking more about how to regulate inauthentic behavior and amplification through covert, coordinated campaigns by bots, trolls and cyborgs; search engine manipulation and algorithmic biases that encourage inaccurate content; the non-transparent way personal data is used by campaigns to target people; and the ad-tech system that encourages advertising dollars to flow to domains whose ownership is unclear and that have no editorial standards.
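To make concrete what regulating behavior rather than content could look like, here is a minimal, hypothetical sketch of one behavioral signal: coordinated amplification, where a cluster of accounts pushes the same link in near-lockstep. Everything in it is invented for illustration—the account names, the URL, the ten-second window, the three-account threshold—and it is not drawn from any platform’s actual detection systems; a signal like this would at most flag activity for human review, not prove deception.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical post records: (account_id, url, timestamp).
# In practice these would come from platform transparency data.
posts = [
    ("acct_001", "https://example.org/story", datetime(2019, 5, 20, 9, 0, 1)),
    ("acct_002", "https://example.org/story", datetime(2019, 5, 20, 9, 0, 3)),
    ("acct_003", "https://example.org/story", datetime(2019, 5, 20, 9, 0, 4)),
    ("acct_004", "https://example.org/other", datetime(2019, 5, 20, 11, 30, 0)),
]

def coordinated_clusters(posts, window_seconds=10, min_accounts=3):
    """Group posts by URL and flag URLs pushed by several distinct accounts
    within a short time window -- one crude signal of possible coordinated,
    inauthentic amplification (not proof of it)."""
    by_url = defaultdict(list)
    for account, url, ts in posts:
        by_url[url].append((ts, account))

    flagged = {}
    for url, events in by_url.items():
        events.sort()  # order this URL's posts by time
        for start_time, _ in events:
            # Accounts posting this URL within the window after start_time.
            cluster = {a for t, a in events
                       if 0 <= (t - start_time).total_seconds() <= window_seconds}
            if len(cluster) >= min_accounts:
                flagged[url] = sorted(cluster)
                break
    return flagged

print(coordinated_clusters(posts))
# -> {'https://example.org/story': ['acct_001', 'acct_002', 'acct_003']}
```

The point of the sketch is that nothing in it inspects what the posts say; it looks only at who is pushing them, and how—which is precisely the shift from content to behavior that the “viral deception” framing implies.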
Just as important as what regulation focuses on is how it is framed in terms of language and political logic. Current proposals around disinformation are described in negative terms: they are all about stopping “harms” and mitigating “dangers.” When we frame regulation as a negative, the result can play into the hands of authoritarian regimes such as Russia, whose leadership is only too happy to quote censorious Western laws as it censors opposition at home. Authoritarian regimes will do as they do, and often there is nothing to be done about it, but we should not set the terms of the debate on information in a way that a priori leads us toward the vision of the internet’s future that they desire.
Consider, once again, the case of the covert Russian social media campaigns in the United States. If one frames the argument against such operations in terms of “foreign meddling,” one is playing into the Kremlin’s favorite theme: the need for “information sovereignty” and the end of the free flow of information across borders. If, on the other hand, one stresses that the problem with the Russian campaign was not that it was foreign, but that it was covert and built on deceptive behavior, then the issue becomes the right of people on the internet to receive accurate information. Promoting the rights of internet users is one thing the Kremlin is very uncomfortable with.
As David Kaye, the UN Special Rapporteur on freedom of opinion and expression and a law professor at UC Irvine, told me:
A “rhetoric of danger” is exactly the kind of rhetoric adopted in authoritarian environments to restrict legitimate debate, and we in the democratic world risk giving cover to that.
[But] another way to conceptualize the impact and purpose of viral deception—assuming we can define it sufficiently narrowly—is as a tool to interfere with the individual’s right to information. Coordinated amplification has a strategic aim: make it harder for individuals to assess the veracity of information. It harms public debate, but it also interferes with the individual’s right to seek, receive, and impart information and ideas of all kinds.
Conceived this way, it seems to me that solutions could be aimed at enhancing individual access to information rather than merely protecting against public harm. This may be semantic at some level, but I also think it allows us to ask a different question: What can public institutions and private platforms do to empower individuals?
Kaye’s concise, elegant and necessary book—Speech Police: The Global Struggle to Govern the Internet—shows how tech company bosses have publicly stated that they want human rights law to become the basis for how their platforms are run. I wonder what this will mean in practice. Online human rights courts that adjudicate on content and behavior in almost real time? And how will we adapt our thinking about freedom of speech in an environment where censorship happens not only through shutting people up but through creating so much noise that the truth is lost—and where old truisms such as “more speech is the remedy to disinformation” turn out to be not so true after all?