An unwieldy but consequential debate about how private sector platforms can combat political disinformation without undermining free expression has been simmering since the 2016 U.S. Presidential election. Although foreign intervention is still a serious threat, platform treatment of domestic political disinformation has become a new flashpoint in this highly politicized debate.
Political disinformation is a notoriously slippery category. Generally speaking, platforms handle it by restricting coordinated inauthentic behavior or artificial amplification, rather than on a content basis. That, anyway, has been the primary modus operandi for dealing with foreign disinformation. But some political disinformation is posted by authentic accounts and organically amplified. In some cases, domestic political speech can veer into incitement to hatred or violence. In others, it includes false or misleading information about voting procedures intended to suppress voter turnout or generate distrust about election outcomes. Political disinformation also can take the form of manipulated text or video of candidates intended to shape political narratives or depress voter enthusiasm.
At this juncture, it is not hard to imagine that platforms’ own handling of domestic disinformation will be raised as a reason to doubt the legitimacy of the November election, regardless of the outcome. To defuse this potential claim, the public needs a clearer picture of what would constitute a principled platform approach to domestic political disinformation, before we vote. Part of the solution lies in articulating a more nuanced view of free expression.
At its heart, the debate revolves around several conceptual questions: What does it mean when private-sector platforms commit themselves to protect free expression? Whose speech interests are they protecting? Should platforms have the power, or even the responsibility, to restrict domestic political disinformation—which is not illegal—in order to protect the integrity of elections? Alternatively, should platforms be expected to adhere to First Amendment limits on government power to restrict speech, because they function as “the public square”?
Republicans and Democrats have taken directly opposing positions on whether platforms should be prohibited from taking down, or required to take down, more speech than the government could restrict under the First Amendment. President Trump’s May 28th Executive Order on Preventing Online Censorship expressed the sense of many Republicans that private-sector platforms function as a “21st century equivalent of the public square,” and therefore should not have the power to “handpick the speech that Americans may access and convey on the internet.” The Executive Order cited Twitter’s recent decision to put labels on several tweets (by @realDonaldTrump) as an example of “selective . . . political bias.”
On the other side of the aisle, several Democrats, including the Party’s presumptive nominee for president and other former presidential candidates, have threatened platform regulation for the opposite reason: failure to take down disinformation intended to discourage voting and unwillingness to remove manipulated videos of candidates that unfairly shape public opinion. In an open letter to Mark Zuckerberg on June 11, former Vice President Biden called on Facebook to more proactively stem the tide of “disinformation that undermines our elections” and to find meaningful ways to “use its platform to improve American democracy.”
Ironically, members of both parties have threatened the “revocation” of Communications Decency Act Section 230 (the provision of U.S. law that protects free expression online by establishing platform immunity from liability for user-generated speech), but for antithetical reasons. Revocation of this provision would not solve the purported problems cited by either side, for the simple reason that taking down or leaving up political disinformation would not provide an underlying basis for liability even if platforms lost immunity. Threats to “revoke 230” do not instill confidence that our government officials have a handle on how to protect free expression, combat disinformation, or regulate platforms.
On the private sector front, executives also seem to be confounded by their commitments to free expression. Facebook and Twitter have always moderated users’ speech according to community guidelines or rules that restrict speech beyond what the First Amendment would allow. That said, Facebook has exempted the speech of political officials from many of its community guidelines. Mark Zuckerberg has reiterated his view on several occasions that for Facebook to restrict, label, or fact-check any speech by political leaders, whether organic or paid, would violate Facebook’s core commitment to free expression. There have been two exceptions to this exemption from the rules for posts by elected officials: voter suppression and incitement to violence. But here, Facebook has been roundly criticized for inconsistent application of those rules to certain elected officials. The most notable recent case was when a post by President Trump that included the phrase, “when the looting starts, the shooting starts,” was assessed—by Mark Zuckerberg personally—not to violate Facebook’s policy against inciting violence.
Twitter also has a “public interest exception” for tweets by elected officials. But when recently confronted with several tweets by @realDonaldTrump that violated its rules against voter suppression and glorifying violence, Twitter took a hybrid position between removal and exempting the tweets from the rules: It placed “fact check” labels on tweets claiming mail-in ballots lead to fraud; and placed a “warning” label on a tweet that said, “when the looting starts, the shooting starts.” Placing labels on tweets does not constitute censorship. In fact, the tweets are still up. But these labels do constitute a manifestation of Twitter’s own speech.
Notably, after originally criticizing Twitter on free expression grounds for its decision to label President Trump’s tweets, Facebook reversed course, just as an advertisers’ boycott began to take hold. On June 26, Mark Zuckerberg announced that Facebook, too, would label posts by political leaders that violate community guidelines. This episode provides an indication of how daunting it has been for Facebook to articulate what its commitment to free expression actually entails.
Where does this assortment of positions leave the public? Notwithstanding this convoluted debate, a June 16th poll released by Gallup and the Knight Foundation found that two-thirds of American respondents think that social media platforms should allow expression, even if it is offensive, but at the same time, 81 percent believe that intentionally misleading claims about elections or other political issues should be removed. These sentiments may seem at odds, but in fact they begin to point us in the right policy direction.
The central problem in this debate is that private sector executives and public policymakers hold inadequately nuanced views of what free expression on private-sector platforms actually entails. A more sophisticated understanding would include an appreciation of the interplay between the free expression rights of the platforms themselves, along with the free expression rights of platform users, as well as the free expression interests of the larger societies in which platforms operate.
To help platforms and government policymakers better understand platform rights and responsibilities when it comes to free expression and democracy, here are six suggestions to reframe the debate:
- Freedom of expression for platform users entails more than the right to speak. It also involves the freedom to seek and receive information, as well as the freedom to form opinions. If the ability of platform users to form political opinions is distorted by rampant political disinformation, an important dimension of their free expression rights will be undermined. Commitment to users’ free expression does not require that platforms let disinformation flow, but instead justifies efforts to combat it.
- Platforms should recognize that it is within their own right to free expression to more explicitly commit themselves to protecting democracy and democratic participation, as an expression of their own values. The right to vote and participate in democracy is a fundamental right of individuals that platforms should embrace, just as they embrace free expression.
- Platform rules are a manifestation of the free expression of platforms themselves, as are their powers to promote, demote, label, curate, and rank content. Platforms should not shrink from exercising these powers in the public interest, by retreating into the mistaken presumption that they are bound by the First Amendment when governing speech on their platforms. But platform powers should be exercised responsibly and much more transparently, given the substantial impacts platform discourse has on societies.
- Platforms should take an expansive view of the free expression rights of users, but also consider the impact of their platforms on the expression interests of citizens and the larger societies in which they operate. Expression of the democratic will of the people in an election is the most important manifestation of the expression of the interests of citizens. If platform policies allow political disinformation to suppress democratic participation or warp civic discourse to the extent that it changes election outcomes, both the free expression and democratic will of the people will be thwarted. As part of their commitment to free expression, private sector companies should protect citizens’ ability to express their will in free and fair elections.
- Platforms should not selectively bind themselves to the U.S. First Amendment, especially if only for the speech of elected officials. The complex interplay of expression interests on platforms is not the same as free speech in an American public square. In fact, global platforms should be guided by internationally applicable human rights principles, including Articles 19 and 25 of the International Covenant on Civil and Political Rights. Article 19 protects the right to form opinions, seek and receive information, and to impart ideas as interrelated aspects of free expression. Article 25 protects the right to democratic participation. Private sector platforms would do well to embrace the responsibility to respect these human rights.
- In terms of guidance to the government, the key point is that policymakers should not discourage the responsible exercise of platform rule-making authority in support of democracy. The government should develop transparency and accountability regimes that allow users and the public to assess platform consistency in the application of their own rules and guidelines, as well as the less visible aspects of content promotion and demotion policies. Needless to say, elected officials should not threaten retaliation when platforms choose to protect the right to democratic engagement, especially in the form of threats to revoke platform immunity from liability, which is a linchpin of global free expression.
The bottom line is that private-sector platforms should acknowledge and embrace their own free expression right to combat political disinformation and to protect democracy. While this responsibility may be onerous, it is not prohibited by free expression principles. Platforms already set parameters for regular users’ speech within their own communities, and the same power can be exercised over the speech of elected officials. Rather than hide behind a commitment to neutrality when it comes to political speech, platforms must acknowledge the power they have in governing their platforms, articulate their rules clearly, and have the guts to be judged accordingly by users and the public.