Last week, Facebook, Twitter, and Google sent their lawyers to testify before the Senate Judiciary Subcommittee and the Senate and House Intelligence Committees on their platforms' role in Russian interference in the 2016 U.S. elections. But while the testimonies brought new revelations about the extent and nature of Russian information operations, the firms failed to offer real solutions, instead serving up proposals that amount to slapping band-aids on gaping wounds. It's not surprising; meaningfully addressing foreign interference on social media would require a rethink of these firms' revenue models. At least for now, political pressure has apparently not been strong enough to force such a reckoning.
Several lessons emerged from the hearings.
Chaos is cheap. Ahead of the hearings, Facebook released approximately 3,000 ads that it said were paid for by Russian sources, including the St. Petersburg troll farm known as the Internet Research Agency (IRA). Lawmakers made a portion of these ads public on Wednesday. The purpose of the ads, displayed on posters during the committee hearings, was clear: push on hot-button issues, amplify social and political divisions, and encourage visible, public protest. The content was non-ideological: some ads supported the Black Lives Matter movement while others promoted Southern pride with Confederate symbols; some were pro-gun, others anti-gun. Nor were the ads randomly placed: they were micro-targeted by geography and users' interests, cross-referenced with users' other "likes." The ads show that the Russians launched a sophisticated information operation that required a nuanced understanding of American society and culture. And the IRA spent only $46,000 on the ads in the lead-up to the elections.
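None of this required exotic tooling: targeting criteria like these are exactly the structured inputs any ad buyer supplies. A minimal sketch, using hypothetical field names and labels rather than Facebook's actual ad API, of how such a spec selects its audience:

```python
# Toy sketch of ad micro-targeting. The field names and page/interest
# labels are hypothetical simplifications, not Facebook's actual ad API,
# but the released ads combined criteria of exactly this shape.
ad_spec = {
    "geo": "US-TX",                                    # geography
    "interests": {"Southern pride", "2nd Amendment"},  # declared interests
    "liked_pages": {"Secession Now (hypothetical)"},   # cross-referenced "likes"
}

def is_targeted(user: dict) -> bool:
    """A user sees the ad only if they match every criterion in ad_spec."""
    return (
        user["geo"] == ad_spec["geo"]
        and bool(user["interests"] & ad_spec["interests"])
        and bool(user["liked_pages"] & ad_spec["liked_pages"])
    )

# Example: a Texas user whose interests and page "likes" overlap the spec.
print(is_targeted({
    "geo": "US-TX",
    "interests": {"2nd Amendment", "hunting"},
    "liked_pages": {"Secession Now (hypothetical)"},
}))  # True
```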
Algorithms are easily manipulated. The Russians reached 150 million Facebook and Instagram users in their influence campaign with only 80,000 posts. Twitter identified over 2,700 Russian-linked troll accounts and 36,000 automated (bot) accounts, which together posted over 1.4 million "election-related tweets." These tweets received 288 million views, and that was over just a two-month period from September to November 2016. What these remarkable numbers tell us is that with clickbait headlines and a little bit of cash, Twitter's and Facebook's algorithms can be leveraged to devastating effect. Social media firms have consistently insisted that they are neutral content platforms, not media organizations or content publishers. The algorithm, the tech titans tell us, cannot be biased; it is merely automated to prioritize content that users like. But it is precisely this type of automation, lacking human editorial oversight, that leaves the platforms open to sinister manipulation. Claims of official neutrality are simply no longer cutting it with the public or lawmakers.
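Neither firm publishes its ranking code, so the following is a deliberately simplified model, not either company's actual algorithm. It illustrates the core weakness: a feed ranked purely on engagement counts cannot distinguish organic enthusiasm from coordinated, purchased activity.

```python
# Deliberately simplified engagement-ranking model (not either firm's
# real algorithm): reach is driven purely by interaction counts, so the
# ranker cannot tell genuine popularity from coordinated bot activity.
def engagement_score(post: dict) -> float:
    return post["likes"] + 2 * post["shares"] + 1.5 * post["comments"]

organic = {"title": "Local charity drive", "likes": 400, "shares": 30, "comments": 50}
boosted = {"title": "Clickbait outrage",   "likes": 300, "shares": 20, "comments": 10}

# 500 bot accounts each like and share the clickbait post once.
bots = 500
boosted["likes"] += bots
boosted["shares"] += bots

feed = sorted([organic, boosted], key=engagement_score, reverse=True)
print([p["title"] for p in feed])  # the clickbait now outranks the organic post
```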
There's a lot more we don't know. The testimonies from all three firms focused primarily on accounts or content that could be linked to the IRA, the Russian troll farm. The firms used various indicators for identifying Russian ads, including IP addresses and whether the payments were made in rubles, but their overall methodology remains murky. Case in point: Twitter's flip-flopping on the number of Russia-linked accounts. In September, Twitter said that it had identified only 201 such accounts; the number has since grown to 2,752. According to congressional testimony, Twitter identified 36,746 Russia-linked bot accounts that were "tweeting election-related content" in the lead-up to the elections. Twitter went out of its way to suggest these efforts were insignificant, testifying that these "represent approximately one one-hundredth of a percent (0.012%) of the total accounts on Twitter at the time." The specific qualifiers in this statement, Russia-linked and election-related, should raise additional questions about how exactly Twitter chose these specific accounts. As the diversity of the Facebook ads suggests, the majority of the Russian ads could not easily be identified as "election-related content" in support of specific candidates. This also raises the bigger question of what proportion of all Twitter accounts are bots. Twitter did not directly answer this question, leaving researchers to infer, based on incomplete data, that bot accounts could represent as much as 20 percent of Twitter's overall users in some countries.
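Taking the testimony's own figures at face value makes the point concrete. The arithmetic below is purely illustrative, using only the numbers quoted above; the 20 percent figure was an upper-bound estimate for some countries, applied here to the whole base simply to show the scale at stake:

```python
# Back-of-the-envelope arithmetic using only the figures quoted above.
russia_linked_bots = 36_746
stated_fraction = 0.00012        # "0.012% of the total accounts on Twitter"

implied_total = russia_linked_bots / stated_fraction
print(f"Implied total accounts: {implied_total:,.0f}")  # ~306 million

# Illustrative only: the 20% bot share was an estimate for some countries,
# not a global figure, but even applied loosely it dwarfs the subset above.
print(f"Bots at a 20% share: {0.20 * implied_total:,.0f}")  # ~61 million
```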
Google was also cagey in its testimony, focusing on only one of its products: YouTube. Google said that Kremlin-linked accounts posted over 1,100 YouTube videos on similarly divisive issues, which were viewed 309,000 times. Compared to Facebook and Twitter, this suggests a low impact, given that people watch a billion hours of YouTube videos per day. But like Twitter's, Google's report seemed to leave out potentially important analysis. YouTube is hardly the only vulnerable vector: the company's own search algorithms are also open to manipulation, and illegitimate "news" sites have often popped up at the top of Google searches, especially under the "News" category.
Band-Aids
Directly after the hearings, Facebook unveiled an action plan against foreign interference. It boils down to Facebook agreeing to cooperate with Congress in the future and, in the meantime, hiring more people to review potentially fake accounts and tweaking its algorithm to reduce the prominence of clickbait. The plan also places most of the burden on users, who must seek out information about ad funding sources by clicking through posts to learn more about the sponsor, or take the initiative to research whether a news site is legitimate. This may sound reasonable, but Facebook users click through an ad only 0.9 percent of the time on average, so they are unlikely to educate themselves.
Twitter also announced that it has blocked RT and Sputnik, Russia's state-funded media outlets, from buying ads, and that it is working more aggressively to shut down automated accounts and identify fake ones. RT and Sputnik will, however, still be able to tweet on the platform.
Google had not yet made a public announcement at the time of writing.
This will not be enough. Facebook, for example, collects reams of users' personal information and preferences, which it then uses to sell advertising opportunities to other companies. The targeting tools that the Russians used in their ads were provided directly by Facebook itself. Political consultancies such as Cambridge Analytica mined Facebook "likes" to micro-target voters in the United States during the elections and in the United Kingdom during the Brexit referendum. Rather than recognizing early on the threats posed by the weaponization of personal information, Facebook expanded the "like" button in 2016 into a set of emotional "Reactions." This feature, if mined by a savvy malicious actor, opens a Pandora's box for emotional manipulation: malicious actors now have an even deeper trove of information, knowing exactly what content makes people happy or sad or, better yet, angry.
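A toy example, using invented topics and counts rather than any real Facebook data, shows how little sophistication such mining would require:

```python
# Toy sketch of emotion mining on reaction data. Topics and counts are
# invented for illustration and are not drawn from any real dataset.
posts = [
    {"topic": "immigration",  "angry": 950, "sad": 40,  "love": 10},
    {"topic": "local sports", "angry": 5,   "sad": 15,  "love": 700},
    {"topic": "policing",     "angry": 820, "sad": 300, "love": 30},
]

# Rank topics by how reliably they provoke anger: a ready-made menu for
# an actor deciding where to aim the next round of divisive content.
ranked = sorted(posts, key=lambda p: p["angry"], reverse=True)
print([p["topic"] for p in ranked])  # ['immigration', 'policing', 'local sports']
```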
It's of course no surprise that Facebook, Twitter, and Google are reluctant to make the kinds of changes that could make a difference. At the end of the day, these tech firms depend on ad dollars for their bottom line, which drives them toward more mining of their users' personal information, not less. Furthermore, their pitch to advertisers rests on the premise that all that personal information is attached to real people. Getting to the bottom of anonymous accounts could be bad for business. Twitter is particularly vulnerable[1] if it turns out that a significant number of its users are bots or fakes; its ad revenues depend on the premise that those eyeballs are real.
The digital revolution has completely transformed human interaction. Our Silicon Valley giants, the authors of this great transformation, have been allowed to operate with little government regulation or oversight. Russia's disinformation operations have revealed just how easily these influential platforms can be manipulated to the detriment of our democracy. These companies must get serious about proactively curtailing this manipulation on their own. But if, in the process, they fail to re-examine and adjust their business models, the harder hand of government regulation is probably inevitable.
[1] Twitter recently admitted that it had been overstating its user numbers for three years. The company has also struggled to grow its user base and ad revenue, with negative effects on its stock price.