At what point will reality emerge? I don’t mean anything fancy by “reality,” just that moment when you realize there’s a world beyond yourself to contend with; where the stream of bullshit hits something hard and undeniable; where words have some sort of common meaning in a public sphere and don’t just mean whatever you fancy them to.
The Mueller indictment of the Russian Internet Research Agency’s influence campaign in the U.S. captures the drama of unreality and reality which has come to define our hysterical times. On the one hand, the text describes the mechanics of how the Agency used fake online identities to create fake online campaigns geared to help a candidate who has no notion of truth. On the other hand, there was the legal language of the indictment, a language where words count, where they can lead to the relentless realness of prison. When the two clashed, it made for a delicious read, a Pynchonesque yarn retold in bullet-point legalese.
On or about October 16, 2016, Defendants and their co-conspirators used the ORGANIZATION-controlled Instagram account “Woke Blacks” to post the following message: “[A] particular hype and hatred for Trump is misleading the people and forcing Blacks to vote Killary. We cannot resort to the lesser of two devils. Then we’d surely be better off without voting AT ALL.”…. Defendants and their co-conspirators also used false U.S. personas to ask real U.S. persons to participate in the “Florida Goes Trump” rallies. Defendants and their co-conspirators asked certain of these individuals to perform tasks at the rallies. For example, Defendants and their co-conspirators asked one U.S. person to build a cage on a flatbed truck and another U.S. person to wear a costume portraying Clinton in a prison uniform.
But though the indictment itself is an emblematic text of the challenge we face, it does little to resolve it. Jonathon Morgan, a tech guru type, put it best:
https://twitter.com/jonathonmorgan/status/964921031189319682
In the Cold War, Soviet disinformation campaigns needed to cleverly, patiently infiltrate Western media with their agents in order to craftily spread elaborate hoaxes. Now they just use Facebook. In “Digital Deceit,” a new paper from the New America Foundation, Ben Scott and Dipayan Ghosh argue that disinformation campaigns need to be seen not as some sort of perverse use of social media, but as its logical outcome. Behavioral data collection, which tracks users across the internet without them ever being particularly aware of it, allows advertisers to know more about potential customers. Similarly, it allows a clever disinformation operation to understand its audience as never before. Segmentation and precision targeting allow advertisers to know instantly which specific audiences react to which messages. They also allow a disinformation campaign to constantly hone its aim.
“Russian targeting efforts,” conclude Scott and Ghosh, “took advantage of the basic tools of today’s information markets that are designed to deliver targeted persuasive messages to tens of millions of people at low cost and with little transparency. Moreover, they benefited from the fact that there were many other domestic political actors doing similar things—running paid and unpaid content on social media to promote salacious, divisive, or emotionally manipulative political messages. Once AI-driven audience targeting has locked onto a successful combination of demographics, messages, and attention-spending user behavior, it will naturally steer all similar content into the same pathways. These platform economics are designed to help advertising succeed.”
This last point means we have to shift our thinking away from obsessing over whether “foreign” campaigns change the “domestic” argument, or whether the “domestic” matters more than the “foreign.” Instead we have to start thinking holistically about a media system which actively engenders disinformation, and in which all campaigns smooth the pathways for each other. Disinformation won’t be curbed until the very model which produces it is altered. Next time, “Russia” could easily launch a “domestic” U.S. campaign by using actors registered in the U.S. and funded by friendly U.S. nationals. This is already happening in Europe, where, in countries such as the Czech Republic and Italy, local actors with close financial or political ties to the Kremlin launch disinformation campaigns which push Kremlin lines but could in no way be described as “foreign.” Calling them “Trojan Horses” assigns far too much causal power to the Kremlin; they’re more like natural allies. Indeed, I wonder if there might be something nostalgically comforting in defining campaigns as “domestic” and “foreign”: it allows one either to assign all blame to a foreign party, or to comfort oneself that no foreign party could or should ever influence one’s “home.” Disinformation, however, doesn’t give a shit about geography.
When I helped run a project monitoring the recent German election, I found disinformation campaigns being coordinated across borders by U.S. and European alt-Rightists and neo-Nazis, by state and non-state Russian actors, by Falun Gong “alternative media” operations, and by mere enthusiasts. Motivations ranged from the political to the commercial, often intermingling. Every one of these campaigns reinforced the others, and they all helped the far-Right AfD party by focusing on anti-migrant narratives.1
The notion that our current media model creates a demand for disinformation has also been extensively explored by Walter Quattrociocchi of the University of Venice. In a study on the “Trends of Narratives in the Age of Misinformation,” Quattrociocchi analyzed 54 million comments over four years in various Facebook groups and found that the “cognitive patterns in echo chambers tend towards polarization.” It’s this polarization which creates the need for fake news, rumors, and conspiracies to confirm biases. It really doesn’t matter if stories come from dodgy sources: you’re not looking to win an argument in a public space before a neutral audience; you just want to get the most attention possible from like-minded people. Indeed, the more extreme a position you take, the better. “Online dynamics induce distortion,” concludes Quattrociocchi. “Fake news” is not the cause of the degradation of democratic debate; it’s a symptom of the nature of our new media landscape.
Of course there have always been attempts to manipulate us, whether through advertising or political agitprop. But there’s been some sort of qualitative shift in the quantity of unreality. Before, we might see an ad on television or a billboard, which would have to cleverly reach into our unconscious, lie there while we went about our business and interacted in the real world, and then try to influence our behavior when we went into a shop to buy something or express a political opinion. Now the Facebook algorithm is constantly moving anything you might not like out of your way, calculating how long you pause your scroll on every story, filtering out the people or subjects you scroll past, bringing in more of what it has worked out you like to tarry on, and then feeding that information back to companies, PR firms, Trumps and Putins, who in turn can easily feed us the image of the world we want. We don’t even need to enter the real world to act. We buy things online. We make our political statements there too.
Recently I was told how far-Right groups have started recruiting activists on gaming forums, drawing them into groups which use gaming sites to plan social media campaigns: one moment you’re planning how to win at World of Warcraft, the next you’re using the same infrastructure to plan how to spam hashtags and organize botnets to smear politicians. So people dwelling in the virtual world of gaming are lured into online campaigns which use the language of gaming, moving seamlessly from one unreality to another. Perhaps it’s little wonder that the politics which emerges from all this leaves one pinching oneself to check whether any of it is real.
Ghosh and Scott have a bunch of really useful suggestions at the end of their paper on what to do about “digital deceit.” They cautiously propose introducing regulation, the undeniably “real” language of Mueller, into the spongy ether of social media: regulation that gives people more power over and knowledge about how their data is gathered and used, and regulation requiring transparency for political ads.
This is good, but what does one do about the desire to be misinformed in the first place? One of the most striking videos in the aftermath of the Mueller indictment was the interview with the woman who had been duped by Russian sock puppets but refused to admit it. Quattrociocchi is now at the forefront of thinking about how data science can be used to understand the roots of people’s desire for disinformation. He is founding a “social media observatory” at the University of Venice, which will analyze how, why, and with what emotion people consume false stories, in order to find ways to deliver accurate information they are prepared to consume. What sort of sources might they trust? How should stories be framed? Do they prefer visual or narrative forms?
It seems the only way to cure hyper-targeted deception is a more ethical hyper-targeting. There’s no return to the old Enlightenment “public sphere,” allegedly evidence-based and supposedly rational, with its related geography of “domestic,” “foreign,” and “global” spaces. Instead there are permutations of that semi-trance-like state you enter when online, some couched in more accepting language, others in language more “polarized” and dehumanizing, where the definitions of public/private, foreign/domestic, and real/unreal fall away.
1. Something similar can be said of corruption. As the journalist Oliver Bullough has pointed out, in today’s global corruption schemes every link is as important as the next in moving the dirty money around, and the whole is made possible by an economic model built around tax havens.