I was sitting in Parliament the other day dreaming of The Tempest and how to save democracy.
Outside the Committee room where I was slouched in reverie, beyond the thick green wallpaper with its paisley prints and the oak paneling with its heraldic signs, the House of Commons was convulsing over Brexit, unable to determine what exactly it was the country had wanted when it voted to leave the European Union in 2016. The words “sovereignty” and “the people’s will” were being thrown around like projectile vomit across the green benches of the Mother of All Parliaments, but no one could reach agreement on what they meant. Had the vote been about migration? The freedom to make trade deals with the rest of the world? Economic inequality? This confusion is the fault, in no small part, of how the campaign to Leave the European Union was run: with targeted, opaque dark ads on social media which tapped into utterly different grievances to get people to vote for Brexit. The head of the digital campaign for Vote Leave once told me you need 70 discrete messages for a population of 20 million. And when each little group votes for a different reason the others don’t know about, how can one reach consensus on the meaning of the result?
In our little committee room a gang of MPs and “specialist advisors” (of which I was a very, very minor one) were working on the final draft of the report of the Parliamentary Committee on Fake News and Disinformation, which makes some brave recommendations on how to regulate the internet in an era of bots, trolls, ISIS, information war/the war on information, dark ads, viral deception, and the rest. As the report put it:
We have always experienced propaganda and politically-aligned bias, which purports to be news, but this activity has taken on new forms and has been hugely magnified by information technology and the ubiquity of social media. . . . when these factors are brought to bear directly in election campaigns then the very fabric of our democracy is threatened.
The Committee had spent two years collecting testimony, including from whistleblowers at the disgraced election campaign company Cambridge Analytica, which worked on Facebook data analysis and targeting for the Trump 2016 campaign, and harrowing stories of how coordinated social media campaigns have successfully fueled hatred and ethnic cleansing in different corners of the world.
The image I kept coming back to during the Committee’s sessions was of the person online as Caliban in Shakespeare’s play The Tempest, tormented with spells and weird apparitions on an enchanted island by the magician Prospero. Like Caliban, we are buffeted by information forces whose provenance we cannot see, made to fear and tremble by the workings of algorithms whose logic we are not allowed to fathom. We have no idea how the reality around us is being shaped, by whom, or with what aims.
And this online life of semi-darkness means one can’t engage with the pseudo-reality around one as an equal. There are many definitions of “propaganda,” but the one that has always meant the most to me focuses on the relationship between a citizen and information: Slanted, impassioned, even biased material is not so much the problem. The problem is when information is presented in a way that closes off the possibility of discussion with the source on equal terms. Lies do this, of course: If someone is purposefully hiding the real state of affairs from you, how can you debate them fairly? Certain types of thinking do this too: The more intense conspiracy theories undermine any possibility of reasoned argument by insisting that those in the know possess secret knowledge, or that all opposing speakers are actually emissaries of hidden powers.
Today, the very nature of behavior on the internet, and the architecture that upholds it, takes away what the philosopher Josef Pieper once called “the right to our share in reality.”
So what would an empowered online citizen look like?
As the report suggested, the first step is for a person to be able to see how the reality around them is created—how the information meteorology is shaped. At the very least we should be able to understand why a political message is being targeted at us, by whom, and which bits of our personal data are being used to do so. One could go further: when something online looks organic, it should be clear whether it is actually part of a coordinated campaign; when something looks popular, it should be clear whether it has been amplified artificially; and when an account purports to be a real person, we should know whether it is in fact an avatar.
But while transparency of content is important, even more so is demystifying the workings of the algorithms. There has to be public input into how they work. If the algorithms are structured to promote violent extremism or to spread blatant disinformation, then there must at the very least be public input into how such content can be mitigated. More positively, we need to articulate what a public-spirited algorithm would look like. Back in the day, patrician organizations like the BBC decided what “public service” was (something or other about balance, tolerance, evidence-based discourse, public health, and so on). That top-down approach is no longer possible. So how can we move towards an internet architecture that supports the public good? Can we still agree on what public service is? Or could the process of trying to work out what it means today be the very way in which agreement on its meaning is created?
And what about a trope even more tired than public service: “human rights”? How can one make the internet a place where freedom of speech, for example, is safeguarded, while protection from harassment and state-sponsored bullying is enforced? Could quick-reaction internet courts be an answer, as I’ve heard at least one senior judge suggest?
As the countries in the thing once known as the West move towards regulating the internet, they need to approach the process so that it doesn’t become a reactive game of trying to catch random and ill-defined bits of “disinformation.” Frame the process of regulation in a positive way, and then, instead of being the place where “the very fabric of our democracy is threatened,” the digital realm becomes the place where it is regenerated.