On October 24, Tim Cook, the CEO of Apple, gave an epochal speech to a conference of European data officials. Many outside the technology industry have warned that we are sleepwalking our way through a vast transformation of politics, economy and society. Our world is being remade around us by data and by algorithms. Tools and sensors that gather data on people’s behavior and dispositions are increasingly pervasive. Our cars secretly upload information about the music that we listen to, and where we listen to it. Our televisions, like the televisions in 1984, can listen as well as speak. Mountain of data is piled upon mountain. The obdurate heaps are sieved, winnowed and harvested by machine-learning algorithms, vast unthinking engines of calculation and classification, that allot us to categories that may redefine our lives while being incomprehensible to human beings, and ceaselessly strive to predict and even manipulate our actions. Together, these technologies are commonly known as “artificial intelligence” (AI)—and the implications for politics and economics are vast.1
Cook spoke eloquently to the problems of AI. What was remarkable was not what he said but that it was Cook, the leader of a major technology company, who was saying it. Cook told his audience that AI needed to be subordinated to human values and not allowed to displace human ingenuity and creativity. Of course, Cook was far better positioned to make such an argument than the CEO of Google or Facebook would have been. His company’s fortunes rely on selling physical products to consumers, rather than selling consumers to advertisers. Yet what he did was to bring the battle over the relationship between technology and morality to the heart of Silicon Valley.
The speech was a bold and very consciously political move. The technology guru and intellectual entrepreneur Tim O’Reilly has observed that if data is increasingly the central source of value within a corporation, then how that data is managed and monetized is likewise central to how value will be captured and distributed within and across national economic units. This is a vast transformation of our economy. To give some sense of its scale, 2014 was the first year in which the value of international data exchanges exceeded the value of traded goods.
Of course, how value is distributed across an economy has enormous political implications. The economic implications are portentous, to be sure, but the social consequences are equally far-reaching. Data collection is reshaping individual privacy, while predictive and manipulative algorithms have profound implications for how we think about the autonomy, or agency, of the individual—both, again, quintessentially political questions.
To understand these politics, we need to think about the moral frameworks that they are embedded in. Google and Facebook’s model, in which individuals come to know themselves and be known through data, is not just driven by greed. It also exemplifies a deeply felt morality: both companies see themselves not just as businesses, but as evangelists bringing the true faith to the unredeemed. Cook’s speech represents a different and incompatible morality, in which technology is harnessed so that it enhances rather than transforms our capacity for moral judgments.
Yet it is not just the clashing moral visions of companies that vie with each other for supremacy. It is the clashing morals of national and supranational societies. Cook applauded the European data officials that he was addressing, telling them that Europe’s General Data Protection Regulation (GDPR) was leading the way, and that countries such as the United States should follow. Many in the United States—companies such as Google and Facebook, Republican and Democratic senators and members of Congress, think tanks, academics and public intellectuals—will sharply disagree, elevating instead what they see as a different and better understanding of technology and society. Understanding these varying moral visions—how they clash, reinforce each other or influence each other—will be key to understanding the politics of the 21st century.
We propose here a moral economy framework for thinking about how governments and societies are choosing to address the political, economic, and social implications of data and data-driven artificial intelligence. By “moral economy” we mean the combination of values and norms that underpin how state and private sector actors work together to allocate, produce, and distribute various goods and services. Every set of economic arrangements implicitly embeds and promotes some set of moral suppositions about how the participants in that economy should relate to one another; it defines, in other words, the participants’ rights and responsibilities.
The moral economy of free-market capitalism, for example, is rooted in the belief that markets allow people to make their own choices and thus promote liberty, and also that markets encourage the efficient use of resources, thus increasing aggregate prosperity; on the flip side, free-market capitalism is less concerned with issues of equality or the promotion of community. Socialist economies have their own underpinning moral systems, as do those characterized by feudalism, and so forth. The point is not that some economies are more or less moral than others; it is rather that all economies have a specific underlying morality. This is also true for emerging economies of data.
Emerging data economies enable new moral economies—values that people are supposed to hold, and standards that are supposed to shape their behaviors. Sometimes these values and standards are so universally accepted as to be invisible; other times they are deeply contested. When they are contested they become very visible indeed. Either way, they shape the boundaries of what people think is acceptable and possible.
A given new technology does not automatically result in a specific moral economy, but can enable many different possible moral economies. The outcome depends on the existing political, economic, and social systems with which it intersects, and how individual and collective actors interpret and guide the collision. Furthermore, just as technology reshapes moral economies, the moral economy can reshape how technology develops, creating pressures that push toward some paths of development and away from others. Different societies, with different moral economies, can furthermore interact with each other, using political tools to try to reshape how technologies are used, and seeing their own moral economies change as technology leaks across political, social, and moral borders. As tech insider Kai-Fu Lee says, “It’s up to each country to make its own decisions on how to balance personal privacy in public data. There’s no right answer.”
While prediction about technology development is a hazardous business, we believe that the next two decades are likely to be shaped by the interactions within and between three distinct approaches to the moral economy of data: in the United States, China, and the European Union.2 While all countries and cultures will have their own reactions to data and its uses, these three matter most not only because they are the primary sites where data-driven technologies are being developed, but also because each of the three is developing a distinct moral economy of data. Clashes among these three moral economies over how these technologies should be deployed are inevitable.3
The U.S. moral economy of data is perhaps the best understood of the three.4 On the one hand, the U.S. moral economy of data treats personal information as an economic commodity that corporations are free to treat as largely distinct from the individual from whom it is drawn. Data can be agglomerated, packaged, traded, and utilized in ways that are similar to, but in some ways more sophisticated than, the bundling of mortgages and other financial instruments. From this approach emerges a multitude of classifications, which shape not only the advertisements people are exposed to, but increasingly the market chances and opportunities they have. One group of people will see attractive offers of finance; another group, predatory loans.
On the other hand, there is a sharp difference between the liberality with which the U.S. moral economy of data treats market exchange, and the skepticism with which it treats the state’s use of personal data. While the surveillance capabilities of the U.S. national security state are enormous and have increased dramatically, these are tolerated only insofar as they are focused on targets outside the United States. Areas where external and internal information overlap (as necessarily happens, given the nature of bulk collection) are the focus of sharp-edged suspicion and legal contestation. In both its permissiveness toward corporations and its suspicion of the state, the U.S. moral economy of data reflects the libertarian taproots of American internet culture.
The European moral economy of data is less well understood. Instead of treating personal data as a commodity “in the wild” that corporations can harvest and sell, it treats such data as an aspect of individual human dignity, in principle inseparable from the human being with whom it is associated. On May 25, the GDPR took effect, epitomizing this approach.
The GDPR approach has implications for both the economy (where data can travel only with the permission of the individual to whom the data refers, and with a strong associated set of rights) and the state (where security-related uses of data need to be limited and purpose-specific). Over a period of two decades, this understanding of the moral economy of data was embattled and subdued, thanks both to the market power of e-commerce companies (and the willingness of the U.S. state to support them) and to the desire of security agencies within the European Union to have untrammeled access to the information they believed they needed to do their job. Now, as the emergence of the GDPR shows, both sets of constraints have weakened, thanks to constitutional and legal changes in the European Union, and to the enthusiasm of judges, regulators, and non-governmental organizations to protect and, if possible, extend the European moral economy worldwide.
The Chinese moral economy of data differs again. In contrast to both the United States and European Union, no very strong distinction exists between the state and commercial sectors, which tend to blur into each other, especially where large companies are concerned. There is furthermore little concern with formal rights of the kind that shape both the U.S. approach to government use of data, and the EU’s general approach. Instead, a strong emphasis on the value of data both for profit and social stability takes pride of place. Both the private sector and the government (often hand-in-glove) are gathering enormous amounts of data, on everything from online behavior to how people walk down the street, with relatively little oversight and control.
The development of the so-called Social Credit System, which is meant to be a kind of FICO score for your entire life, represents the direction in which all this is headed. This is not, as of now at least, creating the Panoptic Leviathan suggested by some overwrought Western commentary (usually written from the point of view of the American moral economy of data). Many of these schemes do not add up to a single integrated system; instead they represent the initiatives of specific parts of the state or commercial sector. While the state is powerful and well-staffed, exactly because it interpenetrates society, it is interpenetrated by society’s political and economic conflicts as well. This gives rise to a new moral economy that combines (a) easy access to data (including forms of data that are controversial in democracies) with (b) economic dynamism and an aggressive willingness to explore the entire possibility space of profitable opportunities and (c) the general concern of the Chinese state with political stability and the continued rule of the Communist Party.
Now, these three distinct moral economy positions are themselves evolving. In particular, two of the actors—China and the European Union—have evolved in historically consequential ways over the last 10-15 years.
Since the emergence of the data economy at the start of the 21st century, the United States has consistently occupied the same quadrant: distrustful of governmental access to personal data, permissive toward its exploitation by corporations. Though there has been serial outrage over internet companies’ failure to safeguard customer data, and much ire directed at social media companies over their role in stoking political divisions, as of now there has been no concerted attack on the right of internet companies to exploit the data they collect on their users. This may change in the future. Some people on the left attack large internet firms as a new expression of corporate power. Think tank intellectuals associated with Senator Elizabeth Warren are beginning to articulate an antitrust case against the data leviathans. Yet none of this has yet translated into a coherent and deep-rooted movement for reform.
By contrast, Europe and China a dozen or so years ago both began in the diametrically opposite quadrant from the United States—that is, trusting the government with data but suspicious of corporate control. The reasons for their suspicion of corporations and support for their governments were quite different, however. The chariness of continental Europeans in particular reflected a more aggressive regulatory approach to companies and a suspicion of untethered profit motives. Often Europeans were willing to support counter-terrorism programs after violent attacks, even when privacy officials deplored those programs. For the Chinese, the position reflected the unapologetic political hegemony of the CCP and a concomitant suspicion of foreign companies gathering data about Chinese citizens.
These different reasons for trusting government and distrusting corporations in turn help explain how each has since evolved. The Europeans have become even more suspicious of companies, and even more committed to regulatory management, all the while growing increasingly skeptical of counter-terrorism arguments favoring governmental access to these data. This was reinforced by the perception, after the Snowden revelations, that anti-terrorist surveillance was primarily being pushed by the United States—a perception that was only partly accurate. The past two years have seen the formation of a loose consensus across regulators and politicians that antitrust, privacy law, and constraints on the sharing of information should work in harness to protect this moral vision; meanwhile, the ability of security agencies to use personal information has been limited by European court judgments on data retention. The result is that the European Union has largely migrated to the no/no quadrant—suspicious of both governmental and corporate control over personal data. The GDPR and looming antitrust actions are the fruits of this shift.
On the other hand, the Chinese dealt with their position by essentially replacing the American internet companies with homegrown varietals, all of which are firmly subordinate to, if not in direct partnership with, the regime; as long as the state controls the companies, the state is happy to give the companies free rein to innovate. The result is that China is now in the yes/yes quadrant—suspicious neither of governmental nor of corporate control over personal data, the two being close to indistinguishable now that the companies are all Chinese. The fruit of this is the Social Credit System, which is openly and unapologetically being implemented by companies in partnership with regional and municipal authorities to improve social service provisioning—including the social service of monitoring and responding to citizen unhappiness and, of course, in tandem “managing disruptive behavior.”
These transitions have created an interesting geopolitical inflection point. Whereas a dozen years ago the Chinese and the Europeans were in some ways aligned in their suspicions of the United States, which had made diametrically opposite “moral economy” decisions about data privacy from what they both preferred (albeit for different reasons), their two positions have since evolved in opposite directions, with the result that the Chinese and the Europeans are now more sharply at odds on this topic than either is with the United States.
Indeed, the advent of Donald Trump has reinforced this dynamic. Trump has stopped U.S. jaw-jawing at the Chinese about human rights (he harasses them on trade and currency issues, but not on data privacy or its social-control deployment), even as the Europeans have become far more critical of China’s data policies. Interestingly, the European Union is probably at this point the most aggressive of the three players in attempting to extend outward its own moral economy of data. As one European participant in these debates describes it:
Europe really wants to take its role seriously and become the global gold standard setter and also the global regulator for these issues on monopoly. And in a way, to find a European way. The Silicon Valley or Washington approach is that they do what they want and then move fast and break things and then see what happens, and if they make money it’s fine. The Chinese approach, on the other side, they basically control everything, including the content, and have the social rating system and stuff like that. We don’t want that. We are having much broader support for a European approach, that tries to regulate technology, to regulate technology companies, to regulate the platform and what have you, based on our European values, on privacy, on freedom of information and the rule of law.5
Tim Cook would like to harness this assertiveness for his own purposes. “We at Apple believe that privacy is a fundamental human right,” he declared, echoing the idiom of the European moral economy of data. But his elaboration on the point still assumed that any change in the United States would build on the existing U.S. model—the right of privacy articulated in Justice Brandeis’s famous dissent, rather than the encompassing notion of privacy favored by the Europeans:
We at Apple are in full support of a comprehensive federal privacy law in the United States. There, and everywhere, it should be rooted in four essential rights: First, the right to have personal data minimized. Companies should challenge themselves to de-identify customer data—or not to collect it in the first place. Second, the right to knowledge. Users should always know what data is being collected and what it is being collected for. This is the only way to empower users to decide what collection is legitimate and what isn’t. Anything less is a sham. Third, the right to access. Companies should recognize that data belongs to users, and we should all make it easy for users to get a copy of, correct, and delete their personal data. And fourth, the right to security. Security is foundational to trust and all other privacy rights.
In other words, while Cook was willing to move some way toward acknowledging the European view that data integrity should be treated as a “human right,” he was still hedging by insisting that data be treated as property, and therefore as fundamentally commodifiable.
It is important to emphasize that none of these moral economies is internally monolithic, any more than any moral economy ever completely ossifies. Each combines moral verities that are more or less unchallenged with areas of sharp contention. Furthermore, the coexistence of these moral economies in a globalized and highly interdependent world means that internal fissures are likely to interact with external pressures in complex ways. What happens if, for example, machine-learning techniques based on large-scale individual surveillance data percolate from China to the United States? How might U.S. platform companies change their business model (with associated implications for the U.S. moral economy of data) if they are obliged by European courts to provide far stronger data rights to citizens? How might China respond if the United States and the European Union work more closely together to try to bind international data exchange to individual liberties against the state? And so on. We anticipate that, as with Tim Cook, there will be efforts to translate preferred actions from the idiom of one moral economy to another, but that, as with all translations, substantive differences will remain between the different frameworks. A good translation is not a transposition; it is a re-articulation of some of the animating spirit of the original work in an alien vernacular with different referents.
To get our heads around what all these divergent and evolving moral economies may mean, we need much further inquiry. First, we need much better maps of the different moral economies of data, their implications, their agreed-on verities, and their areas of conflict, as has already been done for the United States. This should also include looking at the moral economies of data beyond “the big three”: India, Japan, and even Israel will be important technological players in the development of data-intensive information systems, and are likely to have differing views of how privacy, security, state interests, and corporate independence should be balanced in data-intensive applications.
Second, we need to work from these improved understandings to assess the areas in which these moral economies respectively reinforce and impede the abilities of different actors in the state, business, and civil society to achieve their objectives (and, often, implicitly shape those objectives).
Third, we need to chart out the reciprocal ways in which these moral economies shape different trajectories of technological development, respectively favoring certain lines of research while disfavoring others. Fourth and finally, we need to examine the interaction between these three different understandings of the moral economy in an interdependent world, where none can prevail entirely, and each is obliged to press back against or to accommodate moral imperatives that do not originate internally.
1. Definitions of artificial intelligence, machine learning, deep learning, and even data itself reside in highly contested terrain. At the least, a consensus is forming around the idea that there are five stages of AI. Our definition here does not attempt to parse these distinctions. It refers broadly to the emerging suite of data-driven computing technologies that impinge on various cultural frameworks. The standard textbook here is Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach, which has sold hundreds of thousands of copies.
2. For a cognate analysis, see Diane Francis, “Three Glimpses of the Future,” The American Interest (June 2018): https://www.the-american-interest.com/2018/06/20/three-glimpses-of-the-future/.
3. Each of these approaches is internally contested, yet in each the contestation involves a distinct set of moral orientations and associated forms of political, economic, and social organization. These three moral economies are moreover organized according to distinct and sometimes contradictory logics, which means that there is likely to be continued contestation within as well as between them.
4. A wide body of scholarship and writing addresses the U.S. moral economy of data; the most direct treatment is Marion Fourcade and Kieran Healy, “Seeing like a market,” Socio-Economic Review 15:1 (2016), pp. 9-29.
5. Telephone interview with Ralf Bendrath, former senior policy adviser for Jan Philipp Albrecht, June 2018, quoted in Henry Farrell and Abraham L. Newman, Of Privacy and Power: The Transatlantic Struggle over Freedom and Security (Princeton University Press, forthcoming 2019), pp. 174-75.