Commerce Secretary Wilbur Ross’s decision last month to add a question to the 2020 Census asking about citizenship status has predictably led to a firestorm of controversy. Census professionals, former Census directors, and progressive activists have responded furiously since the decision was announced, arguing that the added question will scare undocumented residents away from filling out the census forms, leading to an undercount of the population in immigrant-rich communities.
As Ari Berman recently put it, “The census is America’s largest civic event, the only one that involves everyone in the country, young and old, citizen and noncitizen, rich and poor.” The core purpose of the Census is to collect information about exactly how many people there are in different parts of the country, down to a block-by-block level of detail. The scale of the undertaking is hard to exaggerate: At its peak activity level in 2010, the Commerce Department employed 635,000 people to conduct the Census. It is such a significant effort that it noticeably affects national employment statistics. In 2010, the Census cost $12.3 billion, a figure that will increase in 2020, assuming it is performed properly.1
The Census is of paramount importance not only because it is used to guide congressional redistricting—guaranteeing fair and equal representation in Congress, a basic index of democracy—but also because more than $600 billion in Federal funding is allocated based on where people live as determined by the Census. Compromising the integrity of the Census thus threatens the core of American democratic practice, giving people in accurately counted (or over-counted) areas more political power and resources than people in undercounted parts of the country.
Article I, Section 2 of the Constitution mandates that “Representatives and direct Taxes shall be apportioned among the several States . . . according to their respective Numbers, which shall be determined by adding to the whole Number of free Persons, including those bound to Service for a Term of Years, and excluding Indians not taxed, three fifths of all other Persons.” The language, on its face, confirms that citizenship is not relevant for the purposes of the count. Since 1950, no Census has included a question about citizenship.2
Even setting aside the three fifths clause, in the modern era, undercounts of Latino and African-American residents have been a besetting problem for the Census, but one that has been much abated over the past two Censuses (see Figure 1). The 2010 Census, which experts believe was the most accurate ever conducted, is nonetheless estimated to have undercounted African Americans by 2 percent and Latinos by 1.5 percent. Renters were also undercounted. By contrast, homeowners and whites were over-counted. Each of these undercounted groups tends to vote Democratic; the over-counted groups tend to vote Republican. (Figure 1: from Berman’s article, above)
By bizarre coincidence, the regions most likely to be adversely affected by the undercount are almost all Democratic-leaning. So far, more than a dozen states have filed suit against the proposed citizenship question. The NAACP is suing the Trump Administration, claiming that the Commerce Department is planning to systematically undercount minorities (thus indirectly disenfranchising them) by not getting the word out to minority and low-income communities about the importance of full participation in the Census. Republican responses to these concerns have ranged from disingenuous disavowals to outright lies.
Concern about the politicization and corruption of the 2020 Census has been building for some time. The first alarms sounded a year ago, when Trump’s anti-immigrant rhetoric raised doubts about the Administration’s willingness to encourage immigrant communities to be counted. These concerns grew when Congress refused to allocate adequate funding for the Census Bureau to test new digital approaches to data collection (including email surveys and collection via tablets). This refusal may well have led to the resignation of Census Bureau Director John H. Thompson, but his departure flew below the radar because it took place on May 9, 2017, the same day President Trump fired FBI Director James Comey.
The politicization fears were amplified when Trump nominated Thomas Brunell as Thompson’s replacement. Brunell, a professor at the University of Texas at Dallas, once wrote a book titled Redistricting and Representation: Why Competitive Elections Are Bad for America, defending and extending partisan Republican arguments that gerrymandering should be used to segregate voters by party affiliation. Ultimately, Brunell withdrew and, as of this writing, the position remains unfilled. What Brunell overtly supported—the deliberate underrepresentation of Democratic voters—may now be accomplished more subtly, through a lack of leadership, untested equipment, and underfunding. Unsurprisingly, the nonpartisan Government Accountability Office has officially declared the 2020 Census to be a “high risk” project.
The Three Sources of Shared “Reality”
The alarming assault on the integrity of Census statistics is only the most visible aspect of what in the end may be the most lasting long-term consequence of the Trump Administration: the disruption of the belief in neutral government statistics that, for more than a century, have formed the epistemological foundation of shared social, economic, and political reality in the United States. To appreciate the significance and scope of this quietly unfolding disaster, one must step back and ask a fundamental question: How is it that we as citizens “know” what we think we know about the conditions of the country? Schematically, there are three basic ways.
The first and most basic source is personal experience. We should not underestimate the importance of direct experience in shaping the opinions of the proverbial man on the street. Talking to people at church and at work, at the bar and in the beauty parlor—in short, participating in the public sphere—is central to our sense of the political mood of our communities. Being personally affected by things like mass shootings, opioid overdoses, racial discrimination, or sexual harassment obviously affects our political views of these issues. “The plural of anecdote isn’t evidence,” as all social scientists are taught, but it does count for something.
Whatever direct experience is worth, most of us realize that our own immediate observations and experiences cannot provide a complete picture of collective reality. No individual can observe more than a small slice of social reality, and conditions and attitudes vary from place to place. So how do we move from our own personal observations about how things are going locally to an understanding of how this abstract, total entity called “the United States” is doing?
That takes us to the second source of our knowledge: “data”—information that in the large majority of cases traces back to sources collected and published by the government. Observing the level of activity in the mall or on Main Street gives us a feeling for whether commerce is doing well, for example, but a full understanding of how the economy is doing depends on the regular drumbeat of unemployment rates, growth rates, deficits, and so on. Government pronouncements about the “unemployment rate” or the “inflation rate,” both published by the Bureau of Labor Statistics, matter as much as what we observe directly to our sense of how the economy as a whole is doing.3
Third, we get information from the media, or rather, as sociologists Paul Lazarsfeld and Elihu Katz explained in their classic 1955 study Personal Influence, from “opinion leaders”: braying television pundits, tweeting political leaders, preaching pastors, pontificating TAI columnists, and so on. These “narrative entrepreneurs” specialize in telling stories that make sense of the bewildering cacophony of discrete news items. Their stories may be more or less connected to reality, but they are crucial in shaping public opinion. Glenn Beck’s flow charts or Alex Jones’s lurid fantasies about the globalist conspiracy may do as much to shape our collective views of reality as any amount of personal observation or government-published data.4
The interaction between these three sources of information—direct observation, data sources that almost all trace back to government statistics, and the narratives of elite opinion makers—forms the foundation of our sense of political reality. These three dimensions never align perfectly, and it would be an exaggeration to claim that together they fully represent objective reality. But each checks and filters the credibility of the other two. For example, a narrative entrepreneur who pushes a story radically at odds with our personal experiences or government statistics is not likely to be widely believed. Conversely, if narrative entrepreneurs and government statistics agree about some aspect of reality, we are more likely to discount our personal experiences than to call the narratives and statistics into question.
But what happens when government statistics are belied by our personal experience? For example, what happens when government statistics assert that we are in the midst of a ten-year economic expansion, but all around us we see shuttered factories, elite college graduates working at Starbucks, and tent cities in our urban cores? Under these circumstances, the role of narrative entrepreneurs becomes crucial: Is this a situation of “American carnage”? Or is it “the best economy ever”? Is the ultimate arbiter of reality merely the demagogue’s mood?
A Very Short History of Government Statistics
While we take government reports and data as commonplace, the provenance and availability of them is a relatively recent historical affair. Governments (and in the North Atlantic, churches) have long collected basic demographic data about total population: mainly births, marriages, and deaths.5 In antiquity, Roman and Chinese emperors wanted such information in order to more effectively levy taxes and raise armies. But until the early 19th century such data remained rudimentary. For the most part, sovereign authorities had little interest in the details of the lives of their subjects.
That began to change around 200 years ago. The transition from “subjects” to “citizens” marked a fundamental shift in the kind of attention that governments paid to the people. A century ago, Danish demographer and statistician Harald Ludvig Westergaard observed that the “era of enthusiasm” for statistics began about 1830, though other historians have pushed that date back further, to the era immediately following the Bourbon restoration in the wake of the Napoleonic Wars, which had sought to impose the political ideology of the French Enlightenment across Europe.6 In the 19th century, governments across the North Atlantic industrial core began collecting and disseminating what historian Ian Hacking has called an “avalanche of printed numbers.”7
The categories of data that governments chose to collect quickly multiplied as states sought to understand more and more about the messy and complex social reality of the populations they were governing. In each case, the particular new category entailed some statistician (typically a government bureaucrat) forging a more or less arbitrary set of distinctions, thus enabling data collectors to consistently code and catalogue a set of social conditions that, before the process of enumeration, had been understood in various qualitative (which is to say more or less vague or inexact) terms. These data became the “facts” that allowed not just the government but the public at large—increasingly literate and numerate as time passed—to begin perceiving the totality now called “society,” a term that, in every European language, had made a lengthy but revelatory etymological journey from a small-bore, elite class-inflected description to a broader and more inclusive one.
For the past two centuries, this avalanche of numbers has only grown, and the pace of its collection has accelerated. Today in the United States, dozens of Federal agencies employ thousands of civil servants whose primary job is to collect, collate, and publish in a consistent and reliable manner social data that allows anyone who cares to get a good sense of the empirical conditions inside the United States across a whole host of dimensions. State and local governments also collect enormous volumes of data, some of which rolls up into the Federal data sets.8 Only rarely were these data collection, analysis, and publication efforts politically controversial. They were simply part of the regular tick-tock of governmental operations.
But collecting and above all publishing social statistics has never been a politically neutral act, even as it poses as one. In the first place, the chosen statistical categories have helped to create a shared communal reality. Government statistics replace the messiness of our quotidian realities with a world that seems, by virtue of enumeration and publication, to be better understood. While it is experientially obvious that New York City is a richer place than, say, Nashville—and as a matter of general lore that certain neighborhoods are “nicer” than others, or that some cities have more high-paying jobs, or better weather—the rise of government statistics has made it possible to define in much more precise terms what distinguishes one locality from another.
As Hacking pointed out, this enthusiasm for numbers represented “an overt political response by the state” to the implications of democracy. In the United States, the data avalanche was a byproduct of the Progressive movement—better government required detailed knowledge of what was going on “out there.” It’s no coincidence that the Department of Commerce and Labor was established in 1903 under Theodore Roosevelt with a mandate to create jobs and stimulate economic growth, and that the Labor part of the Department was separated out in 1913, the same year the constitutional amendments allowing income taxation and the direct election of Senators were ratified. The modern social sciences, which emerged during the late 19th century, were grounded in the analysis of social statistics with a view to providing policy makers with “objective” policy advice, most often in support of Progressive policy objectives. The culmination of the Progressive movement was the presidency of Woodrow Wilson, the most accomplished social scientist ever to reach the White House, who had served as President of the American Political Science Association before becoming President of the country.
But the move toward data collection could also be conservative in motive, rooted in a desire for social control. Finding out more about citizens enabled the government to address their grievances, thus diminishing motives for political revolution or radicalism. (For the same reason, the famously conservative late-19th-century German Chancellor Otto von Bismarck pioneered state social insurance: Better a pro-labor program under state control than a labor movement outside it.) Whatever his progressive policy bona fides, Woodrow Wilson himself was of course also famously anti-radical.
Social statistics thus represented a kind of moral science with mixed objectives. For political entrepreneurs who wished to palliate citizens, good news could always be extracted from the myriad data sets. Conversely, those who jousted with the status quo could always find “facts” that tugged at the conscience of reformers. For example, the statistical agencies allow some interpreters to highlight job growth, low and stable rates of “core” inflation, supposedly falling divorce and violent crime rates, and so on, while others emphasize the decline in labor force participation, the explosion in housing costs, the growth in economic inequality, the rate of gun violence, declining life expectancy for white women, and so on. For decades, such debates over which statistics matter, and how, have formed the warp and woof of day-to-day policy debates.
While social scientists were largely sidelined from Federal policymaking during the Republican administrations of the 1920s and early 1930s, the New Deal marked a watershed for the arrival of social science in government. Not only did the government begin to collect and publish reams of new social statistics seen as necessary to build a welfare state, but many social scientists were also brought into the Roosevelt Administration in formal roles ranging from the Federal Reserve to the Department of Agriculture. In the case of economists, their role as policy advisers was formally institutionalized in 1946 with the creation of the Council of Economic Advisers. But sociologists, political scientists, and other social scientists have also played important roles. For example, Swedish economist Gunnar Myrdal argued in An American Dilemma (1944) that American racial segregation was damaging to African Americans, whatever the purported experiences and narratives of white Americans; his arguments were cited in Brown v. Board of Education (1954) as evidence in favor of overturning school segregation.9
Since the early 20th century, social statistics have thus been the stage upon which policy debates have played out; and the social sciences have been enrolled, for better and worse, in these debates. Those who had the skills to manipulate and interpret these data thus occupied a privileged position of authority in policy debates. We can cite Brown as a positive example of the use of government statistics and social science to redirect government policy, but there is something disconcerting about the way that government statistics can be deployed as if they are the sole arbiter of political reality, especially when citing such statistics becomes a means for short-circuiting political debate. Too often government data are invoked not only to create a false sense of clarity about empirical social, economic, and political reality, but also to effectively sideline the opinions of the less well-trained or well-informed. That these data categories were themselves only partial slices of complex social reality, born in particular historical circumstances, is often conveniently forgotten.10
In addition to privileging the political role of “data experts” (for example, social scientists), social facts as represented by government data also impose de facto limits on political debate. Since the numbers avalanche, “normal politics” has increasingly entailed debates not about the quality of the “facts” per se, but rather about which facts to prioritize (a values question) or what to do from a policy perspective in order to move the numbers (an efficacy question). Thus, for example, policymakers might reasonably argue that the Bureau of Labor Statistics’ indication that the United States is currently experiencing a 1.8 percent inflation rate and a 4.1 percent unemployment rate means that the Fed should cut (or increase) interest rates, or increase (or cut) fiscal spending. But what “normal politics” has normally not entailed is a questioning of these BLS statistics themselves. These statistics have been taken as the baseline against which policy proposals are debated. These statistics, in other words, represent social truth, and only what to do about them has been within the realm of reasonable debate.
But if government statistics should not be treated as the final arbiter of shared social truth, public data nevertheless remain indispensable for creating a shared reality rooted in something other than private experience or mere opinion. We can and should fight to improve the quality of public statistics and cultivate a more sophisticated sense of what the statistics do and do not mean, but we must also insist that the statistics, comprehensively collected and fairly analyzed, are indispensable to creating an epistemic consensus. While it is reasonable to debate the political salience of any given government statistic, the real value of government statistics does not derive from their specific political interpretations but rather from the fact that they are collected consistently over time and place. It is reasonable to debate whether U3 or U6 is the better way to assess the health of the labor market, but the fact that both of these statistics are collected consistently over time allows us to objectively assess whether employment has gone up or down, and to compare the situations in Charleston, Chicago, and Cheyenne. Such shared truths are essential to keeping any political community intact.
The Revenge of the Narrative Entrepreneurs
All this may seem like mere common sense, but it’s not. Government statistics are a specifically constructed social reality, albeit one that’s been going on for so long that most of us have a hard time even perceiving that these particular social facts are but a slice of the complex totality of social reality. They provide one particular aggregate measurement of a much more (indeed, infinitely more) nuanced set of communal, local, and personal realities.
These more nuanced realities are now the subject of a vast discourse, much of it playing out in today’s online cacophony. No longer are public statistics treated as an unquestioned source of facts, the baseline of “reality”; instead, narrative entrepreneurs increasingly rely on what counselor to the President Kellyanne Conway memorably referred to during a January 22, 2017 “Meet the Press” interview as “alternative facts.” The on-ramp for this view of empirical reality was captured by Ron Suskind in 2004 in a quote from President George W. Bush’s Senior Advisor and Deputy Chief of Staff Karl Rove: “We’re an empire now, and when we act, we create our own reality. And while you’re studying that reality—judiciously, as you will—we’ll act again, creating other new realities, which you can study too, and that’s how things will sort out. We’re history’s actors . . . and you, all of you, will be left to just study what we do.”
Conway’s invocation of “alternative facts” was a candid admission that the data conventionally taken to represent “the” definition of social reality also represents a major problem for people who want to enact policy that falls outside “conventional” boundaries. For narrative entrepreneurs, social facts are not just stubborn things, as John Adams observed, but also potentially insuperable obstacles. There’s good reason why the National Rifle Association, for example, has poured boiling oil and boulders down the side of the castle to prevent government collection or analysis of gun violence statistics. The NRA (read: the gun industry) is well aware that “facts” present a serious obstacle for promoting their narrative about the sources and effects of the proliferation of guns. Likewise, for someone wanting to claim, say, that America is experiencing some unprecedented form of “carnage,” and that this is somehow the fault of immigrants, it is highly inconvenient that government statistics show that violent crime is way down over the past thirty years, even as the number of immigrants has increased dramatically.
The phrase “alternative facts” also nicely summarizes the ongoing campaign on the part of our current President to undermine the factual basis of our shared reality. For those who want to break the technocratic limits on political debate, corrupting government data attacks the material basis of social scientific reason that some insurgent politicians believe is biased toward a progressive agenda. Without the data, the only sources left for defining political reality are personal experience and the echo effects of narrative entrepreneurs. Attacking government data is thus part of a war of position between narrative entrepreneurs and the social-scientifically inclined who use facts as weapons.
Trump’s political career began, let us remember, with an assault on one sort of government fact, namely the birth certificate of then-President Barack Obama. During his campaign, Candidate Trump regularly questioned official unemployment statistics. “Don’t believe these phony numbers,” Trump told supporters in early 2016, just after he had won the New Hampshire primary. “The number is probably 28, 29, as high as 35 [percent]. In fact, I even heard recently 42 percent.” Indeed, a recurrent theme for Trump has been to attack the Bureau of Labor Statistics as purveying “total fiction.” On one level, these claims were an example of Trump’s oft-noted inclination to bullshit, since there was no underlying “technical” or “scientific” basis for them. But in fact there was a strategy at play, one supported by other members of the Trump team. During his confirmation hearing before the Senate Finance Committee, for example, Treasury Secretary nominee Steve Mnuchin declared, “The unemployment rate is not real. I’ve traveled for the last year. I’ve seen this.” Here was as direct an assertion as one is likely to find from a senior policymaker that personal experience rather than impersonal statistics should define the basis for policymaking choices.
But to leave it at that is to miss something deeper at work. The truth is that government statistics are often faulty. Many are old measures, and government institutions are slow to change, so in some cases the measurements no longer align with reality—like the way the FBI (mis)counts various categories of crime. Some statistics are politicized, as with the way the definition of poverty has changed over the years; and no doubt some academic social science disciplines outside of government are skewed politically, to the point that even internal critics complain that some more closely resemble political advocacy guilds than they do scientific communities. The same is true with inflation, as the infamous case of the Boskin Commission during the Clinton Administration illustrates. Some statistics are systematically misused thanks more to amnesia about their origins than to partisan agendas. All of these frailties crack open the door for those who now argue, for example, that NOAA hurricane forecasts are just propaganda marshaled in the service of the “climate change hoax.”
Since government statistics measure broad-brush phenomena, they are vulnerable to missing politically salient variations of space and demography. In poor white communities, for example, official data often contradict personal experiences. Official employment figures, whose methodology is more sensitive to some variables than to others, can underappreciate bouts of temporary unemployment, the experience of working in jobs at lower pay and requiring less skill than workers were used to, and a generalized sense of underutilization and precarity. Like a lot of economic data, without a sociological filter placed on them they are not self-interpreting in terms that matter to whole human beings who care as much or more about dignity as they do about income levels.
Take the “free trade” argument that comparative advantage left to run free is good for the lower economic echelons because it keeps the price of many goods low, thus stanching inflation, which tends to hurt poorer people disproportionately, as salary increases lag price increases. That is not a comforting argument for downwardly mobile working families who resent being told, in effect, by a bunch of statisticians and academics that the privilege of being able to buy a lot of imported junk should rank higher in their estimation than their ability to properly feed, clothe, and educate their children.
Yet at the same time, as discussed above, the data do impose limits on narrative entrepreneurs, in effect giving those with control over the data (or with the authority to interpret the data) an independent source of political power. Which is precisely why governmental data are a major problem for the Trump Administration: because they provide sustained ammunition for “swamp dwellers” (that is, the experts) to attack proposals that emanate from the White House. Scarcely a day goes by when a Trump tweet or Sarah Huckabee Sanders statement is not called into question with reference to some government data source. Who are these people who have the cheek to keep publishing statistics that contradict the Boss? Drain the swamp!
The systematic corruption of government statistics is thus always and everywhere a central mechanism of modern authoritarian political control: It removes the “objective” guardrails that constrain the ability of demagogues to make up any story about social reality they find useful for promoting their interest-driven policy agenda. This explains why the quality of government numbers in dictatorships is almost always highly suspect. The economic and social statistics that were published out of the Soviet Union or Mao’s China, for example, were notoriously unreliable for decades on end. Along with control over the media, corrupting and calling into question the government data enables demagogues to monopolize the ability to define collective social reality, an effort that was mirrored in tragicomic, surreal ways in Hoxha’s Albania and still is in Kim’s North Korea.
In Trump’s America, the motives for the corruption of data are a little different. If you want to increase tax subsidies for oil producers, for example, it’s inconvenient when data reveals the effects of automobile emissions on anthropogenic climate change. If you want to cut taxes for your rich buddies, it’s inconvenient when the bean-counters declare that this is going to explode the deficit or exacerbate inequality. The immediate solution is to dismiss the bean-counters’ statistics as fake news; the medium-term solution is to hide the bean-counters’ results; and the long-term solution is to get rid of the bean-counters altogether. This is the correct frame within which to assess the Trump Administration’s proposal to reduce the Department of Agriculture’s Economic Research Service budget from $86 million to $46 million. (Congress rejected the reduction.)
A key area to watch going forward is the Trump Administration’s reversal of decades of open government norms by removing government data from public view: restricting funding for FOIA compliance at the Bureau of Land Management; scrubbing climate change data and devising spurious privacy concerns for releasing data at the Environmental Protection Agency; removing thousands of documents regarding animal welfare from the U.S. Department of Agriculture website; keeping the National Climatic Data Center offline and “undergoing maintenance” for some eight months;11 and so on. Most of these changes are highly technical, taking place deep in the bowels of large bureaucracies that receive little if any scrutiny. In all likelihood, there is far more of this going on than has even been reported, since each individual instance is trivial and boring. It is only when one discerns the overall pattern that “the great countdown” becomes alarming.
The deliberate corruption of the Census is therefore not just a specific assault on the foundations of American democracy but part of a larger effort to undermine the epistemological foundations of our shared sense of political reality. Ultimately, this pattern is only explicable as part of the Trump Administration’s well-documented war against the administrative state, which is to say, a war on expertise as such. Expertise, after all, is built on mastery of the details of data and of the methods for interpreting those data. Destroying the data that expertise both shapes and rests on in effect valorizes private experience and narrative over public knowledge and expertise as the basis for shared reality and governmental decision-making. Indeed, a government free of experts is exactly what Trump promised. Even the people occupying positions normally reserved for experts have gotten the message. As Trump’s trade adviser Peter Navarro recently told Bloomberg News: “My function, really, as an economist is to try to provide the underlying analytics that confirm [Trump’s] intuition. And his intuition is always right in these matters.” Getting rid of the experts clears the path for the narrative entrepreneurs to run wild. Without data to depend on and plan from, there can be no administrative state.
What Is to Be Done?
Let’s note first what won’t work.
First, dry fact-checking does not work in the face of a deliberate assault on facts as such. Even the efforts of sites like Snopes or Politifact are like the proverbial Dutch boy trying to hold back a rising tide of preference for private experience and opinion leader narratives as the preferred means for Americans to define their social reality. By all means such sites should continue their work, but I doubt that even they believe they can succeed on their own.
Second, as bad as the problem of fake news is, governmental regulation of that phenomenon is an even worse idea. Given the predilections of the current Administration, such regulation would be a classic case of the fox guarding the henhouse. And the current Administration’s destruction of confidence in governmental statistics is likely to endure long past the departure of Donald Trump from public life. It is all but inevitable, for example, that every single line that has been trotted out about Trump’s untrustworthiness will be recycled if and when the Democrats regain control in Washington.
Third, efforts to crowdsource information, following the model of “do it yourself science” (or what the Economist recently called “punk science”) may be a valuable exercise as a way to spur citizen engagement, but they cannot replace the consistently collected longitudinal data for which government has long been responsible. Even leaving resource issues aside, private organizations simply lack the authority and the staying power to collect all the different kinds of social data that the government currently does.
In the end, we as citizens must defend the integrity of government data as a shared public good. It may take decades to reconstruct the social trust in government data that undergirds our sense of shared reality and binds us together as a community. Preparing for that long-term process of reconstruction means that civil society actors and social scientists must continue to defend the integrity of data collection and dissemination systems.
Just as importantly, the media must continue to call out Trump’s attacks on their integrity and probity—while also holding themselves to ever-higher standards of truth-telling. Along with government statistical agencies, a free and diverse range of media are another source of resistance to the ability of narrative entrepreneurs to define reality however they see fit.12 And, I hasten to add, there’s no reason for anyone in the media to go out on a speculative limb against the Trump Administration; just reporting the plain facts is plenty outrageous enough. The modest aim is to minimize the damage to these core systems that form the foundations of our shared realities and political community.
1As the Government Accountability Office recently explained, “The average cost for counting a housing unit increased from about $16 in 1970 to around $92 in 2010 (in 2020 constant dollars). Meanwhile, the return of census questionnaires by mail (the primary mode of data collection) declined over this period from 78 percent in 1970 to 63 percent in 2010. Declining mail response rates—a key indicator of a cost-effective census—are significant and lead to higher costs because the Bureau sends enumerators to each non-responding household to obtain census data. As a result, non-response follow-up (NRFU) is the Bureau’s largest and most costly field operation. In many ways, the Bureau has had to invest substantially more resources each decade to match the results of prior enumerations.”
2That same clause also underscores that the politics of the Census have always gone to the heart of America’s obsession over race and racial representation. Non-taxed Indians (that is, in the context of the 1780s, Indians in the territory beyond the direct control of the U.S. government) were not to count, but slaves were, albeit at a reduced rate (in order to limit the political power accruing to slave-owning citizens). As Paul Schor’s Counting Americans: How the US Census Classified the Nation (Oxford, 2017) has demonstrated, the Census has always held up a mirror to America’s deep anxieties about racial definitions and distinctions.
3The official core inflation rate excludes so-called “volatile items” like food and energy (i.e., the most important items on most people’s shopping lists); the common definition of U3 unemployment is “people without jobs who have actively looked for work within the past four weeks” (thus excluding the long-term unemployed as well as the badly employed). The technical narrowness of these definitions, along with variable local conditions, explains why both “headline inflation” and “the unemployment rate” can be out of whack with people’s direct experience of price fluctuations. This misalignment creates space for narrative entrepreneurs to peddle conspiracy theories or the lie that the government numbers are fictions, as opposed to merely partial but objective representations of reality.
4Paul Lazarsfeld and Elihu Katz, Personal Influence: The Part Played by People in the Flow of Mass Communications (Free Press of Glencoe, 1955). It is no coincidence that Fox News, by far the most effective propaganda force in contemporary America, devotes far more of its effort to “opinion journalism” than to mere reporting, even as it consistently elides the distinction between the two.
5The classic work is E. A. Wrigley and Roger S. Schofield, The Population History of England, 1541-1871: A Reconstruction (Harvard University Press, 1981), which analyzed birth, marriage, and death rates in Britain based on parish records. But, notably, these basic demographic data were virtually the only social statistics that were collected. Of course, many historians have worked to reconstruct the economic and social divisions within various societies, but they have done so without the benefit of government-collected statistics.
6Harald Ludvig Westergaard, Contributions to the History of Statistics (P.S. King & Son, 1932).
7Ian Hacking, “Biopower and the Avalanche of Numbers,” in Vernon Cisney and Nicolae Morar, eds., Biopower: Foucault and Beyond (University of Chicago Press, 2016).
8The Federal agencies include: the Bureau of the Census; the Bureau of Labor Statistics; the National Center for Education Statistics; the National Agricultural Statistics Service; the National Center for Health Statistics; the National Climate Data Center; the Criminal Justice Information Services Division of the FBI; the Energy Information Administration; the Bureau of Economic Analysis; the Statistics of Income Division of the IRS; the Bureau of Transportation Statistics; the Office of Research, Evaluation, and Statistics at the Social Security Administration; and more. In positive news, many states, counties, and cities are committing to greater transparency by opening up these data sets to citizens, who in many cases are engaged in “civic hacking”—analyzing these data sets, making them user friendly for less technically adept residents, and providing ways for these governments to collect feedback about how they are performing.
9More recently, in a case currently before the Supreme Court concerning the constitutionality of “radical” gerrymanders (which allow states with more or less evenly balanced numbers of Democrats and Republicans to be sliced up so that one party can all but guarantee itself as much as 75 percent of the available seats), the plaintiffs introduced into argument groundbreaking research by political scientist Eric McGhee and law professor Nicholas Stephanopoulos, arguments that Chief Justice John Roberts dismissed as “sociological gobbledygook.”
10Given the way these data categories that are reported year in and year out become naturalized over time, it is easy to forget that these statistical categories are not in fact timeless but are products of particular historical and political circumstances. Perhaps the best known of these histories concerns the invention of that master statistic, Gross National Product, during the Great Depression. While efforts to measure “national income” date to the late 19th century, it was economist Simon Kuznets, working at the National Bureau of Economic Research, who developed GNP as a tool for operationalizing Keynesian theories about the velocity of money—a tool which was then misleadingly adopted as a political proxy for wealth creation. Thus a tool created for a narrow technical purpose became central to the public conceptualization of the economy as a whole, in ways its originators never intended. See Timothy Mitchell, “Fixing the economy,” Cultural Studies 12:1 (1998); Robert W. Fogel et al., Political Arithmetic: Simon Kuznets and the Empirical Tradition in Economics (University of Chicago Press, 2013); Diane Coyle, GDP: A Brief but Affectionate History (Princeton University Press, 2015); Dirk Philipsen, The Little Big Number: How GDP Came to Rule the World and What to Do about It (Princeton University Press, 2015); Philipp Lepenies, The Power of a Single Number: A Political History of GDP (Columbia University Press, 2016); and Ehsan Masood, The Great Invention: The Story of GDP and the Making and Unmaking of the Modern World (Pegasus Books, 2016).
11Some climate scientists, anticipating this turn of events, reacted with such alarm that they took to downloading entire data sets to ensure they would not be destroyed. (Many are now preserved and kept public at DataRefuge.org.)
12This is precisely why Trump is engaged in a particularly sustained assault on media who break fact-based stories, ranging from threatening to yank NBC’s license to performing a cashectomy on the owner of the Washington Post.