Jump-Starting America: How Breakthrough Science Can Revive Economic Growth and the American Dream
Public Affairs Books, 2019, $28.00, 368 pp.
Moore’s Law, which posits that the number of transistors on a microchip will double every two years, is not, strictly speaking, a scientific law at all. It is not immutable. It is reliant on intellectual capital. And, as Jonathan Gruber and Simon Johnson show in their new book Jump-Starting America: How Breakthrough Science Can Revive Economic Growth and the American Dream, it is reliant above all on the financing and underwriting of that capital—which the U.S. government used to do in a substantial and serious way, and which the authors argue it must do again.
Gruber and Johnson, both MIT professors, argue that Federal government funding of research and development (R&D) is essential to America’s future. Such investment can lead to new technologies and invigorate the American economy with millions of new jobs.
Many of America’s opinion-makers are skeptical of such “industrial policy” schemes. They believe that private industry and the free market can and must lead innovation. But Gruber and Johnson cite key reasons why private companies are reluctant to invest in basic research. Free-rider concerns discourage research at the private level. Fear of losing proprietary information leads corporations to hoard it, resulting in duplication of effort. Even the development side of R&D is often hard to justify to shareholders demanding quick returns. Take Big Pharma, for instance: intensive clinical drug trials take years, during which time patents expire, leaving companies little to capitalize on. What about venture capital firms? Why can’t they take up the slack? They can, but only a little. Gruber and Johnson point out that the research-to-retail journey often involves a lengthy “valley of death,” during which no profit is shown. For many investments with long-term value, even the biggest and bravest VC firms can’t, or won’t, make the voyage.
Indeed, for all the techno-prowess of Silicon Valley and for all the entrepreneurial brilliance of the Big Tech and FAANG (Facebook, Apple, Amazon, Netflix and Google) companies, their reach over the American economy is not as extensive as one might think. They still only employ a small percentage of the American work force. And despite FAANG talk of flying cars and Mars colonies, they still invest, according to Gruber and Johnson, in “new products, but not in basic science.” And they are predictable in where they will go to set up shop—to wealthy, elite coastal states, not to flyover, Hillbilly Elegy terrain.
What is needed, say Gruber and Johnson, is public R&D investment throughout the United States that will jump-start the American economy. The Federal government should adopt something like a “technological hub index system” and select underserved metropolitan areas that have sufficient populations, educational institutions, and quality-of-life approval ratings to warrant the placement of the hubs. And not just Cambridge or Palo Alto: the authors choose places ranging from Rochester, New York (ranked first), to Dayton, Ohio (19th); from “Lawrence/Manhattan/Topeka, Kansas” (40th) to Fargo, North Dakota (70th); from Eugene, Oregon (78th) to Atlantic City/Hammonton, New Jersey (102nd). With a commitment of “$100 billion a year,” such R&D would likely be enough to “propel us back to our world leadership position.”
One could argue with all this. One could counter that their proposals, especially on how the funding gets determined in Congress (as a multiyear, rather than single-year, appropriation) and on who or what determines what techno-hub goes where (a bipartisan commission, similar to the Base Realignment and Closure process) are either unrealistic or too lightly argued to be convincing. But Gruber and Johnson have not written an academic treatise. While not quite a manifesto, Jump-Starting America is a clarion call that seeks to draw public attention to real, pressing issues. Their proposals are not grand enough to fully restore American pre-eminence, but they would be a serious start. And the authors make their case in a style that is engaging, accessible, and generally free of partisan rancor.
What is most interesting and noteworthy about the book is the intellectual ballast of their argument, which rests not on reams of economic data but on history. They argue that “the post 1945 years, with tweaks to reflect modern realities show how [publicly driven R&D] can be done.” This is an essential lesson for our times. The massive publicly funded research of the Defense Advanced Research Projects Agency (DARPA), the National Aeronautics and Space Administration (NASA), and other organizations in the 1950s and 1960s developed the new technologies that laid the base for both winning the Cold War and America’s leadership of the IT revolution. But few today know this story or appreciate the magnitude of what followed.
Silicon Valley entrepreneurs are proud of their innovation records, but many have forgotten (or never understood) the base of public research on which they built. For example, Bill Gates said in 1998: “The PC industry is leading our nation’s economy into the 21st century… There isn’t an industry in America that is more creative, more alive and more competitive. And the amazing thing is all this happened without any government involvement.” But Linda Weiss, in her own excellent book on the role of public research in American innovation, America, Inc., points out how wrong this view is. As she puts it:
From the GPS to the cell phone, from the mouse to the Siri voice-activated personal assistant application on the new iPhone or to Google Earth, Google Translate and indeed Google’s search engine—all have one thing in common. They, like the internet and the IT revolution that preceded it, emerged from patient federal investment in high-risk innovation, focused in the main on national security objectives.
Gruber and Johnson do a good job of telling the story of government investment in science and research, which they personalize by focusing on the contributions of Vannevar Bush, the Raytheon founder and former MIT Engineering Dean who promoted government investments in World War II and in the Cold War push after Sputnik. Bush was an old-school New England Republican who despised FDR’s New Deal, but, like other internationalist Republicans whom FDR relied upon, he saw the Axis threat as existential. It was Bush who established the National Defense Research Committee (NDRC), a “breakthrough” solution, as the authors put it, to bureaucratic blockage. The NDRC brought together, under governmental supervision, U.S. private sector innovation from all corners. And Bush created the postwar public/private innovation model: government provided the funding and private entities did the research in universities and company labs (like Bell Labs). Gruber and Johnson thus challenge the modern idea that “nothing government does works.” As they point out, these government investments not only worked, but produced some of the highest long-term pay-offs of any investments ever. The Bush program was the economic equivalent of Acheson’s “present at the creation” in politics.
But Gruber and Johnson downplay a critical element of these governmental initiatives: they were motivated first and foremost by national security. The first wave of government investments in science came in World War II, when security was obviously the driver. The next wave came after 1957, when Sputnik caused Americans to be seized with fears of Soviet technological advantage. Ike was not a big fan of military spending, but the public and expert reaction to Sputnik was so powerful that he launched major public investments in science and technology that would rise to 2 percent of GDP by 1964.
National security concerns were not a side issue; they were the main driver. Without them, America would never have made the sustained commitments of the nation’s treasure that ended up transforming the American economy. Gruber and Johnson acknowledge that security motivations played a role—they say that the postwar economy was “helped greatly by inventions that emerged from the simplest non-commercial motivation: patriotism and fear of a smart enemy, hell-bent on new applications of scientific knowledge.” But this understates the role of national security—it didn’t just “help greatly;” it was the sine qua non.
The critical role of national security in technology investment started in World War II. According to Gruber and Johnson, the military had no interest in science before the war, but the pressure of wartime competition broke down the barriers. The process was led by Vannevar Bush. Wartime scientific contributions were more engineering than basic science. Sputnik changed all that.
Three things about the post-Sputnik period stand out. First, the government was willing to take a long-term view. It wanted national security pay-offs, but it knew that the way to get them was to invest in basic science. Second, DARPA and other organizations were risk takers. As one DARPA official quoted in the book puts it, “If half the people don’t respond to a publicly announced challenge saying it’s impossible, we haven’t set the bar high enough.” Third, the government had no guarantee that its investments would pay off. They ended up having transformative effects on the U.S. economy, but this would not become evident for another ten to twenty years. In a sense, America discovered the transformative potential of science and technology by experiment—it made the investments for national security reasons and later discovered that they could transform the economy.
Of course nothing lasts forever. Eventually the postwar government/private-sector R&D symbiosis wound down. Gruber and Johnson posit a variety of socio-cultural reasons: the Vietnam War, the loss of faith in science’s ability to solve the world’s problems, the growing environmental movement, and the fear of thermonuclear destruction by weapons (“Einstein’s monsters”). But the final, decisive decline of governmental investments in science came with the collapse of the Soviet Union and the ending of the Cold War—which removed the external threat that had gotten the movement started in the first place.
While Gruber and Johnson downplay the role of national security in driving public investment in science, they at least address the issue. Often academics have tended to shy away from the link between national security and innovation—perhaps for fear of being branded Cold Warriors. But the idea of external security threats motivating national innovation pushes has been around for a long time. Scholars have argued that Meiji Japan embraced modernization because of fears of being colonized, that South Korea industrialized in response to threats from the north, that Taiwan industrialized to secure its independence from China, and that Deng Xiaoping launched China’s reform push because he saw China being surpassed by other countries in the region.
Political scientist Mark Zachary Taylor’s book The Politics of Innovation (2016) examines and puts the external threat argument in a broader political context. According to his theory of “creative insecurity,” internal domestic rivalries and differences among interest groups normally make it difficult or impossible for governments to launch state-supported innovation pushes. Such pushes only happen when external developments are seen to be so threatening in military or economic terms that they allow or force national leaders to embrace innovation in order to defend the nation. External security threats are critical, but it is the balance between external pressures and internal factions that determines whether governments act.
Taylor provides plenty of empirical data and detailed case studies to support his thesis, which seems to have considerable explanatory power. However, Taylor is wary of taking his arguments too far. Perhaps he doesn’t want to be perceived as advocating that nations conjure up external threats to motivate innovation pushes. To avoid this, he argues that the notion of external threats should be modified in the contemporary world to refer to challenges that are less military or security-focused—challenges such as nuclear proliferation, climate change, disease, and aging.
But his argument is not especially persuasive. None of these other challenges carry the weight of a threat to survival or national independence, which is historically what has normally been required for governments to act. Taylor himself gives the reasons why external pressures have to be substantial—in order to overcome domestic obstacles. His model (without his amendments) seems to fit the historical U.S. experiences described by Gruber and Johnson quite well.
Unfortunately, in making their case for a new public push on science and technology, Gruber and Johnson make the same kind of mistake Taylor does. While nodding to the economic and security threat from China, Gruber and Johnson mainly base their case for public science on domestic economic challenges such as slowing growth, growing disparities in wealth and income, and the division of the United States into high- and low-growth areas. They not only want to increase national investment in science, but also to tailor it to cities and regions that have fallen behind.
This is a noble vision and may make sense from the point of view of promoting U.S. prosperity and economic inclusion. But it is miles from the kind of external security threat that motivated American investments after Sputnik. Taylor’s theory suggests that major security or economic threats are required, and America’s own experiences in World War II and the Cold War point to the same conclusion. It would be nice if domestic economic problems could motivate a national science push, but in practice this doesn’t seem likely to happen.
That does not mean that Gruber and Johnson’s proposal to jump-start America is ill-advised. It just means they need to give it a stronger, more convincing rationale. If the Soviet threat motivated America’s earlier science push, today we need to look especially at the economic and security threat posed by China.
Yes, there are many differences from the 1950s. China’s leaders have not said they will bury us. We are not in a Cold War with China. And there has been no Sputnik moment. But unlike the Soviet Union, China has a strong and fast-growing economy. The U.S. National Defense Strategy is now mainly focused on great power competition with China. The Trump Administration is in a trade conflict with the Chinese, and Americans are angry at Beijing’s theft of U.S. technology.
The last 20 years have seen a major shift of manufacturing from America to China, as American and other multinational companies outsourced low-wage phases of production. This has been a major boon to China’s growth, but has imposed costs on the American economy and on less-skilled workers (productivity has slowed and wages have fallen). Some of this has happened through natural market forces, but China has also used predatory practices to increase its access to Western technology (IP theft, forced technology transfers, and purchases of Western companies).
Moreover, the economic threat from China is growing. As Dennis Blair and Robert Atkinson noted in a 2018 article in this magazine, President Xi Jinping has stated his intention to make China the “master of its own technologies”—to build national champions in a wide array of high-tech sectors and to compete with Western companies in global markets through the “Made in China 2025” plan. As Blair and Atkinson rightly point out, the U.S. response to all this has been slow, confused, and somewhat incoherent. And while China may find it difficult to achieve global dominance, especially in IT and cutting-edge technology, its low costs, cheap capital, and export subsidies will make it a stronger competitor over time.
China probably does not intend to challenge the U.S. militarily any time soon. Its strategy is to build its economic and technological power and overtake the United States in whole-of-nation strength, which it may then use for geopolitical ends. America is not badly positioned for economic and technology competition with China, but we are punching below our weight because many companies are not investing for the future and public investment in R&D, infrastructure, and workforce skills is way below where it should be.
These shortcomings could be addressed by a more ambitious version of the policy proposals that Gruber and Johnson have put forward. Rather than investing only in science in disadvantaged regions, the U.S. could enact a comprehensive program for strengthening growth and jobs—involving elements such as tax incentives for companies to invest in disruptive technologies and worker training; public investments in quantum, artificial intelligence, and cyber technologies; restoration and updating of infrastructure; and attracting high-tech workers from other countries.
Gruber and Johnson make some good points: America does need a stronger public/private innovation ecosystem, the Sputnik era does suggest some ways to get there, and a new push on science and technology should be non-partisan. But to compete with China we need larger and more open-ended investments. History suggests that the only way to convince the American people to accept those costs is to link the investments to national security. That is not war-mongering; it is realism.