America faces a crisis of innovation, and our survival as the world’s industrial leader depends on its resolution. Overall research and development spending has stagnated as a proportion of national income since the 1950s, and our competitors are gaining on us. Even worse, the kind of fundamental R&D that produces game-changing innovations rather than incremental improvements has declined sharply relative to the size of the economy.
More is at stake than America’s future growth prospects. Scientific innovation and national security have been two sides of the same coin since World War II. Not for a generation have America’s security requirements begged for innovation as much as they do today. In place of the few large, well-defined threats that America faced in the relatively stable power balance of the Cold War, we now face a very large number of smaller and poorly defined threats in a chaotic world environment. The Bush and Obama Administrations had hoped to foster a new kind of stability, either by promoting democratic regimes in unstable theaters or by fostering multilateral cooperation and engaging prospective adversaries. Those hopes have faded. Reluctantly, America’s foreign policy community has come to accept the premise that we will confront a kaleidoscope of shifting threats in an indefinite span of instability.
Regrettably, the military that America built to beat the Soviet Union is in many ways poorly equipped to address the new threat regime, and that legacy force has not changed as much as it might have over the past two decades. Even if policymakers wanted to build on the existing platform, the public’s perception of poor returns on commitments of blood and treasure over the past decade rules out that option as a viable political stance.
Tentatively and in piecemeal fashion, the American military is introducing a new generation of military technologies. Drones, battlefield robotics and similar technologies cast a wide net over the new threat horizon, but their practical application remains limited. Cyberwar has become a buzzword for defense planners, and in consequence the application of large-scale computation to the vast amount of information acquired by remote sensors (“Big Data”) has become a theme du jour for defense research and development, as the volume of prospective threat data swamps existing capacity to process information. But the new threat profile, in which a large number of adversaries with cheap weapons in the service of suicidal fanaticism create an ever-expanding set of low-level risks, presents significant problems for sensing and information processing. Extracting an ever-fainter signal from an ever-greater volume of noise challenges our existing understanding of the problem. We know that we are in a different world, one that requires new technologies, but we can predict neither the precise character of the emerging threats nor the technological solutions with which we should respond.
Technological problems also beset conventional weapons. The next generation of American fighter aircraft (the F-22 and F-35) offers only incremental improvements over the previous generation, for example. The most important constraint on aircraft speed and maneuverability, though, has long been the physical capacity of the pilot. Remotely piloted aircraft can in theory outfly anything presently in production, but the same problems of sensing and computation apply.
The political economy of defense also cries out for innovation. In the context of trillion-dollar deficits, it is unlikely that any political party will win majority support for a significant increase in military spending. Without creating efficiencies through innovation, defense planning cannot sustain political support for the foreseeable future, and America’s military industry will suffer as a result.
Given the evident need for technological innovation in the national security space, it is strange to hear economists argue that America is stuck on a technological plateau where innovation will play a vastly diminished role. During World War II, during the Eisenhower and Kennedy Administrations’ drive into space, and under the Reagan Administration’s military buildup, America responded to strategic threats with decisive innovations. America’s military prowess and economic preeminence intertwine to create a single story. America’s World War II mobilization lifted America out of depression, and the revived American economy won the war. The technologies advanced by the war drove American productivity growth for an additional decade and more. America’s Cold War military requirements brought Federal subsidies for research and development that gave us cheap semiconductors, commercially viable lasers, synthetic materials and a dozen other breakthrough technologies that later transformed the world economy.
Why should America fail to innovate in response to the next generation of strategic threats? And why should these innovations fail to attract the sponsorship of entrepreneurs as they have in the past? Although a substantial body of opinion holds that the age of innovation and ensuing economic benefits is behind us, the difference between our present meandering and our past progress may stem not from any technological plateau, but from a lack of strategic clarity and national resolve. There are, after all, other countries that perceive no such plateau. Israel, famously, is the most innovative economy in the world. Israel innovates because it has to; it lives in a neighborhood where first prize is the chance to compete for first prize next year. There is a constant interchange of personnel and ideas between Israel’s thriving entrepreneurial sector and the technological side of its national security efforts, including computation, avionics, remote sensing and robotics. On a much larger scale, this kind of symbiosis thrived in America during its periods of military and aerospace achievement. We have not lost the ability to innovate as much as we have lost the will and the strategic direction.
Let us consider first the merits of the “end of innovation” argument. Robert J. Gordon, one of the most influential pessimists, put it this way:
The computer and internet revolution began around 1960 and reached its climax in the dot.com era of the late 1990s, but its main impact on productivity has withered away in the past eight years. Many of the inventions that replaced tedious and repetitive clerical labor with computers happened a long time ago, in the 1970s and 1980s. Invention since 2000 has centered on entertainment and communication devices that are smaller, smarter, and more capable, but do not fundamentally change labor productivity or the standard of living in the way that electric light, motor cars, or indoor plumbing changed it.1
The digital age, Gordon maintains, cannot compare in productivity impact with two previous periods of innovation, namely the railroads and steam engines of the first industrial revolution of 1750–1830, and the electricity and internal combustion engine of the second industrial revolution of 1870–1900. Even this impact is exhausted, he argues, and America thus faces a prolonged period of stagnation.
It is true that productivity growth has slumped. The question is whether the slump is due to a natural evolutionary process over which public policy has little influence or to political choices that produced less than optimal outcomes. The end-of-productivity school pays inadequate attention to the way public policy has driven technology growth and fostered the commercialization of new technologies by the private sector. The productivity disease is not genetic but induced by bad policy, and can be treated by improved policy.
America has had productivity slumps considerably worse than the present one. The stagflation of the 1970s dragged productivity growth in non-farm business down to zero. Productivity growth recovered during the post-1984 expansion before declining again during the 2000s. America’s recovery from the productivity slump of the 1970s is cause for hope that we can recover from the present slump as well. This is critical for national well-being. If non-farm productivity growth remains stuck at 0.5 percent, economic growth will depend on expansion of the labor force. But the labor force, according to the United Nations’ medium variant, will grow by just 0.4 percent per year between now and 2050, leaving us with average annual economic growth of less than 1 percent. Gordon thinks that the best the United States can do during the next twenty years is 1.5 percent real per capita GDP growth, against a 1929–2007 average of 2.17 percent.
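The arithmetic behind these figures is worth spelling out. As a rough, back-of-the-envelope decomposition (the notation is ours, purely for illustration), output growth is approximately the sum of productivity growth and labor force growth:

\[
g_{\text{GDP}} \;\approx\; g_{\text{productivity}} + g_{\text{labor}} \;\approx\; 0.5\% + 0.4\% \;=\; 0.9\%.
\]

Compounding makes the stakes clearer still: the gap between Gordon’s projected 1.5 percent per capita growth and the 1929–2007 average of 2.17 percent, sustained for twenty years, leaves output per person roughly 12 percent below the historical trend, since \((1.015/1.0217)^{20} \approx 0.88\).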
As bad as Gordon’s scenario looks, the result is likely to be far worse in the event that his grim assessment of American productivity turns out to be correct. Economies do not exist in a vacuum. If American technological innovation comes to a halt for a prolonged period, other countries with cheaper production costs will learn to do what American industries can do, and American manufacturers will continue to lose market share. China already boasts the world’s largest telecommunications equipment firm, Huawei. China is already producing nuclear power plants with Westinghouse technology under license from Toshiba, Westinghouse’s majority owner. Fear of Chinese competition is an important factor—probably the most important factor—in the sharp decline in venture capital commitments to high-tech industry during the past ten years.
South Korea has become an innovation powerhouse, too, and we see the economic impact, for example, in Samsung’s ascendancy over Apple in smartphone sales. China remains behind the United States with 1.8 percent of GDP devoted to R&D compared to our 2.8 percent, but it is catching up fast. China is now more concerned with acquiring existing technologies than inventing new ones. Nevertheless, it is educating a new generation of university students on a scale never before seen, with uneven quality, but with pockets of impressive strength. If China bests us at innovation some time during the next generation, the world will look radically different just three or four decades hence.
If U.S. innovation continues to attenuate, the weapons technology deployed by China and other potential adversaries will converge on America’s. The longstanding American technological edge is already eroding as other nations acquire airframe, avionics and stealth technology. The productivity story is more complex, though, than the pessimistic scenario suggests. It is not clear that the decline in U.S. productivity growth during recent years is the result of technological exhaustion. Lower manufacturing productivity may instead be the result of lower capital investment. The productivity trough of the 1970s was preceded by a sharp decline in the growth rate of private non-residential fixed capital formation, from a 6–8 percent range in the 1960s to a 2–4 percent range during the 1970s. The present productivity decline may be due to the worst performance for non-residential private fixed capital formation in postwar U.S. history.
It is possible, to be sure, that American manufacturers are buying less equipment because it is less profitable to do so, and for just the reasons Gordon cites. There is some reason to believe that this may be true in the technology sector. Examining the largest 665 technology stocks by market capitalization in the Finviz database, we observe that the average return on investment for 2012 for the largest fifty stocks was 9.3 percent, but -8.9 percent for the remaining 615 stocks. Some part of the productivity plateau may thus be blamed on declining capital investment and, by extension, on fiscal and regulatory disincentives to investment. America has the highest corporate tax rate in the industrial world, for example, and a reduction in taxes on corporate and other capital income would prompt greater capital investment. But it is also likely that the lack of new productivity-enhancing technologies, especially in the tech sector, constitutes a drag on investment. It certainly has not been profitable to deploy new capital into the tech sector recently.
It is impossible to disentangle the cause-and-effect relationship between investment and productivity growth from the available data. It seems likely that both factors are at work: Productivity growth is suppressed by fiscal and regulatory barriers to capital investment in some fields, while the lack of innovations suppresses capital investment in other fields. Even if we accept part of Gordon’s argument—that slowing innovation has dampened productivity growth—we do not need to accept his premise that the attenuation of technological discovery is a natural or predictable phenomenon. Innovation is inherently unpredictable. To cite just one famous example, when the Defense Advanced Research Projects Agency set out to create a communications system with multiple pathways for national security purposes, no one had the slightest notion that this would create the internet. Individual innovations are not foreseeable. But we can foresee that if national resources are applied to basic R&D to overcome fundamental problems in technology, the result is a jump in the rate of innovation, opening new fields for entrepreneurial ventures.
One critical but often underrated factor in productivity growth is the impact of basic research stemming from defense and aerospace R&D. Between 1952 and 1964, as the Eisenhower and Kennedy Administrations responded to Russian development of nuclear weapons and space flight, Federal R&D spending rose by more than an order of magnitude. During the Johnson, Nixon, Ford and Carter Administrations, though, Federal R&D spending grew very little. When America shifted budget priorities toward expanding Federal entitlements and funding the Vietnam War, Federal R&D spending declined as a share of the economy. It rose again during the Reagan Administration, although not as fast as during the 1950s and 1960s, under the impetus of the Strategic Defense Initiative and the rearmament program.
America’s response to Sputnik, which produced the fastest increase in federally funded R&D in the nation’s history, set in motion the eventual productivity recovery of the 1980s. It is difficult to identify the fundamental research component in overall R&D spending, to be sure, but a rough proxy is Federal R&D spending as a share of GDP: the Defense Department, NASA and the Department of Energy have provided a disproportionate share of funding for research with long-range objectives in basic science, as opposed to incremental improvements on existing technologies. Federal R&D spending has fallen from nearly 2 percent of GDP in 1963, at the height of the Cold War and the space program, to less than 1 percent during the past two decades. That may not sound like much, but as with falling infrastructure investment, a gap of 1 percent of GDP compounded over a decade or two is a very significant number.
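To put that compounding in perspective with an illustrative calculation (the quarter-point figure below is a hypothetical assumption, not an estimate): a shortfall of one percent of GDP a year, sustained for two decades, amounts to

\[
1\%\ \text{of GDP per year} \times 20\ \text{years} \;\approx\; 20\%\ \text{of a year’s GDP}
\]

in forgone research. And if that forgone research would have added even a quarter of a percentage point to annual productivity growth, output after twenty years would be roughly 5 percent lower than it otherwise would have been, since \(1.0025^{20} \approx 1.05\).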
Economic growth depends not only on Federal spending, of course, but also on technological innovations and on the entrepreneurs who take risks to commercialize them. Absent innovation, entrepreneurs will find other things to do, like devising innovative new financial derivatives. But technological innovation will have as little impact as gunpowder and movable type had on the medieval Chinese economy unless entrepreneurs plunge into the chaotic, disruptive work of commercializing these technologies.
That’s why Kennedy’s moon shot and Reagan’s Strategic Defense Initiative had such lasting economic reverberations: They were accompanied by tax cuts and regulatory relief that made it easier for entrepreneurs to raise money from public equity markets.
Russia’s head start in the space race elicited a national effort to keep American technology in the forefront in the late 1950s and early 1960s. And the very real possibility that Russia might triumph in the Cold War motivated a comparable effort in the early 1980s. We need the same sense of urgency today. We have no guarantees that America will retain technological leadership. Britain dominated world industrial production in 1870 with a third of total output, but fell to a seventh of the world total by World War I. In 2010 China edged out the United States to become the world’s largest goods producer, with a fifth of the world total. Not only the diminished size of the Federal R&D effort but also the dissipation of its focus brought about America’s productivity plateau. American companies today fritter away substantial amounts in research and development directed toward marginal improvements in existing products. Playing small ball in R&D has little impact on productivity. Putting a man on the moon and beating the Soviet Union, by contrast, demanded real breakthroughs in technology. When governments direct R&D spending toward political pet projects, for example in ill-fated subsidies to the alternative energy sector, results are typically disappointing. And if government tries to stand in for entrepreneurs and commercialize technologies itself, it only generates losses for taxpayers. The fact remains, though, that the digital industries that propelled American growth after World War II all began with government R&D funding. Today’s computers, telecommunications and internet never would have emerged without government support to universities, national laboratories and private industry.
Another good example of a government initiative with huge (unplanned) commercial impact was the Apollo space program. Launched under the Kennedy Administration in 1961, the program set out to land a man on the moon within a decade. Creating the manned space capsule required rapid development of such new electronic technologies as low-power, high-performance computing devices, software and instrumentation. This meant creating more and more powerful chips and other devices. As a result, the Apollo program gave enormous impetus to advances not only in rocket technology, life sciences and support systems, but in microelectronics, displays and light-emitting diodes (LEDs). R&D contracts to develop these technologies were given to universities, national laboratories, research institutions and corporations. The technological products of that work ultimately infiltrated the commercial marketplace. By any measure the government’s investment in the technology of space flight reaped huge returns. One estimate of the benefit of the Apollo program is that for every dollar of R&D spent, seven dollars came back to the government in the form of corporate and income taxes from new jobs and economic growth.
No innovation better illustrates the division of labor between government and private entrepreneurs than the internet. Now the universal global medium for communication and commerce, the internet went from conception to commercial reality in thirty years. It was started not for a commercial purpose, but to address a specific communications problem among researchers. The internet as we know it grew out of a novel network, originally conceived by computer scientists in 1964, designed to let computers communicate. Its creators envisioned it as a communications tool for research institutions. They never imagined that it would become a major force in the world economy.
Actual deployment of such a network became possible only because of the independent invention in 1962 (published in 1964) of an early version of a packet-based digital communications protocol, which eventually became IP, the Internet Protocol. As proof of the capricious nature of R&D, that protocol came out of an unrelated Department of Defense research program at the RAND Corporation, whose rationale was the military’s need for communications robust enough to keep functioning when parts of the network were disrupted.
DARPA undertook the actual management of the first network to link research laboratories, called ARPAnet (later DARPAnet). It is noteworthy that AT&T, then the monopoly provider of U.S. telecommunications, refused to operate such a network for fear of creating commercial competition for its established voice and data network. Large corporations are frequently just as hostile to innovation as government bureaucracies. That is why the economy needs disruptive entrepreneurs.
Among the other notable outcomes of R&D funded under government defense and space programs were the standard manufacturing process for integrated circuits and commercially feasible lasers. Both innovations were developed at RCA, the company David Sarnoff built. Today practically all chips produced worldwide are made with the CMOS (complementary metal-oxide-semiconductor) process, which began as a Defense Department project at RCA in the 1970s. The Department wanted to explore the possibility of creating computing chips with lower power dissipation than the then-current technology could achieve. After successful completion of the project, CMOS technology was used to manufacture chips for avionic radar systems, among other applications. It found its way into the commercial market in the 1980s.
A second example, the development of semiconductor lasers at RCA Laboratories, was a program one of the authors (Henry Kressel) headed. The pioneering work on this technology was originally funded in the 1960s by the Defense Department to develop infrared searchlights that could illuminate a battlefield but would be invisible to the naked eye. As the technology progressed in the late 1960s and early 1970s, it became clear that it would be possible to use such lasers in fiber optic communications systems. RCA announced a commercial laser in 1969 that was based on technology developed largely under Defense Department funding. Companion technologies also sprang up that greatly expanded the ways in which lasers could be used. This led to their current status not only as the key to all fiber optic communication systems, including voice and data networks, but also as the enabling technology of millions of instruments, DVD players and a host of other devices.
Do these examples of government-funded technologies seeding great industries constitute a unique series of events, or do they epitomize a highly effective general approach to industrial development? Free marketers and proponents of state control may debate that question, but the fact remains that government-sponsored research and development does eventually migrate into the commercial and industrial markets. In the United States, at least, the government is still a major funder of innovative R&D that has broad applicability outside narrow defense applications. The 2010 Federal R&D budget of $147 billion covers a vast scope of activities, from medical science to new sources of energy. Corporate funding of basic research, on the other hand, has waned. Corporations are focused on product-oriented development programs aimed at swiftly producing results in the marketplace. They are much less invested in long-horizon projects that may or may not produce breakthrough innovations.
A 2008 study by Block and Keller surveys the sources of U.S. industrial innovation between 1970 and 2006. It confirms the increasing importance of government funding for R&D, and the continuing abdication of the field by corporate entities. During this period, as documented in the study, large firms contributed a declining fraction of the innovations, consistent with the decline in corporate research laboratories, while government-funded contributions from universities and national laboratories increased. Block and Keller sum up the situation this way: “If one is looking for a golden age in which the private sector did most of the innovating on its own without Federal help, one has to go back to the era before World War II.”
Governments can build an industrial economy through subsidies and other incentives to promote new businesses, but only if they bring in proven technology and experienced management, usually from abroad, the way Colbert did in France in the 17th century and as government planners have done, to some extent, in China. But that strategy has its limitations; you’re always following the leaders and living off their leftovers. Ultimately, if you’re going to plan your economy for true growth, you have to stimulate domestic innovation.
In a controlled economy this means picking winners, and governments are notoriously bad at picking winners. Private companies and industry experts may be no better than governments at predicting the next big technology winner, but the private sector allows the freedom to fail and lets successful ideas percolate to the top, so long as an entrepreneurial culture exists and risk capital is available to fund new ventures. Government is best at generating innovations through funding of research and development. After that, it should let the inventors and innovators, not the bureaucrats, plot the course. Government should directly support the development of actual products only when it needs them to accomplish a clear objective within a unique “project”, such as the space program.
With every good intention, however, the Federal government has shifted an increasing proportion of national resources away from fundamental research in favor of largely unsuccessful experiments in industrial policy. In 2011 the Obama Administration offered $24 billion in alternative energy subsidies, including $16 billion to renewables and grants to 1,500 private firms. Unfortunately, many solar energy companies failed, leaving the taxpayers with the bill for bad technology or poor market decisions in the private sector.
The role of government should be to promote technological discovery, especially where it has a direct bearing on national security. But governments should not try to pick winners among emerging industries, much less finance the building of production facilities. That should be left exclusively to risk-taking entrepreneurs, who are the unique agents of game-changing innovation. Established big companies are best at innovation when investing to sustain an existing market. Their weakness is a tendency to stick with evolutionary technology rather than doing higher-risk development for unproven markets. To offset this, established companies often buy new entrepreneurial companies that have demonstrated the value of a new market or business. Creating a pool of innovative new companies drives economic growth. This brings us to the next point: Where do these innovative companies come from? They emerge from a “bottom-up” process of entrepreneurship. Chaotic, visionary experimentation, combined with access to venture capital and markets, yields economically important innovation. It requires an environment that supports this kind of activity, including government policies that encourage new business formation.2
Even the cleverest people in the industries that went through this great transformation could not predict the outcome of the basic research they conducted. AT&T, as noted, ignored the internet. Xerox, the inventor of the graphical user interface and the laser printer, was too focused on its copier business to exploit its discoveries. Innovation is inherently unpredictable, both in its technological and commercial outcomes. Ultimately, private risk capital is needed to build industries. The role of government is to provide an environment that encourages risk-taking, with the expectation of financial rewards commensurate with the risk level.
That is why we reject the idea that the age of innovation is at an end. The United States can address strategic threats and technological change the same way it has in the past.
We do not have unlimited time to change course, however. During the 1950s, Soviet competition aroused America’s sense of urgency: If we did not surpass the Russians both in innovation and commercialization, we might lose the Cold War. We should have the same sense of urgency today. China now awards twice as many STEM doctorates as the United States and spends 1.6 percent of GDP on research and development (the same rate as the European Union), up from just 0.4 percent in 1998. American R&D spending remains higher than China’s, but the gap is closing. China, moreover, has focused its spending on key strategic areas such as satellites, ballistic missiles, stealth aircraft and unmanned aerial vehicles (UAVs). China’s leaders have read our economic history and hope to emulate the American defense R&D driver. China’s top-down economy is inherently less capable of generating entrepreneurial innovation than America’s. But China does not have to innovate; it only has to assimilate the innovations of others in order to assert economic power.
China does not now present a military threat to the United States as did the Soviet Union during the Cold War. Without a return to the formula that won the Cold War and sustained American productivity during the second half of the 20th century, though, America faces gradual economic and strategic marginalization. Our competitive position today is more challenging than it was in 1981, when Ronald Reagan took office. Thirty years ago America had the world’s only free capital markets, an unchallenged industrial base and the pick of the world’s talent. Financial and human capital today have alternative destinations. We are the prospective victims of our own success, as our competitors adopt elements of our model. If we fall behind this time, it will not be so easy to come back.
1. Gordon, “Is US Economic Growth Over? Faltering Innovation Confronts the Six Headwinds”, Policy Insight (September 2012), Centre for Economic Policy Research.
2. See Henry Kressel and Thomas V. Lento, Entrepreneurship in the Global Economy: Engine for Economic Growth (Cambridge University Press, 2012).