Presidential administrations tend not to be remembered in the same way they were regarded while in office. Proximity breeds weariness, disappointment and often contempt. Distance—if by that is meant the cooling of passions that comes with retirement, together with proximity to presidents who have followed and to mistakes they have made—tends to foster reconsideration, nostalgia and even respect. That’s why the presidential libraries of even the least remarkable presidents continue to attract visitors.
George W. Bush, whatever else one might say about him, has been a most remarkable President: Historians will be debating his legacy for decades to come. If past patterns hold, their conclusions will not necessarily correspond to the views of current critics. Consider how little is now remembered, for example, of President Clinton’s impeachment, only the second in American history. Or how President Reagan’s reputation has shifted from that of a movie-star lightweight to that of a grand strategic heavyweight. Or how Eisenhower was once believed to be incapable of constructing an intelligible sentence. Or how Truman was down to a 26 percent approval rating at the time he left office but is now seen as having presided over a golden age in grand strategy—even a kind of genesis, Dean Acheson suggested, when he titled his memoir Present at the Creation.
Presidential revisionism tends to begin with small surprises. How, for instance, could a Missouri politician like Truman who never went to college get along so well with a Yale-educated dandy like Acheson? How could Eisenhower, who spoke so poorly, write so well? How could Reagan, the prototypical hawk, want to abolish nuclear weapons? Answering such questions caused historians to challenge conventional wisdom about these Presidents, revealing the extent to which stereotypes had misled their contemporaries.
So what might shift contemporary impressions of President Bush? I can only speak for myself here, but something I did not expect was the discovery that he reads more history and talks with more historians than any of his predecessors since at least John F. Kennedy. The President has surprised me more than once with comments on my own books soon after they’ve appeared, and I’m hardly the only historian who has had this experience. I’ve found myself improvising excuses to him, in Oval Office seminars, as to why I hadn’t read the latest book on Lincoln, or on—as Bush refers to him—the “first George W.” I’ve even assigned books to Yale students on his recommendation, with excellent results.
“Well, so Bush reads history”, one might reasonably observe at this point. “Isn’t it more important to find out how he uses it?” It is indeed, and I doubt that anybody will be in a position to answer that question definitively until the oral histories get recorded, the memoirs get written, and the archives open. But I can say this on the basis of direct observation: President Bush is interested—as no other occupant of the White House has been for quite a long time—in how the past can provide guidance for the future.
Presidential Doctrines
Presidents who’ve sought to shape the future have generally done so by proclaiming doctrines, mostly unsuccessfully. A few, like those of Monroe and Truman, have indeed influenced succeeding Administrations for decades to come—in Monroe’s case, for well over a century, in Truman’s for almost half a century until the Cold War came to an end. Most doctrines, however, faded from view as soon as the Presidents who announced them left office, sometimes even before they did. Who today, apart from historians, remembers the doctrines of Hoover, Eisenhower, Nixon, Carter, Reagan or Clinton?
Three things, I think, made the Monroe and Truman Doctrines transfer across time and space: They drew on a long history, they related that history to a current crisis, and in doing so they set a course the nation could feasibly navigate into the future.
The Monroe Doctrine reflected a long American tradition—extending well back into the 18th century—of associating liberty, prosperity and security with continental expansion. Its principal author, Secretary of State John Quincy Adams, related that history to the crisis caused by the apparent intention of European monarchs—Great Britain’s excepted—to reestablish their colonies in the Western Hemisphere after Napoleon’s defeat. The course Adams set was that “the American continents, by the free and independent condition which they have assumed and maintained, are henceforth not to be considered as subjects for future colonization by any European powers.” Its feasibility lay in the fact that the British tacitly agreed with that policy and were willing to use their navy to enforce it. The Monroe Doctrine was unilateral, as presidential doctrines must be. But it was based upon a realistic calculation of power within the international system, as all doctrines should be.
The Truman Doctrine drew upon an equally long American tradition—reinforced by involvement in two 20th-century world wars—of opposing the domination of Europe by a single hostile power. Its principal author, then-Under Secretary of State Acheson, related that history to the crisis caused by the outcome of World War II, which left the Soviet Union in control of half of Europe. The course he set was that “it must be the policy of the United States to support free peoples who are resisting attempted subjugation by armed minorities or by outside pressures.” Its feasibility lay in George F. Kennan’s great insight that the Stalinist system and the international communist movement carried within themselves the seeds of their own destruction, so that the passage of time would favor the West if it could hold the line. The Truman Doctrine, like the Monroe Doctrine, was unilateral; but it, too, was based upon a realistic calculation of power within the international system.
Neither of these doctrines promised immediate results. Both looked beyond the crises that gave rise to them—beyond even the administrations that proclaimed them—to say, in effect: “Here’s where we’ve been as a nation, and in the light of that, here’s where we need to go.” Both functioned as beacons, providing the guidance necessary for the course corrections ships of state must from time to time make. And both did so within a single memorable sentence.
The Bush Doctrine
So is there a Bush Doctrine, and if so will it meet this test of transferability? To answer this question, I’d look first for a statement delivered in a suitably august setting: Durable doctrines don’t appear as casual comments. Then I’d look for one that’s clearly labeled as a policy, not as a portrayal of adversaries or an explanation of methods for dealing with them: That’s why terms like “Axis of Evil” or “preemption” don’t constitute doctrines. Finally—especially in an historically conscious president—I would look for historical echoes.
The speech that best fits these criteria is the one President Bush delivered from the steps of the Capitol on January 20, 2005. As a student of Lincoln, he would have attached special meaning to the term “second Inaugural Address.” That was the moment to draw lessons from a past extending well beyond his own, to apply them to a current crisis, and to project them into an uncertain future. And indeed the President did announce—in a single memorable sentence—that “it is the policy of the United States to seek and support the growth of democratic movements and institutions in every nation and culture, with the ultimate goal of ending tyranny in our world.”[1]
Initial responses, as usually happens with presidential doctrines, were mixed. Peggy Noonan, who wrote some of Reagan’s best speeches, described it as “somewhere between dreamy and disturbing.” George Will grumbled that “the attractiveness of the goal [is not] an excuse for ignoring the difficulties and moral ambiguities involved in its pursuit.” But the editors of the New York Times unexpectedly liked the speech, observing, “Once in a long while, a newly sworn-in president . . . says something that people will repeat long after he has moved into history.”[2]
If that’s right, then President Bush may have proclaimed a doctrine for the 21st century comparable to the Monroe Doctrine in the 19th and early 20th centuries, and to the Truman Doctrine during the Cold War. Only historians not yet born will be able to say for sure. Even that possibility, however, should earn Bush’s memorable sentence greater scrutiny than it has so far received. For it raises an issue that future administrations—whether those of Obama, McCain or their successors—are going to have to resolve: If the goal of the United States is to be “ending tyranny in our world”, then is encouraging “the growth of democratic movements and institutions in every nation and culture” the best way to go about it?
The historical record is not reassuring. James Madison warned in the tenth Federalist that “democracies have ever been spectacles of turbulence and contention; have ever been found incompatible with personal security or the rights of property; and have in general been as short in their lives as they have been violent in their deaths.” Now, obviously Madison had less experience than we do with democratic governance. For him and the other Founders—steeped as they were in a far better classical education than most of us have today—the cautionary examples were those of Periclean Athens, which blundered into and then lost the Peloponnesian War, and the Roman Republic, which sank so deeply into corruption and violence that its citizens welcomed the benign authoritarianism of Caesar Augustus.
It’s not just ancient precedents, though, that ought to give us pause; so, too, should the subsequent history of the American Republic. By the middle of the 19th century, more people had the right to vote in the United States than anywhere else in the world. That didn’t prevent a third of the country from using that right to defend slavery, or all of it from blundering into one of the bloodiest of civil wars.
Nor was democracy a reliable safeguard against 20th-century tyranny. World War I, which laid so much of the groundwork for despotism, began with widespread public enthusiasm. A free election brought Adolf Hitler to power in Germany in 1933, and he retained the support of most Germans well into the war he started. A persistent American fear throughout the Cold War was that much of the rest of the world might voluntarily choose communism; that led to enlightened counter-measures like the Marshall Plan, but also to unsavory alliances with anti-communist dictators. And the post-Cold War collapse of Yugoslavia together with the events in Rwanda evoked an even more disturbing vision: that people could hate one another to such an extent that ethnic cleansing, even genocide, might have democratic roots.
Today it seems clear that the people of Russia, if they could have re-elected their increasingly authoritarian President, would overwhelmingly have done so: There is in that country, as there was in ancient Rome, a backlash against democratic excesses. China over the past three decades has made greater progress toward prosperity than at any point in its long history, but this has not come through democratic procedures. Reasonably fair elections have at last been held in the Middle East, but the results have empowered Ahmadinejad in Iran, Hamas in the Palestinian territories, and—despite the courage with which Iraqis risked their lives to vote in three successive elections in 2005—a government in Baghdad that has yet to establish order despite the full military support of the United States and its coalition allies.
So if ending tyranny is what you want to accomplish, promoting democracy in and of itself may not be enough. Something more seems to be required.
Democracy’s Prerequisites
No one has explained more clearly what that is than Fareed Zakaria in his 2003 book The Future of Freedom.[3]
Democracy, he acknowledges, is a worthy objective, but certain things have to come first: personal security, political stability, economic sustainability, the rule of law, the sanctity of contracts, a working constitutional structure. You can’t just topple a tyrant, hold an election and expect a democracy to emerge. How, then, did the idea get started that democracy could sprout where it had no roots?
The story begins, I think, with the end of the Cold War, which left theorists of international relations scrambling. One of the purposes of theory, after all, is to predict the future, and the theories they had built over the past several decades had conspicuously failed to do that. New theories were needed. The ones that emerged focused on certain hitherto neglected characteristics of democracy.
One was its universal appeal, a claim best made in Francis Fukuyama’s 1992 book The End of History and the Last Man.[4]
Fukuyama pointed out that democracy, over the past two centuries, had advanced steadily across regions, cultures and ethnicities, so that the end of the Cold War was really the culmination of a much longer process. It was in that sense, he maintained, that history itself—previously understood in terms of clashing political systems—was coming to an end: Democracy had prevailed. A second and equally influential argument came from the “democratic peace” theorists, who showed that democracies tend not to fight one another. It followed, then, that the proliferation of democracies should make the world a more peaceful place.
These two arguments appealed to the liberal side of the political spectrum, but they meshed surprisingly well with a third put forward by the advocates of neoconservatism. Once liberals themselves, the neoconservatives had abandoned that cause on the grounds that détente, which other liberals supported, was perpetuating the Cold War. After the Cold War ended, however, liberals and neoconservatives agreed that democracy’s advance was irreversible. They did so with all the confidence that comes from turning a historical trend into a theoretical proposition.
What, though, were the implications for American foreign policy? The liberals, influential within the Clinton Administration, were content simply to let democracy triumph: If this was an inevitable outcome, why should the United States exert itself to bring it about? The neoconservatives objected to this passivity, calling for more aggressive action to speed up the process. In that sense the liberals were like the Mensheviks in 1917: Marx had shown that the forces of history were going to overthrow capitalism, so why not just wait for that to occur? The neoconservatives, in contrast, resembled the Bolsheviks: They wanted to jump-start the engine of history.
Both groups thought they knew, from the trajectory of the recent past, what the future would bring. And like most people in history who have sought to eliminate surprises from it, both were in for yet another surprise.
A Crisis for Theory
It came, of course, on September 11, 2001. By the end of that morning, it was not at all clear that democracy was the wave of the future: How could it be when the actions of so few had damaged so many? How many more such attacks would it take for democratic institutions in the United States to buckle, and if that happened, what prospects would there be for democracy elsewhere? Democracy was facing its gravest crisis in half a century—or so it seemed on that traumatic day.
The Bush Administration until that point had embraced neither the liberal nor the neoconservative viewpoint, but now it had to act. It wound up tilting toward neoconservatism, not because of some premeditated conspiracy, as many of its critics have claimed, but rather because of an unexpected military victory. In pursuit of the perpetrators of 9/11, it invaded Afghanistan successfully, a feat that had eluded would-be conquerors of that country going back to Alexander the Great. It’s easy to forget now what an astonishing development this was—all the more so because nobody had planned it ahead of time.
Suddenly, it seemed, there might be an opportunity to speed up history: The Taliban had collapsed, after all, with only a slight push. So Bush and his advisers began planning to fight a war against terrorism by democratizing the Middle East, the one part of the world where that system had not yet taken root. Toppling a few more tyrants might be all that it would take to get this process going.
This was not the only reason for starting a war against Saddam Hussein in March 2003, but it was one of the most important assumptions that made that operation seem feasible. Not for the first time the illusion took hold that the next war would resemble the last one. Certainly the belief that democratic instincts lay hidden within Iraqi culture, waiting to be let out, accounted in large part for the Administration’s failure to plan the occupation—for its belief that it need only shatter the status quo there, and the pieces would automatically rearrange themselves in pleasing patterns.
By the time President Bush was re-elected in November 2004, it was clear that this had not happened. The costs of the war in Iraq, for the Iraqi people and for the liberators who had become their occupiers, had turned out to be much greater than expected, and the Administration had no clear sense of what to do. But American presidential elections are rarely decided on issues of foreign and military policy; on the more sensitive matter of national security, the President could point out that there had been no more terrorist attacks on American soil, a not-insignificant achievement.
That was the context, then, in which Bush got to make a second Inaugural Address, and to proclaim a doctrine. It was meant to respond to an immediate crisis, which was the failure of his policy in Iraq. But it also addressed a crisis within the realm of ideas: Was the 20th-century idea of promoting democracy the appropriate long-term objective for the United States in the 21st century? The answer, Bush suggested, went back to the 19th, even the 18th century.
A Return to History
The call to end tyranny seemed new in 2005 only because it was old—considerably older, in fact, than the goal of promoting democracy. The Declaration of Independence did, to be sure, make the radical claim that “all men are created equal.” But as anyone who has read all of that document knows, it is chiefly about liberation from tyranny, the improbable tyrant being George III, against whom Jefferson marshaled a list of offenses “scarcely paralleled in the most barbarous ages, and totally unworthy [of] the head of a civilized nation.”
When Alexander Hamilton, in the first paragraph of the first Federalist, set out to explain what the Constitution of 1787 was all about, he put it this way:
To decide the important question, whether societies of men are really capable or not of establishing good government from reflection and choice, or whether they are forever destined to depend for their political constitutions on accident and force.
By “reflection and choice”, Hamilton didn’t mean democracy as we understand that term; he was even more skeptical about that form of government than was his collaborator Madison. But Hamilton did mean freedom from tyranny. For what was tyranny, in an age of inherited monarchies, if not rule by the alignment of accident with force?
The Founding Fathers saw themselves as having seized a beachhead for liberty in a world run by tyrants. But as Robert Kagan has recently emphasized, they knew that the beachhead would have to expand if it was to be secure. This meant dominating the North American continent, so that liberty could align itself with power. It also meant propagating the first international revolutionary ideology, one that called, in a more distant future, for the overthrow of tyranny throughout the world.[5]
America “goes not abroad, in search of monsters to destroy”, John Quincy Adams famously proclaimed on the Fourth of July, 1821, in a speech that has been quoted ever since to justify noninterference by the United States in the affairs of other nations. The address is rarely read in its entirety, however, so it’s often forgotten that Adams also challenged “every individual among the sceptered lords of mankind” to follow the example of the American Revolution and overthrow their oppressors: “Go thou and do likewise!”
And what if tyrants resisted—even homegrown tyrants? Adams as early as 1819 was demanding “the extirpation of slavery from this whole continent”, even if this required a war that disrupted the Union, “for so glorious would be its final issue, that, as God shall judge me, I dare not say that it is not to be desired.”[6]
It was one thing to impose a particular system by force, which is what the tyranny of slavery had done; no one, in Adams’s view, could justify that. It was quite another to use force to end that tyranny. In his willingness to support that possibility, Adams anticipated the views of an even more extreme enemy of human bondage, Abraham Lincoln.
Recent scholarship has stressed that Lincoln never lost sight of the global issues that were at stake in the American Civil War. The Declaration of Independence, he insisted in a speech delivered in the hall in which that document was signed, had not been merely a justification for separating the American colonies from Great Britain. Its purpose had also been “to provide hope to the world for all future time. It . . . gave promise that in due time the weights should be lifted from the shoulders of all men, and that all should have an equal chance.”[7]
Not necessarily that all should live in a democracy. Not even that equality for all should be guaranteed. But rather that all should have an equal chance. Even if providing them with that opportunity should require the use of force on such a scale that, as Lincoln would say in his second Inaugural Address, “every drop of blood drawn with the lash shall be paid by another drawn with the sword.”
The objective of ending tyranny, therefore, is as deeply rooted in American history as it is possible to imagine. President Bush, in a time of crisis for the future of democratization, followed Lincoln’s example in a much greater crisis for the future of the Union: He looked back for guidance to the Founders. That’s one good reason for thinking that the “end of tyranny” idea may extend beyond the end of the Bush Administration, and into those that will follow.
Two Concepts of Liberty
But what about feasibility, the other test I mentioned for assessing the durability of presidential doctrines? Why should the goal of ending tyranny work better than that of spreading democracy? Isn’t it the fate of all who think they know what’s best for the world to find that the world doesn’t share their vision, fears their arrogance, and will sooner or later frustrate their ambitions?
Precisely so, but here’s where there’s a difference between these two objectives. Spreading democracy suggests knowing the answer to how people should live their lives. Ending tyranny suggests freeing them to find their own answers. The Oxford philosopher Isaiah Berlin best explained this distinction half a century ago in his great essay “Two Concepts of Liberty.”
Positive liberty, as Berlin defined it, begins with the idea that you know what’s best with such certainty that you seek to impose that view on others, at first through persuasion, and then if that doesn’t work, by example, and then if that doesn’t succeed, by coercion. What gives you this certainty is the belief that you’ve figured out how history works: You’ve developed a theory that provides a single sweeping solution for a world full of problems.
That’s not so dangerous if all you’re doing is driving a taxicab, from which your customers can exit at the next stoplight. It’s more so if you’re in a position to shape the minds of others, whether through teaching, writing or rabble-rousing. And it can be deadly if you’re running a powerful state, for the greatest tyrannies of the modern age originated with leaders who insisted on a “one size fits all” ideology. Whether it was Lenin, Stalin, Hitler, Mao or Pol Pot, they all began by promising liberty—as long as they got to decide what it was and how to get it. They believed themselves entitled, as Berlin put it,
to coerce men in the name of some goal . . . which they would, if they were more enlightened, themselves pursue. . . . ‘I know what they truly need better than they know it themselves.’
Negative liberty, in Berlin’s formulation, makes no such claims. Instead, it maintains
absolute barriers to the imposition of one man’s will on another. The freedom of a society, or a class or a group, is measured by the strength of these barriers, and [by] the number and importance of the paths which they keep open for their members.
The idea behind negative liberty is to restrain authority. The idea behind positive liberty is to concentrate it. These are, Berlin concluded, “two profoundly divergent and irreconcilable attitudes to the ends of life.”[8]
What is clearer now than it was in 1958, when Berlin wrote this essay, is that negative liberty commands more support—or, to put it in Clausewitzian terms, it generates less “friction”—than the claim of one to know what is best for all. The totalitarian tyrannies of the 20th century collapsed because their single solutions promised liberty but failed to provide it. Democracies survived and spread because they allowed experimenting with multiple solutions. Not all of these worked, but enough did to give government by “reflection and choice” a far better track record by the beginning of the 21st century than rule by “accident and force.”
Here, though, is where history played a nasty trick. The end of the Cold War left the United States in a position of dominance unrivaled since the days of the Roman Empire. Maintaining humility under such circumstances would have demanded the self-discipline of a saint—and the Americans, like the Romans, have never been particularly saintly. So all at once their efforts to encourage democracy, which had come across during the Cold War as constraining the power of dictators, now looked like an effort to concentrate power in their own hands.
When Wilson spoke of making the world safe for democracy in 1917, that form of government was in peril. But when Clinton described the United States as “the world’s indispensable nation” in his second Inaugural Address in 1997, there were no obvious dangers on the horizon. What was the basis, then, for American indispensability? And after threats did unexpectedly arise, on September 11, 2001, a wounded nation, still the most powerful in the world, began insisting that its future security required the expansion of democracy everywhere. No wonder this frightened people elsewhere, even those also frightened by terrorism.
President Bush reflected this “one size fits all” mentality when he called for “the growth of democratic movements and institutions in every nation and culture.” That sounded like knowing what was best for the world. But then he added: “with the ultimate goal of ending tyranny in our world.” That sounded like liberating people so that they could decide what was best for them; it was language of which the Founding Fathers, John Quincy Adams, Abraham Lincoln and Isaiah Berlin might have approved. So the President managed to compress, into a single sentence, the concepts of both positive and negative liberty.
This may have been a triumph for succinct speech writing, but it was not one for philosophical coherence. Promoting democracy, for the reasons I’ve mentioned, offers no guarantee of ending tyranny, just as ending tyranny offers no guarantee that the newly liberated will choose democracy. Telling people simultaneously that we know best and that they know best is likely to confuse them as well as us. But what if we were to read the President’s sentence as a political rather than a philosophical statement, as a way of respecting the recent past while shifting priorities for the future? A presidential speech, after all, cannot simply dismiss what has gone before, even as it suggests where we should now be going.
If the Bush Doctrine was meant in that sense—if ending tyranny is now to be the objective of the United States in world affairs—then this would amount to a course correction away from the 20th-century idea of promoting democracy as a solution for all the world’s problems, and back toward an older concept of seeking to liberate people so they can solve their own problems. It could be a navigational beacon for the future that reflects more accurately where we started and who we’ve been.
Making Choices
It could be—but sometimes a speech is just a speech. If Bush meant to shift the direction of American foreign policy, he and his advisers have since been remarkably quiet about it.[9]
The President did acknowledge, however, that ending tyranny would require “the concentrated work of generations”, and in doing so he implicitly recognized that it’s not just the presidents who deliver pronouncements who determine their significance. How they are remembered is at least as important, and how they are later used is even more so: It’s worth recalling that the Monroe Doctrine was dormant for decades until subsequent Administrations saw fit to revive it.[10]
I think that future presidents should regard Bush’s second Inaugural as signaling a shift from promoting democracy to ending tyranny, as a call for an overdue correction of course. My reasons go back to another idea Berlin developed in his 1958 essay, which is that there is no such thing as a single good thing. There are multiple good things, and it isn’t always possible to have them all at the same time.
Democracy is clearly a good thing. But so, too, is freedom from anarchy, which is why states five centuries ago—none of them as yet democracies—first began organizing themselves. So, too, is personal security, which is why, even in democracies, we allow the state to use force when necessary to maintain order. So, too, is predictability in one’s dealings with others, which is why democracies have laws enforced by judges who act independently of popular sentiment. So, too, is economic sustainability: Democracy can hardly flourish when people are hungry.
The United States, as a mature democracy, has the luxury of enjoying all of these advantages simultaneously, but this was not always so. As Zakaria points out, democracy established itself in this country only after these other safeguards had been put in place, and it took even longer for this to happen in Great Britain, the country that invented representative government. Democracy did spread widely in the 20th century, but that was only because the British and later the Americans wielded their power in such a way as to secure its prerequisites, not least by fighting and winning three world wars, two hot and one cold.
Since the Cold War ended, the United States has neglected these prerequisites. There was no clearer demonstration of this than those three Iraqi elections of 2005, in which the citizens of that country risked their lives to go out and vote. That was, in one sense, moving and reassuring, a victory for democracy, you might say. But it was, in another sense, a defeat for democracy, because people should not have to risk their lives to go out and vote. The fact that they did so reflected a failure on the part of the United States, after invading Iraq, to lay the foundations necessary to ensure democracy’s survival there. It’s as if we’d tried to rebuild one of Saddam Hussein’s palaces without first securing its footings: The façade was impressive, but the cracks soon began to appear.
Nor has this error been confined to Iraq. We seem puzzled that democracy is not taking hold to the extent that we hoped it would elsewhere in the Middle East, as well as in Russia, China, Africa, and Latin America. The democratic tide that began rising with the end of the Cold War now appears to have crested and to be receding. But was it ever likely that democracy would root itself in those parts of the world where people fear anarchy more than they do authority? Where the struggle to survive is a more urgent priority than securing the right to vote? Where the immense power of the United States gives rise to greater uneasiness than it does reassurance?
That is why I think a return to our roots is called for. Promoting democracy without its prerequisites can only breed disappointment abroad and disillusionment at home. It suggests that we think we know better than other people do what is best for them—and it too often confirms that we do not. It leaches legitimacy from our priorities.
But only tyrants are apt to defend tyranny. A focus on ending it could move us beyond distracting debates over where democracy can be transplanted and how long this might take, allowing concentration instead upon the single greatest prerequisite for democracy, which, as Franklin D. Roosevelt once reminded us, is freedom from fear. It is from this that all the other freedoms flow.
Since World War II, international law has moved toward recognizing this principle. From the United Nations Universal Declaration of Human Rights in 1948, through the Helsinki Final Act of 1975, through the emergence over the past decade of a widely acknowledged “Responsibility to Protect”, the old assumption that sovereignty shields tyranny has been discredited—whatever the practices of a few regimes like those in Sudan, Myanmar and Zimbabwe. The fact that there are so few suggests the progress already made: A global commitment to remove remaining tyrants could complete a process Americans began 232 years ago.
This, then, should be our standard: to respect the ways in which people elsewhere define their fears, not to impose our own fears upon them. That may mean working with authoritarian regimes when there is more to fear than their authoritarianism—when the trajectory is toward making democracy possible, even if it’s still a long way off. But it also requires resisting regimes—and terrorist movements—whose course lies in the opposite direction: toward making themselves the source of all fears, rather than the safeguard against them. Tyranny is being enslaved to fear, and it will be quite enough, for the next few decades at least, to secure emancipation.
If, therefore, we Americans can adjust our compass heading, if we can make ending tyranny once again our priority, as it was throughout most of our history, then we will have some prospect of getting back onto the path that all great nations that wish to sustain their greatness must ultimately follow: that of wielding power without arrogance, by which I mean resisting the illusion that our strength has in all respects made us wise.
Notes
[1] Full disclosure: I suggested including the idea of ending tyranny in a session with the President’s speechwriters on January 10, 2005. Correlations, however, are not causes.
[2] Noonan, “Way Too Much God”, Wall Street Journal, January 21, 2005; Will, “The New Math: 28 + 35 = 43”, Newsweek, January 31, 2005; “The Inaugural Speech”, New York Times, January 21, 2005.
[3] Zakaria, The Future of Freedom: Illiberal Democracy at Home and Abroad (Norton, 2003).
[4] Fukuyama, The End of History and the Last Man (Free Press, 1992).
[5] Kagan, Dangerous Nation: America’s Place in the World from its Earliest Days to the Dawn of the Twentieth Century (Knopf, 2006), especially pp. 4, 16, 37–8, 43–6.
[6] Quoted in Kagan, Dangerous Nation, pp. 162–3, 181.
[7] Quoted in Doris Kearns Goodwin, Team of Rivals: The Political Genius of Abraham Lincoln (Simon and Schuster, 2005), p. 310. See also Richard J. Carwardine, Lincoln (Pearson, 2003), pp. 23, 65, 69–70, 78, 165, 216.
[8] Berlin, The Proper Study of Mankind: An Anthology of Essays, Henry Hardy and Roger Hausheer, eds. (Farrar, Straus & Giroux, 1998), pp. 204, 237.
[9] The most recent authoritative expression of Administration thinking, Condoleezza Rice, “Rethinking the National Interest: American Realism for a New World”, Foreign Affairs (July/August 2008), makes democracy promotion the top foreign policy objective, while assuming that the collapse of tyrannies will follow.
[10] The standard account is still Dexter Perkins’s three-volume The Monroe Doctrine, published between 1927 and 1937.