Four years ago, President George W. Bush warned of the grave and growing danger posed by rogue regimes that seek to acquire nuclear weapons. “States like these, and their terrorist allies, constitute an axis of evil, arming to threaten the peace of the world.” America, the President made clear in his January 2002 State of the Union address, would do whatever was necessary to defeat this threat: “I will not wait on events, while dangers gather. I will not stand by, as peril draws closer and closer. The United States of America will not permit the world’s most dangerous regimes to threaten us with the world’s most destructive weapons.” While President Bush did not explain how the United States would counter this rising danger, it was soon evident that the Administration believed that preventive military force would have to be at the core of any successful strategy.
The decision to invade Iraq, justified by the need to prevent Saddam Hussein from acquiring weapons of mass destruction, was a clear manifestation of this new strategy. It may well have been the last. The failure to find an active nuclear weapons program or any stockpiles of biological or chemical weapons, combined with the inability to secure the post-Saddam peace, all but rules out future military endeavors of this kind. Because of the disaster in Iraq, gaining an American, let alone an international, consensus in favor of another preventive war will be close to impossible. The Administration, accordingly, has opted for more conciliatory, multilateral negotiating strategies to address the nuclear threats posed by Iran and North Korea, the other two members of President Bush’s “axis of evil.” Despite ritualistic exhortations that “all options remain on the table”, the Administration declined to use force even after it was clear that North Korea was reprocessing spent fuel sufficient for six or more nuclear bombs.
While many will greet the apparent death of preventive force with relief, it would be unfortunate if the entire concept were abandoned. For the problem with the Bush strategy has been less the concept of preventive force itself than its near-unilateral application to achieve very ambitious — perhaps too ambitious — ends. Unilateral, preventive wars of regime change should be relegated to the past. But circumstances will undoubtedly arise in the future in which policymakers will want to have the option of using force preventively — be it to kill terrorists, prevent weapons proliferation, halt genocide, stop the spread of deadly diseases, or deal with other kinds of danger. The proper task, then, is not to bury the concept, but to make it a more limited and a more legitimate tool for addressing evolving security threats.
The Doctrine of Preventive Force
The preventive force doctrine, most prominently enunciated in the Bush Administration’s National Security Strategy of September 2002, touched off a vigorous debate in the United States and around the world over whether and when it is appropriate to use force other than in response to a direct attack.1 In particular, the notion that a state could use force to thwart threats that, as Bush said, had not yet “fully formed”, was seen as a dangerous departure from internationally agreed rules governing the use of force. Those rules, which are enshrined in the Charter of the United Nations signed 60 years ago, place a premium on avoiding cross-border aggression, which, in the wake of history’s most destructive conflict, was seen as the preeminent threat to peace and security. Article 2(4) of the Charter accordingly proscribes the use of force: “All Members shall refrain in their international relations from the threat or use of force against the territorial integrity or political independence of any state, or in any other manner inconsistent with the Purposes of the United Nations.” The Charter recognizes only two exceptions to this proscription: “the inherent right of individual or collective self-defense if an armed attack occurs against a Member of the United Nations” (Article 51), and the use of force authorized by the Security Council in order to maintain or restore international peace and security (Chapter VII).
Of course, since 1945 states have honored these rules mostly in the breach. States have used force without explicit UN Security Council authorization or any plausible self-defense rationale many times — by one count, at least 200 times prior to the end of the Cold War. But throughout this period, there was widespread consensus that force should not be used preventively except when an armed attack was truly imminent. Thus, while many saw Israel’s commencement of hostilities in 1967 as justified in view of the prior mobilization efforts and blockade by Arab governments, its 1981 strike against Iraq’s Osirak nuclear reactor was strongly condemned by the Security Council (including by the United States) as a violation of the Charter’s proscription of the use of force.
Since the end of the Cold War, however, the bar against preventive uses of force has begun to fall. The collapse of the Soviet Union reduced the chance that using force would escalate to a superpower conflict, making the risk of acting preventively more acceptable. The failure of the Security Council to react effectively to the grave humanitarian crises in Bosnia and Rwanda helped legitimate the concept of “humanitarian intervention”, even though nominally the Security Council’s authority under Chapter VII is limited to threats to peace and security. That failure to act also bolstered the willingness of NATO members to use force even without the Council’s authorization, as they did in Kosovo.
The 1990s further witnessed uses of force against what were increasingly seen as the dominant threats to security in the post-Cold War environment — the proliferation of weapons of mass destruction and terrorism. In 1994, the Clinton Administration threatened to attack North Korea’s nuclear facilities unless Pyongyang abandoned its nuclear weapons program. Four years later, the Administration launched military strikes against the Al Shifa chemical plant in Sudan and a terrorist training camp in Afghanistan following the bombing of U.S. embassies in Africa, one goal being to kill terrorists before they could attack the United States.
Following the 9/11 attacks, President Bush turned what until then had been a tool of American policy into a guiding strategic doctrine. The reason for doing so rested on three arguments. First, the Administration maintained that the key actors (rogue states and terrorists) now threatening America were fundamentally different from the traditional state adversaries the United States had long confronted. “Deterrence”, Bush explained, “means nothing against shadowy terrorist networks with no nation or citizens to defend. Containment is not possible when unbalanced dictators with weapons of mass destruction can deliver those weapons on missiles or secretly provide them to terrorist allies.” In this new security environment, safety could no longer be assured by the ability to defeat threats after they had formed. “If we wait for threats to fully materialize, we will have waited too long.”
The second reason for relying on preventive force was the catastrophic cost of misjudging the imminence of the threat. “We don’t want the smoking gun to be a mushroom cloud”, argued Condoleezza Rice. Whatever the costs of lowering the barrier to using force preventively, the Administration argued, they were outweighed by the dangers of waiting too long to act. As the National Security Strategy put it,
The greater the threat, the greater is the risk of inaction — and the more compelling the case for taking anticipatory action to defend ourselves, even if the uncertainty remains as to the time and place of the enemy’s attack. To forestall or prevent such hostile acts by our adversaries, the United States will, if necessary, act preemptively.
Finally, the United States would have to embrace a preventive strategy because collective security mechanisms had failed to provide sufficient assurance that they would act in an effective and timely manner, increasing the need for the United States or ad hoc coalitions to respond instead. This unwillingness to rely on the Security Council was rooted in frustrations about Council action — or lack of it — in the 1990s, and a more general skepticism among Administration officials about the value of the UN as a guarantor of international security.
The UN Response
The promulgation of the new U.S. strategic doctrine, which was followed in short order by the decision to invade Iraq, represented “a fundamental challenge to the principles on which, however imperfectly, world peace and stability have rested”, as Kofi Annan told the General Assembly on September 23, 2003. Among the most important of these principles was that, when states “decide to use force to deal with broader threats to international peace and security, they need the unique legitimacy provided by the United Nations.”
The real question this development raised for Annan, however, was less the willingness of certain states to live up to this precept than whether the rules governing the use of force developed in the wake of World War II were still applicable in today’s world. So the UN Secretary-General appointed a High-Level Panel of former statesmen — including Brent Scowcroft, Qian Qichen, Yevgeny Primakov and Gareth Evans — to answer this and related questions.
The High-Level Panel’s December 2004 report represented an important evolution on the critical question of whether and when to use force. The panel settled the issue of whether the right to self-defense includes a state’s right to use force preventively when faced with an imminent attack by arguing that it does. As to threats (like terrorism and weapons proliferation) that are not imminent but that are grave and perhaps growing, the panel concluded, “If there are good arguments for preventive military action, with good evidence to support them, they should be put to the Security Council, which can authorize such action.”
Indeed, the panel argued that the Council could authorize force against a state under any of several circumstances as long as it believed such action to be necessary for maintaining or restoring international peace and security. These circumstances include
whether the threat is occurring now, in the imminent future or more distant future; whether it involves the State’s own actions or those of non-State actors it harbours or supports; or whether it takes the form of an act or omission, an actual or potential act of violence or simply a challenge to the Council’s authority.
Yet, while arguing that there is a broad range of circumstances under which force might be used, the High-Level Panel declined to endorse the Bush Administration’s claim that under any of these circumstances states could act on their own. That, it argued, was a recipe for international anarchy rather than international order: “Allowing one to so act is to allow all.”
The panel’s views were broadly endorsed by the UN Secretary-General himself, but two critical issues were left unresolved. One is the issue of imminence. Both the High-Level Panel and the Secretary-General maintained the distinction between threats that are imminent, which states have the right to address themselves under Article 51, and threats that are latent, against which force can be used preventively only if the Security Council so authorizes. This assumes that the distinction between imminent and latent threats, which applied at a time when armed attacks required the mobilization of mass armies, still applies today. But does it?
In a globalized world threatened by weapons of mass destruction and terrorists with global reach, this distinction loses much of its strategic meaning. Once a country has acquired weapons of mass destruction, it can decide to use them with little or no warning, either by sending them aloft on airplanes or long-range missiles, or by handing them to terrorists to use at a time and place of their choosing. The very possession of weapons of mass destruction by some countries, moreover, can be used to blackmail neighbors or otherwise enable aggression even without such weapons ever actually being used. And so the Bush Administration was correct to insist, “We must adapt the concept of imminent threat to the capabilities and objectives of today’s adversaries.” So long as the threats states face are unconventional, relying on the conventional distinction between imminent and latent threats makes little sense.
The second issue left unresolved by the High-Level Panel is what to do if the Security Council fails to authorize preventive action when some states believe it is necessary. This is not a theoretical possibility. As the High-Level Panel acknowledged, “The Council’s decisions have often been less than consistent, less than persuasive and less than fully responsive to very real State and human security needs.” It acted late in the case of the former Yugoslavia, ineffectively (so far) in response to Darfur, and not at all during the genocide in Rwanda. It has refused to take up the matter of North Korea’s non-compliance with the nuclear Non-Proliferation Treaty and wants no part in deciding how to address similar compliance concerns with respect to Iran. Indeed, the list of failures by the Council to act promptly and forcefully to maintain or restore peace and security around the world is long and growing.
The various proposals by the High-Level Panel and Secretary-General Annan to make the Council a more effective and responsive body are not likely to make it so. Even if it were possible to reach agreement on enlarging the Council (which, evidently, it is not), adding more members will only further impede the Council’s ability to reach consensus. Agreeing to new guidelines for deciding whether to authorize force — based on principles drawn from the just war tradition like the seriousness of the threat, the purpose of the proposed action, the plausible success of alternative means to defeat the threat, the proportionality of the military response and the likelihood of success — would be useful. But their adoption by the Council, as Annan has urged, is unlikely to change matters much because key members will continue to perceive threats to international security in different ways. A country like the United States, with its global responsibilities and interests, will surely view new security challenges as more serious than those countries that have narrower interests and responsibilities.
The same differences, moreover, will apply to judging the applicability of new guidelines to specific cases. Proposals to reform Security Council membership and practices will change little. While having agreed normative standards is important, the ultimate determinant of Council action or inaction will always be the political decision-making processes in differently minded and differently situated member countries.
Sovereignty and State Responsibility
These difficulties point to a more fundamental problem with the existing UN-based rules governing the use of force. These rules are based on two key principles that were the product of a particular era, the end of World War II and the start of decolonization: first, that states are sovereign equals, and second, that they should not interfere in each other’s internal affairs. The changes in the international environment of the past six decades have eroded the applicability of these foundational principles and thus rendered the rules based upon them untenable.
With regard to the first principle, sovereignty is being eroded both from within states and from without. Many states are too weak to control what happens within their own borders, a situation that can have dire consequences for all. “Weak states”, the Bush Administration has rightly argued, “can pose as great a danger to our national interests as strong states.” In addition, the rapid advance of globalization challenges the ability of states to control their own frontiers, such that a development in one corner of the globe can pose an imminent danger almost anywhere, including the U.S. mainland. That, after all, is what September 11 was all about. Finally, key actors on the world stage — terrorists, nuclear technology traffickers, international criminal cartels, multinational corporations, non-governmental organizations — are powerful and purposeful, but they are decidedly not sovereign.
There is, in short, much more today to international relations than the interaction of sovereign states. That is a profound departure from the world of 1945 with many significant implications, not least concerning the main source of the threats we now face. Just consider that the last three wars the United States has fought responded to how particular states behaved regarding matters within their borders rather than what they did beyond them. The Kosovo war was about protecting the Albanian minority from ethnic cleansing by Serb forces; the Afghanistan war was about the Taliban providing a sanctuary to al-Qaeda; and the Iraq war was about the purported development of weapons of mass destruction by Saddam Hussein’s regime.
The UN system was not set up to deal with these types of threats, given that it stresses both the sovereign equality of states and the principle of non-interference in their internal affairs. So it is not surprising that it has proven difficult to gain consensus within the Security Council, let alone among the wider UN membership, both on what constitutes the new threats and how best to respond to them. That is why there was no explicit Security Council authorization for the Kosovo and Iraq wars, and only an implied authorization for using force against Afghanistan. That is why there has been no agreement on what to do with regard to Darfur, despite an international finding that the humanitarian situation is very grave and repeated, post-Rwanda exhortations that the international community will “never again” stand by as genocide unfolds. That is why there has been no agreement on imposing sanctions or any other punitive action in regard to North Korea’s violation of the Non-Proliferation Treaty, nor any Security Council response to the discovery that a Pakistani scientist (with or without official conniving) for years ran a veritable nuclear Wal-Mart, selling his knowledge and wares to anyone willing to pay.
In short, the UN’s concept of the international system no longer accords with the world as it now exists. That means that the rules regulating the use of force must be adapted to the world we do live in — a world in which sovereignty is increasingly conditional on how states behave internally, and in which the need to intervene in the internal affairs of states is growing accordingly.
In recent years we have seen the emergence of a new norm of state responsibility. The first step in this direction was the growing recognition that states now have a responsibility to protect their own citizens from genocide, mass killing and other gross violations of human rights. The next step is to extend this norm to other areas. It is increasingly evident that states now also have a responsibility to prevent developments on their territory that pose a threat to the security of others.2 Such developments include the failure to secure weapons of mass destruction against theft or diversion; the harboring, supporting or training of terrorists; and environmental dangers like failing to prevent the spread of dangerous diseases or allowing the massive destruction of key biospheres.
This emerging norm of state responsibility raises the important and unavoidable question of what should happen when states fail to meet their responsibilities. The world’s leaders, meeting at the UN’s 60th anniversary summit this year, made clear that when a state is unable or unwilling to protect its own people, then the responsibility for doing so falls on the international community. “We recognize our shared responsibility to take collective action, in a timely and decisive manner, through the Security Council”, the leaders declared, “should peaceful means be inadequate and national authorities be unwilling or unable to protect their populations.”
Similarly, a state’s failure to prevent internal developments that threaten people in other states implies that the responsibility to do so also falls on the international community. And the most effective way to discharge that responsibility will often involve preventive action of some kind, up to and including military action. Indeed, the most effective way to defeat many of the new threats is to act before they are imminent — before enough fissile material has been produced to make nuclear weapons; before weapons in unsecured sites or deadly diseases in laboratories have been stolen; before terrorists have been fully trained to hatch their plots; before large-scale killing or ethnic cleansing has occurred; and before a deadly pathogen has mutated and spread sickness and death around the globe.
Of course, in many of these cases military intervention is not the only or the preferred means for dealing with an emerging threat. There are often good alternatives. Take weapons of mass destruction. Deterrence is likely to prevent states from using nuclear weapons or other WMD (though less so when the objective is to prevent their acquisition in the first place). Export controls and other technology denial strategies can be effective as well, especially in the nuclear context, as can interdiction strategies like the new Proliferation Security Initiative. Effective security guarantees have led key U.S. allies like Germany, Japan and South Korea, as well as countries like Ukraine and Kazakhstan, to forgo nuclear self-reliance. Economic sanctions coupled with conditional engagement convinced Libya and may still convince Iran and North Korea to forgo the nuclear option.
At the same time, the threat of force and the actual use of force will sometimes be necessary. And when it is, it is often best used early. The problem with the Bush Doctrine, then, is not that it relies on preventive force too much, but that it has conceived of its use too narrowly with regard both to the threats to be preempted and the decision authority to call a preemptive use of force into being. For the Bush Administration, preemption is necessary primarily to deal with terrorism and as a means of forcible regime change. “The number of cases in which it might be justified will always be small”, warned Condoleezza Rice in a speech to the Manhattan Institute on October 1, 2002. And because its use is reserved for truly exceptional circumstances (“The threat must be very grave. And the risks of waiting must far outweigh the risks of action.”), the decision to use preventive force must remain a purely national one. As the National Security Strategy states, “While the United States will constantly strive to enlist the support of the international community, we will not hesitate to act alone, if necessary, to exercise our right of self-defense by acting preemptively.”
The insistence that individual states — or at least the United States — must have the right to decide when preventive force is justified is, however, problematic when the threats concerned are global in scope and affect the security of many countries. It is also problematic because states will invariably perceive risk in very different ways. Therefore, it is important that there be some agreed international standards and, preferably, some agreed authority for deciding when military intervention against threats that have not fully materialized is legitimate. The Bush Administration has not taken its thinking about preemption to this logical point; it has gone too far in its unilateralism and not far enough in conceiving the circumstances in which preventive force might prove desirable.
Standards and Authority for Preemption
The best standard for deciding whether preventive force is justified derives from the concept of state responsibility. When states fail to protect their populations against genocide or large-scale killing, or when they fail to prevent developments within their own territory that will pose a grave threat to the security of others, the responsibility for doing so must rest on others. In deciding whether military intervention in such circumstances is justified, reference to the just war principles of gravity, proportionality, consideration of alternatives and likelihood of success is necessary and appropriate. But the ultimate justification should be the underlying concept that states which fail to live up to their responsibilities lose their right to insist that others not intervene in their internal affairs.
As for an agreed authority, the Security Council remains the preferred vehicle for authorizing such action, not least because since the end of the Cold War it has been seen as the most legitimate forum for deciding questions regarding the use of force in situations other than self-defense. Consider this: Prior to the Gulf War in 1991, the Council had authorized the use of force beyond traditional peacekeeping operations on only two occasions, Korea and the Congo. Since then, it has authorized force no fewer than 17 times in places around the world. Even in the case of the Iraq war, the Bush Administration argued that war was authorized under earlier UN Security Council resolutions (notably 678, which authorized the Gulf War; 687, which established conditions for a ceasefire of that war; and 1441, which provided Baghdad with one final chance to implement prior resolutions). Yet in practice the Council has not been able in many instances to agree on what internal developments would constitute a threat requiring a forceful response, and it is unlikely to do so in the future.
One alternative to Security Council approval is to accept the legitimacy of preventive interventions approved by regional organizations. The model for this is Kosovo, where NATO intervened to stop a humanitarian calamity even though the Council failed to authorize the action. Regional organizations are a particularly appealing forum for deciding the use of force, since there is likely to be a convergence between those who bear the costs and those who reap the benefits of the action. Moreover, when most or all of the countries in the region reach a similar conclusion as to the necessity of a preventive action, there is a good chance that the decision rests on a valid factual predicate.
Of course, reliance on regional organizations is no panacea. Some threats are global in scope and thus beyond the purview of any regional organization. There is also the danger that a regional organization may be little more than a pawn of its dominant member. One need only think of the decision of the Organisation of Eastern Caribbean States to endorse America’s 1983 intervention in Grenada, the role of Russia in the Commonwealth of Independent States or, to a lesser extent, that of Nigeria in the Economic Community of West African States. Regional organizations may also suffer from the same problem of asymmetry that exists in the UN Security Council (consider the problem in Kosovo facing the Organization for Security and Co-operation in Europe). And finally, in some cases (East Asia, for example) there may be no meaningful regional organization to authorize a decision to use force.
That leaves the alternative, should the UN or regional route fail, of creating coalitions of like-minded states to legitimate decision-making on the preventive use of force. Since democracies should have a particular interest in upholding the norm of state responsibility, a coalition of democracies might provide such an alternative.3 Given that the governments involved are themselves legitimate by dint of having been elected, their decision to act in concert would carry more legitimacy than a decision of any one of them acting alone.
Moreover, if it proves impossible to convince one’s democratic peers that intervention is justified, that should in and of itself give any national leadership pause about proceeding. The case of Iraq comes immediately to mind, where just three other countries besides the United States initially proved willing to risk their soldiers in the war.
Finally, the knowledge that an alternative decision-making body exists may give the Security Council or a regional organization additional incentive to act effectively in the first place.
Despite the highly polarized debate that emerged in the wake of the Bush Administration’s promulgation of the preventive force doctrine, the underlying logic of the limited use of preventive force in appropriate contexts is compelling. For that reason, no doubt, it is becoming entrenched in practice, if not in the “black letter” of international law.
All policy tools available in international relations have costs as well as benefits, risks as well as rewards. Preventive force is no exception. The use of force under any circumstances should come only after very careful consideration of all the alternatives. In the case of preventive force, the arguments in favor of great caution are particularly strong. Preventive force is neither a magic bullet nor an anathema, but the changing international environment means that some threats simply cannot be addressed by waiting until they become “imminent” as traditionally understood. The stronger the institutional mechanisms and the broader the political support, the more likely that the use of preventive force will be perceived as legitimate and that any adverse consequences will be contained.
Understanding the role of preventive force in the emerging international security environment is only a first step, however. Establishing agreed standards for its use is a second, and embedding those standards in an institutional framework that can function effectively is a third. The Bush Administration has got the first step right, and the logic of its arguments builds toward the second. But it has gotten the third step wrong. Unilateralism is not the only alternative to the UN Security Council — regional organizations and a new coalition of democratic states offer ways to legitimize the use of force when the Council fails to meet its responsibility.
1 The Bush Administration refers to “preemption”, but it is more accurate to call it preventive force—although, as we will argue, the strategic distinction between the preemptive and preventive use of force is disappearing.
2 For an argument confined to weapons of mass destruction, see Lee Feinstein and Anne-Marie Slaughter, “A Duty to Prevent”, Foreign Affairs (January/February 2004).
3 Ivo Daalder and James Lindsay, “An Alliance of Democracies: Our Way or the High Way”, Financial Times, November 6, 2004.