It is such a mixed up, muddled up world we live in these days that verities seem harder than ever to come by. Even death and taxes may not stay inevitable for long, what with clever bioscientists trying to defy the former and the formidable Grover Norquist determined against the latter. So it’s comforting to know that something of interest remains unarguably inevitable, and that something is error–particularly the errors of organizations. The old wit who said that to err is human, but to really foul up takes a computer, might have added that to get lots of innocent people killed inadvertently generally takes a government or two.
Ultimately, the errors of organizations do come down to the errors of individuals, because abstract nouns don’t literally make decisions–but they come down in no simple way. Context is crucial. Those guilty of poor judgment in one organizational setting might have been wiser in another. Thus we recall Irving Janis’ warning against groupthink and, more relevant than ever, Solomon Asch’s famous 1950s experiments on conformity. Interested in the psychological power of social context, Asch asked subjects to say which of three vertical lines matched a control line in length. Unbeknownst to the subjects, Asch planted shills to give wrong answers before the subjects gave their own. Asch wanted to know how many subjects would be influenced to answer incorrectly. Many were: at least one in four half the time, as many as three in four some of the time.
Asch’s data suggested two interpretive possibilities: either the subjects knew they were giving wrong answers but did so anyway to avoid social tension, or they experienced genuine doubt and made genuine errors on account of social context. Asch suspected but could never prove the latter interpretation. Now we know: Gregory Berns of Emory University recently replicated the Asch experiments using functional MRI brain-monitoring to learn which parts of the brain processed respondents’ answers. His research establishes that error in individual judgment can be and, we must surmise, often is induced by social context.
Of what practical importance is this knowledge? The hypothetical implications for understanding leadership dynamics and even the theory of democracy are significant. For the moment, however, let’s limit ourselves to a not entirely hypothetical example from contemporary U.S. foreign policy.
Some people in the Department of Defense, the intelligence community and the White House in 2002 were about as sure as could be that the former Iraqi regime was hiding weapons of mass destruction and probably manufacturing still more. As Iraq became the focus of U.S. policy, those people increasingly engaged with like-minded others in government, and they were further bolstered in their views by a powerful inferential logic that led many outside observers to suspect the same thing. Those outsiders included intelligence professionals from allied governments, former UN weapons inspectors and Middle East area experts, all of them aware that earlier estimates of Iraqi weapons programs and capabilities had consistently erred on the downside of reality.
Context establishes the parameters for what cognitive psychologists call the “evoked set”, which is harmless jargon for saying that people tend to see what they expect to see. The evoked set helps to explain how ambiguous information gets read in certain ways and not in others. So as time passed and information flows about Iraq were analyzed in the bowels of the U.S. government, the conviction in most quarters concerning Iraqi WMD deepened. If lying went on, it was the sort whereby people deceive themselves. There is no hard evidence of shills deliberately inducing error in others, just professionals working from too few data points and multiplying error through sheer inadvertence.
So it happened that a batch of problematic intelligence came to the Secretary of State in advance of a major address he was to make before the UN Security Council. Now, this Secretary was an experienced man and hardly a credulous one, so he spent three days and most of three nights, much of it at CIA headquarters, examining this intelligence. In the process he rejected much of it, even as others elsewhere in government sought to re-insert rejected claims through various bureaucratic side doors as the review process proceeded. On February 5, 2003, Secretary Powell made the case before the Security Council, swaying many in the United States and abroad to the Bush Administration’s view. And while the essence of his argument was and remains true–a dangerous regime with a bad track record had a will and a way to manufacture WMD–the error of claiming that Iraq was hiding significant stocks of WMD overwhelmed that truth in the court of world opinion.
What happened, in short, is that small-group dynamics helped to generate a world-class intelligence error, and that error in turn, fairly or not, generated a major liability for the Bush Administration. For even if the war was justified for reasons other than the threat of Iraqi WMD, it became much more difficult to make that case–increasingly so as that error became so liberally compounded by other errors in the aftermath of the war.
Beyond the intelligence error that presaged the war, however, lies a more fundamental question: Did the many post-combat errors that followed “have to happen”? This is a much harder question than both critics and defenders of the Bush Administration seem to think, the degree of difficulty varying with the issue:
- Did anyone warn those who decided to dissolve the Iraqi army and deeply de-Ba’athify the Iraqi government of the risks they were taking?
- Did anyone warn that the number of U.S. and allied troops being contemplated for the post-combat occupation was too small and, just as important, that the mix of personnel in place was inappropriate for stabilization missions?
- Why was planning for the post-combat phase so focused on problems that did not arise, and so oblivious to those that did–particularly since our experiences in Somalia, the Balkans and elsewhere should have made us wiser?
- Did anyone foresee the insurgency? And once it arose, did the U.S. military put the right leaders in place to deal with it?
- Did anyone warn against the proconsul model of occupation, which simultaneously united disparate Iraqi opposition groups on the basis of their marginalization and undermined the credentials of Iraqi interim government officials?
- Did anyone foresee the dilemmas of managing detention and interrogation functions in post-Ba’athi Iraq?
- Did anyone understand that basing reconstruction efforts on large projects using mainly U.S. contractors was not the best way to stabilize the Iraqi economy or rebuild Iraqi government ministries?
- On a different level, did anyone in the U.S. government give thought to the larger regional implications of Shi’a political dominance in Baghdad?
The answer to every one of these questions, and others left unasked, is “Yes, but.” “Yes”, invariably, some did understand and warn, whether from inside the government or outside of it–far more volubly than anyone in the U.S. government doubted prewar WMD assessments. It is easy enough to find such warnings in open sources; they are neither scarce nor obscure. Indeed, those warnings were so much in the intellectual mix as events were unfolding that many observers–including some at senior policymaking levels–still cannot quite understand how so many pivotal errors could have been made so quickly. (This is puzzling enough to some that versions of conspiracy theories have arisen to explain it, many of them–like those casting the long-deceased Leo Strauss as arch-villain–far more surreal than the mystery they purport to explain.)
“But”–but those making the decisions did not know of or did not agree with the warnings given and the cases made. Why? Some disagreement was assured because many decisions were genuinely difficult, such that heeding certain warnings might well have given rise to other problems just as serious or more so. But warnings were ignored or dismissed also because of the organizational dynamics of bureaucracy. Two of these are most critical.
First, the filtering effects of groupthink are often intensified when a small coterie of senior decision-makers labors under acute time pressure. It is simply not possible for several dozen people to make key decisions on pressing issues; to have too open a collective mind in fast-moving circumstances–like getting a grip on Baghdad after the fall of the Ba’ath–is a formula for paralysis. What happens is that a rhythm of crisis decision-making develops from necessity, and that rhythm itself becomes something participants strive to protect and maintain. Those inside the decision loop are liable to fear that any significant disruption in this rhythm will invite chaos and calamity. They may be right, too–or they may just be trying to keep bitter bureaucratic rivals out of the room.
This filtering effect is not by definition irrational, but it can lead to irrational outcomes. That is because the insularity produced by closed decision-making loops invites cognitive dissonance, in this case the belief that the decisions must be right if the process runs smoothly (just as, for example, people tend to read more laudatory advertising about expensive products after they purchase them than before). Such dangers are multiplied when the principal decision-maker is excessively resolved not to admit error and to “stay the course.” Excessive rigidity in open-ended, evolving situations drives the flow of decisions into successively narrower confines, which is fine if basic judgments are sound, but not fine if they aren’t. And, of course, the soundness of decisions depends not on the consistency of a process but on the perspicacity of the decision-makers themselves (and, admittedly, sometimes on a little luck in being right for the wrong reasons).
Years will pass before we know exactly who decided what in the first year of the Iraq occupation. We have but a fragmentary sense of the mix and flow of judgments among principals in Washington and Baghdad. We do not even know the precise flow of decision-making in the high drama that committed the U.S. Marines in Fallujah and then, in late April 2004, pulled them back from the edge of victory.
Whatever that mix and flow, the strong consensus is that the errors of the first year of occupation set the United States back so far that we have been trying to recover ever since. Most also grant that the shift of responsibility for Iraq from the Defense Department to the State Department after the restitution of formal Iraqi sovereignty on June 28, 2004 made a useful difference. The exit of L. Paul Bremer and the Coalition Provisional Authority (CPA) and the arrival of Ambassador John Negroponte and Iraqi Interim Prime Minister Iyad Allawi to positions of decision authority–along with General George Casey’s replacement of Lt. General Ricardo Sanchez–bore witness that, indeed, the talents of key decision-makers do matter.
But so does bureaucratic structure, and here we come to the second organizational dynamic mentioned above. The President gave the Defense Department, specifically the Office of the Secretary of Defense (OSD), operational authority for running Iraq after the cessation of major combat. Many have roundly criticized senior OSD officials for botching the occupation, and some have criticized the criticism. This is not the place to assess this debate, not least because an overarching structural point rarely mentioned, let alone debated, takes pride of place: The Department of Defense has little organizational capacity and even less professional enthusiasm for governing foreign countries. Its forte is breaking things and killing bad guys, not cleaning up the consequent mess.
Of course, the U.S. military has had major responsibility for running countries in the past, most notably in the Philippines at the beginning of the 20th century and in Germany and Japan near the middle of that century. Through that experience a standard approach to post-conflict administration did develop. An inter-departmental country team headed by an ambassador was set up, along with a parallel structure running through the local military commander and the normal Pentagon chain of command. This is more or less what the United States did after the fall of the Taliban regime in Afghanistan.
In Iraq, however, this two-line system was never effectively established, partly because the Coalition Provisional Authority lacked local government structures with which to work, and partly because the commander in charge, General Tommy Franks, never fully accepted planning for Phase IV as an integral part of his mission. For practical purposes, then, the joint CPA/Defense Department responsibility for running Iraq was a first-time project for which DoD, in particular, was under-motivated and decades out of practice. And it is an iron rule of bureaucracy that a large organization trying for the first time to achieve a purpose for which it was not designed will screw up. So it did.
If the Department of Defense was not the ideal instrument with which to run post-Ba’athi Iraq, then what was? The State Department, perhaps? The State Department commands total resources smaller than those of any single regional combatant command of the U.S. military. It plainly lacked the physical capacity to run Iraq, and the area expertise as well.1 So what part of the Executive Branch should have been in charge?
Merely to ask the question is to identify the main problem: It’s no one’s job in the U.S. government to rule foreign countries after their regimes have fallen, and what in government is no one’s primary responsibility does not get done well, if at all. As things stand, we are guaranteed to have to haul our tattered behinds up a steep learning curve every time we set out to do such a thing. We are guaranteed to get “the wrong stuff” even if wise men and women make up the inner loop of decision-making from the start, and particularly if they don’t.
Error comes in different shapes and sizes, many of which are annoying but not consequential. I’ve lost count of how many times I’ve read that President Carter pulled the SALT II Treaty from consideration because the Soviet Union invaded Afghanistan in late December 1979. This is not so. President Carter used the invasion as a pretext to withdraw a treaty that lacked the votes for Senate ratification. The Jordanian Civil War that broke out in September 1970 is almost universally referred to as “Black September.” This, too, is wrong. Why would Palestinian fedayeen leaders call September 1970 “black” when they are the ones who initiated the fighting and held the upper hand all that month? No, it was the September 1971 Jordanian army massacre of the fedayeen in the forest of Ajlun that the Palestinians painted “black”, and after which they named the “Black September” terrorist organization.
These and so many other errors in the historical record irritate, but it would be as futile to try to correct them all as it would be to expect the extinction of cockroaches. Besides, blood will never be shed over such errors, so better to focus energy on detecting errors-in-the-making that could do real harm. And that, alas, brings us back to the adventures of the Bush Administration in the Middle East.
Observers will have noticed a major change of rhetoric from the first to the second Bush term. After September 11, 2001, no Administration foreign policy principal spoke publicly without featuring the global war on terror–the GWOT, a perfectly atrocious acronym sounding like a cross between “squat” and “what?!” In President Bush’s second Inaugural, however, the word “terrorism” appeared not even once. Instead, the President emphasized an American mission to propagate democracy and freedom everywhere. This amounted to the globalization of the Monroe Doctrine, which as policy pronouncements go is a big deal. In purely tonal quality, it was like going from Tony Perkins to George M. Cohan without stopping at Jimmy Stewart in between.
Before the second Inaugural, the democracy theme had already figured prominently in the Bush Administration’s Middle East policy, notably with the President’s November 6, 2003 speech at the National Endowment for Democracy. The policy itself, however desirable its goals, may or may not be a bridge too far–it’s too soon to know. But one recurring theme in the Administration’s argument does suggest, if not outright error, then a reading of history so contestable as to invite a feeling of unease about the entire project, and this from a crowd whose track record over Iraq already inclines one to a certain wariness.
In the November 2003 NED speech the President said, “Sixty years of Western nations excusing and accommodating the lack of freedom in the Middle East did nothing to make us safe. . . . Stability cannot be purchased at the expense of liberty.” This was no one-time extrusion; the President has repeated close variations on this theme at least six times since. He has made a similar argument in a European context as well, specifically condemning Yalta when he visited Latvia for celebrations marking the 60th anniversary of the end of World War II. Secretary of State Condoleezza Rice has made this same theme a mainstay of her own pronouncements on the Middle East. Here is how she phrased it in Cairo on June 20: “For 60 years, my country, the United States, pursued stability at the expense of democracy in this region here in the Middle East–and we achieved neither.” And as a careful reader will recall from foregoing pages in this very magazine, she said it to me on July 25.
Surprisingly–to me, at least–no journalist had ever asked the President or Secretary Rice to detail what this statement actually meant, so I did it myself. Of course, Secretary Rice was right to answer that one cannot and should not second-guess those making decisions many years in the past. We cannot assume, therefore, that she or the President thinks it was a mistake for Washington and London to have restored the Shah of Iran to the Peacock Throne in 1953. We don’t know if either would deny that having had Iran as a pro-Western bastion during a quarter century at the height of the Cold War was a non-trivial stabilizing factor.
There is plenty more we don’t know, too. Should the United States have been pressuring the Egyptian government to democratize in the mid-1970s, even if that would have made impossible the Camp David Accords and the Israeli-Egyptian Peace Treaty–obviously stabilizing developments that worked to our advantage against the Soviet Union? Should we have pressed the Saudi monarchy to expand political participation in the early 1980s, even at the risk of empowering a society far less moderate than the royal regime, and thus probably forfeiting oil pricing policies that underwrote 20 years of sustained international economic growth in the latter stages of the Cold War?
But if the statement does not mean that American statesmen erred in decisions such as these, what does it mean? Merely, as Secretary Rice said, that we allowed the impression to grow that America supported autocrats and enabled them uncomplainingly to stifle legitimate dissent? Who can argue that this impression does not exist? Who does not regret it? But that really isn’t the point.
The point is not that regrettable things were done, for occasional acts of moral impurity in politics are often necessary to avoid or forestall even greater impurities. The real issue is whether those regrettable acts were truly lesser evils among unpleasant alternatives. It is, after all, one thing to argue that the U.S. propitiation of Middle Eastern autocrats after 1991 was unwise (because it was unnecessary), quite another to claim that such propitiation brought no benefits during the Cold War. This is an astonishing claim given the huge strategic and moral stakes then at risk, stakes over which successive American administrations were compelled to make difficult prudential judgments distinguishing greater from lesser evils, binding limited resources to expansive objectives, and modulating their efforts for the long haul. Indeed, their capacity to do all this–essentially to pursue idealist means by realist methods in partnership with like-minded others–is what won the Cold War for the West.
A similar capacity, no doubt, is what will win the war on terror, which is why it is unsettling to hear American leaders in the midst of this struggle seem to misread what won the prior one. Moral clarity is an insufficient basis upon which to win a war or achieve a strategic victory, and to deny this is a confusion of the first order. Mistakes based on such a confusion, compounded in the trenches of crisis decision-making by the distortions of groupthink and bureaucratic dysfunction, could make the errors surrounding the Iraq war seem modest by comparison.
It will be a good day, therefore, when we hear no more about having sacrificed freedom for stability in the Middle East for 60 years, for then we will know that America’s principal statesmen recognize the difficult trade-offs before them for what they are. The best thing that may be said about mistakes is that we sometimes learn from them. But some errors we are just better off doing without.
1 In early 2004 the Office of Policy Planning proposed creating a “green cell” of Iraq experts from outside the government to help the State Department understand what was happening in the country. The Secretary agreed; the “green cell” met usefully several times during 2004 and 2005.