The operation that killed Osama bin Laden this past year took discipline, interagency cooperation, the artful use of new technology, willingness to accept risk and steely nerves. I recommend we apply these same skills and fortitude to an equally challenging task: the effective management of U.S. government secrets.
So far the U.S. government has been trying but failing to manage this seemingly far simpler task. Senior policymakers seem unwilling or unable to give the subject the attention it deserves. Unfortunately, there will be serious consequences for such inattention. We are riding a tiger: an outdated classification system that threatens either to overwhelm us with data or to deliver vital secrets to our adversaries. Fortunately, fixing this shouldn’t be too hard.
The first and most difficult step is to figure out what’s wrong. For example, some experts have argued the key issue is excessive secrecy, which poses a roundabout security threat because it dumbs down policy debates, denies Americans knowledge and thus undermines accountability. In addition to dredging up surprising examples of flawed classification decisions of the past (such as the World War II-era assessment of the number of annual shark attacks on humans, a number not declassified until 1958), they point to the issue of sheer numbers: In 2010, government agencies reported 224,734 new U.S. government secrets, an increase of 22.6 percent over the prior year. (Numbers are from the Information Security Oversight Office of the National Archives and Records Administration, “2010 Report to the President”, pp. 8–9.)
Such new secrets, what we call original classification decisions, are just the seed corn. Based on these new secrets and all prior ones, 2010 saw 76.8 million derivative classification decisions, involving the incorporation of existing secrets into new documents, videos, speeches and other products. This total was almost double the number of such decisions just two years earlier. A tsunami of electronic secrets is growing exponentially, threatening to break over U.S. taxpayers’ heads. When it does, the size and cost of managing a rescue could prove very painful. At one intelligence agency alone, classified records are growing by approximately 1 petabyte (1 million gigabytes) every 18 months. According to the Information Security Oversight Office at the National Archives, it takes two full-time employees one full year to review just one gigabyte of data. Where is the U.S. government going to find two million full-time employees to review one petabyte, let alone the 18 petabytes or more generated by all our national security agencies?
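To make that arithmetic concrete, here is a back-of-the-envelope sketch in Python. It uses only the figures cited above (two reviewer-years per gigabyte, roughly 1 petabyte per 18 months at one agency, and an assumed 18 petabytes government-wide); these are this memo's rough estimates, not official projections.

```python
# Back-of-the-envelope estimate of the manual review burden, using only the
# figures cited in the text (illustrative, not official projections).

REVIEWER_YEARS_PER_GB = 2       # cited estimate: two full-time reviewers for one year per gigabyte
GB_PER_PETABYTE = 1_000_000     # 1 petabyte ~= 1 million gigabytes

def reviewer_years(petabytes: float) -> float:
    """Full-time reviewer-years needed to manually review the given volume."""
    return petabytes * GB_PER_PETABYTE * REVIEWER_YEARS_PER_GB

one_agency_increment_pb = 1.0   # one agency adds roughly 1 PB every 18 months
government_wide_pb = 18.0       # assumed rough total across national security agencies

print(f"One agency's 18-month increment: {reviewer_years(one_agency_increment_pb):,.0f} reviewer-years")
print(f"Government-wide estimate:        {reviewer_years(government_wide_pb):,.0f} reviewer-years")
```

The script prints two million reviewer-years for a single petabyte and 36 million for the government-wide estimate, which is the scale of the problem manual review faces.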
The costs associated with keeping this backlog are also mounting. According to the U.S. National Archives, just keeping secrets classified cost more than $10.17 billion in 2010, a figure that continues to skyrocket. The costs of declassification will be even greater.
The problem has led some experts to argue for more bureaucracy. The authors of one report see skewed incentives and incompetent or wrong-headed officials at the core of the problem.1 To fix things, they advocate introducing new processes and people to strengthen oversight of every one of those 76.8 million derivative classification decisions. Overclassifiers, once found, would be punished with fines.
Compelling as the research behind them may be, such proposed reforms will not work. In the first place, how do critics know that the number of secrets is too high? What’s the right number? The nation is arguably at war, has experienced an extraordinary threat to the homeland, and is in the midst of a technological revolution that is rendering our competitive edge increasingly difficult to maintain.
In the second place, intelligence agencies are sharing secrets with first responders, law enforcement agencies and allies, as we demanded they do after September 11, 2001. The more secrets that are shared among those with a need to know (a good thing), the greater the number of classified products; that growth is not necessarily a bad thing. Even if the U.S. government does overclassify, how do critics know that the classifiers are incompetent or untrustworthy? For a hint at what the real problem is, consider the pre-publication review process for text written by those holding security clearances. Even when that process is well run, most submissions come from people who honestly believe they have written unclassified text, yet reviewers object and submitters appeal because the whole system is terribly confusing. The problem is not just, or even mainly, the volume of classifiable information and the speed with which it is being generated and shared; it is the confusion and lack of control at the core of the entire process.
These procedural hassles are concerning because our national security demands greater agility at a time of lower government spending. Adding new watchdogs to police the classification system would be unnecessarily expensive and burdensome. It would also alienate the very decision-makers who need secrecy to secure U.S. interests in a dangerous world. Policymakers are likely to resent the suggestion that they deliberately overclassify. Secrecy is meant to protect and enable their work, not stymie them with distractions like paperwork and threats of penalties.
Besides, everyone inside government knows how hard it can be to discern what has been formally declassified and what remains under wraps. For years the budget for the State Department’s Bureau of Intelligence and Research went to Congress in an unclassified part of State’s budget submission, even though it was also included as a secret piece of the U.S. intelligence budget. Today, what is secret in one agency may not be in another.
The recent Thomas Drake case is a more contentious example. It was hard for many national security insiders to fathom that information on a new software capability for the National Security Agency would not be classified.2 Apparently, much of it wasn’t. At the same time, however, some unclassified information is still considered “sensitive.” Security professionals, who pursued Drake into court, seem now to be vilified in the press. But should they really be faulted? Security and law enforcement professionals have a huge problem identifying insider threats, given that the courts and Congress have done a poor job of streamlining our conflicting secrecy laws. In addition to espionage statutes, these laws cover special categories related to defense information, Restricted Data, “sensitive but unclassified” information, and Formerly Restricted Data (which, in a lovely misnomer, is actually still “restricted”). It is not surprising, given the legal tangle alone, that even professionals are confused about our secrecy rules.
The goal must be to make classification and declassification easier, simpler, less costly and more responsive to the needs of national security professionals and the citizens they work to protect. Americans appreciate the need for secrecy. As most counterintelligence professionals can tell you, it is the inability to control secrecy (and to employ it selectively) that is the serious vulnerability. The success of the bin Laden raid depended on extraordinary secrecy and, in its aftermath, extraordinary revelations about how that raid was conducted, who was actually killed, and how bin Laden was treated. Selective secrecy, enabled through Executive Orders authorizing rapid classification decisions and equally rapid discretionary release, was critical to completing the mission and damping down blowback in its aftermath. Yet controlled release is the least used of any instrument in the policymaker’s toolkit. We are much better at hiding than at revealing, and we thus disable ourselves in public diplomacy and information wars. The lesson should be clear: The choice between what to hold close and what to give away, and when, is the key to an agile and successful secrecy system. Americans want that kind of agility, but at less cost, and with their President in control.
What to Do
Fortunately, technology offers solutions for both the front and back ends of the classification system. As you know, the CIA has already found ways to embed metadata, which provides a kind of index to content, in its products. Readable by machines, metadata allows documents to be sorted or binned for declassification purposes. If all national security agencies did this for new documents, automatically “binning” documents in a secure “cloud” for prioritized review, the current system’s management problem would be much alleviated.
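As a rough illustration of what machine-readable metadata makes possible, the sketch below bins documents for prioritized review. The fields (classification level, scheduled declassification date, topic tags) and the bin rules are hypothetical stand-ins, not any agency's actual schema.

```python
# A minimal sketch of metadata-driven "binning" for declassification review.
# The metadata fields and bin rules are hypothetical illustrations.

from collections import defaultdict
from dataclasses import dataclass
from datetime import date

@dataclass
class ClassifiedDocument:
    doc_id: str
    classification: str         # e.g. "CONFIDENTIAL", "SECRET", "TOP SECRET"
    declassify_on: date         # scheduled declassification date embedded as metadata
    topics: tuple               # machine-readable subject tags

def bin_for_review(docs, today=date(2011, 12, 1)):
    """Sort documents into review bins by urgency and sensitivity."""
    bins = defaultdict(list)
    for doc in docs:
        if doc.declassify_on <= today:
            bins["overdue_automatic_review"].append(doc)
        elif doc.classification == "CONFIDENTIAL":
            bins["low_sensitivity_bulk_review"].append(doc)
        else:
            bins["full_line_by_line_review"].append(doc)
    return bins

docs = [
    ClassifiedDocument("A-001", "SECRET", date(2010, 6, 1), ("budget",)),
    ClassifiedDocument("A-002", "CONFIDENTIAL", date(2035, 1, 1), ("logistics",)),
]
for bin_name, contents in bin_for_review(docs).items():
    print(bin_name, [d.doc_id for d in contents])
```

Because the sorting key is embedded in the document itself, the binning can run continuously in the background rather than waiting for a reviewer to open each file.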
Yet this kind of incremental change is not enough. Technology offers an even bigger prize: massive document processing that both expedites release and improves security. New forms of artificial intelligence permit more than “dirty word” searches during declassification reviews. New software programs permit bulk review of documents on the basis of the context in which words appear, distinguishing, for example, between “spy George”, which might be classified, and “I spy George”, which presumably would not be. Machines enabled with this kind of software could be programmed according to national security guidelines—the same kind their human handlers use—and applied government-wide, subject to human review.
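A toy sketch makes the contrast plain, built around the “spy George” example above; the patterns are illustrative, not real classification guidance.

```python
# Contrast a bare "dirty word" search with a context-aware rule.
# The patterns are toy illustrations built on the "spy George" example.

import re

# A plain keyword search would flag every occurrence of "spy <Name>".
SENSITIVE_PATTERN = re.compile(r"\bspy\s+[A-Z]\w+")

# Context rules: patterns whose presence suggests an innocuous use of the keyword.
INNOCUOUS_CONTEXTS = [
    re.compile(r"\bI\s+spy\b", re.IGNORECASE),   # the children's game "I spy ..."
]

def needs_human_review(sentence: str) -> bool:
    """Flag a sentence only when the sensitive pattern appears outside known innocuous contexts."""
    if not SENSITIVE_PATTERN.search(sentence):
        return False
    return not any(ctx.search(sentence) for ctx in INNOCUOUS_CONTEXTS)

assert needs_human_review("Our spy George reported from the embassy.")
assert not needs_human_review("I spy George across the playground.")
```

Real guideline-driven software would of course use far richer models than two regular expressions, but the principle is the same: the rules a human reviewer applies can be encoded and run at machine speed, with humans adjudicating the edge cases.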
Such a government-wide system would learn as it goes, factoring in new or original classification decisions as well as the latest declassification decisions instantaneously, ensuring that national security users stay alert to every opportunity for openness in government. Every time such a decision-maker creates a new classified document, an auto-check would compare the document to the government-wide standard, alerting him to decisions that seem out of whack and presenting an opportunity for immediate resolution. For the first time, declassifiers and classifiers would be linked in service both to national policymakers and the American public. We might even imagine that this kind of agility would empower public diplomacy as authorities gain firmer control of material for discretionary release, thus strengthening the American case when erroneous charges are made, as they were after the bin Laden operation.
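A minimal sketch of how such an auto-check might work follows, assuming a hypothetical precedent store keyed by topic tags. Real guidelines would be far richer, but the feedback loop is the same: record every classification and declassification decision, then flag proposed markings that depart from prevailing practice.

```python
# A minimal sketch of the auto-check: compare a drafter's proposed marking against
# precedent drawn from prior classification and declassification decisions.
# The precedent store and topic tags are hypothetical illustrations.

from collections import Counter, defaultdict

class PrecedentStore:
    """Running record of how each topic has been marked government-wide."""
    def __init__(self):
        self._by_topic = defaultdict(Counter)

    def record(self, topic: str, level: str):
        self._by_topic[topic][level] += 1

    def prevailing_level(self, topic: str):
        counts = self._by_topic.get(topic)
        return counts.most_common(1)[0][0] if counts else None

def check_marking(store: PrecedentStore, topic: str, proposed: str):
    """Return an alert when a proposed marking departs from prevailing practice."""
    precedent = store.prevailing_level(topic)
    if precedent is None or precedent == proposed:
        return None
    return (f"Proposed {proposed} for '{topic}' differs from the prevailing "
            f"marking {precedent}; resolve or justify before release.")

store = PrecedentStore()
store.record("budget.intelligence.inr", "UNCLASSIFIED")
store.record("budget.intelligence.inr", "UNCLASSIFIED")
store.record("budget.intelligence.inr", "SECRET")
print(check_marking(store, "budget.intelligence.inr", "SECRET"))
```

The inconsistency described earlier, an intelligence budget line that was unclassified in one submission and secret in another, is exactly the kind of discrepancy such a check would surface at the moment of drafting rather than years later.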
Counterintelligence officials would also enjoy a windfall, now able to see what our adversaries are learning as they lap up our leaks and declassified secrets. In the new age of open source intelligence, such insights into what we have revealed to others through declassification processes could be illuminating, give us a competitive edge, and help us avoid releasing a seemingly innocuous fact that completes a picture for a dangerous adversary. For example, we used to release seemingly innocuous information about radioisotopes produced during nuclear tests until we realized that what we were releasing incrementally might collectively provide insights into nuclear warhead design. The Department of Energy subsequently classified the data in order to get its secrets back under control. The point is that U.S. officials didn’t then, and don’t now, have a good grasp of what is in the public domain. Consideration of context can help them gain it. Techies call solutions like this “trivial” in both sophistication and expense. Doesn’t it make sense, then, to invest in a testbed to see if it might work?
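The aggregation concern behind the radioisotope example can also be expressed in code. The sketch below uses a hypothetical ledger of already-released facts and made-up “sensitive combinations” to flag a proposed release that would complete a sensitive picture.

```python
# A minimal sketch of a "mosaic" check: flag a proposed release when, combined with
# facts already in the public domain, it would complete a sensitive combination.
# The ledger of released facts and the sensitive combinations are hypothetical.

SENSITIVE_COMBINATIONS = [
    frozenset({"isotope_ratios", "test_yield", "device_geometry"}),   # toy example
]

def completes_sensitive_picture(already_public: set, proposed_fact: str) -> bool:
    """True if releasing proposed_fact would complete any sensitive combination."""
    candidate = already_public | {proposed_fact}
    return any(proposed_fact in combo and combo <= candidate
               for combo in SENSITIVE_COMBINATIONS)

public_domain = {"isotope_ratios", "test_yield"}
print(completes_sensitive_picture(public_domain, "device_geometry"))   # True: hold it back
print(completes_sensitive_picture(public_domain, "weather_data"))      # False: safe to release
```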
If you, as Director of National Intelligence, were to make such a recommendation to the President, you wouldn’t be alone. The Public Interest Declassification Board, which also reports to the President, has been investigating steps such as these over the past two years. I recommend that you ask them to brief you, and then to brief the President. He is, as you know, committed to open government, improved security and lower government spending. Why not give him a vehicle to help him realize this vision?
1See Elizabeth Goitein and David M. Shapiro, “Reducing Overclassification Through Accountability”, Brennan Center for Justice at the New York University School of Law, October 5, 2011.
2In April 2010, a U.S. grand jury indicted Thomas Drake, a former employee of the National Security Agency, for unlawfully retaining U.S. defense information about a failing collection program known as Trailblazer, and revealing its existence to a newspaper reporter. Drake’s supporters claimed he was only exposing government waste and privacy violations to a respected journalist. U.S. prosecutors argued that Drake mishandled sensitive information and, in doing so, damaged national security. In the end, the government dropped the felony charges and Drake pleaded guilty to a single misdemeanor count of exceeding authorized use of a government computer.