22 June 2016

Are the US Intelligence Community’s Analytic Capabilities Any Better Than They Were Before 9/11?

Peter Mattis
June 18, 2016

Why Tradecraft Will Not Save Intelligence Analysis

Since the failure to disrupt the September 11, 2001 terrorist attacks and the wild overestimate of Iraq’s weapons of mass destruction (WMD) programs, the U.S. intelligence community has been on a quest to remake analytic doctrine. This effort has focused on analytic tradecraft: the methods and techniques by which intelligence analysis is produced. Fifteen years on, improving analytic tradecraft has become an industry, feeding off intelligence services from Washington and Ottawa to Canberra and Bucharest as well as corporations everywhere. However, focusing on analytic tradecraft distracts from the uncomfortable truth that intelligence is not about its producers but rather its users: the ones who rely upon it to make decisions. Kicking the tires of the analytic enterprise is a good thing, but, just as with a car, new tires will not help if the problem is the engine. Tires may bound a car’s performance in some basic ways, but they are hardly the most important determinant of how well it performs.

Former intelligence officials and scholars of intelligence have long criticized the fixation on organizational reforms whenever intelligence fails, because organizational changes do little to address underlying problems that can derail the intelligence process. Improving analytic tradecraft is much the same. Like structural reforms, analytic tradecraft is internal to an intelligence system. The problem and the solution are manageable without having to go beyond the boundaries of the organization or adopt a different government-wide approach to decision-making. Yet the fact that such a fix can be made with minimal fuss does not make it the right one.

The best analytic tradecraft, even if it neutralizes biases, systematizes knowledge, and delivers a precise analytic product to decision-makers, still cannot save intelligence from decision-makers. The effort that many all-source intelligence outfits have put into improving analytic tradecraft reflects some fundamental misunderstandings of the intelligence process and of how decision-makers use intelligence.

First, all-source analysis is not the pinnacle of the intelligence process. Reliable data is. As Sherman Kent wrote, “estimating is what you do when you do not know.” Put another way, analysis is the tool of last resort when efforts to collect the necessary information for decision-making have failed. Regardless of whether analysis sifts signals from noise or connects the dots, analysis attempts to provide by inference what cannot be known directly.

Judgment does not substitute for data. Analysts, by definition, serve in junior positions; they lack responsibility for action. In most countries, analysis is delivered without attribution to its author. Anonymous judgments from junior officials, produced by arcane processes, probably will not reassure a seasoned policymaker who deals primarily in personal relationships. Policymakers often rise to their positions because they have demonstrated sound judgment and built long-standing relationships with their foreign counterparts. They are incontrovertibly more expert than a junior analyst in their mid-20s. Even if intelligence judgments are overrated, analysts can still help decision-makers find, organize, and appreciate data.

More importantly, intelligence judgments are too ephemeral for potentially costly policy decisions to be made on their basis alone. Policymakers at the top of most intelligence systems have little responsibility and no accountability for the intelligence that reaches them. They have little incentive, in their overworked days, to understand what they neither control nor bear responsibility for. So what are they supposed to think when an anonymous report from an agency with no responsibility for policy arrives on their desk, seemingly out of nowhere, suggesting that Iraq is preparing to invade Kuwait or that Iran’s government will collapse?

Former Secretary of State Colin Powell summed up these issues in testimony on intelligence reform before the Senate Governmental Affairs Committee in 2004:

An old rule that I’ve used with my intelligence officers over the years, whether in the military, or now, in the State Department, goes like this: Tell me what you know. Tell me what you don’t know. And then, based on what you really know and what you really don’t know, tell me what you think is most likely to happen. And there’s an extension of that rule with my intelligence officers: I will hold you accountable for what you tell me is a fact; and I will hold you accountable for what you tell me is not going to happen because you have the facts on that, or you don’t know what’s going to happen, or you know what your body of ignorance is and you told me what that is. Now, when you tell me what’s most likely to happen, then I, as the policy maker, have to make a judgment as to whether I act on that, and I won’t hold you accountable for it because that is a judgment; and judgments of this kind are made by policy makers, not by intelligence experts.

Second, methodology is not a substitute for expertise. Expertise is not so much the ability to predict what will happen as the ability to understand and evaluate what is happening, as well as the mechanics through which events and decisions will unfold. Generic analytic methods, such as analysis of competing hypotheses and key assumptions checks, do not improve an analyst’s ability to interpret intelligence reporting or appreciate its underlying epistemological value. The former requires a rich knowledge of how another political system works and what reporting out of that system signifies. The latter requires more knowledge of how intelligence collection works, and of the way in which reporting reaches an analyst’s desk, than an analyst typically picks up informally.

Concerns about expertise in the U.S. intelligence community go back decades. The National Security Council staff complained about the state of the community with respect to China and the Soviet Union in the 1970s. Since the 1980s, roughly 50 percent of the CIA’s analytic workforce has had fewer than five years of experience. Small changes, like the creation of the Senior Analytic Service, allow long-serving analysts to maintain their positions; however, they do not address what many consider a systemic deficiency. Expertise in each regional and functional area is a bit different. An analyst can be sent to Hong Kong or the National University of Singapore for a graduate degree and write a relevant thesis on China, but nonproliferation and counterintelligence might require a more tailored, government-centric approach. Expertise can be built so long as analysts can be taken off the line and given the opportunity for new experiences and room to think.

Third, tradecraft does not make data or analysis any timelier or more relevant for the people who will use it. The rhythms and tempo of policymaking are different from those of intelligence. Decision-making may have a bureaucratic element, like modern intelligence, but, at the senior-most levels, decision-making is personalized. This bureaucratic-personal mismatch between intelligence services and the decision-makers they serve is one of the fundamental challenges of conducting intelligence work successfully. As the Chinese military textbook The Science of Military Intelligence pointedly states, “If intelligence is not timely or relevant, then it is not intelligence.”

Intelligence historian Christopher Andrew suggested in his landmark study of U.S. presidents’ use of intelligence, For the President’s Eyes Only, that only three modern presidents had any flair for intelligence: Dwight D. Eisenhower, John F. Kennedy, and George H.W. Bush. Other scholars have observed in their studies of the National Security Council staff system that the first and the last ran the most disciplined, measured policy processes. The overlap between procedural discipline and effective use of intelligence is no coincidence: consistent and considered policymaking gives intelligence leaders opportunities to support the process. Kennedy, for his part, ran his administration with an admirable thirst for information, pressing the intelligence community for ever more substance on the issues of the day. Either disciplined management of the policy process or an all-consuming maw for information makes intelligence a useful adjunct to decision-making.

If the tradecraft of intelligence analysis really had a profound effect on credibility and analysts’ ability to be “in the room,” then what has happened in the Obama administration? Never has more effort been expended on analytic tradecraft, and never has that tradecraft met such a high standard by the technical measures now used. Yet, on a number of major issues, this administration appears to have ignored intelligence analysis, only later to blame the intelligence community for not providing adequate support. The administration has come under fire repeatedly from former cabinet officials, officials serving in the policy departments, and informed observers for a sloppy NSC policy process that lurches from crisis to crisis.

For all of these reasons, analytic tradecraft may be necessary, but it is not sufficient for intelligence analysis to maintain its relevance in a rapidly changing world. Once a few basic standards of analysis are observed, the value of emphasizing analytic tradecraft and devoting time and resources to its improvement falls dramatically. Bigger issues, such as the management of intelligence and discipline in the policy process, do more to help intelligence officials deliver relevant support to decision-makers in a timely manner. Credibility also comes from expertise, and method is only a small part of what makes someone an expert.

Peter Mattis is a Fellow in the China Program at The Jamestown Foundation and author of Analyzing the Chinese Military (2015). He is currently completing two book manuscripts on Chinese intelligence operations.
