
IS U.S. INTELLIGENCE ANALYSIS AS GOOD AS IT GETS?

OCTOBER 23, 2015

One of the more charming and frustrating aspects of American life is the endless pursuit of perfection. We tend to believe, as a people, that things can always be improved. For many aspects of life — science, medicine, transportation safety, etc. — this is a worthwhile approach. But for other aspects of life, this pursuit is really a chimera: an illusory, unattainable goal. Indeed, pursuing such improvements may be even more costly than not pursuing them at all.

One of those areas where we should perhaps step back from the endless quest for perfection is intelligence analysis. Note that we said “perhaps.” We believe the issue is open to debate and is a debate worth having.

Since 2001, the intelligence community has been pilloried repeatedly for its “failed analysis.” Critics point to the 9/11 attacks, the absence of weapons of mass destruction in Iraq, and the unpredicted Arab Spring. We do not deny that these were analytic failures, but we also believe that it is important to look at the larger record of intelligence analysis and ask some fundamental questions: Is this as good as intelligence analysis gets? And, if so, is it good enough?


Congress and the intelligence community have tried to make “analytical improvements.” They have spent money on new training programs for analysts. Intelligence agencies have restructured analytical cells and subjected their work to greater scrutiny. The intelligence community and Congress have set up advanced research facilities and reached out to the private sector for “best practices” in analytical models and tools to sort out the “big data” facing anyone who tries to understand the world better. Finally, both the Office of the Director of National Intelligence and the Central Intelligence Agency have created analytical standards as a guide to improving analysis.

The obvious question to ask after all of these efforts is: Has intelligence analysis improved? This is, admittedly, a very difficult question to answer. What are the standards by which one would make such an assessment?

Two prominent standards come to mind: accuracy and utility.

By accuracy, we mean: “Was the intelligence correct?” This seems to be a rather stark “yes or no” question, yet in reality the answer is not always clear. For much of the intelligence analysis written every day, we may not know for long periods of time — even years — whether it was correct. Very little in intelligence is a specific point prediction. A great deal of it is about longer-term decisions and trends.

By utility, we mean: “Did the policymakers find it useful?” Again, this is a more difficult question to answer than it seems. Policymakers have varying standards for judging the utility of intelligence analysis, depending on their own preferences, their tolerance for ambiguity, and their understanding of what intelligence is supposed to do and not do (especially the strict avoidance of making policy recommendations).

What is the Record?

As difficult as these questions may be to answer, efforts have been made to do so. In 2003, former Deputy Director of Central Intelligence Richard Kerr took a hard look at intelligence analysis over the history of the intelligence community in light of both the 9/11 attacks and the Iraq WMD estimate. In the final series of reports, it became clear that, on average, U.S. intelligence called events correctly about three out of four times. But Kerr also concluded that the best way to judge the effectiveness of intelligence analysis was not on a call-by-call basis but by its utility for the policymaker. He concluded that the intelligence community did a good job of keeping a very disparate set of policymakers reasonably well informed over 50 years, helping them thread their way through crises and avoid many others as well. Moreover, we are talking about the Cold War, which had a few peak crises but never erupted into war between the United States and the Soviet Union. Kerr asserts that intelligence helped achieve that record, and we agree. Kerr’s standards are quite useful and, as he points out, the record of U.S. intelligence is much better than many would have us believe.

As Good as It Gets

If our supposition is correct, and intelligence analysis is indeed about as good as it can be expected to be, there are a number of implications.

First and foremost, we will have to accept the fact that intelligence analysis is sometimes wrong and stop flaying U.S. intelligence for every “missed call.” Intelligence analysis is an intellectual process, prone to all sorts of challenges. Intelligence managers and analysts are fairly well versed in the pitfalls waiting for them, such as hindsight bias or mirror imaging, to name but two. But we should recognize that, as an intellectual process, it is prone to error. Indeed, if we want intelligence analysts to take intellectual leaps and to think interesting thoughts, then we have to give them the right to be wrong some of the time. That is the price for “interesting thoughts.” We can assure our readers that intelligence analysts take it very badly when they do get it wrong, but they also appreciate their own fallibility, perhaps more than their readers do.

Second, the intelligence community has to create a robust “lessons learned” capability. There may not always be remedies, but for serious analytic lapses there should be an equally serious intellectual (as opposed to political) inquiry into what went wrong. The problem may be flawed analysis, but it may also be that the answer was unknowable. If the intelligence community is going to be held to more reasonable analytic standards, it should show its good faith by implementing a more robust lessons learned capability.

Third, Congress will have to get on board with this concept and forgo further pointless attempts to legislate good intelligence analysis, as it did in the Intelligence Reform and Terrorism Prevention Act (IRTPA) of 2004. Analytical standards written into legislation may be satisfying, but they will have no effect on the analysts. This legislation also raises the question: Are analysts to be punished for failing to follow these standards? After all, they are written into law. And, if so, who makes that decision?

Fourth, assuming Congress agrees to be less adamant about the need for complete analytic accuracy, it should share in the responsibility of educating the public about the realities of intelligence: namely, that we live in a complex and dangerous world in which not all threats can be foreseen, nor all bad things prevented.

All of this would require a profound political shift in how we manage and oversee intelligence. But there would also be some equally large benefits.

Most importantly, accepting more reasonable standards for analysis depoliticizes intelligence by ending its status as the political punching bag of convenience whenever something bad or unexpected happens.

Second, it would restore confidence to intelligence analysts, too many of whom fear being pilloried should they make a mistake. Again, we simply cannot ask analysts to take risks and think outside the box if we also expect them to be correct all of the time.

Third, we would undoubtedly save a good deal of time and money on the many studies, machines, failed analytic tools, and intellectual “flavors of the month” that we encounter in the endless pursuit of analytic perfection. To use but one example, crowdsourcing may produce interesting consensus results, but it does not produce what policymakers demand: expertise and knowledge delivered on time and in a form they can easily make sense of and use.

The Final Analysis

We cannot stress enough the importance of recognizing and understanding that intelligence analysis is an intellectual activity, not a mechanical one in which the proper formula or recipe will produce the preferred outcome every time. Although we appreciate that some analytic tools and information technology solutions will assist analysts, these tools cannot change the core substance of intelligence analysis: the ability to read, think, and write critically. And that, once again, is fallible.

One of the hallmarks of intelligence over the last decade has been “reform fatigue.” No one in U.S. intelligence or Congress really wants to look at this again — at least for a while. But we believe that a serious look at the standards for intelligence analysis, accompanied by a mature political discussion, might do more to improve intelligence analysis than any of the false starts, endless studies, and failed tools that we have seen to date.

But, in the final analysis, the naysayers need to understand that the intelligence community engages in analysis — not clairvoyance. Our intelligence analysis may be as good as it gets. We can never know this definitively but we believe it is a discussion worth having.

Mark Lowenthal was the Assistant Director of Central Intelligence for Analysis & Production and staff director of the House Intelligence Committee. He is also President of the Intelligence & Security Academy and an adjunct professor in the Global Security Studies and Intelligence programs at Johns Hopkins University’s Krieger School of Arts and Sciences, Advanced Academic Programs, in Washington, DC.

Ronald Marks served as a CIA officer and Intelligence Counsel to Senate Majority Leaders Bob Dole and Trent Lott. He is a member of the Board of Directors at the George Washington University’s Center for Cyber & Homeland Security.
