Book Summary: Psychology of Intelligence Analysis

Book: Psychology of Intelligence Analysis

Author: Richards J. Heuer, Jr.

Key takeaways: It’s an excellent book, and even more noteworthy given its origins: the CIA. It’s a republication (with some editing) of various articles written internally at the CIA Directorate of Intelligence between 1978 and 1986. It’s a book aimed at intelligence analysts, who always work with far murkier information than we financial analysts do, even setting aside the fact that intelligence analysts are often fed data manufactured specifically to deceive them (to which some of us in the financial markets might say: oh yes, I can relate to that; but you know you’re just bragging). We have it easier, no doubt. My summary is below. As an aside, you can read this book for free at https://www.cia.gov/library/center-for-the-study-of-intelligence/csi-publications/books-and-monographs/psychology-of-intelligence-analysis/PsychofIntelNew.pdf .

 

HEUER’S CENTRAL IDEA

The mind is poorly wired to deal with uncertainty (both inherent, and purposefully induced by other participants). Furthermore, even increased awareness of these shortcomings (biases, prejudices, etc.) does little by itself to help one deal with the uncertainty (just as knowing about the literal “blind spot” in our vision, which our brain fills in for us, does nothing to dilute our conviction: “I know it was dark, but I am certain it was him; I saw it with my own eyes”). It’s hard to figure out where our brains go “wrong” because it’s hard to be both the subject and the object, and our mental models can’t cope with the complexity of the world (the concept of bounded rationality). That said, there are some tools that can help us improve our analysis of uncertain, complex issues, and that’s what this book is about.

 

WHY CAN’T WE SEE WHAT’S THERE TO BE SEEN

Heuer says that for an analyst to understand, say, China, one does not merely need more information about China, but one also needs to examine the mental models, mind-sets, biases and analytical assumptions—the lenses through which we learn about China. Mental models and mind-sets are important because they give us “shortcuts” to analyze a complex problem (and we know our brains like taking shortcuts), but we need to remind ourselves as much as possible that we are indeed taking shortcuts.

It takes more information, and more unambiguous information, to recognize an unexpected phenomenon vs. an expected one.

 

TOOLS FOR THINKING

Despite all the bad news above, there are some tools that can help us improve our analysis of uncertain complex issues, and they come down to three things:

  • Structure information so that assumptions are clearly delineated
  • Expose competing hypotheses (assume an outcome has happened, walk backward to figure out how it could have, and explore those paths and probabilities)
  • Specify the degree and source of uncertainty involved in the conclusion (a rough sketch combining all three follows this list)
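Heuer’s own technique for doing all three at once is the Analysis of Competing Hypotheses (ACH): lay the hypotheses out side by side, score every piece of evidence against each of them, and pay the most attention to evidence that is inconsistent with a hypothesis. Here is a rough Python sketch of the idea; the hypotheses, evidence items, reliability weights and scores are all invented for illustration, not taken from the book.

```python
# Rough ACH-style matrix: rows are evidence items, columns are hypotheses,
# and each cell scores how consistent a piece of evidence is with a hypothesis
# (+1 consistent, 0 neutral, -1 inconsistent). Per Heuer, the telling signal
# is how much evidence argues AGAINST a hypothesis, not for it.

hypotheses = ["H1: earnings miss", "H2: in-line quarter", "H3: earnings beat"]

# (evidence description, source reliability 0-1, scores per hypothesis) -- all made up
evidence = [
    ("Channel checks show weak sell-through", 0.7, [+1,  0, -1]),
    ("Management reiterated guidance",        0.9, [-1, +1, +1]),
    ("Competitor pre-announced a strong Q",   0.8, [-1,  0, +1]),
]

for j, h in enumerate(hypotheses):
    # Weight each score by how much we trust the source, then tally the
    # evidence that is inconsistent with this hypothesis.
    inconsistent = sum(rel for _, rel, scores in evidence if scores[j] < 0)
    support = sum(rel * scores[j] for _, rel, scores in evidence)
    print(f"{h}: weighted support {support:+.1f}, "
          f"inconsistent evidence weight {inconsistent:.1f}")
```

Favor the hypothesis with the least inconsistent evidence, and note which one or two evidence items drive the ranking: those are the assumptions to flag as the main sources of uncertainty in the conclusion.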

Analogies: Policymakers (read: PMs) often perceive problems in terms of analogies with the past, but they can use history badly. When presenting information to policymakers (again, read: PMs), it is the analysts’ job to “analyze rather than analogize” and to gather enough history to present the right historical analogue.

Competing Hypotheses: In reviewing the literature on intelligence successes, Frank Stech found many examples of successes but only three where sufficient methodological details were provided: the WW-II efforts to analyze German propaganda, predict German submarine movements, and estimate the capabilities of the Luftwaffe. According to Stech, in all of these very successful efforts, the analysts employed procedures that formulated and tested multiple competing hypotheses. Personal note: pre-mortem exercise.

Look to Reject Hypotheses: The scientific method is based on the principle of rejecting hypotheses, while tentatively accepting only hypotheses that cannot be refuted. One also has to be careful to make a distinction between disproved and unproved hypotheses.

Medical Diagnosis, NOT Mosaic Theory: Heuer says that the mosaic analogy implies that accurate estimates depend on having all the pieces, which implies accurate and complete individual pieces of information (I don’t know if I would necessarily assert this, but Heuer does). He then says that intelligence analysts do not work this way: they commonly find pieces that could fit many different mosaics, and due to availability bias/luck/other things, an imperfect image appears first, and then the analysts look to find the pieces that would fit this picture.

He further says that the more accurate analogy is medical diagnosis, where the doctor looks at the various indicators and, based on his or her knowledge, comes up with multiple plausible hypotheses of what could be wrong, and then orders tests to disprove some of these hypotheses, until (s)he can identify the appropriate cause.
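The diagnostic analogy maps naturally onto Bayesian updating: hold several plausible explanations at once and let each new “test result” re-weight them, with a result that is nearly impossible under a hypothesis effectively ruling it out. The Bayesian framing is my gloss on the analogy, and the priors and likelihoods below are invented numbers, but the mechanics look roughly like this:

```python
# Bayesian sketch of the "medical diagnosis" analogy: several competing
# explanations; each new observation re-weights them; observations that are
# nearly impossible under a hypothesis effectively disprove it.

priors = {"flu": 0.5, "strep": 0.3, "mono": 0.2}   # initial differential (made up)

# P(observation | hypothesis) for two "tests" -- illustrative numbers only
likelihoods = {
    "rapid strep test negative": {"flu": 0.95, "strep": 0.10, "mono": 0.90},
    "swollen spleen on exam":    {"flu": 0.02, "strep": 0.02, "mono": 0.60},
}

posterior = dict(priors)
for observation, likelihood in likelihoods.items():
    # Bayes' rule: multiply prior by likelihood, then renormalize.
    unnormalized = {h: posterior[h] * likelihood[h] for h in posterior}
    total = sum(unnormalized.values())
    posterior = {h: p / total for h, p in unnormalized.items()}
    print(observation, {h: round(p, 2) for h, p in posterior.items()})

# Hypotheses whose probability collapses toward zero are the "disproved" ones;
# whatever survives the tests is accepted only tentatively.
```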

Talk Out Loud: Written and spoken language activate different neurons, and it sometimes helps to talk something out loud (even to yourself in a closed room) to clear the cobwebs and simplify the complexity.

Analyze Sensitivity of Conclusion to a Few Critical Items: Analysts often over-estimate how many factors actually drive their conclusion; when it comes down to it, it’s usually a select few critical items. With this realization, and with the conclusion in hand, analysts need to re-examine those few critical lynchpins and how reliable they are.
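A simple way to run that re-check is to flip each critical assumption, one at a time, and see whether the conclusion survives. A toy sketch follows; the assumptions and the decision rule are invented for illustration, not something from the book.

```python
# Toy sensitivity check: flip each assumption one at a time and see whether
# the overall conclusion flips with it. Assumptions and rule are made up.

assumptions = ["demand stays strong", "margins hold", "no new competitor"]

def conclusion(flags):
    """Made-up rule: 'buy' only if margins hold and at least one other
    assumption also holds."""
    others = sum(v for name, v in flags.items() if name != "margins hold")
    return "buy" if flags["margins hold"] and others >= 1 else "pass"

baseline = {a: True for a in assumptions}
print("baseline conclusion:", conclusion(baseline))

for a in assumptions:
    flipped = dict(baseline, **{a: False})
    verdict = conclusion(flipped)
    note = "  <-- lynchpin: the conclusion hinges on this" if verdict != conclusion(baseline) else ""
    print(f"if '{a}' turns out to be wrong: {verdict}{note}")
```

Assumptions whose reversal flips the conclusion are the lynchpins worth re-verifying; the rest matter far less than it felt while building the case.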

 

 

Other Random Interesting Tidbits From The Book

  1. When faced with a major paradigm shift, the analysts who know the most about the subject have the most to unlearn (e.g. the German experts consistently under-estimated what the reunification of Germany meant)
  2. During the Arab-Israeli War of ’73, analysts who were often proceeding on the basis of the day’s take, comparing it to the material received the day before, produced intelligence that was perceptive day to day, BUT not a systematic assessment based on the accumulated body of evidence.
  3. Often, newly acquired information is evaluated and processed through the existing analytical model, rather than being used to reassess the premise of the model itself (e.g. the 1998 Pokharan nuclear tests conducted by India, where the CIA assumed that the BJP’s campaign promise of a nuclear test was just that: a campaign promise).
  4. How to communicate uncertainty:

 

On 7/3/18, Andrew and Michael Mauboussin published a related article in HBR, titled “If You Say Something Is ‘Likely,’ How Likely Do People Think It Is?”. See the article at https://hbr.org/2018/07/if-you-say-something-is-likely-how-likely-do-people-think-it-is. I will include the image here, but please go there to read the whole article.
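One practical habit in the same spirit (an idea going back to Sherman Kent’s “words of estimative probability”) is to attach explicit numeric ranges to the verbal qualifiers you use, so that writer and reader mean the same thing by “likely”. The ranges in the sketch below are illustrative choices of mine, not the article’s survey data.

```python
# Illustrative mapping from estimative words to numeric probability ranges.
# The ranges are example choices, not data from the HBR article or the book.

estimative_words = {
    "almost certain":    (0.90, 0.99),
    "likely":            (0.60, 0.85),
    "roughly even odds": (0.45, 0.55),
    "unlikely":          (0.15, 0.40),
    "remote":            (0.01, 0.10),
}

def describe(p):
    """Return the verbal qualifier whose range contains probability p."""
    for word, (lo, hi) in estimative_words.items():
        if lo <= p <= hi:
            return f"{word} ({lo:.0%}-{hi:.0%})"
    return f"falls between ranges -- state the number explicitly ({p:.0%})"

for p in (0.95, 0.70, 0.50, 0.30, 0.05, 0.42):
    print(p, "->", describe(p))
```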

 
