Is your boss a black bear?

To answer this question (and determine if you should change your telework status to permanent), consider some problems and processes in intelligence analysis.

By The NSS staff | July 26, 2021

Is this your boss? (Photo: Los Alamos National Laboratory)

Is Japan planning a sneak attack on the American fleet stationed at Pearl Harbor? Are terrorists planning to fly commercial airliners into the World Trade Center buildings? Unthinkable possibilities need to be considered, not because they are particularly likely, but because of their incredible potential to change the world. Policymakers may pose questions like these to intelligence analysts, who must navigate a minefield of cognitive biases to provide objective, timely, well-reasoned, and well-sourced answers.

The analysts pursuing these answers work within 18 U.S. government organizations, each with unique specialties. One of these organizations is the Department of Energy’s Office of Intelligence and Counterintelligence, whose mission is to “protect, enable, and represent the vast scientific brain trust resident in DOE’s laboratories [including Los Alamos] and plants.”

Availability bias

Imagine for a moment that you are an intelligence analyst asked to evaluate the likelihood of a low-probability, high-impact scenario: Your boss is a black bear masquerading as a human. Where do you start?

You rack your brain for any shred of evidence that could support such a ludicrous claim. Bears are intelligent, inquisitive, and generally peaceful. So is your boss.

Then there’s the pungent smell of salmon in the break room. Vivid, easily recalled details like these can be a source of bias: they figure prominently in memory because of their recency and because of our tendency to assume that easily recalled information must be important. When evaluating a problem using hundreds of sources, intelligence analysts should ask whether a report is important because they remember it, or whether they remember it because it is important. Perhaps it’s better to set aside anecdotal evidence and try another, more mathematical approach.

Illusory correlation and misinterpreting randomness

From November through March, your boss was absent from the office. November through March also happens to be bear hibernation season. In a low-information environment, correlations can appear significant when no real relationship exists. Intelligence analysts need to be alert to patterns that might simply be the result of random chance. If you flip a coin eight times, which of the following sequences is more likely to occur?

[Graphic: two eight-flip sequences, one showing eight heads in a row and the other a mixed run of heads and tails]

Counterintuitively, these sequences are equally likely: eight heads in a row is no more of an oddity than any other specific eight-flip sequence. Yet the illusion persists that the streak must mean something. Psychologist Daniel Kahneman explains in his book Thinking, Fast and Slow: “We are pattern seekers, believers in a coherent world … We do not expect to see regularity produced by a random process, and when we detect what appears to be a rule, we quickly reject the idea that the process is truly random.” Extend this idea to an intelligence problem where the coin flips represent some chance posture or characteristic of adversary forces, and it is easy to see how an analyst could incorrectly extrapolate from meaningless data.
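The arithmetic is simple: there are 2^8 = 256 possible eight-flip sequences, and each specific one, streak or not, occurs with probability (1/2)^8 = 1/256. The short Python sketch below (a hypothetical illustration, not part of the original analysis; the mixed sequence is chosen arbitrarily) simulates a million eight-flip runs and counts how often the all-heads streak and one mixed sequence each appear; the two counts come out roughly equal.

    import random

    TRIALS = 1_000_000
    streak = "HHHHHHHH"   # eight heads in a row
    mixed = "HTTHTHHT"    # an arbitrary mixed sequence, chosen purely for illustration

    counts = {streak: 0, mixed: 0}
    for _ in range(TRIALS):
        sequence = "".join(random.choice("HT") for _ in range(8))
        if sequence in counts:
            counts[sequence] += 1

    # Each specific sequence has probability (1/2)**8 = 1/256, so both counts
    # land near 1,000,000 / 256, or roughly 3,900.
    print(counts)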

Anchoring

When available information is poor, judgments should stay close to the base rate, or prior probability: the likelihood established before any case-specific evidence is taken into account. Perhaps we could establish the base rate of black bears masquerading as management. To do that, we need a few key details. Say you work at Los Alamos National Laboratory in New Mexico. Is the number of black bears in New Mexico higher or lower than 800? What is your best guess for how many black bears live in New Mexico? (You will get more out of this if you hazard a guess.)

The answer is actually around 8,000, but experimental evidence suggests that your guess was probably closer to 800 because of an anchoring bias. In a low-information environment, people tend to “anchor” to a given number, and their quantitative estimates drift higher or lower depending on the initially presented value. Interestingly, this bias persists even when the initial anchor is obviously uninformed and therefore irrelevant. You can imagine the implications for analysts attempting to estimate quantities of troops, missiles, or anything else.

Base-rate fallacy

Say that you establish with certainty that 15 percent of management are bears. What next? Naturally, it is time to deploy your bear detector, which boasts 80 percent accuracy at detecting managerial bears. After you surreptitiously use it on your boss, it reads: bear! Given this test and your estimate of the base rate, what is the probability that your boss is a black bear, assuming you are right about your detector’s accuracy? Experiments have shown that people are generally awful at answering this question, guessing somewhere around 80 percent and often ignoring the base rate (15 percent) altogether in their reasoning.

The best use of the information in a situation like this is to combine the test result with the prior probability, taking false positives into account, which gives a 41 percent chance that your boss is a bear. Therefore, using the prescribed terms, you assess that it is “unlikely” that your boss is a black bear.
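For readers who want to see the arithmetic, the sketch below (a hypothetical Python illustration; it assumes the detector’s 80 percent accuracy implies a 20 percent false-positive rate on human managers) applies Bayes’ rule to the numbers above.

    # Bayes' rule for the bear detector
    prior = 0.15           # base rate: 15 percent of management are bears
    hit_rate = 0.80        # P(detector reads "bear" | boss is a bear)
    false_positive = 0.20  # P(detector reads "bear" | boss is human), assumed from 80% accuracy

    p_bear = (hit_rate * prior) / (hit_rate * prior + false_positive * (1 - prior))
    print(f"{p_bear:.0%}")  # prints 41%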

[Table: estimative-probability terms adapted from Intelligence Community Directive 203, the official, publicly available document that governs the production and evaluation of intelligence products.]

Still, it cannot hurt to take precautions. Especially after a long time out of the office, you should calmly identify yourself as human. Do not surprise your manager with shouts or loud noises because such actions could provoke an attack.

Analysis of alternatives

U.S. law dictates that intelligence analysts reconsider their positions through an explicitly stated analysis of alternatives. This kind of analysis explores alternative interpretations of the available evidence and demonstrates to the reader that the analyst considered possibilities other than the main analytic position. The text identifies any evidence consistent with the alternative theory, and explains why the analyst ultimately deemed the scenario less likely. It also identifies any indicators that, if observed, may cause the alternative interpretation to displace the main assessment.

In this case, you would be wise to include the alternative that your boss is almost certainly human (note the different language from “unlikely a bear”). Perhaps it would be worthwhile to note that your main assessment is likely the product of months of isolation and deranged bear scholarship.

Layering

Layering occurs when new assessments cite other, finished intelligence assessments instead of the undergirding primary sources, potentially propagating erroneous conclusions based on one source or analysis that no one thought to question. The commission tasked with analyzing the intelligence community’s work on the Iraqi weapons of mass destruction (WMD) issue described how “previous assessments based on uncertain information formed, through repetition, a relatively unquestioned baseline for the analysis in the pre-war assessments.” Layering creates the illusion of multiple corroborating analyses and can inflate confidence in a given judgment. If the original source or analysis turns out to be wrong or misleading, an entire assessment can come crumbling down.

To avoid this, you decide to go directly to the source, asking your boss to review your findings. Feedback is immediate: “No one approved this. Your research question and findings are absurd. Clean out your desk.”

Exactly what a bear would say. The WMD commission warned about situations like this, stating that “An intellectual culture or atmosphere in which certain ideas were simply too ‘unrespectable’ and out of sync with prevailing policy and analytic perspectives pervaded the Intelligence Community.”

Conclusion

Plenty of evidence not presented in this article would confirm that your boss is not a black bear. Unfortunately, no one alerted you to a basic precept of intelligence analysis: Identify potential hypotheses and proceed by elimination; often the correct hypothesis is the one with the least evidence against it. Further, according to Kahneman, we are all susceptible to a “what you see is all there is” bias that can lead to overconfidence in seemingly well-supported assessments that are in fact wrong because of a critical, overlooked piece of information. Analysts often cannot immediately know what they do not know, and it often falls to reviewers to help identify these unknown unknowns.

The intelligence analyst blends the methods of scientist and historian: the former eliminates hypotheses by seeking disconfirmatory evidence, while the latter uses incomplete information to construct a coherent narrative. Intelligence assessments are by definition subjective best guesses, and occasional inaccurate calls are part of the job. As long as analysts earnestly characterize their sources, forthrightly express any uncertainties, explicitly state their assumptions, and explore alternative interpretations of the data, their product is sound.

To report bears on Laboratory property, email bears@lanl.gov.

Content in this article was adapted from books by Daniel Kahneman (Thinking, Fast and Slow) and Richards Heuer (Psychology of Intelligence Analysis). The article also references the unclassified Report of the Commission on the Intelligence Capabilities of the United States Regarding Weapons of Mass Destruction from March 2005. The “bear detector” Bayesian inference problem is a modified version of the blue and green taxicab question used by Kahneman and Tversky in their 1972 paper “On Prediction and Judgment.”