
Daniel Kahneman

Thinking, Fast and Slow

Nonfiction | Book | Adult | Published in 2011


Part 2, Chapters 10-18: Chapter Summaries & Analyses

Part 2: “Heuristics and Biases”

Chapter 10 Summary: “The Law of Small Numbers”

System 2’s operations are, in many instances, dependent on information that System 1 generates through its associative processes. This creates a likelihood of error with “merely statistical” information—information in which the rules of statistics, rather than the kind of causation System 1 excels at identifying (whether accurately or not), control the outcome.

Kahneman explains that we can predict statistical (or mathematical) facts, such as the percentage of random draws from a jar containing several types of marbles that will produce a specific array, but doing so lacks the sense of causation that System 1 relies on. The draws from the jar will yield predictable results only if they are carried out long enough to approach the statistical averages. Just a few draws will not yield the mathematically predictable result but will instead create a skewed impression because of insufficient sampling.
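To see why a few draws mislead, consider a minimal simulation sketch (in Python; the jar composition, sample sizes, and trial count are illustrative assumptions, not figures from the book):

```python
import random

# A jar that is half red marbles. Tiny samples routinely stray far from
# 50%, while large samples cluster tightly around it. The jar makeup,
# sample sizes, and trial count are illustrative, not from the book.
JAR = ["red"] * 50 + ["white"] * 50

def red_share_range(sample_size, trials=10_000):
    """Smallest and largest share of red seen across many samples."""
    shares = []
    for _ in range(trials):
        draws = random.choices(JAR, k=sample_size)
        shares.append(draws.count("red") / sample_size)
    return min(shares), max(shares)

print(red_share_range(4))    # e.g., (0.0, 1.0): small samples look wildly skewed
print(red_share_range(400))  # e.g., (~0.41, ~0.59): large samples approach 50%
```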

This sampling effect has routinely led to unreliable results in psychological experiments because researchers have frequently relied on their System 1 intuition and, therefore, selected samples that were too small. This reveals the shortcomings of System 1’s intuition when dealing with statistical data. System 1 frequently produces overconfident results, generally through WYSIATI (“what you see is all there is”). It also tends to assign causation to random events, assuming some causal connection must explain the results. Thus, we tend to place unwarranted faith in results obtained from small samples.

Chapter 11 Summary: “Anchors”

Kahneman explains that the “anchoring effect” is “one of the most reliable and robust results of experimental psychology” (120). Stated simply, the anchoring effect occurs when a person must assign a value to an unknown quantity and a number is suggested to them beforehand. The value they then assign consistently stays close to the suggested number. Notably, the suggested value may be presented as entirely irrelevant to the quantity being estimated. Any number stated before such an effort, for any reason, produces the effect.

Two processes produce the anchoring effect—one in System 2 and the other in System 1. Anchoring occurs via System 2 when a person deliberately starts at the anchoring value and then adjusts upward or downward as seems appropriate; people frequently stop adjusting when they become uncertain whether to continue, which is generally premature. Anchoring occurs via System 1 as a priming effect: the value introduced acts as a suggestion that System 1 attempts to prove true, generating a range of facts and associations related to that value. Unlike many other psychological phenomena, the anchoring effect is measurable. It is also a particularly important instance of the interplay between Systems 1 and 2.
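Kahneman quantifies this measurability as an anchoring index: the difference between two groups’ average estimates divided by the difference between the anchors they were given. A minimal sketch (in Python; the anchors and mean estimates below are illustrative assumptions):

```python
# The anchoring index: the difference between two groups' mean estimates
# divided by the difference between the anchors they saw. An index of
# 100% means estimates moved one-for-one with the anchor; 0% means the
# anchor was ignored. All figures below are illustrative assumptions.

def anchoring_index(mean_high, mean_low, anchor_high, anchor_low):
    return (mean_high - mean_low) / (anchor_high - anchor_low)

# Hypothetical: anchors of 1,200 and 180 yield mean estimates of 844 and 282.
print(anchoring_index(844, 282, 1200, 180))  # ~0.55, i.e., a 55% index
```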

Anchoring effects reflect WYSIATI in a fairly transparent manner. If the only available information is the anchor (the number suggested before you have to decide value), of course your decision on value will be made in reference to it. There is nothing else to use as a reference point—ergo, WYSIATI.

Chapter 12 Summary: “The Science of Availability”

In its simplest form, the availability heuristic holds that people estimate the size of a class of things based on how easily they can retrieve examples of the class. Likewise, people remember their own actions more vividly than the same actions or contributions of others. Thus, if spouses are asked separately to estimate the percentage of the tidying up that they each perform, the combined total reliably exceeds 100%.

The availability heuristic illustrates the interplay between Systems 1 and 2, but in ways that are different from those at work in the anchoring effect. System 1 will call to mind the instances that affected the individual (such as their own tidying) more readily than others. Thus, their frequency is overestimated.

System 2 is more likely to challenge information provided to it and to question the information it can access. System 1, by contrast, bases its confidence on fluency—familiarity and, thus, comfort level—with examples relevant to the issue. People make better-informed decisions when operating with System 2. Kahneman relates research led by Norbert Schwarz in the 1990s showing the circumstances in which people are less likely to engage System 2—and thus more likely to “go with the flow” of System 1’s availability-based reasoning. These include being engaged in another effortful task, being in a good mood or scoring low on a depression scale, and being (or being made to feel) powerful. In fact, “merely reminding people of a time when they had power increases their apparent trust in their own intuition” (135).

Chapter 13 Summary: “Availability, Emotion, and Risk”

In this brief chapter, Kahneman highlights the implications of availability and the affect heuristic for perceptions and policy decisions involving risk. He notes that people frequently misjudge likelihoods, overestimating events that are highly unusual but emotionally compelling and underestimating those that are mundane and far more common. Such errors are exacerbated by the media, which give greater attention to emotionally salient topics.

The chapter also describes the issues surrounding regulatory choices that divide authority between the public and experts on matters of risk. It references Paul Slovic’s advocacy of a greater role for the public, who may judge risk differently than experts by making finer distinctions that reflect quality of life. In contrast, Kahneman observes that Cass Sunstein advances the idea of “availability cascades”—self-perpetuating spirals of concern over an issue that may or may not be a significant threat—as a reason to give experts more authority as a check on the predictable shortcomings of public judgment.

Chapter 14 Summary: “Tom W’s Specialty”

Chapter 14 details an experiment that Kahneman devised with Tversky at the University of Oregon in the early 1970s. This three-part study asked subjects to assess Tom W, a fictitious graduate student. The first task provided subjects with a list of nine disciplines and asked them to rank how likely Tom was to specialize in each field, with 1 indicating most likely and 9 indicating least likely. The second task gave subjects a short personality assessment of Tom and asked them to rank the fields based on Tom’s similarity to the typical student of each field. The third task gave subjects all the above information but included that the assessment dated to Tom’s high school days and was based on projective tests; they were then asked to rank the likelihood Tom was a student in each field.

The results of this experiment demonstrated the strength of System 1’s reliance on representativeness. Subjects assigned Tom’s specialty based on the personality assessment rather than on general statistics about the number of students in each discipline (known as the “base rate”)—even though, for example, enrollment in humanities programs is far higher than enrollment in computer science or engineering.

Kahneman offers two rules for avoiding this type of error. The first is to ground reasoning about the probability of an outcome in a plausible base rate. The second is to question whether the evidence you have is actually helpful to determining the probability of the outcome. Otherwise, WYSIATI combines with associative coherence to create an intuitive response that may well be entirely groundless.
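These two rules amount to an informal version of Bayesian updating: anchor on the base rate, then adjust only as far as the diagnostic value of the evidence warrants. A minimal sketch (in Python; the base rate and likelihoods are made-up illustrations, not figures from the Tom W study):

```python
# Bayes' rule as a discipline for intuition: posterior odds equal prior
# odds (the base rate) times the likelihood ratio (how diagnostic the
# evidence really is). All numbers are made up for illustration.

def posterior_probability(base_rate, fit_if_true, fit_if_false):
    prior_odds = base_rate / (1 - base_rate)
    likelihood_ratio = fit_if_true / fit_if_false
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Suppose only 3% of students are in the field (base rate), and the
# personality sketch is four times as likely for such a student.
print(posterior_probability(0.03, 0.8, 0.2))  # ~0.11: still unlikely
```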

Chapter 15 Summary: “Linda: Less Is More”

“Linda” refers to what Kahneman describes as the most famous experiment he designed with Tversky. It was designed to show the role of heuristics in judgment and their illogical nature. It, too, relies on representativeness (i.e., similarity to stereotypes). Ultimately, the experiment became famous (and somewhat controversial) because Kahneman and Tversky took it to a wide range of audiences and trimmed it down to its essentials to test the limits of its finding: that representativeness, despite a clear violation of logic, was the basis for subjects’ answers. The result held consistently.

In essence, the researchers gave a brief description of Linda and asked subjects to rank various activities she might be involved in. One of the answer choices indicated that Linda was a bank teller. Another indicated that Linda was a bank teller who was also a feminist. Subjects overwhelmingly chose the latter option as more likely than the former, including students with a strong statistical background, even though it is necessarily less probable that Linda would be both a bank teller and an active feminist than it is that she would be a bank teller. Certainly, some female bank tellers are not active feminists, so the class of bank tellers is broader than the class of bank tellers who are active feminists.

Kahneman and Tversky labeled the phenomenon the “conjunction fallacy” because people adopted responses that defied logic when specific details were added. A conjunction of two conditions is always less probable than either condition alone. In that sense, less is more.
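The underlying rule can be shown in a few lines (in Python; the probabilities are arbitrary illustrations): the probability of a conjunction can never exceed the probability of either conjunct.

```python
# The conjunction rule: P(A and B) = P(A) * P(B given A), which can never
# exceed P(A). The probabilities below are arbitrary illustrations.
p_teller = 0.05                    # P(Linda is a bank teller)
p_feminist_given_teller = 0.40     # even if most such tellers are feminists

p_teller_and_feminist = p_teller * p_feminist_given_teller
assert p_teller_and_feminist <= p_teller   # holds for any choice of numbers
print(p_teller, p_teller_and_feminist)     # 0.05 vs 0.02
```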

Chapter 16 Summary: “Causes Trump Statistics”

Kahneman next discusses stereotypes. He explains that, scientifically, stereotypes are neutral (unlike in policy discourse): System 1 represents a category in terms of a stereotypical exemplar of the group—it thinks of “horse,” for instance, as a typical horse.

From there, Chapter 16 highlights the tendency to value causal reasoning over statistical reasoning. Kahneman then explains that experimental research tends to show that knowledge of psychology research does not change students’ assessments of situations relative to a control group. He specifically cites an experiment in which only four of 15 subjects responded quickly to an apparent seizure victim; each subject was in a separate booth and believed the others could also hear. Kahneman notes the disheartening result: students who knew the experiment’s outcome and those who did not both estimated that the two specific subjects they evaluated (about whom nothing relevant was provided) had most likely responded very quickly. Knowing the statistics had not changed their assessment of individual behavior.

Chapter 17 Summary: “Regression to the Mean”

Kahneman begins Chapter 17 with a brief story from his time lecturing to flight instructors. The instructors had noticed that praising high performers was generally followed by deteriorated performance, whereas punishing poor performers generally yielded improvement. While the instructors assumed causation, Kahneman saw regression to the mean.

In any group, performance fluctuates randomly. An outlier in either direction on one performance will therefore tend to be followed by a result closer to the group’s average on the next iteration.

This statistical principle applies in many domains. Kahneman notes, “Because we tend to be nice to other people when they please us and nasty when they do not, we are statistically punished for being nice and rewarded for being nasty” (176).

Further, the farther a performance is from the mean, the more the following performance is likely to regress toward it. Although we consistently seek causal explanations for the change, no such causation necessarily exists. Instead, the exceptional outcome may be (and probably is) simply a matter of luck, and the subsequent regression is easily predicted by statistics.
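A minimal simulation sketch (in Python; the skill and luck parameters are arbitrary assumptions) shows the effect: select the top scorers on one trial, and their average falls back toward the group mean on the next, with no causation involved.

```python
import random

# Regression to the mean in miniature: each performance is fixed skill
# plus random luck. The skill and luck scales are arbitrary assumptions.
random.seed(1)
pilots = [random.gauss(100, 5) for _ in range(1000)]   # each pilot's true skill

def fly(skill):
    return skill + random.gauss(0, 10)                  # luck swamps single trials

first = sorted(((fly(s), s) for s in pilots), key=lambda t: t[0], reverse=True)
best = first[:50]                                       # top scorers on trial one

avg_first = sum(score for score, _ in best) / len(best)
avg_second = sum(fly(s) for _, s in best) / len(best)   # same pilots, fresh luck
print(avg_first, avg_second)  # the second average falls back toward ~100
```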

The counterintuitive nature of regression, Kahneman explains, may be why the phenomenon was not understood until the late 19th century. It remains contrary to the intuitive causal judgments made by System 1 and, therefore, tends to escape most people’s awareness even in matters where it applies.

Chapter 18 Summary: “Taming Intuitive Predictions”

Regression to the mean partially explains why some predictions are wrong. Most people, if asked for a prediction, actually substitute (via System 1) the easier question of what existing evidence shows and provide that result as their answer. Thus, they fail to account for the inevitable regression to the mean of exceptional performances.

Kahneman also provides guidance on how to improve the accuracy of predictions. Using the example of a precocious child who earns an exceptional GPA, he outlines four steps. First, obtain the average of the relevant measure (in his example, the average GPA). Second, determine your intuitive prediction (i.e., the value that matches the existing evidence). Third, estimate the correlation between the special trait (precocity in Kahneman’s example) and the measure, expressed as a percentage (Kahneman uses 30%). Finally, move from the average toward your intuitive prediction by that proportion of the distance (i.e., 30% of the way in the example) to arrive at a prediction that accounts for regression and is likely more accurate.
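The procedure reduces to one line of arithmetic: corrected = mean + correlation × (intuitive − mean). A minimal sketch (in Python; the GPA figures are hypothetical, and only the 30% correlation echoes Kahneman’s example):

```python
# Kahneman's four-step correction collapses to one line of arithmetic:
# shrink the intuitive prediction toward the mean in proportion to the
# correlation. GPA figures are hypothetical; only the 0.30 correlation
# echoes Kahneman's example.

def corrected_prediction(mean, intuitive, correlation):
    return mean + correlation * (intuitive - mean)

# Average GPA 3.0, intuition (matching the evidence) says 3.8, and the
# trait correlates with GPA at 0.30.
print(corrected_prediction(3.0, 3.8, 0.30))  # 3.24: most of the way back to the mean
```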

The intuitive prediction is a System 1 task. Accounting for regression is the work of System 2. Kahneman notes there may be reasons to maintain extreme predictions, but also that predictions tempered by System 2 are generally more reasonable.

Part 2, Chapters 10-18 Analysis

Part 2 provides nine chapters’ worth of illustrations and examples of the types of errors that affect human judgment. It draws on Part 1’s discussion of the two systems, and the concepts it develops feed directly into the discussion of overconfidence in Part 3, as well as into the subsequent discussions of prospect theory and human well-being in Parts 4 and 5.

The characteristics of the two systems presented in Part 1 are the mechanisms that cause the biases and heuristics discussed in Part 2. The book’s structure as a whole begins to take shape upon recognizing this connection.

Understanding the two-system explanation for the phenomena discussed in Part 2 enables the reader not only to see the specific examples of errors in judgment highlighted in these chapters but also to predict when such errors will occur. This is precisely Kahneman’s purpose, as is evident in the clear instructions he repeatedly provides for avoiding these decisional traps and in his admonishment to use the information for just purposes.

Several of the matters described in Part 2, like the “less is more” processes explained in Chapter 15, become significant later in the book when Kahneman discusses broader topics. Others, such as the “anchors” discussed in Chapter 11, are likely to provide very practical tools to readers. Thus, the chapters of Part 2 stand alone in their utility and value, and they play a significant role in moving the discussion forward. Indeed, Chapter 18’s discussion of accounting for biases in predictions leads seamlessly into Part 3’s discussion of one of the most significant and common types of biases in human judgment: overconfidence.
