Malcolm Gladwell
Dignified, good-looking, deep-voiced, and affable, Warren Harding resembled a Roman senator. Though not known for intelligence, he was a much-admired Ohio senator who radiated presidential dignity and common sense. Elected president in 1920, “[h]e was, most historians agree, one of the worst presidents in American history” (129).
Harding appealed to stereotypes of what a good leader looks like, stereotypes that encourage voters to reach quick conclusions based on their biases. This type of error lies behind much prejudging and social discrimination.
Implicit-association tests measure the effects of bias. One such exam asks test takers to place a series of words and pictures into one of two categories, “European American or Bad” and “African American or Good” (141). The items are Hurt, Evil, Glorious, a picture of a black man, a picture of a white man, and Wonderful. The test is repeated, except the categories are reversed: “European American or Good” and “African American or Bad” (141). Most people, black or white, are much faster at sorting the second test, where “European American” is associated with “Good.”
It turns out that “more than 80 percent of all those who have ever taken the test end up having pro-white associations, meaning that it takes them measurably longer to complete answers when they are required to put good words into the ‘Black’ category” (142-43). This also includes about half of all African Americans who have taken the test.
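The quantity the test measures can be sketched concretely: a toy version of the IAT score is simply the difference in average sorting times between the two pairings. (The published IAT uses a standardized “D-score”; the function name and timing numbers below are illustrative, not from the book.)

```python
def iat_effect(compatible_ms, incompatible_ms):
    """Toy IAT-style score: the extra time (in milliseconds) needed when
    'good' words share a response key with the 'African American' category
    (incompatible block) versus the 'European American' category
    (compatible block). The real IAT's D-score also divides by a pooled
    standard deviation; this sketch just compares raw means."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(incompatible_ms) - mean(compatible_ms)

# A positive score means slower responses in the incompatible block --
# the pattern Gladwell reports for more than 80% of test takers.
print(iat_effect([650, 700, 675], [800, 850, 825]))  # 150.0
```

The point is that the bias is measured behaviorally, as a latency gap, rather than by asking people what they believe.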
There’s a big difference between people’s stated values and the unconscious associations that surface before they can consciously evaluate them. This type of bias is hard to avoid. As one Harvard psychologist notes, “All around you, that [white] group is being paired with good things. You open the newspaper and you turn on the television, and you can’t escape it” (145).
Biases have real-world effects. In the US, fewer than 15% of men are 6 feet or taller, yet more than half of Fortune 500 CEOs are. Of those CEOs, only 10 are shorter than 5 feet 6 inches.
Bob Golomb is a leading car salesman in central New Jersey. He believes salespeople must not prejudge a customer but give each one the benefit of the doubt. Nonetheless, in a test of bias in car sales, volunteers—both black and white—walked into dealerships, represented themselves as college-educated systems analysts from wealthy neighborhoods, and bargained for a low-priced car. Black people and women wound up with offers as much as $800 higher than those made to white men.
Sales personnel may have an instinctive bias against black people and women, seeing them as “lay-downs” (165), or suckers—a bias that persists despite all evidence to the contrary. They behave like the voters of 1920 who fell for Warren Harding without really thinking.
How can bias be fixed? With respect to prejudice against minority groups, it’s possible to overcome awkward responses if “[you] change your life so that you are exposed to minorities on a regular basis and become comfortable with them and familiar with the best of their culture” (166).
In 2000, planners of the Millennium Challenge war games approached Paul Van Riper—a career Marine commander whose relentless tactics during the Vietnam War had earned him notoriety. They wanted him to play a rogue Middle Eastern military leader who threatened American forces.
Van Riper’s “Red Team” was to face the US “Blue Team,” which had access to advanced computer systems, simulations, joint interactive planning processes, enormous databases, and “a formal decision-making tool that broke the enemy down into a series of systems—military, economic, social, political” (179). In theory, these tools would let careful thinking control the progress of the war.
Van Riper, on the other hand, believed “that war was inherently unpredictable and messy and nonlinear” (182) and that calm rationality doesn’t work in the heat of battle.
In the simulation, Blue Team sails an armada into the Persian Gulf, issues ultimatums, and knocks out Red Team’s communication antennae. Red Team responds by using couriers to communicate, then sends out small boats to monitor the Blue Team’s ships, and finally launches a surprise attack, firing cruise missiles that overwhelm the armada and sink 16 vessels. In a real war, 20,000 soldiers and sailors would have died. Van Riper’s in-the-moment decisions beat the US military’s carefully thought-out, logical plans.
In another example, improvisational comedy groups take topic suggestions from the audience and immediately perform skits that they make up on the spot; in other words, this form of comedy “involves people making very sophisticated decisions on the spur of the moment, without the benefit of any kind of script or plot” (193). Yet improv is not simply random; it has rules that require hours of practice. The main rule is for the actors to accept everything that happens onstage; though this may seem counterintuitive, it is what produces seemingly inspired performances.
Similarly, Van Riper gave his Red Team plenty of leeway to innovate in the field and didn’t hold long meetings where his teammates explained their thinking. Rather than clarifying, such explanations interfere with the ability to see a situation clearly—a phenomenon called “verbal overshadowing.” It’s easy, for example, to recall a face but hard to describe it, and the act of describing interferes with the memory of the face, displacing it.
Trying to think through a fast-moving problem not only works poorly but also interferes with the mind’s ability to generate sudden, useful insights. In the case of Van Riper’s victory, the Blue Team was so absorbed in the minutiae of battle that it failed to creatively view the situation as a whole.
In 1995, struggling with a shoestring budget, the emergency department at Chicago’s Cook County Hospital was overcrowded with indigent patients, many of whom complained of chest pain but only 10% of whom were actually having a heart attack. Medicine department chairman Brendan Reilly found that doctors’ evaluations of patients’ heart risks were all over the map; even when physicians administered extra tests and otherwise gathered more data, their conclusions remained essentially random.
Reilly used cardiologist Lee Goldman’s decision-making algorithm, which combined the readout from an electrocardiogram with three urgent-risk factors: “(1) Is the pain felt by the patient unstable angina? (2) Is there fluid in the patient’s lungs? and (3) Is the patient’s systolic blood pressure below 100?” (230). The results were 70% better at detecting which patients were not suffering a heart attack. Cook County was one of the first hospitals to benefit from the Goldman algorithm.
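The appeal of Goldman’s rule is that it is mechanical: one ECG reading plus three yes/no risk factors. A minimal sketch of such a decision rule might look like the following (the function name, thresholds, and care levels here are illustrative assumptions, not Goldman’s published decision tree):

```python
def triage_chest_pain(ecg_suggests_ischemia: bool,
                      unstable_angina: bool,
                      fluid_in_lungs: bool,
                      systolic_bp_below_100: bool) -> str:
    """Sort a chest-pain patient into a care level from four yes/no inputs.

    Illustrative only: Goldman's actual tree assigns patients to risk
    strata using different combinations and cutoffs.
    """
    # Count how many of the three urgent-risk factors are present.
    risk_factors = sum([unstable_angina, fluid_in_lungs, systolic_bp_below_100])
    if ecg_suggests_ischemia and risk_factors >= 2:
        return "coronary care unit"
    if ecg_suggests_ischemia or risk_factors >= 1:
        return "monitored bed"
    return "short-stay observation"
```

The point Gladwell draws is not the specific thresholds but that a fixed rule over four inputs outperformed open-ended clinical judgment fed with far more information.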
Both the Blue Team defeat and the Cook County Hospital experience demonstrate that more information can actually make a situation worse. Psychologists who receive more data about a troubled individual become more certain of their evaluations, while their diagnostic accuracy remains the same.
When people suffer from information overload, they tend to avoid making decisions. A store’s tasting booth with six types of jams generated sales from 30% of visitors, but when the booth displayed 24 different jams, sales dropped to 3%.
After the Blue Team defeat, the US military rewrote the rules so Red Team was unable to use any of the tactics it had employed. Blue Team won in a rout, and the Pentagon believed its new system worked just fine.
Chapters 3 and 4 focus on how intuitive thinking can go wrong. The Warren Harding error is Gladwell’s name for snap decisions based on bias. Harding looked presidential, so he must be presidential—even though he was not. This error occurs everywhere—in assessments of politicians, minorities, celebrities, potential employees, and people who instinctively earn love or hate.
One of the benefits of snap decisions is that they economize and simplify daily problems. Instead of poring through endless reams of data on an individual, one may take a single look and form a quick judgment. These assessments can, however, become outdated. As urban societies grow more egalitarian, previous human biases against outsiders—suspicions that once protected tribes from danger—become obsolete, interfering with human ability to get along in crowded, diverse cities.
Humans react to one another by signaling—the symbols of achievement that people present. A person with a bachelor’s degree may be presumed to be somehow better than a person without one. In this case, the degree is used as a simple filter—a rule of thumb that indicates who is smart and knowledgeable and who isn’t, even though this sort of thinking can lead to Warren Harding errors. A good-looking person or someone with a sparkling personality may be presumed to be better or nicer than someone without those traits, much like a wealthy person is presumed to be happy and lucky—even when they aren’t.
Thinking things through is a process that makes use of deductive logic—the same kind that figures out arithmetic problems or winnows out essential options in buying a house or a car. The other mental process involves inductive reasoning—essentially, seeing patterns in masses of data—and this can happen in an instant, especially if one possesses a great deal of experience with the issue at hand.
Thinking carefully is somewhat like learning a topic for the first time, whereas acting on a problem quickly and effectively is akin to using a well-trained skill set: You already know how to do it, and you apply your acquired abilities to the problem in much the way you’ve done it many times before.
Blink principally deals with the inductive process. The deductive techniques used by the Blue Team failed in a contest with the quick, intuitive grace of Red Team’s thinking. The failure in this case wasn’t that the Blue Team used fast thinking improperly; it was that they didn’t use it at all, focusing instead on being analytical. They were dedicated to the imposing logic of their data-gathering and evaluation systems, but while Blue Team pored over spreadsheets, Red Team sank their ships.
A related form of failure is by-the-book reasoning. It’s one thing to obey the rules; it’s quite another to do so mechanically, heedless of special cases where human judgment is needed. The Blue Team, carefully following its rulebook, had no answer to the fast-moving enemy; the Red Team, doing what it shouldn’t, carried the day.