Why We Make Mistakes (Joseph T. Hallinan)

Zadius Sky

Why We Make Mistakes: How We Look Without Seeing, Forget Things in Seconds, and Are All Pretty Sure We Are Way Above Average

From Amazon:

Amazon Book Description said:
We forget our passwords. We pay too much to go to the gym. We think we’d be happier if we lived in California (we wouldn’t), and we think we should stick with our first answer on tests (we shouldn’t). Why do we make mistakes? And could we do a little better?

We human beings have design flaws. Our eyes play tricks on us, our stories change in the retelling, and most of us are fairly sure we’re way above average. In Why We Make Mistakes, journalist Joseph T. Hallinan sets out to explore the captivating science of human error—how we think, see, remember, and forget, and how this sets us up for wholly irresistible mistakes.

In his quest to understand our imperfections, Hallinan delves into psychology, neuroscience, and economics, with forays into aviation, consumer behavior, geography, football, stock picking, and more. He discovers that some of the same qualities that make us efficient also make us error prone. We learn to move rapidly through the world, quickly recognizing patterns—but overlooking details. Which is why thirteen-year-old boys discover errors that NASA scientists miss—and why you can’t find the beer in your refrigerator.

Why We Make Mistakes is enlivened by real-life stories—of weathermen whose predictions are uncannily accurate and a witness who sent an innocent man to jail—and offers valuable advice, such as how to remember where you’ve hidden something important. You’ll learn why multitasking is a bad idea, why men make errors women don’t, and why most people think San Diego is west of Reno (it’s not).

Why We Make Mistakes will open your eyes to the reasons behind your mistakes—and have you vowing to do better the next time.

I have recently finished reading a small and easy-to-read book, Why We Make Mistakes (2009), in which the author, Joseph T. Hallinan (a Pulitzer Prize winner and a blogger at _http://whywemakemistakes.blogspot.com/), discusses the research into why people act the way they do and why they so easily make mistakes. He also discusses a number of biases, including hindsight, framing, multi-tasking, information overload, inattentional blindness, context, memory, etc. Most of all, he points out that overconfidence is the leading root cause of our mistakes. Keep in mind that this book was published in 2009, before Kahneman (Thinking, Fast and Slow, 2011), Wilson (Redirect, 2011), and McRaney (You Are Not So Smart, 2012).

I don't know if anyone here has ever read this book, since neither the author nor the book is mentioned on the forum. I've found it rather interesting, full of examples and studies (a few of them recognizable), and just one more piece of the puzzle, I think, in understanding our biases. However, I have a "nagging" feeling about the author's analysis - he shares a lot of studies and examples but offers very little analysis or conclusions about them. It is almost as if he's saying, "here are the examples and studies about such-and-such, but draw your own conclusions."

Anyway, I'll break the book down into a few interesting points:

Chapter 1: We Look but Don't Always See

→ The author first shares a story about Burt Reynolds and an incident at a bar where Reynolds beat up an obscenity-bawling man without seeing that the man had no legs. The story demonstrates that we see with only a limited vision even when the clues are in plain sight.

→ A person's directional preference can be determined by handedness (right- or left-handed).

→ The "Quiet-Eye" period (the amount of time needed to accurately program motor responses) - experts use a longer quiet-eye period than novices do, illustrated with the example of a golfer.

→ "We See What We Are" - using a "door" experiment in which a student talks with a construction worker, and a third person (dressed almost like a construction worker) carrying a door interrupts the two by passing between them. That third person then changes places with the construction worker - but the student doesn't notice the switch at all, because he only "sees" a "construction worker," not the person himself. This is part of a "skimming" bias where "we don't need to know who he is; we just need to know what he is" (p. 19).

→ "We're Built to Quit" - using the example of blind people learning to see, which became so hard for them that they quit (one boy even threatened to claw out his eyes), and the example of volunteers who looked at thousands of images and reported whether they saw a tool, but who would "quit early when the target is unlikely to be there."

Chapter 2: We All Search for Meaning

→ "Meaning Matters; Details Don't" - using the penny experiment (shown at the Amazon link above), where we tend to forget the details and rarely spot the "real" penny.

→ "Names Don't Matter" - how it is easier to remember that a person is a baker than it is to recall that his last name is Baker.

→ How easily we forget our passwords (usually after a week) when they lack meaning. Oh, this is so true. :rolleyes:


Chapter 3: We Connect the Dots
("The brain connects the dots we didn't know it was connecting" - p. 43)

→ Snap Judgment - using the example of June Siler's testimony and identification of her attacker in court, where she put the wrong man in jail after rejecting her initial gut feeling (her brain was making an association between the hate she felt and the traits she perceived in the "attacker" in court). This is similar to Blink. But the mistake here resulted from her rejection of that "gut feeling."

→ "The Role of Regret" - the emotion of regret tends to color decision-making, following the general principle that we feel more responsible for our actions than for our inactions, meaning we are more inclined to do nothing so that we won't feel regret. This one is interesting, because we feel safer and more relieved being passive than being active, where there is a higher risk of feeling regret. The author also points out that regret can be a cause of memory bias.


Chapter 4: We Wear Rose-Colored Glasses
(We tend to recall our past words and past actions in more favorable views than what they really were.)

→ Here is the following brief excerpt that sums up a bit for this chapter:

page 59 said:
Indeed, the tendency to see and remember in self-serving ways is so ingrained - and so subtle - that, like many of the other errors discussed in this book, we often have no idea we're doing it. The Princeton Nobel laureate Daniel Kahneman, in an interview, observed this some time ago.

"I mean, the thing that is absolutely the most striking is how seldom people change their minds," he said. "First, we're not aware of changing our minds even when we do change our minds. And most people, after they change their minds, reconstruct their past opinion - they believe they always thought that."

Believing that we always thought this or that might be harmless enough if such beliefs were confined to our past opinions. If we remember ourselves as being better than we actually were, for instance, so what? Maybe there will be some rolled eyes over the egg salad at the family reunion, but nothing more. But what about past facts? When people are really put under the spotlight - not to mention under oath - do their memories of their own actions still tend to be self-serving?

→ The most interesting part, at least to me, is the section on the "Hey, I Warned You" principle, where disclosing or exposing a bias doesn't cancel its effect when people don't know what to make of the disclosure. For example, a doctor might write a drug prescription at the patient's expense (because that doctor is invested in a drug company), and when this is exposed, people just don't know what to do with the information, so they ignore it. I wonder if this is similar to how people react when presented with the Truths - "they just don't know what to do with it, so they'd ignore it?"

Chapter 5: We Can Walk and Chew Gum - but Not Much Else

→ This chapter focuses primarily on distraction and on how we believe multitasking is useful when it is really a myth. The author provides examples of plane and car crashes caused by distraction and multitasking, shows how multitasking can lead to forgetting, and explains that there is no such thing as dividing attention between two conscious activities.

He points out:

79 said:
Indeed, the gains we think we make by multitasking are often illusory. That's because the brain slows down when it has to juggle tasks. We gain nothing, for instance, by ascending the stairs two steps at a time if the additional effort slows us down so much that we end up taking as long to climb them as we would if we had taken them just one step at a time. In essence, this is what often happens when we try to perform two mental tasks simultaneously.

→ This chapter also discusses "inattentional blindness," where one can look directly at something and still not see it (with the example of a bus crash in which the driver, while talking on his cell phone, failed to see a low overhead bridge and crashed into it). This is a scary thought - we can never be sure that another person sees us when we're there. Especially on the road.

Chapter 6: We're in the Wrong Frame of Mind

→ The author discusses the framing bias, based on Kahneman's work, with several examples. He also talks about the least obvious factor affecting the way we "frame" our decisions: time. Timing can affect our choices about, for instance, food, clothes, and films. Regarding movies, a person may pick a "highbrow" movie for a later time while picking a "lowbrow" film for right now.

Chapter 7: We Skim

This is an interesting chapter about how we tend to skim things...a lot, and without being aware of it:

→ How we pay attention only to the beginning of an article and "assume" the rest.

→ Experts can easily overlook an error that a beginner would easily spot. The author also points out that "the better we are at something, the more likely we are to skim" (p. 111) - and that is because of overconfidence.

→ When something is "familiar," we tend to notice less, not more - due to our "assumptions" about it.

→ Another example, part of the context bias: people who saw a woman hanging from a tree around Halloween "assumed" she was a Halloween decoration when, in fact, the woman had just killed herself. No one called the police until 14 hours later. At any other time of year, they'd have called the police immediately.

Chapter 8: We Like Things Tidy

→ The author shares a good example of how we remember maps and how, in remembering them, we "systematically distort them."

→ The next example is "The War of the Ghosts," which is fascinating because we tend to shorten a story when retelling it after hearing it only once. This reminds me of Homer's two epics and how the story-teller has to repeat certain themes or memes over time to express the truths. It also makes me wonder whether these two epics were actually longer than they are now.

→ In relation to the above, when we tell a story, we are not telling a version of the original event - it becomes the event as we remember it, shaped by omission, exaggeration, or added information.

→ Another interesting point: when a person tells a story (about anything), that person can be lying through "impression management," trying to enhance the listener's impression of the storyteller. It has nothing to do with telling the truth, and everything to do with creating an impression.

Chapter 9: Men Shoot First

→ This chapter deals with gender differences in certain biases (how men are overconfident about their intelligence, attractiveness, success in wars, and the number of women they have slept with - and how men don't ask for directions).

Chapter 10: We All Think We're Above Average

→ This chapter focuses mainly on overconfidence - a seed of many mistakes.

page 149 said:
"Overconfidence is, we think, a very general feature of human psychology," says Stefano Della Vigna, a professor of economics at University of California, Berkeley. He has studied the ways overconfidence induces us to commit everyday errors, from signing up for gym memberships we won't use, to buying time-shares in a condominium (which we also won't use, at least as much as we think we will), to falling for teaser-rate offers on credit cards (which we will use far too much). And his research has led him to a general conclusion: "Almost everyone is overconfident - except the people who are depressed, and they tend to be realists."

The last bit reminds me of this part in this thread: Anger: 6 Psychological Benefits of Getting Mad

That's the wonder of human emotions: happy isn't always good and angry isn't always bad (although it may feel that way). An unhappy person is also more likely to spot mistakes and an angry person is highly motivated to act. We need reminding that even scary and dangerous emotions have their upsides, as long as they are used for the correct purpose.

Does that mean, in order to be realists, we have to be depressed?

The author also gives an interesting thought on "information overload:"

The more we read (or see or hear, for that matter), the more we think we know. But, as has long been observed, that isn't necessarily so. Often what happens is that we don't grow more informed; we just grow more confident.

Summaries of information, for instance, often work as well as - and sometimes even better than - longer versions of the same material. In a series of experiments, researchers at Carnegie Mellon University compared five-thousand-word chapters from college textbooks with one-thousand-word summaries of those chapters. The textbooks varied in subject: Russian history, African geography, macroeconomics. But the subject made no difference: in all cases, the summaries worked better. When students were given the same amount of time with each - twenty to thirty minutes - they learned more from the summaries than they did from the chapters. This was true whether the students were tested twenty minutes after they read the material or one year later. In either case, those who read the summaries recalled more than those who read the chapters. (So if you relied on CliffsNotes in college, take heart.)

But deep down we don't want to believe this. We seem to have an innate desire to overload ourselves with information - whether it helps us or not. Indeed, "information overload" has become a cliché. Information is constantly piped to us - from the video screen in the back of the cab to the TV in front of the StairMaster to the keyword alerts that pop up on our email. We all crave information, though nowhere, perhaps, is it prized so much as at the racetrack.

I'd have to admit - I did this in college...because it was easy and saved me time. And, after discovering the C's materials, a feeling of "not enough time" seemed to motivate me to "skim" things. No wonder I missed certain things in my re-readings over the years - I was actually skimming without knowing it, focusing only on certain sections that appealed to me at the time. Geez...

Chapter 11: We'd Rather Wing It

→ The author discusses how the track records of "professionals" are not what they are cracked up to be. As he writes: "The psychologists' diagnoses were no better than their secretaries'." And when confronted with evidence to the contrary, these professionals "convinced" themselves that they were right.

Chapter 12: We Don't Constrain Ourselves

→ Constraints as a way to reduce errors are discussed here. They are simply mental aids that keep us on the right track by limiting our alternatives.

And, here's a section on "Looking for Root Causes:"

pages 189-90 said:
As we saw from the overdose of the Quaid children, mistakes attributed to human error often have deeper roots elsewhere. This is one reason why we so often fail to learn from our mistakes: we haven't understood their root causes.

Ferreting out the root cause is not always easy. Like the leak that appears on our living room ceiling, the source of the problem may lie far from the spot where we notice it. In the case of human error, root cause analysis requires a deep understanding of human motivation. As we have seen in previous chapters, we believe we will act in one way, but often act in another - even in ways that would appear to be against our own self-interest. Our judgments may be distorted by overconfidence or by hindsight or by any of the other tendencies we've talked about.

People who are serious about eliminating errors would do well to keep this in mind.

Chapter 13: The Grass Does Look Greener

→ This is where the author talks about how we have been shown to systematically mis-predict how we will feel in the future after experiencing certain life events. This is why gift cards are "such a bad idea for you...but a terrific idea for the companies that issue them" (p. 203).
 