Read an extract from Mindware, Richard Nisbett’s practical guide to the most powerful tools of reasoning ever developed.

Even when we’re highly knowledgeable about some domain, and highly knowledgeable about statistics, we’re likely to forget the concept of variability and the relevance of the law of large numbers. The Department of Psychology at the University of Michigan interviews its top applicants for graduate study before it makes a final decision as to whether to admit a student. My colleagues tend to put substantial weight on their twenty- to thirty-minute interviews with each candidate. “I don’t think she’s a good bet. She didn’t seem to be very engaged with the issues we were discussing.” “He looks like a pretty sure thing to me. He told me about his excellent honors thesis and made it clear he really understands how to do research.”

The problem here is that judgments about a person based on small samples of behavior are being allowed to weigh significantly against the balance of a much larger amount of evidence, including college grade point average (GPA), which summarizes behavior over four years in thirty or more academic courses; Graduate Record Exam (GRE) scores, which are in part a reflection of how much a person has learned over twelve years of schooling and in part a reflection of general intellectual ability; and letters of recommendation, which typically are based on many hours of contact with the student. In fact, undergraduate GPA has been shown to predict performance in graduate school to a significant degree (a correlation of about .3, which is rather modest), and GRE scores predict to about the same degree. And the two are somewhat independent of each other, so using them both improves prediction above the level of each separately. Using letters of recommendation adds yet a bit more to the accuracy of predictions.
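The arithmetic behind "using them both improves prediction" can be sketched with the standard two-predictor multiple-correlation formula. The .3 correlations with graduate performance follow the figures above; the .3 correlation *between* the two predictors is an illustrative assumption standing in for "somewhat independent," not a figure from the text:

```python
import math

def multiple_R(r1, r2, r12):
    """Multiple correlation R when predicting y from two predictors x1, x2,
    given the pairwise correlations r1 = r(x1, y), r2 = r(x2, y), and
    r12 = r(x1, x2). Standard two-predictor regression formula."""
    r_sq = (r1**2 + r2**2 - 2 * r1 * r2 * r12) / (1 - r12**2)
    return math.sqrt(r_sq)

# Each predictor alone correlates .3 with the outcome.
# If they correlate only .3 with each other, combining them helps:
print(round(multiple_R(0.3, 0.3, 0.3), 3))  # ~0.372, better than either alone

# If they were nearly redundant (r12 = .9), the gain almost vanishes:
print(round(multiple_R(0.3, 0.3, 0.9), 3))  # ~0.308
```

The point of the sketch: the less redundant two predictors are, the more a second one adds, which is why GPA plus GRE beats either alone.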

But predictions based on the half-hour interview have been shown to correlate less than .10 with performance ratings of undergraduate and graduate students, as well as with performance ratings for army officers, businesspeople, medical students, Peace Corps volunteers, and every other category of people that has ever been examined. That's a pretty pathetic degree of prediction—not much better than a coin toss. It wouldn't be so bad if people gave the interview only as much weight as it deserves, which is little more than that of a tiebreaker, but people characteristically undermine the accuracy of their predictions by overweighting the value of the interview relative to the value of other, more substantial information.
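To see concretely why a correlation under .10 is "not much better than a coin toss," here is a small simulation sketch (the setup and function name are illustrative, not from the book): for pairs of candidates, it estimates how often the candidate with the higher predictor score turns out to be the better performer.

```python
import math
import random

def pick_rate(r, n_pairs=200_000, seed=0):
    """Estimate how often a predictor that correlates r with true
    performance picks the better of two candidates.
    Both quantities are modeled as standard normal variables."""
    rng = random.Random(seed)
    k = math.sqrt(1 - r * r)  # noise weight so the predictor has variance 1
    hits = 0
    for _ in range(n_pairs):
        # true performance of two candidates
        a_true, b_true = rng.gauss(0, 1), rng.gauss(0, 1)
        # noisy predictor scores correlated r with true performance
        a_pred = r * a_true + k * rng.gauss(0, 1)
        b_pred = r * b_true + k * rng.gauss(0, 1)
        if (a_pred > b_pred) == (a_true > b_true):
            hits += 1
    return hits / n_pairs

print(round(pick_rate(0.1), 3))  # ~0.53: the interview, barely above chance
print(round(pick_rate(0.3), 3))  # ~0.60: a GPA/GRE-level correlation
```

So an interviewer relying on an r < .10 signal picks the better of two candidates only a few percentage points more often than a coin flip would.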

In fact, people overweight the value of the interview so much that they’re likely to get things backward. They think an interview is a better guide to academic performance in college than is high school GPA, and they think an interview is a better guide to quality of performance in the Peace Corps than letters of recommendation based on many hours of observation of the candidate.

To bring home the lesson of the interview data: Given a case where there is significant, presumably valuable, information about candidates for school or a job that can be obtained by looking at the folder, you are better off not interviewing candidates. If you could weight the interview as little as it deserves, that wouldn’t be true, but it’s almost impossible not to overweight it because we tend to be so unjustifiably confident that our observations give us very good information about a person’s abilities and traits.

It’s as if we regard the impression we have of someone we’ve interviewed as resulting from an examination of a hologram of the person—a little smaller and fuzzier to be sure, but nevertheless a representation of the whole person. We ought to be thinking about the interview as a very small, fragmentary, and quite possibly biased sample of all the information that exists about the person. Think of the blind men and the elephant, and try to force yourself to believe you’re one of those blind men.

Note that the interview illusion and the fundamental attribution error (making overly confident ascriptions of traits and abilities because we have ignored contextual influences on behavior) are cut from the same cloth, and both are amplified by our failure to pay sufficient attention to the quantity of evidence that we have about a person. A better comprehension of the fundamental attribution error would lead us to be dubious about how much we can learn from an interview. A firmer grasp on the law of large numbers makes us less vulnerable to both errors.

I wish I could say that my knowledge of the utility of interviews always makes me skeptical about the validity of my own conclusions based on interviews. My understanding of that principle has a limited dampening effect, however. The illusion that I have valuable and trustworthy knowledge is too powerful. I just have to remind myself not to weight the interview—or any brief exposure to a person—very heavily. This is especially important when I have presumably solid information about the person based on other people's opinions formed after long acquaintance with the candidate, as well as records of school or job achievements.

I have no difficulty, however, in remembering your limitations based on a brief interview!

Extract taken from Mindware, Richard Nisbett (Allen Lane).

