Failing to eat healthy food may be a poor choice; eating junk food is worse. You might feel a small stab of regret over not raising your hand in class to give the correct answer, but raise your hand and provide the wrong answer and you feel much worse.
Psychologists have found that people view inaction as less causal, less blameworthy, and less harmful than action even when the outcomes are the same or worse. Doctors subscribe to this philosophy. The first principle imparted to all medical students is “Do no harm.” It’s not, pointedly, “Do some good.” Our legal system draws a similar distinction, seldom assigning an affirmative duty to rescue. Submerge someone in water and you’re in trouble. Stand idly by while someone flails in the pool before drowning and—unless you’re the lifeguard or a doctor—you won’t be charged with failing to rescue that person.
In business, we see the same omission bias. When is a stockbroker in bigger trouble? When she neglects to buy a winning stock and, say, misses getting in on the Google IPO? Or when she invests in a dog, buying shares of Lehman Brothers with your retirement nest egg? Ask hedge fund managers and, at least in private, they’ll confess that losing a client’s money on a wrong pick gets them fired far more easily than missing out on the year’s big winner. And they act accordingly.
In most large companies, managers are obsessed with avoiding actual errors rather than with missing opportunities. Errors of commission are often attributed to an individual, and responsibility is assigned. People rarely are held accountable for failing to act, though those errors can be just as costly. As Jeff Bezos, the founder of Amazon, put it during a 2009 management conference: “People overfocus on errors of commission. Companies overemphasize how expensive failure’s going to be. Failure’s not that expensive.… The big cost that most companies incur is much harder to notice, and those are errors of omission.”
This same thinking extends to sports officials. When referees are trained and evaluated in the NBA, they are told that there are four basic kinds of calls: correct calls, incorrect calls, correct noncalls, and incorrect noncalls. The goal, of course, is to be correct on every call and noncall. But if you make a call, you’d better be right. “It’s late in the game and, let’s say, there’s goaltending and you miss it. That’s an incorrect noncall and that’s bad,” says Gary Benson, an NBA ref for 17 years. “But let’s say it’s late in the game and you call goaltending on a play and the replay shows it was an incorrect call. That’s when you’re in a really deep mess.” *
Especially during crucial intervals, officials often take pains not to insinuate themselves into the game. In the NBA, there’s an unwritten directive: “When the game steps up, you step down.” “As much as possible, you gotta let the players determine who wins and loses,” says Ted Bernhardt, another longtime NBA ref. “It’s one of the first things you learn on the job. The fans didn’t come to see you. They came to see the athletes.”
It’s a noble objective, but it expresses an unmistakable bias, and one could argue that it is worse than the normal, random mistakes officials make during a game. Random referee errors, though annoying, can’t be predicted and tend to balance out over time, not favoring one team over the other. With random errors, the system can’t be gamed. A systematic bias is different, conferring a clear advantage (or disadvantage) on one type of player or team over another and enabling us—to say nothing of savvy teams, players, coaches, executives, and, yes, gamblers—to predict who will benefit from the officiating in which circumstances. As fans, sure, we want games to be officiated accurately, but what we should really want is for games to be officiated without bias. Yet that’s not the case.
Start with baseball. In 2007, Major League Baseball’s