IN MEDICINE, FALSE positives are expensive, scary, and even painful. Yes, the doctor eventually tells you that the follow-up biopsy after that blip on the mammogram puts you in the clear. But the intervening weeks are excruciating. A false negative is no better: “Go home, you’re fine, those headaches are nothing to worry about.”

Anyone who builds detection systems—medical tests, security screening equipment, or the software that makes self-driving cars perceive and evaluate their surroundings—is aware of (and afraid of) both types of scenarios. The problem with avoiding both false positives and negatives, though, is that the more you do to get away from one, the closer you get to the other.

Now, fresh details from Uber’s fatal self-driving car crash in March underscore not just the difficulty of this problem, but its centrality.

According to a preliminary report released by the National Transportation Safety Board last week, Uber’s system detected pedestrian Elaine Herzberg six seconds before striking and killing her. It identified her as an unknown object, then a vehicle, then finally a bicycle. (She was pushing a bike, so close enough.) About a second before the crash, the system determined it needed to slam on the brakes. But Uber hadn’t set up its system to act on that decision, the NTSB explained in the report. The engineers prevented their car from making that call on its own “to reduce the potential for erratic vehicle behavior.” (The company relied on the car’s human operator to avoid crashes, which is a whole separate problem.)

Uber’s engineers decided not to let the car auto-brake because they were worried the system would overreact to things that were unimportant, or not there at all. They were, in other words, very worried about false positives.

Self-driving car sensors have been known to misinterpret steam, car exhaust, or scraps of cardboard as obstacles akin to concrete medians. They have mistaken a person standing idle on the sidewalk for one preparing to leap into the road. Getting such things wrong does more than burn through brake pads and make passengers queasy.

“False positives are really dangerous,” says Ed Olson, the founder of the self-driving shuttle company May Mobility. “A car that’s slamming on the brakes unexpectedly is likely to get into wrecks.”

But developers can also do too much to avoid false positives, inadvertently teaching their software to filter out vital data. Take Tesla’s Autopilot, which keeps the car in its lane and away from other vehicles. To avoid braking every time its radar sensors spot a highway sign or discarded hubcap (the false positive), the semi-autonomous system filters out anything that’s not moving. That’s why it can’t see stopped firetrucks—two of which have been hit by Teslas driving at highway speed in the last few months. That’s your false negative.
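To see why that filtering strategy backfires, here's a minimal sketch, not Tesla's actual code, with hypothetical data, of a radar pipeline that keeps only moving returns. It suppresses the highway sign and the hubcap, exactly as intended, but drops the stopped firetruck along with them:

```python
def relevant_obstacles(radar_returns, min_speed_mps=0.5):
    """Keep only radar returns that appear to be moving.

    Each return is a dict with a 'radial_speed' in meters per second.
    Filtering near-zero speeds suppresses signs and debris (false
    positives) -- but also drops stopped vehicles (false negatives).
    """
    return [r for r in radar_returns if abs(r["radial_speed"]) >= min_speed_mps]

returns = [
    {"label": "overhead sign",     "radial_speed": 0.0},
    {"label": "discarded hubcap",  "radial_speed": 0.0},
    {"label": "stopped firetruck", "radial_speed": 0.0},  # filtered out too
    {"label": "slowing sedan",     "radial_speed": 4.2},
]

kept = relevant_obstacles(returns)
print([r["label"] for r in kept])  # → ['slowing sedan']
```

One crude knob, the speed cutoff, decides the fate of every stationary object at once; nothing in this filter can tell a firetruck from a hubcap.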

True or False

Striking the balance between ignoring what doesn’t matter and recognizing what does is all about adjusting the “knobs” on the algorithms that make self-driving software go. You adjust how your system classifies and reacts to what it sees, testing and retesting the results against collected data.
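That testing loop can be boiled down to a toy example. The sketch below (hypothetical scores and labels, not any company's pipeline) sweeps one such knob, a detection-confidence threshold, over a handful of labeled samples and counts how the two kinds of errors trade off:

```python
def evaluate(threshold, samples):
    """Count errors at a given threshold.

    samples: list of (confidence_score, is_real_obstacle) pairs.
    Returns (false_positives, false_negatives).
    """
    fp = sum(1 for score, real in samples if score >= threshold and not real)
    fn = sum(1 for score, real in samples if score < threshold and real)
    return fp, fn

# Toy "collected data": detector confidence vs. ground truth.
data = [
    (0.95, True), (0.80, True), (0.60, True),    # real obstacles
    (0.70, False), (0.40, False), (0.10, False)  # steam, exhaust, cardboard
]

for threshold in (0.3, 0.5, 0.75):
    fp, fn = evaluate(threshold, data)
    print(f"threshold={threshold}: {fp} false positives, {fn} false negatives")
```

Lower the threshold and the car brakes for steam; raise it and it glides past a real obstacle. No setting makes both error counts zero at once, which is why the knob-turning never really ends.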

Like any engineering problem, it’s about tradeoffs. “You’re forced to make compromises,” says Olson. For many self-driving developers, the answer has been to make the car a touch too cautious, more grandma puttering along in her Cadillac than a 16-year-old showing off the Camaro he got for his birthday.