Robot drivers are supposed to be safer than humans. We would expect no less from machines equipped with sensors that afford expansive, acute, and unwavering perception of their surroundings, processors that meticulously analyze the possible paths ahead, and actuators that quickly and precisely execute the planned maneuvers. Sure, a robot might crash on occasion: say, in a situation where the sensors are confused by snow and the processors are taxed by a mix of vehicles, cyclists, and pedestrians moving unpredictably, when suddenly a dog darts out onto the slippery street just as an unfamiliar, hard-to-identify object falls onto the pavement, finally overwhelming the robot's capabilities. But we would hope that such all-but-unavoidable crashes would be few and far between.

The fatal crash of a self-driving test vehicle on the night of March 18 did not fit that description. The sky in Tempe, Arizona was clear, the road was wide and free of traffic, and streetlights illuminated the road; yet somehow the robot driver did not manage to avoid a woman walking with her bicycle across Mill Avenue. The vehicle, from the test fleet of ride-hailing company Uber, made no attempt to brake, according to police. The human backup "safety driver" failed to correct the vehicle's trajectory, and the car struck Elaine Herzberg at a speed of 38 mph. She died of her injuries later in a hospital.

Speaking to the San Francisco Chronicle, Tempe Chief of Police Sylvia Moir emphasized how Ms. Herzberg "came from the shadows right into the roadway"; this narrative was bolstered three days after the crash, when the police released a low-quality dashcam video that gave a misleading impression of oppressively dark conditions. But even had the road been obscured in total darkness, the vehicle was outfitted with lidar, which can "see in the dark" because the sensor emits its own infrared light. By all appearances — though, to be sure, the National Transportation Safety Board is still investigating — Ms. Herzberg would still be alive today if the automated vehicle had been a merely competent driver.

The Uber crash, then, was a miserable failure of technology. But paradoxically, it can also serve as a reminder that safe robot drivers are within closer reach than they may appear — as long as developers of the technology choose safety as the overriding priority. Admittedly, it may be a long road of technological advancement before robots have prodigious driving skills, but that is not the only path to safe robot drivers. There is another route: perhaps less spectacular, but more direct.