
Waymo spends a lot on human oversight: remote operators make the "common sense" decisions that don't require an immediate response and that the AI can't yet handle on its own. Humans actually do a fair amount of the navigation, suggesting paths the car can drive along. One example I saw was a fire truck parked at an odd angle, poking out into the street; the software didn't know what it was or what to do, so the operator drew a path around the truck for the car to follow. This only works for a taxi fleet: Tesla couldn't do it across millions of privately owned cars, because you simply can't hire enough human operators.
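Roughly, the handoff looks like the sketch below: the operator only advises, and the car stays responsible for checking and driving the path. This is a hypothetical illustration; the names, waypoint format, and checks are my assumptions, not Waymo's actual systems.

    # Hypothetical sketch of operator-assisted path suggestion.
    # None of these names reflect Waymo's real API; the point is that the
    # operator supplies waypoints and the vehicle validates before driving.
    from dataclasses import dataclass

    @dataclass
    class Waypoint:
        x: float  # meters ahead, local frame (assumed convention)
        y: float  # meters left of lane center

    def operator_suggest_path(obstacle: str) -> list[Waypoint]:
        # In practice a human draws this around the blockage (e.g. a fire
        # truck poking into the lane); here it's hard-coded for illustration.
        return [Waypoint(0, 0), Waypoint(2, 1.5), Waypoint(6, 1.5), Waypoint(8, 0)]

    def vehicle_accepts(path: list[Waypoint]) -> bool:
        # The car still runs its own checks (collision-free, drivable surface,
        # etc.); the operator never remote-controls it in real time.
        return all(abs(w.y) < 3.0 for w in path)  # toy check: stay on the road

    path = operator_suggest_path("fire truck parked at an odd angle")
    if vehicle_accepts(path):
        print("Following operator-suggested path:", path)
    else:
        print("Rejecting suggestion, remaining stopped")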

But I suspect this means Waymo's software can afford to be more risk averse. If a Tesla stops in the middle of the road, the customer has to take action, which is frustrating and makes the technology look bad, so there is a strong incentive to remove that frustration even at the cost of safety. If a Waymo stops, a remote operator takes action and the customer can keep staring at their phone without being particularly affected - it just seems like the Waymo is "thinking."



Honestly, this is what a self-driving car should be: no interaction from the passenger required. Maybe eventually we'll be able to replace the human operator, but until then, a risk-averse AI with somebody remotely resolving any unexpected situation is a decent compromise.



