
I wonder if you can decrease the impact of (2) with a policy of phased rollout for updates, i.e. you never update the whole fleet simultaneously; you update a small percentage first and confirm no significant anomalies are observed before distributing the update more widely.


Ideally you'd selectively enable the updated policy on unoccupied trips on the way to pick someone up, or returning after a drop-off, such that errors (and resultant crashes) can be caught when the car is not occupied.
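The two ideas above (stage the rollout, and gate early stages to unoccupied legs) could be sketched roughly like this. All names and numbers here are made up for illustration, not from any real fleet-management system:

```python
# Hypothetical phased-rollout gate. Assumptions: each vehicle has a stable
# random rank in [0, 1); anomaly rates are measured per stage against a
# pre-update baseline; stage fractions are arbitrary example values.

ROLLOUT_STAGES = [0.01, 0.05, 0.25, 1.0]  # fraction of fleet per stage

def next_stage(current: float, anomaly_rate: float, baseline: float,
               tolerance: float = 1.5) -> float:
    """Widen the rollout only if anomalies stay near baseline; else halt."""
    if anomaly_rate > tolerance * baseline:
        return 0.0  # significant anomaly observed: roll the update back
    idx = ROLLOUT_STAGES.index(current)
    return ROLLOUT_STAGES[min(idx + 1, len(ROLLOUT_STAGES) - 1)]

def runs_new_policy(vehicle_rank: float, stage: float, occupied: bool) -> bool:
    """Early (small) stages run the new policy only on unoccupied legs."""
    if vehicle_rank >= stage:
        return False  # vehicle not yet in the rollout cohort
    return stage >= 0.25 or not occupied
```

So a car picked for the 1% canary cohort would run the new policy only while deadheading to a pickup or returning from a drop-off, and the cohort widens one stage at a time as long as anomaly rates stay within tolerance.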


Presumably rollout management would also be highly regional. Functionality in San Francisco doesn't imply anything about functionality in Oakland, etc.


One measure of robustness could be something like: the ability to resist correlation of failure states under environmental or internal shift. One measure of danger: the integral, over relevant time horizons, of injury to things we care about. And then "safety" is a combination of the two: the system resists correlating its failure states in order to keep the expected value of injury low.
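The point about correlated failure states can be made concrete with a toy model. Numbers here are illustrative assumptions, not real fleet statistics:

```python
# Toy model: N identical cars, each with per-trip failure probability p.
N = 1_000
p = 1e-4

# The expected number of failures per wave of trips is N*p in BOTH regimes:
expected = N * p  # 0.1 either way

# Independent failures: all N cars failing at once has probability p**N,
# which underflows to zero in double precision -- effectively impossible.
p_mass_event_independent = p ** N

# Perfectly correlated failures (one shared bug, e.g. a bad fleet-wide
# update, triggers every car at once): the whole fleet fails together
# with probability p.
p_mass_event_correlated = p
```

Same expected injury, wildly different tail risk: the danger of machines-following-rules is not the average case but the mass-casualty tail that correlation creates.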

The problem with machines-following-rules is that they're trivially susceptible to violations of this kind of safety. No doubt there are mitigations and strategies for minimising risk, but it's not avoidable.

The danger in our risk assessment of machine systems is that we test them under non-adversarial conditions and observe safety, when under adversarial conditions they can quickly cause more injury than they have ever prevented.

This is why we worry, of course, about "fluoride in the water" (vaccines, etc.) and other such population-wide systems: the situation is the same. A mass public health programme has the same risk profile.



