That's an expression I know from my time as a military wife; it's used to explain the need to accept training deaths. I have enormous difficulty with the idea that we really need to accept training deaths in self-driving cars.
Cruise and Waymo have been as cautious as can reasonably be expected. They both have months-long training courses for their safety drivers, and cameras in the vehicle that monitor driver attention. While the vehicles are still challenged by complex scenarios, basic object detection and emergency braking are pretty good. Neither has had an at-fault accident, excluding an ambiguous but non-injurious incident between a Cruise car and a lane-splitting motorcyclist.
Having a highly trained, standing army is part of what keeps us out of war. Without it, we would soon be invaded and taken over. Historically, every time America shrinks its military, it winds up dragged into a war and having to ramp back up rapidly.
There is no similar need for driverless cars. I'm American. I've lived without a car for over a decade.
I don't see why we can't do more testing before putting lives at stake. I don't see why driverless cars killing people as part of their development is a thing we need to accept.
Your assertion in no way clears up for me why we should shrug at the idea of people dying for this thing.
Having highly trained autonomous vehicles will keep us out of car crashes; currently we just sort of lie down and take it, as though we had ten 9/11-scale tragedies every year and just threw our arms up and said "whatever, nuthin' we can do."
And for the record I don't drive either, haven't since I was a young man.
I don't personally see that as justification for accepting training deaths here.
I'm not talking about stopping the development of driverless vehicles. I'm just telling you that your argument seems like a non sequitur to me.
There are training deaths in the military precisely because humans are being trained to do dangerous things. Why can't driverless vehicles be trained without killing people?
If a testing plan were to accelerate the adoption of autonomous cars that are twice as safe as human drivers by one month, it would save the lives of approximately 40,000 / 2 / 12 ≈ 1,667 people. Stopping that testing plan because it is expected to kill 1, 10, or even 100 people would increase the expected number of premature deaths.
There is of course an optimum trade-off between training deaths and future lives saved, accounting for uncertainty and so on, but the expected number of training deaths will never be zero, and considering the large number of future deaths currently expected, the optimum is likely to be much higher than zero.
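To spell out that arithmetic, here is a minimal sketch of the back-of-envelope calculation (the inputs, 40,000 US road deaths per year and a 2x safety factor, are the hypothetical's assumptions, not measured data):

    # Back-of-envelope expected-value sketch; every input is an assumption.
    us_road_deaths_per_year = 40_000   # rough annual US road fatalities
    safety_factor = 2                  # hypothetical: AVs twice as safe as humans
    months_accelerated = 1             # adoption brought forward by one month

    deaths_prevented_per_year = us_road_deaths_per_year * (1 - 1 / safety_factor)
    lives_saved = deaths_prevented_per_year / 12 * months_accelerated
    print(round(lives_saved))  # ~1667 lives per month of earlier adoption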
How sure are you that a more aggressive development program will lead to faster adoption? If you're talking about a hypothetical program where we can adjust the acceptable risk level like a slider in an RTS game, then fine, but in practice it might not work like that.
In the real world, a more aggressive program might be that way not because someone carefully dialed in the optimum risk, but because of the psychology and attitudes of its executives, and those same factors might lead to slipshod engineering, ultimately slowing down progress.
Additionally, bad press from the resultant fatalities could create a political backlash.
I don't know if this is actually the case, but Waymo comes across as one of the more careful and responsible programs, and they seem to have the best engineering and to have made the most progress. We don't need 'move fast and break things' in this field. I'd argue we probably don't need it in some other fields either, but that's a different discussion.
> If a testing plan were to accelerate the adoption of autonomous cars that are twice as safe as human drivers by one month, it would save the lives of approximately 40,000 / 2 / 12 ≈ 1,667 people.
This is being overly generous with the assumption that self-driving cars will be safer than human drivers, to the point of being potentially dangerous.
I say potentially dangerous because this generosity in your hypothetical is being used to justify deaths that need to happen now in order to prevent nebulous deaths in the future, using technology that might not be as safe as your hypothetical assumes.
Is it really any more potentially dangerous than the inverse: being so risk-averse that we'd refuse to probably sacrifice a few to possibly save the many? All we can do is try to optimize our expected value given our current understanding, with a reasonable degree of risk aversion.
With cars, an improvement to just 90% of the current death toll in the US alone (36k vs. 40k) would literally justify running down 10 people a day. The numbers are uncomfortable, sure, but they don't lie, and while this is a simplistic analysis I don't see where it is qualitatively incorrect: cars kill so many people that even a moderate improvement would be a massive decrease in mortality.
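To check that figure, here is the same back-of-envelope math (a sketch using the round numbers above; the 10% improvement is an assumption, not a measurement):

    # Sanity check of the "10 people a day" claim; inputs are assumptions.
    human_deaths_per_year = 40_000  # rough current US road deaths
    av_deaths_per_year = 36_000     # hypothetical: AVs cut deaths by 10%
    saved_per_day = (human_deaths_per_year - av_deaths_per_year) / 365
    print(round(saved_per_day, 1))  # ~11.0, i.e. roughly 10 lives per day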
For a comparable situation, see research into emergency medicine. It is impossible to get consent, people likely have died and will die as a result of trials, and yet the trials are (judged to be) in the common good, despite some very reasonable reservations.
Seat belts kill. They occasionally strangle people.
Yet they save far more than they kill, so we not only use them, but many countries mandate their use by law knowing people will die as a result.
It is morally justified because we're not sacrificing a known subset of people to save another known subset of people that don't overlap - we're sacrificing a small random subset of people to save a larger random subset drawn from the same larger set, and so reducing the chance of harm to all, rather than transferring it.
This distinction is key, and it would rule out most of the "random murder" schemes you might propose.
Which is still under the assumption that there's some orders-of-magnitude-safer-AI-driving-paradise. With seatbelts, we know how many people are saved by them vs. killed by them. With AI driving, we are merely guessing.
It doesn't need to be "some orders-of-magnitude-safer-AI-driving-paradise". If it improves on human driving by 10%, it'd still save thousands of lives a year.
This is still very generous, given that autonomous driving remains orders of magnitude away from even approaching the safety of human drivers.
Coming from a crowd that is probably intimately familiar with the limitations of Google Assistant's ability to understand the English language, I feel like we're being overly optimistic here.
I don't doubt that eventually we will be able to create safe autonomous vehicles, in the same way that eventually we will be able to treat cancers much more effectively than we do today.
However, I find it odd that posters are not applying the same level of optimism to other fields, nor the same level of skepticism to this field that they would apply to something like cancer research. Especially given that a breakthrough in cancer treatment could effectively save tens of millions of lives annually, versus the one million lives that could be saved if we completely eliminated automotive deaths. Which, again, is a moonshot given that autonomous processes in other industries still have an annual death toll.
Unfortunately, we don't know that it will prevent any deaths. Some people hope that it will, but there's no hard data showing that programs with more deaths progress faster. The outcome could very well be that this strategy prevents 0 deaths and causes 10.
The road to hell is paved with good intentions, and the most dangerous path is the one in which the ends justify the means. If you don't achieve the ends... then you have nothing but tragedy and sorrow.
You're right, but just as a correction to your numbers: you're vastly underestimating the number of lives potentially saved, because of the implicit assumptions that humans only exist in the USA and that self-driving technology developed in the USA will only benefit Americans when it comes to safety.
There are more than 1 million road fatalities yearly according to the WHO[1]. Once the technology is developed it'll be rapidly exported; just in the EU there are around 25k deaths/yr[2], another 4k in Japan[3], etc.
So even if you only include countries and regions with a GDP similar to the US (which could purchase self-driving vehicles at a similar rate), you easily get upwards of 100k deaths/yr.
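As a rough tally of just the figures cited here (treating the per-region numbers as given; the comparable-GDP cutoff is an assumption):

    # Annual road deaths in a few high-GDP regions, per the figures cited above.
    deaths_per_year = {"US": 40_000, "EU": 25_000, "Japan": 4_000}
    total = sum(deaths_per_year.values())
    print(total)  # 69,000 before counting other comparable-GDP countries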
As a non-American, I still think that underestimating is the right approach, for the simple reason that it pre-empts a "but why should we pay with [whichever country you pick] lives" argument from a certain subset. Restricting the estimate to the number of lives saved in the country where the testing is done gives an outcome that is much harder to argue against.
It's not unreasonable, given the same opportunity-cost argument, that most people will switch to and prefer autonomous vehicles if they are shown to be inherently safer, more predictable drivers. This is a huge risk to long-haul truck drivers, since those jobs are already long and grueling as it is. Their livelihood is threatened by the fact that robot truck drivers don't sleep. It's not a matter of if they will be replaced but when, since logistics is a huge business and nothing to mess around with.
I have no doubt that truck drivers' jobs are under threat.
But you are justifying this assumption with yet more assumptions, not hard data. For example: "if they are shown to be inherently safer."
I'm literally just rather tired at the moment and also finding this argument wearying. But actual reality has a really long track record of failing to conform to human predictions of that sort.
When antibiotics were discovered, they were predicted to be the end of human disease. Fast forward to today, and the articles we routinely read are about the crisis of antibiotic shortages, antibiotic-resistant infections, and what will we do now?
When the first airplanes came out, they had square windows. So did the first jets, until they began falling from the sky as if some Cthulhoid horror had ripped them to pieces. Then the windows were changed to rounded designs.
Human ability to accurately predict the future is notoriously lousy.
Because it's impossible to guarantee that an algorithm will be 100% effective in preventing deaths. So either you are for driverless vehicles and willing to accept some risk, or you are against driverless vehicles driving on roads. It has to be one or the other.
You can fail to be against something without being for it.
I find it strange that people seem to have a problem with my question but no one but me seems to have a problem with someone simultaneously claiming that we need to treat training deaths as shrug-worthy while assuming that driverless vehicles will clearly eliminate huge numbers of deaths annually once they are out there.
This is the first suggestion I have heard that human drivers actively need to be eliminated instead of driverless vehicles being yet one more option in an increasingly diverse transportation ecosystem.
Although I no longer have a driver's license, the implication that someone might desire to outlaw human drivers someday for supposed safety reasons while simultaneously justifying accepting death by driverless vehicle seems somewhat disturbing.
“You can fail to be against something without being for it.”
Not sure what that means, but testing autonomous vehicles will result in deaths. So you either decide zero deaths are acceptable and don't test, or allow some deaths and test. There is no third option in this case.
Typically, when we test experimental products to gauge their safety and efficacy, we engage in informed consent with the participating parties.
If everyone involved knew the risks, accepted them and moved forward, I'd agree with your premise.
However, if I am walking down the street and I'm run over by a rogue autonomous vehicle, I didn't give consent.
I don't think anyone would be as blasé as they are about autonomous vehicle deaths in a different situation.
For example, if a potential cure for heart disease were tested by dumping it in the public water supply, causing people to die, would we have posters here saying that testing such drugs will result in deaths and shrugging them off?
> If everyone involved knew the risks, accepted them and moved forward, I'd agree with your premise.
Do you consent to the risks of letting 16 year olds drive? They're high, but you don't get the option.
I understand not agreeing to extremely-risky self-driving cars.
But if they can beat the very lax standards we use to license humans, that should be good enough.
Requiring them to be infinitely more perfect than humans is nonsense. If a car drives you over, do you really care if it was a robot or a human driving? I don't. The most I can ask for is a universal bar. And all the evidence I've seen is that Waymo is meeting that bar.
> For example
People would object because that's a stupid way to test and unrelated to the job of delivering safe water. If you want to talk about real water treatments, we do make tradeoffs!
> Do you consent to the risks of letting 16 year olds drive? They're high, but you don't get the option.
There's a very low bar for them to pass to have their driving privileges revoked if they prove themselves to be a danger.
Society necessitates that people drive. Society does not necessitate that Company X gets autonomous vehicles on the roads by target date Y so that their investors are happy.
> But if they can beat the very lax standards we use to license humans, that should be good enough.
"If". We have some very lax standards for what we consider intelligible English, yet Alexa can't set a timer correctly when I tell it to.
> Requiring them to be infinitely more perfect than humans is nonsense.
Who is proposing this?
> If a car drives you over, do you really care if it was a robot or a human driving? I don't. The most I can ask for is a universal bar.
Do you apply this accident causation blindness universally? Do you care if a person that hits you was drunk or lacked a driver's license vs driving diligently and licensed?
> People would object because that's a stupid way to test and unrelated to the job of delivering safe water. If you want to talk about real water treatments, we do make tradeoffs!
Some people might object that allowing unproven autonomous vehicles onto the street is stupid, but choose not to use that word in an effort to have a respectful discussion.
> Typically, when we test experimental products to gauge their safety and efficacy, we engage in informed consent with the participating parties.
The state consented on your behalf. You, in fact, automatically "consented" to all sorts of dangerous and dubious experiments, including democracy itself, when you became a resident. And though the idea that self-driving cars are dangerous and experimental has no basis in reality (by all accounts Waymo's cars are ridiculously safe), even if they were dangerous, Waymo is operating with the full blessing of the Arizona government.
> Society necessitates that people drive. Society does not necessitate that Company X gets autonomous vehicles on the roads by target date Y so that their investors are happy.
Of course society does not "necessitate" anything. Society is not some natural phenomenon like gravity that operates by necessity. And there are many, many people who would point out that they do not agree with and certainly do not consent to America's dangerous obsession with car ownership, which kills 50k Americans a year and has tremendous economic and ecological consequences. But alas.
The social contract is not carte blanche allowance for anything to happen. There's a feedback loop involved, in which the governed can give or revoke consent.
> Society is not some natural phenomenon like gravity that operates by necessity.
However, people are driven by natural phenomena like the conservation of energy, and thus need to eat. For most people in the US, if they want to eat, it is necessary to drive to work.
> And there are many, many people who would point out that they do not agree with and certainly do not consent to America's dangerous obsession
> There's a very low bar for them to pass to have their driving privileges revoked if they prove themselves to be a danger.
Robot privileges can be revoked too.
> Society necessitates that people drive. Society does not necessitate that Company X gets autonomous vehicles on the roads by target date Y so that their investors are happy.
Society necessitates that people use cars to get places. You can 1:1 replace human driving hours with autonomous driving hours.
>> Requiring them to be infinitely more perfect than humans is nonsense.
> Who is proposing this?
Anyone who says that self-driving deaths are 'unacceptable' is requiring self-driving cars to be infinitely more perfect than humans.
> Do you apply this accident causation blindness universally? Do you care if a person that hits you was drunk or lacked a driver's license vs driving diligently and licensed?
Being drunk alters your ability to drive. They would be under the bar.
If someone lacks a license but would have qualified, I guess I don't really care.
> Some people might object that allowing unproven autonomous vehicles onto the street is stupid, but choose not to use that word in an effort to have a respectful discussion.
In a way, we're discussing that right now. We're in a thread filled with posters who do not want to revoke those rights on the off chance that more dead people now will prevent even more people from dying in the future.
> Society necessitates that people use cars to get places. You can 1:1 replace human driving hours with autonomous driving hours.
This is a generous hypothetical. Society certainly necessitates that people drive, as there is no other way.
It is not true to say that we can 1:1 replace human driving with autonomous driving; the article in the OP is evidence of this. It is just as likely that autonomous driving will never reach 1:1 parity with humans.
> Anyone who says that self-driving deaths are 'unacceptable' is requiring self-driving cars to be infinitely more perfect than humans.
If this is your takeaway, I implore you to give this perspective more than a passing thought so that you can reply without turning it into a straw man argument.
> Being drunk alters your ability to drive. They would be under the bar.
I'm not sure what you're trying to say here, can you clarify?
> If someone lacks a license but would have qualified, I guess I don't really care.
Would you care if they qualified, but had their license revoked, perhaps for hitting people with their car before they hit you?
> All drivers are unproven at first.
Thankfully, we train and test these drivers on closed courses where injury to uninvolved people is minimized before we allow them to go on the open road. We closely supervise them and restrict why, when, how, and what they can drive.
> In a way, we're discussing that right now. We're in a thread filled with posters who do not want to revoke those rights on the off chance that more dead people now will prevent even more people from dying in the future.
Some people are willing to trade more deaths now for fewer deaths later. But don't take that as proof that Waymo's cars actually will cause more deaths. They've been pretty safe so far.
I'm not arguing that more deaths are acceptable, I'm arguing that some deaths are acceptable if we're going to be consistent with current road policies.
> It is not true to say that we can 1:1 replace human driving with autonomous driving
You misunderstood the 1:1. I mean that you can take particular driving hours and replace them 1:1. That's what the article is about, even. I'm not claiming it will replace all human driving.
> If this is your takeaway, I implore you to give this perspective more than a passing thought so that you can reply without turning it into a straw man argument.
It seems pretty simple to me. "Are you willing to allow self-driving cars that will kill people, if the number of deaths per mile is under some threshold?" What am I missing? I don't want to strawman people, I just want a realistic assessment of risk.
> Would you care if they qualified, but had their license revoked, perhaps for hitting people with their car before they hit you?
Yes, because it means they went under the bar...
> Thankfully, we train and test these drivers on closed courses where injury to uninvolved people is minimized before we allow them to go on the open road.
Your experience is very different from mine. I trained entirely in public areas. I don't even know where I could find a closed course.
> Anyone who says that self-driving deaths are 'unacceptable' is requiring self-driving cars to be infinitely more perfect than humans.
That's a distortion of what I said. Furthermore, it's pretty laughable to have my internet comment treated like some kind of legally enforceable policy.
Last I checked, I'm not Queen of the world whose word is law.
Why should it be disturbing?
For me it would be far more disturbing to have a safety-proven, affordable self-driving car that doesn't drive carelessly and cause accidents, and to still allow humans to drive, at a cost of tens of thousands of lives per year in the US alone.
We discuss privacy issues and the like daily on HN. If software is driving your car, does someone have access to the data on where you go? Can your car be shut down or driven to the nearest police station by a third party? If you are Black, gay or any number of other things, are you cool with giving up such control in an openly hostile social climate? How much cost does it add to the car? If a software update is buggy and you not only can't drive, but it is illegal for a human to drive, does your wife give birth at home while we wait for Google to fix the bug and update the software because you are neither allowed to drive her to the hospital nor is there any such thing as human ambulance drivers anymore?
If it can save 40k lives per year it’s in any case a no-brainer.
Ask all the millions of people that have a relative killed by a car if they would care at all.
Edit: also, it is pretty curious that you are against testing self-driving cars because they might kill someone during the testing, while you are perfectly fine with 40k people killed per year and are concerned about people's privacy instead.
I'll tell you a secret: a dead person couldn't care less about his privacy.
You know, someone made a real cavalier sounding remark about how you need to break a few eggs to make an omelette. I replied to that with saying, basically, I understand that attitude for making peace with training deaths in the military but I don't think it's justified for driverless cars. I tried to make it clear later that part of the difference in my mind is that people die in military training because people are being trained to do dangerous things. But if you are training a driverless vehicle, there's really no reason that absolutely has to involve endangering anyone's life.
And, wow, has that gotten tons of pushback, while people go to great lengths to frame me as some extremist lunatic. Meanwhile, the person cavalierly brushing off training deaths is making rather extreme comments about how driverless vehicles can completely replace all human drivers, etc., and most people are not arguing with that. No, I am the one being argued with.
It's starting to look to me like people are basically looking for some silly reason to argue with me in specific. Because I really did not assert a lot of the stuff being hung on me here.
Again, yes, if we can save 40k lives. That's a very big if. It assumes a 100% reduction in mortality, which implies that you expect driverless vehicles not merely to be better than human drivers, but to be perfect and to have flawless performance.
And it's that sort of ridiculous unstated assumption that has me rolling my eyes and going "Wow, people on HN sure are just looking for crazy reasons to argue with me." Because I don't think that's a remotely defensible position.
Yes, because you keep making this statement that it's not necessary to endanger lives when testing driverless cars. That statement is false: endangering some lives is a necessary condition of testing driverless cars. Now, maybe we shouldn't test them, and that's fine, but you are trying to have it both ways.
> If it can save 40k lives per year it’s in any case a no-brainer.
This is a pretty big assumption without any evidence to support it.
There are many solutions that can potentially save even more lives, such as treatments and cures for heart disease and cancer.
However, I do not see anyone arguing that we test these potentially life-saving miracles on random people who happen to be walking down the street, as we are with autonomous vehicles.
Certainly, if we relaxed standards on testing cancer and heart disease treatments, we'd rapidly accelerate the development of life-saving cures. The more people we test them on, the better data we'll have to build better models, much like with autonomous driving.
If it can save 500k lives per year from cancer and heart disease, would revoking the need for informed consent to test these potential cures be a no-brainer?
> If it can save 40k lives per year it’s in any case a no-brainer.
Just saying that doesn't absolve you from making an actual argument.
> Ask all the millions of people that have a relative killed by a car if they would care at all.
How motherfucking dare you! My father did die in a car crash when I was a kid. But I also live in a country where totalitarianism actually happened. You should wash your mouth, and then you should sit down and make the argument.
Because to reply to all of it, including
> "If you are Black, gay or any number of other things, are you cool with giving up such control in an openly hostile social climate?"
with
> "If it can save 40k lives per year it’s in any case a no-brainer."
is absolutely not good enough. Would you be okay with that being quoted "out of context" like that (it wouldn't really be out of context; it's the degree of seriousness you decided to muster) on billboards with your real name attached to it?
I'm not defending Tesla or Uber, they're the ones that have killed people, and I think they both were (and in Tesla's case still are) behaving irresponsibly.
Cruise and Waymo have both done tons of closed-course testing; that's where they validate their respective systems against mission-critical stuff like knowing when to slam on the brakes. But eventually they've got to go out into the real world and learn to deal with real traffic on real roads. I wish I could tell you the risk was zero, but it isn't and never will be.
> Cruise and Waymo have both done tons of closed-course testing; that's where they validate their respective systems against mission-critical stuff. But eventually they've got to go out into the real world and learn to deal with real traffic on real roads.
Waymo has been doing that longer than anyone, though.
Technically Waymo has been acting like cowboys longer than anyone. They smartened up and got serious about safety around 2015, but before that they were winging it. It was just a goofy science experiment back then.
The Google self-driving car project was under the leadership of Sebastian Thrun and Anthony Levandowski, and as hardcore engineering types their thinking was "We'll save more people than we kill," and it was as simple as that.
Well, to my ear that sounds like "People will still die in car wrecks and I don't even have data substantiating my assumption that fewer people will die than is true currently, but I somehow feel that if robotic vehicles are killing people, that's automatically better than if a human is behind the wheel."
We wouldn't be doing our due diligence if we didn't try. Otherwise it's pretty much a guarantee that 30-40,000 people will die on America's roads next year and every year after that.
It's an almost nonsensical question, that's why you're having trouble getting people to answer it.
People are saying "a small number of people might die and no one wants that, but it's impractical-to-impossible to guarantee that zero people will die." And you are asking, "but why? But why?"
Now and then there are washing machine deaths. Society accepts these because they're so rare and because it would be impractical to completely prevent them.
Compared to the number of road deaths most experts believe will be prevented over time, the few training deaths we may encounter seem completely inconsequential.
Acting like my question is dumb because the first however many replies to my comment made no real effort to actually answer it isn't a good faith argument and veers rather close to a personal attack.
> Now and then there are washing machine deaths. Society accepts these because they're so rare and because it would be impractical to completely prevent them.
Are you sure about that? Society just accepts them and moves on?
There aren't lawsuits? There aren't recalls? There aren't redesigns? There aren't safety measures taken so deaths don't happen again? There aren't investigations? Fines aren't levied if they violated regulations? Regulations aren't passed in response? Everyone just rolls over and says, "This is just the price of washing clothes" like we are with autonomous cars?
That person you're talking about, who says autonomous cars are just going to cause deaths and that's okay and we shouldn't try to investigate or improve safety measures or check whether regulations were violated? That person doesn't exist.
Accepting that accidents will always happen does not imply you learn nothing and improve nothing.
I agree; beyond a select few individuals who have not posted on HN at all, I do not see or believe that anyone is calling for unnecessary deaths that could be avoided.
I do believe that we're letting optimism and good intentions get the better of us, by allowing our interest in the betterment of humanity to align with the interests of business, which would like to see autonomous cars on the road unencumbered, unregulated, and unquestioned as soon as humanly possible.
Hence I am advocating a level of healthy skepticism. I am imploring posters who are taking it on faith that autonomous vehicles will solve the problem of automotive deaths, if only we suspend our disbelief, to apply the same level of skepticism to this field as they would to, say, biotech.
There's no way to guarantee 100% safety, so proposing it as the standard is as good as killing off the program.
What's that classic example: the robocar has to choose between (potentially) killing a bus full of school kids or a bunch of adults standing around on the sidewalk.
> why we should shrug at the idea of people dying for this thing
I was going to say "nobody's shrugging", but then I remembered the killing of Elaine Herzberg by the Uber self-driving car. Maybe the wheels of justice are turning slowly, but right now it looks awfully similar to the Government just shrugging at that incident.
MADD was effective at getting drunk driving legislation passed in the states, and likes to claim responsibility for decreasing automotive deaths by one half.
There are people out there who aren't shrugging their shoulders and are actively doing something about people dying in cars.
30,000+ people die in cars every year in the United States. That's roughly the same number killed by guns, but how many news stories do you see about car deaths?
Someone left this comment and apparently deleted it while I was looking for and failing to find citations for actual numbers of lives saved by ambulance, for example. I have redacted their handle, but I want to leave my reply here:
Because people's lives are at stake right now.
Every year tens of thousands of people die due to cars.
Delaying innovation that would prevent deaths dooms those tens of thousands of people to death in the future.
You say that as if, clearly, cars never save lives. It's all downside.
Ambulances save lives.
Fire trucks save lives.
Those are the easy, obvious answers. But I would argue that lives are also saved and enhanced by access to jobs, access to better quality food because of our complicated infrastructure, access to better medical care, etc.
You aren't counting when things go right. You are only counting when they go wrong.
There are fatal accidents caused by ambulances and fire trucks; do you want to outlaw them because you cannot guarantee that there won't be a fatal accident during their use?
Well, we accept deaths in cars... right now it's hard to tell the numbers, but based on what Tesla has said, it seems like self-driving car stats are similar to those of normal cars.
Ignoring self-driving cars is essentially accepting deaths in non-self-driving cars just as much...
I sure hope you never argued that we should stop people from driving altogether...
Every large infrastructure project, such as a bridge, has an expected number of deaths. We know this and accept it as a sacrifice we can live with. Self-driving cars should be seen no differently. And once they are safer than human drivers, they will pay back those lives.
Why do we have to accept preventable deaths of any kind?
We can start banning all sorts of things to reduce car deaths.
Lower the highest speed limit to 35mph.
Ban having a cell phone in your car, because access to it is distracting.
Ban listening to music in your car because that is distracting.
Why focus on slowing progress on the best thing for road safety (self-driving cars) while letting humans risk their lives all day and night due to their own negligence?