
This is another chapter in the battle of ethical mindsets I see playing out over and over again in America: consequentialism vs. deontology. From everything I've heard, Tesla self-driving is safer per mile than human control (the consequentialist argument for self-driving). However, there have been a few high-profile accidents recently with emergency vehicles (the deontological argument against self-driving). You ultimately need to pick an ethical mindset to judge the viability of Tesla self-driving. I think American culture has recently trended toward deontological ethics, especially in places like San Francisco. I'm personally more toward consequentialism and am therefore pro Tesla self-driving, if per mile it is statistically safer than a human driver.



> Tesla self-driving is safer per mile than human control

You are mixing up Autopilot on the highway and FSD. And even then, the numbers compare very easy stretches of highway driving, where people feel comfortable activating Autopilot, against all driving miles.

Watch the YouTube videos of the FSD beta. In most of them it does something very wrong every couple of miles and is saved by the hypervigilant driver. There is no way FSD is safer than human drivers right now. I could believe what you say is true about Waymo, but they are being much more responsible, and it seems like SF doesn't have a problem with them.


> From everything I've heard, Tesla self-driving is safer per mile than human control (the consequentialist argument for self-driving).

I think this might not be true! I looked at the numbers a few years ago, and Uber had a much higher fatality rate per mile driven than human US drivers. Waymo was better; I forget if it was better than human or not. I couldn't find comparable Tesla numbers --- only PR-style numbers that could be based on all sorts of weird definitions --- but I don't get the impression that they're at the head of the pack.

EDIT: The number to beat is around 15 deaths per billion miles driven on US roads.

https://en.wikipedia.org/wiki/Transportation_safety_in_the_U...


Tesla reports aggregated accident statistics publicly (for its Autopilot product, not this current round of FSD Beta testing obviously): https://www.tesla.com/VehicleSafetyReport

And indeed, they're better than human drivers. People will nitpick a little around the edges (e.g. cherry picking drivers of cars of the same age or in the same price range, pointing out the lack of specific numbers in various categories, arguing about the definition of "on autopilot"...), but the general truth is that yes: the "better than human" bar has already been cleared. We are not seeing the rate of accidents from this system that we would see if it was unsafe. Period.


> People will nitpick a little around the edges

Sorry what? That is a comparison of highway miles with Autopilot (driver felt comfortable activating autopilot in that stretch) vs all miles in any situation. That's not a nitpick, it's a completely apples and oranges comparison.

And that's Autopilot. If you watch a video of the FSD beta, you can't honestly say it's better than any sober human who has driven for longer than a week.


FSD beta has had no known accidents at all, though. The argument above was presuming that measurement and statistics can show whether a system is "safe" or not. If you refuse to accept that as an axiom I don't know how to reply.

The question in the linked article we're discussing isn't whether Tesla FSD is "finished" as a product, or perfect, or whatever. It's simply whether or not we should allow it to be tested in a wide beta on public roads.

And where's the evidence that we shouldn't?


> FSD beta has had no known accidents at all, though.

Do you have a source on Tesla FSD? I don't know much about it. The basic info we'd need is number of fatalities, number of miles driven, and to check that these numbers mean roughly the right thing.


You want a source on the absence of evidence? I don't have that. No one has reported an accident involving an FSD beta car anywhere that I'm aware of. The source is me.


You are a fine source! So fatalities = 0. Any idea about miles driven, by Tesla FSD beta cars? Or is there no public info on that?

With human drivers (in the US), you'd expect a fatality by around the 70 million mile mark. And fatalities caused by self-driving cars are very newsworthy, so you'd hear about it.
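
For anyone checking that figure, it falls straight out of the ~15 deaths per billion miles number above; a rough back-of-envelope sketch in Python (the approximate US rate is the only input):

    # Rough US average cited above: ~15 fatalities per billion vehicle miles
    fatalities_per_mile = 15 / 1_000_000_000
    miles_per_fatality = 1 / fatalities_per_mile
    print(f"{miles_per_fatality:,.0f} miles per fatality")  # ~66,666,667, i.e. roughly 70 million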


> nitpick a little around the edges

It is not nitpicking around the edges when the foundation is shaky and opaque.


That's "nitpicking" though. Unless you want to claim the existence of a very large population of "autopilot accidents" not present in these reports, it's just not possible to construct a scenario where autopilot is significantly more dangerous than a human driver.

You're attempting to use uncertainty or controversy in the measurement mechanism to stand in for an argument for the contrary point, and that doesn't work logically.

It's not killing more people, basically. That's what "safe" means.


No, this is not correct. Tesla persists in comparing autopilot miles (primarily highway driving, only good weather, recent model cars) to all miles driven by all road users under all weather conditions.

Tesla cannot claim autopilot is safer than a human driver based on this type of comparison.

Still, they fooled you and many others in this thread, so I guess their marketing works.
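
To make the mile-mix problem concrete, here is a toy calculation with purely illustrative rates (none of these numbers are Tesla's or NHTSA's; they only show how this kind of comparison can mislead):

    # Made-up crash rates, in crashes per million miles
    human_highway = 0.5       # humans on easy, good-weather highway stretches
    human_city    = 2.5       # humans on city streets and in bad weather
    highway_share = 0.4       # fraction of all human miles that are highway-like

    # The all-miles fleet average used as the baseline in the comparison
    fleet_average = highway_share * human_highway + (1 - highway_share) * human_city

    # Hypothetical driver-assist system that is 20% WORSE than humans on the same
    # roads, but only ever engaged on the easy highway miles
    assist_highway = human_highway * 1.2

    print(f"all-miles human average: {fleet_average:.2f}")  # 1.70
    print(f"assist, highway only:    {assist_highway:.2f}")  # 0.60
    # The highway-only system still "beats" the all-miles average, despite being
    # worse than humans on every mile it actually drives.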


This is very much the reality. Moreover, people have been pointing out this very obvious problem with the Tesla numbers for years, and yet people continue to take Tesla's statements at face value. It is incredible to watch, and it is the most solid demonstration of how weak critical thinking skills are in the technology field.

For historical reasons, I have lots of friends who do what is now called "data science" professionally, and literally every one of them has asked, unprompted and early in any conversation about Tesla FSD, whether the numbers Tesla reports are based on non-comparable sets (they are). It is totally obvious.


Next time people complain about “why do I need math class,” here's a great example. Unfortunately, a big part of the reason is being able to reason through advertising BS.


In my experience, the Tesla crowd does not care; they'll just cite it again in the next Tesla thread.


Do you have a source that constructs a story around existing knowns that does show AP to be unsafe, then? Because again, the deaths and accident counts simply aren't there. There's not enough of them to be "unsafe", no matter how you do the accounting.

That is, take the numbers we have and show that, if you select the right data set from public numbers about other kinds of driving, the Tesla AP values are actually more dangerous. This surely isn't easy, but it's absolutely doable (and, let's be clear, such an analysis would be worth a lot of money to Tesla's competitors -- there's a built-in incentive to do this). But no one has numbers like that anywhere I've seen. Likely because, as the obvious hypothesis suggests, there aren't enough events to make a case for a safety problem via any analysis.

So you are arguing about methodology and not results. That's pretty much the definition of "nitpicking". And it will never end. You'll insist that Tesla AP is somehow hiding phantom deaths for years, because it's your prior. But it's not supported, it's just not.


Tesla has this info but chooses not to share it.

However, here's an analysis from a few years ago that should at least give you pause before claiming that Teslas are definitively safer:

https://medium.com/@MidwesternHedgi/teslas-driver-fatality-r...


Wow, that was a good read. Thanks for sharing


> I'm personally more toward consequentialism and am therefore pro Tesla self-driving, if per mile it is statistically safer than a human driver.

For me to accept Tesla FSD, it should be several orders of magnitude better than a human driver. I don't want an 'average driver' driving me to work. American drivers are collectively involved in about 6 million car accidents per year.


But an average driver already drives you to work.


Is it not already several orders of magnitude better? The average human only drives a few thousand miles a year and gets into a varying number of accidents during that time. FSD drives millions if not billions of miles a year, and lifetime accidents are still in the double or low triple digits since being released in 2012 (I think). A far cry from 6 million per year.


Has any neutral party compiled the data? It would increase our confidence in FSD.


> However, there have been a few high-profile accidents recently with emergency vehicles

Interestingly, even that's a bit spun. There was one recent accident, in August. And NHTSA, going back through records, found a cluster of (I think) 11 others where the car behaved similarly and appeared to strike an undetected emergency vehicle without trying to avoid it.

Now... that's interesting, and potentially represents a bug worth fixing. But it was also rapidly pointed out that (1) this is an extremely common kind of accident (human drivers hit these things all the time too) and (2) given the number of reported Tesla AP miles driven, and assuming no other unreported collision cases, Autopilot is actually about twice as safe as the average US driver with respect to stopped emergency vehicles.


The issue of “per mile” is complicated, though; it really depends on which mile you're driving. A mile on I-280 is not a mile in SF, and I wouldn't be surprised if it's a lot safer than a human on the former but not the latter.


> From everything I've heard, Tesla self-driving is safer per mile than human control

There’s no evidence this is true and what evidence exists is more suggestive of the opposite.


> From everything I've heard, Tesla self-driving is safer per mile than human control

So far you've only heard this from Elon Musk, who is a serial liar and hype machine. Wait until you've heard it from a trustworthy third party with access to internal Tesla data.

Just from the total number of accidents, I don't see any way to show that FSD is safer than human drivers; it's likely not even within two orders of magnitude. Human fatality rates are measured over billions of miles driven, while Tesla's FSD mileage seems to be in the hundreds of thousands or millions.
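
To put rough numbers on that, here is a small sketch using the ~15 deaths per billion miles figure cited upthread and some guessed-at FSD beta mileages (both are assumptions; Tesla has not published the real mileage):

    # Assumption: ~15 fatalities per billion miles for average human US drivers
    human_rate = 15 / 1_000_000_000

    # Assumption: candidate FSD beta mileages; the true figure is not public
    for fsd_miles in (500_000, 5_000_000, 50_000_000):
        expected = fsd_miles * human_rate
        print(f"{fsd_miles:>12,} miles -> {expected:.3f} expected fatalities at the human rate")
    # With only a few million miles you would expect well under one fatality even
    # from human-level driving, so observing zero tells you almost nothing either way.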


Is Tesla still comparing essentially 'highway miles driven' with their self-driving to 'all miles driven in all situations' for regular human-controlled autos?


Not just that. It is "highway miles driven in a high-safety-rated premium car by a younger set of drivers on better roads in better weather".


Yes. And if you look at even the rosiest FSD beta videos online, you know it's nowhere near as good at driving as a human. Yet people always come into these threads talking about how, if it's better than humans, it should be used, ignoring that the premise is false.


Of course, it depends on which human you are talking about. I've been hit three times by human drivers in the past 18 months. Twice while stopped.


I made this argument a few weeks ago on HN, and was promptly downvoted by anti-self-driving shills.

I knew a girl (in the SF Bay Area) who hated self-driving cars. She said people should just learn to drive better. Meanwhile, you go to the DMV and people are on their phones in the testing area looking up the test answers. Anybody with a CA driver's license should be audited by an out-of-state driving agency.


I'm very pro self-driving cars (and Tesla), but there isn't data supporting the claim that Tesla's FSD system today is better than normal drivers. Elon has made some imprecise claims and never provided the data to back any of them up.

I do think it's reckless not to publish their data and allow public discussion of the tradeoffs. Tesla is subjecting everyone on the road to its own interpretation of the data that these systems are ready, and just saying "trust us".


100% agree that I’d love to see them release more raw data. It’d be fascinating to dig through.

I wonder if they have some internal cultural scarring from the TSLAQ debacle. People were weaving (now verifiably false) damaging stories about sales rates by flying private planes over parking lots. More recently, the media was claiming a Tesla ran into a tree on autopilot… until a regulatory agency tested it and concluded that couldn’t possibly have happened.

I could see the thought inside Tesla being “any data we could possibly release would be twisted and used against us; therefore, why bother?”


They're taking a calculated risk for sure and it may pay off. I just fundamentally believe that everyone else on the road should have a say in what self-driving systems are allowed to share the road with them. It's no different than requiring vehicles to have mirrors, brake lights, tires with >2/32" of tread, and everything else that we require to drive on public roads.


People aren't consistent, though. The argument for vaccine mandates is 100% consequentialist and more or less identical to the argument for pushing forward with self-driving: net fewer people will die. The antivaxxers are deontological, arguing that a few vaccine reactions and deaths should halt the whole thing.

Most of the SF folks you are describing lean toward the pro-vaccine-mandate position. They should therefore also be for self-driving cars.


There's also the factor that Musk is a high-profile 0.01%er who tweets a lot.


I find it hard to square those company-reported stats with the extremely common experience of your Tesla trying to drive through a red light or attempting some similarly insane maneuver.



