
This is broadly the reasoning behind the fear in the linked article. But it's... really it's just fear. All the stuff you posit is something that can be tested and analyzed. Thousands of people have been driving this stack around since March with no accidents. At some point data will win, right? Have you worked out in your head when that will be?

> I don't think we understand the human factors involved

Do we? Because human factors are responsible for literally every traffic accident in history, and those don't seem as scary to you. Basically: it's fear. It's just fear. And it's reaching a peak right now (and being deliberately pushed, frankly) just as Tesla is reaching a public rollout.

And why? Because there's another fear at work here too. If this rolls out and doesn't kill people, if it turns out that it actually is safe, then a lot of people across a lot of industries stand to lose a lot of money. Stories like this are, fundamentally, just marketing press hits. The public FSD beta rollout in some sense marks the last chance to kill Tesla's autonomy product before... it wins.



By human factors I mean people's interaction with and dependency on technology. As an example, say Tesla FSD works great on the CA highways and people start to depend on it. Maybe they start scrolling through Instagram on a regular basis on their commute. Then they decide to go to Mendocino up whatever that super twisty and scary road is called. They put on FSD and the Tesla does a great job. They start to depend on the Tesla to do even the more technical driving. So they stop paying attention even on the harder driving sections.

Six months down the line they are using FSD on a twisty section of road. The sun is in the eyes of the vision-based FSD and there are a bunch of cyclists in the road. The FSD hasn't been trained on this case (or hits a bug or whatever) and doesn't detect the cyclists. The driver, who has been "trained" by the FSD not to pay attention, is fucking around with their phone. The Tesla plows into the cyclists.

This is my fear - the fear that you detrain people from the responsibility of driving. I suppose at the base of this is that I have very low trust in how people will reason about their role with technology. People who don't know about the limitations of the machine will assume it is accurate. People will work to defeat the safeguards (two hands on the wheel, eye tracking, etc.).

If this technology were marketed as "crash avoidance" technology I'd be much less worried. To take the twisty road example, imagine if the Tesla could self-drive the car to a safe stop only in case of driver incapacitation - that's pretty awesome, but it also puts the clear responsibility on the human that they are doing the driving.


> Then they decide to go to Mendocino

Again, that's a multi-mode failure. You're assuming (1) that edge cases must exist somewhere where FSD is unsafe, (2) that drivers will find them, and (3) that when they do, the existing supervised beta paradigm won't work. In isolation any of those can happen, sure. But in combination they're vanishingly unlikely.

Broadly you're doing the luddite/reactionary thing and demanding perfection: as long as you can conjure a scenario in your head where something "might" happen you want to act as if it "will" and demand that we refuse to implement new solutions.

But the goal isn't perfection and never has been, for the simple reason that the existing solution also sucks really bad. Real human drivers in Mendocino with the sun in their eyes hit bikers too!


To be a bit more obvious: I am afraid. I am not telling other people to be afraid. But I am very curious about the general public's view of this technology.


> At some point data will win, right?

Nope. If you test your software and it shows no bugs, I'd be more worried that the test was flawed. Underneath the shiny bits we developers know how buggy software really is. It's just a fact that the smartest humans keep introducing bugs into software. The more complex the software, the more complex the bugs and the harder they are to find.

> Stories like this are, fundamentally, just marketing press hits. The public FSD beta rollout in some sense marks the last chance to kill Tesla's autonomy product before... it wins.

Nobody owes it to Tesla to give them free publicity or be forced to immediately accept their ideas. Personally, I think it makes more sense to invest in tech that reduces dependence on cars in the first place.


> thousands of people driving this stack since March with no accidents

Don't these people regularly have to intervene and take over, hence the no accidents? Unless I can responsibly cede full control to the self-driving system, it's less than useful for most drivers imo. If I'm wrong let me know, but that is the impression I have taken away from reading about the Tesla self-driving system.


That's right, this is still beta software and the users are still responsible for supervising. "Less than useful" isn't the standard, though. SF and the poster above aren't making an argument about consumer protection or lemon laws or anything. They're claiming a safety problem.

And there's no evidence of a safety problem.


How many accidents would have happened if those people had done all of that driving manually? If you count accidents that FSD would have caused without a human driver, then you have to also count accidents that human drivers would have caused without FSD.


Yeah, and besides, Tesla has been collecting data for like a decade now; I'm sure they have enough of it to make a robust enough system for the US at large, especially with their Dojo training chip.

At the end of the day it just needs to be better than the average human, and that's frankly not the highest bar unfortunately.


>At the end of the day it just needs to be better than the average human

This comes up a lot on techno-centric forums like HN, but I think it misses an important point. It’s not necessarily performing better than the average human that will lead to policy allowing it on roads. It has to earn a high degree of trust in the public’s eye. That bar is likely much higher than “just better than the average human.”


Afaik the full self-driving is not better than the average human, not even close. The reason for the lack of accidents is the human drivers intervening when the system suddenly behaves irrationally, which is often. But if the FSD were left entirely to its own devices there would be a huge uptick in everything ranging from fender benders to fatal collisions.


That's a strawman though. The argument upthread and in the linked article is that the supervised FSD Beta is presumptively unsafe.

Arguing that full autonomy isn't possible yet (duh, I mean, it says "beta" right there in the name) isn't an argument to disallow testing.


The difference is: you can understand a drunk driver, you can arrest them, and possibly feel some sense of "closure".

What are you gonna do when a car goes rogue and murders somebody?


When a drunk driver causes an accident, you can arrest them and keep them from driving anymore, but this doesn't stop anyone else from driving drunk. With self-driving cars, you fix the bug that caused the accident and deploy it to the whole fleet, so none of them will ever make the same mistake again.


That's a new one to me. You're arguing that we can't allow autonomous systems in our society because when they have bugs that cause problems... there isn't a moral actor who we can punish?

Do you feel the same way about your gas furnace or the autopilot of the last airliner you flew on?


>> At some point data will win, right? Have you worked out in your head when that will be?

According to a Rand corp. study I quote in another comment, it will take millions or billions of miles and decades or centuries before we can know self-driving cars are safe, simply from observation:

Autonomous vehicles would have to be driven hundreds of millions of miles and sometimes hundreds of billions of miles to demonstrate their reliability in terms of fatalities and injuries.

Under even aggressive testing assumptions, existing fleets would take tens and sometimes hundreds of years to drive these miles — an impossible proposition if the aim is to demonstrate their performance prior to releasing them on the roads for consumer use.

https://www.rand.org/pubs/research_reports/RR1478.html

That's statistics, btw, not fear. Data, like you say, will "win" at some point, but that will be far in the future. Until then we have a few thousand systems whose safety we can't measure but which are available commercially (and sold under the pretense of improving safety).
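
To make the scale concrete, here is a rough back-of-the-envelope sketch of that kind of calculation (my own numbers, not the report's exact inputs). It assumes fatalities follow a Poisson process, a human rate of roughly 1.1 deaths per 100 million miles, and asks how many miles are needed to show, at 95% confidence with 80% power, that the AV rate is 20% lower:

    from math import sqrt

    # Assumed inputs (mine, not the Rand report's exact figures):
    human_rate = 1.1 / 100_000_000   # fatalities per mile, roughly the US average
    improvement = 0.20               # claimed relative reduction in the fatality rate
    av_rate = human_rate * (1 - improvement)

    z_alpha = 1.645                  # one-sided 95% confidence
    z_beta = 0.842                   # 80% power

    # Normal approximation to the Poisson counts: miles needed before the test
    # can distinguish the AV rate from the human rate.
    miles = ((z_alpha * sqrt(human_rate) + z_beta * sqrt(av_rate))
             / (human_rate - av_rate)) ** 2
    print(f"{miles / 1e9:.0f} billion miles")            # ~13 billion

    # A hypothetical 100-car test fleet driving 24/7 at an average of 25 mph:
    fleet_miles_per_year = 100 * 24 * 365 * 25
    print(f"~{miles / fleet_miles_per_year:.0f} years")  # several hundred years

The exact figures depend on the benchmark and the test chosen, but the order of magnitude lines up with the report's "billions of miles, hundreds of years" conclusion.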

Note also that before Tesla started selling cars with "autopilot" there was no way to know how safe it would be, in advance. That is not the behaviour of a company that cares at all about safety, other than as a marketing term.


That Rand paper is sort of a joke. It's positing an idea of "certainty" we don't apply to any other industry anywhere. The same logic would argue that human drivers are not known with certainty to be safe, that industrial accidents are not proven not to happen, that airline autopilots aren't known with certainty to be safe, etc.

The goal isn't certainty. It's "better". It seems like it might be "better" already.


The Rand study is not about "certainty", but about determining the safety of self-driving cars and the number of miles needed to do it. Are you confusing the colloquial meaning of "certainty" (as in "I'm sure about it") with its technical meaning, particularly uncertainty in probability and statistics?

Is there some other published work that you prefer over the Rand study?


"Certainty" appears in the abstract! But yes, the article is talking about the work required to derive a 95%-confident result of a specific improvement. But that's backwards, and not how one does statistics. You measure an effect, and then compute its confidence.

And it's spun anyway, since most of those "trillions of miles!" numbers reflect things like proving 100%+ improvements in safety at 95% confidence, when all we really want to know before releasing this to the public is 95% confidence that it's at least 0% better (i.e. "is it not worse?").
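
For comparison, a sketch of that weaker "is it not worse?" question (again my own numbers, and it assumes zero fatalities are observed during the test miles): with zero events in N miles, the one-sided 95% upper bound on the rate is roughly 3/N, the classic rule of three.

    # "Is it not worse?" threshold: with zero fatalities observed in N miles,
    # the one-sided 95% upper confidence bound on the fatality rate is ~3/N
    # (rule of three). Miles needed to bound the AV rate at the assumed human rate:
    human_rate = 1.1 / 100_000_000          # fatalities per mile (assumed)
    miles_needed = 3 / human_rate
    print(f"~{miles_needed / 1e6:.0f} million miles")   # roughly 270 million

Hundreds of millions of miles instead of billions, which is still a lot, but it is a very different question from proving a specific large improvement.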


>> But that's backwards, and not how one does statistics.

Are you saying one cannot use statistics to make predictions about future events?


I'm saying that to make good decisions, you need to make predictions about the right future events. And Rand is pontificating about the wrong ones.


Where is it that they are "pontificating"? They're calculating how long and how far self-driving cars need to drive before we can accept they're safe.



