atleastoptimal's comments

On net, Waymos are safer than human drivers. Really, all that matters is deaths per passenger mile and, weighted far less, injuries and crashes per passenger mile.

Waymos outperform human drivers on both metrics, so it is reasonable to say that Waymos have reduced crashes compared to the equivalent average human driver covering the same distance.

Mistakes like this are very rare, and when they do happen, they can be audited, analyzed with thousands of metrics and exact replays, and patched, with the improved model then distributed to every Waymo on the road.

There is no equivalent in humans. There are millions of human drivers currently driving who drive distracted, drunk, recklessly, or aggressively. Every one of them who is replaced with a Waymo is potentially many lives saved.

Approximately 1 in 100 deaths in the US is due to a car crash. Every year that autonomous drivers aren't rapidly deployed just means unnecessary deaths.
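
A rough sanity check of that figure (both numbers below are approximate public statistics, assumed here for illustration, not exact values):

    # Back-of-the-envelope check of the "1 in 100" claim.
    total_us_deaths_per_year = 3_300_000  # assumed approx. annual US deaths
    motor_vehicle_deaths = 42_000         # assumed approx. annual US traffic fatalities
    print(f"{motor_vehicle_deaths / total_us_deaths_per_year:.1%}")  # ~1.3%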


>> Really, all that matters is deaths per passenger mile and, weighted far less, injuries and crashes per passenger mile.

That's not exactly right. You need to take into account how likely accidents are, not just the number of miles travelled. Once the low probability of accidents is taken into account, it turns out to take hundreds of millions or even billions of miles, far more than have been travelled so far, for self-driving cars to be considered safe. See:

Driving to Safety

How Many Miles of Driving Would It Take to Demonstrate Autonomous Vehicle Reliability?

Given that current traffic fatalities and injuries are rare events compared with vehicle miles traveled, we show that fully autonomous vehicles would have to be driven hundreds of millions of miles and sometimes hundreds of billions of miles to demonstrate their safety in terms of fatalities and injuries. Under even aggressive testing assumptions, existing fleets would take tens and sometimes hundreds of years to drive these miles — an impossible proposition if the aim is to demonstrate performance prior to releasing them for consumer use. Our findings demonstrate that developers of this technology and third-party testers cannot simply drive their way to safety. Instead, they will need to develop innovative methods of demonstrating safety and reliability.

https://www.rand.org/pubs/research_reports/RR1478.html
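
A minimal sketch of the statistical reasoning behind this (the human baseline rate below is an assumed approximation, and the "rule of three" is a standard shortcut, not the report's exact method):

    # Rule of three: after n failure-free trials, the 95% upper confidence
    # bound on the per-trial failure rate is roughly 3/n. So to bound an
    # AV's fatality rate below the human baseline with ~95% confidence,
    # having observed zero fatalities, you need roughly 3/rate miles.
    human_fatality_rate = 1.1e-8  # assumed: ~1.1 deaths per 100M miles
    miles_needed = 3 / human_fatality_rate
    print(f"~{miles_needed / 1e6:.0f} million fatality-free miles")  # ~273 million

And statistically comparing rates, rather than merely bounding them, pushes the requirement even higher, which is the report's point.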


It just looks really bad when that one-in-a-million death is caused by something stupid a human would never do. So far these Waymos are only replacing taxi and Uber drivers, who have a lower rate of accidents than the general population.

"Uber reported 0.87 fatalities per 100 million vehicle miles traveled (VMT) in 2021–2022"

https://insurify.com/car-insurance/insights/rideshare-driver...


Taxi drivers and bus drivers are also much safer than regular drivers, if you interpret it like that.

Which is also true

Actually, your assumption that human drivers cannot be improved is wrong. Modern cars have a lot of safety features to help avoid accidents:

* lane keeping with optional steering

* pedestrian and obstacle detection at the front

* pedestrian and obstacle detection when reversing with automatic braking

* assisted driving with lane keeping and automatic stop-and-go in traffic jams

Waymos are just fancy taxis. And taxis haven’t replaced all human drivers or solved traffic accidents.


So what will these humans alternatively perish from? Old age? How is that fiscally possible?

Probably heart disease

> Approximately 1 in 100 deaths in the US is due to a car crash. Every year that autonomous drivers aren't rapidly deployed just means unnecessary deaths.

You could improve driver training. American drivers are absolutely terrifying.


“Could” is doing a lot of work there I think.

I suspect most problematic American drivers already know they aren’t supposed to text, drink, or watch or record TikToks while they drive, but simply do it anyway because they are aware these laws are under-enforced.


The only hard evidence this article offers (besides the story about the Russian woman marrying the guy in aerospace) is some guy getting a few LinkedIn requests and two Chinese women trying to get into a conference. There is nothing specific about Chinese or Russian spies seducing ordinary Silicon Valley tech workers or marrying them for trade secrets.

Also, female Chinese spies stealing tech secrets is a running joke in SV.

Common sense thinking wins again. The entire genesis of an allergy is that your body treats a benign particle as a pathogen because it doesn't recognize it. The #1 way to precipitate this is to keep the body from ever encountering that particle until well beyond its initial phases of immune development.

Are there other modern conditions born from the same "zero-tolerance prevention leads to unintended consequences due to failing to provide the body a robust means to develop"?


> Common sense thinking wins again.

This is not even remotely common sense. E.g. why can this allergy be desensitized via ingestion but not via skin contact?

Hindsight is 20/20. In this case, it was figured out after a lot of scientific research.

Just because we can understand it now doesn't mean it's "common sense". It's very much the opposite, and you discredit the scientific research this has required.


Where I come from, this was a widely held belief by the end of the 2000s: if you raise a child in an overly sterile environment and/or feed them a very limited diet, they are much more likely to develop a bad immune system and allergies. It was also believed that this idea came from science, but I guess not?

Here's an early preview of the next bombshell in this area. Breastfeeding is extremely beneficial. "Infant formula" should not be the main thing a baby is consuming.

To me it discredits science a lot more when things like this are treated as arcane or brand new knowledge. It's good when we can lock in reasoned beliefs as definite fact, instead of just reasoning which is often incomplete or flat out wrong. But when it's right and people act like this about it, it just makes it look like "scientists" know less about the world than my grandma, and that my grandma would make better calls on national health policy than the people currently in charge. Obviously that's not the case but I wouldn't be unjustified in thinking that during times like this.


That's hardly a bombshell since it's common knowledge.

Baby formula ads in the UK are even required to include "breast is best" type language. I assume it's similar in most countries.


It is important to memorialize and standardize "common knowledge", as without memorialization, knowledge drift can cause a loss of the "commonality" intrinsic to the knowledge.

I don't know that I agree this is the reasoning here. It seems far more likely, to me, that there are other environmental factors at play.

I'm almost certainly indexing too heavily on the ideas in birch pollen cross-reactivity. But I see basically no reason not to think that same process generalizes quite well to a lot of the things we used to gladly pump into our environment.

And yes, I know we can still get better at pollution management; but I think people should probably acknowledge just how much progress we have made. Especially in the US. Our air quality is amazingly clean today compared to just 60 years ago. Strikingly so.


Maybe drug policy and the opioid crisis?

> The entire genesis of an allergy is that your body treats a benign particle as a pathogen because it doesn't recognize it

And why does the body not recognize it? Because it is tainted. Putting all kinds of pesticides and other substances on plants modifies the "original".


[Citation needed] for a pesticide connection.

Unless you are an insect, most pesticides are harmless to you. And for those that are not, nobody has proved a connection to allergies.


Perhaps those who make assertions based on "common sense" instead of evidence are pests, and thus pesticides are effective against them?

A problem with anti-AI discourse is that there are three separate groups who rarely communicate, and when they do, they just talk past each other:

1. Rationalists/EAs who moderately to strongly believe AI scaling will lead to ASI in the near future and the end of all life (Yud, Scott Alexander)

2. Populist climate-alarmists who hate AI for a combination of water use, copyright infringement, and decimation of the human creative spirit

3. Tech "nothingburgerists" who are convinced that most if not all AI companies are big scams that will fail, that LLMs are light-years from "true" intelligence, and that it's all a big slop hype cycle that will crumble within months to years (overrepresented on this site)

Each group has a collection of "truthiness-anchors" that they use to defend their position against all criticism. They are all partially valid, in a way, but take their positions to the extreme to the point they are often unwilling to accept any nuance. As a result, conversations often lead nowhere because people defend their position to a quasi-religious degree rather than as a viewpoint predicated on pieces of evidence that may shift or be disproven over time.

Regarding the satire in OP, many people will see it as just a funny, unlikely outcome of AI; others will see it as a sobering vision of a very likely future. Both sides may "get" the point, but will fail to agree, at least in public, lest they risk losing a sort of status tied to their alignment with their sanity-preserving viewpoint.


There are quite a few people that are a combination of 2 and 3. It is perfectly reasonable to both dislike a company and also believe that it’s run by charlatans.

The problem is there are many people who think AI is a big scam with no chance of long-term profitability, so a fund would be a non-starter, and many others who think AI will be so powerful that any paltry sums would pale in comparison to ASI's full dominance of the lightcone, leaving human habitability a mere afterthought.

Amazingly, there honestly aren't a lot of people in the middle, and most of them work at AI companies anyway. Maybe there's something about our algorithmically manipulated psyches in the modern age that draws people toward absolutist, all-or-nothing views, incapable of practical nuance in the face of a potentially grave threat.


There's a difference in scale and potential consequences, though.

What if there were some robot with superhuman persuasion and means of manipulating human urges such that, if it wanted to, it could entrap anyone into complete subservience? Should we happily acquiesce to the emerging cult-like influence these new entities have on susceptible humans? What if your parents messaged you one day, homeless on the street, because they gave all their money and assets to a scammer robot that, via a 300-IQ understanding of human psychology, manipulated them into sending all their money in a series of wire transfers?


> superhuman persuasion and means of manipulating human urges such that, if it wanted to, it could entrap anyone into complete subservience

Wow, this potential/theoretical danger sounds a lot like the already-existing attention economy; we're already manipulated at scale by "superhuman" algorithms designed to hijack our "human urges," create "cult-like" echo chambers, and sell us things 24/7. That future is already here.


Which is true. People are scammed by very low-quality schemes (like that French woman who sent 1 million to scammers claiming to be Brad Pitt, who for some reason needed money for medical procedures).

Humans generally have a natural wariness/mental immune system response to these things, however, and for 99% of people the attention economy is far from being able to completely upend their life or make them send all their money in an agreement made in bad faith by the other party. But I don't see why some AI possessing persuasion powers to a superhuman degree wouldn't be able to cause 100x the damage when directed at the right marks.


I think he is bearish about agentic workflows because he works at the very highest level of coding. An agentic Karpathy is a few doubling cycles beyond an agentic junior engineer. Agents (or just LLMs on a loop that correct their errors) are very reliable for less complex tasks now, and they're still getting better at an exponential rate.

By current projections we are still on trend to reach human parity in many domains by 2027-2028; the only thing that would prevent this is a major unexpected slowdown in AI progress.


Something that replaces humans doesn't need to be 99.9999% reliable; it just has to be better than the humans it replaces.

But to be accepted by people, it has to be better than humans in the specific ways that humans are good at things. And less bad than humans in the ways that they're bad at things.

When automated solutions fail in strange, alien ways, it understandably freaks people out. Nobody wants to worry about whether a car will suddenly swerve into oncoming traffic because of a sensor malfunction. Comparing incidents per mile driven might make sense from a utilitarian perspective, but it just isn't good enough for humans to accept replacement tech psychologically, so we do have to chase those 9s until the systems can handle all the edge cases at least as well as humans.


Waymo has been growing rapidly. It still makes mistakes, but less often than humans, and its riders are willing to accept the trade-off given the benefits.

All of Errol Morris' first-person documentaries are good: https://www.youtube.com/watch?v=VrZ_xn1QQHI&list=PLVmRJGCDzW...

It's been trivial to produce any kind of normally banned output with the APIs themselves for years.

Producing actual erotica on your Gmail-connected ChatGPT account seems like an idiot filter.

