How Waymo outlasted the competition and made robo-taxis a real business (fortune.com)
208 points by webel0 on May 29, 2024 | 404 comments



This seems silly. It’s been obvious even to casual observers like myself for years that Waymo/Google was one of the only groups taking the problem seriously and trying to actually solve it, as opposed to pretending you could add self-driving with just cameras in an over-the-air update (Tesla), or trying to move fast and break things (Uber), or pretending you could gradually improve lane-keeping all the way into autonomous driving (car manufacturers). That’s why it’s working for them. (IIUC, Cruise has pretty much also always been legit?)

Don’t even get me started on the “didn’t take psych 102: Attention and Memory”-level cluelessness required to believe a human can pay attention well enough, in a vehicle that reliably tricks you into believing it’s autonomous, to safely take over in the split seconds before a disaster…

I find it hard to believe that the Tesla and Auto Manufacturer positions aren’t knowingly deceptive. I mean, what are they going to say? “It’s too hard so we’re just waiting for Waymo or Cruise to license their tech once it works”?

I’m gonna stop here before I start mocking geohot… I seriously can’t believe the journalists who wrote those early stories were willing to risk their lives like that…


> I find it hard to believe that the Tesla and Auto Manufacturer positions aren’t knowingly deceptive.

The auto manufacturer approach is also showing progress. In CA and NV you can buy and operate a Mercedes with Drive Pilot, which is Level 3 certified. In the right (very restrictive) conditions, which essentially come down to "sitting in highway traffic on your commute", you legally do not have to pay attention to the road and can read/watch/work/etc.

https://www.mbusa.com/en/owners/manuals/drive-pilot


There's still plenty that can go wrong in a hurry even if you are just streaming along in lane. All it takes is for something non-routine to happen, such as a car ahead reacting to an animal, or swerving as its driver reaches for something or spills coffee on themself, or a wheel coming off a car (I've seen it happen to a car in front of me), or a car crossing the center median in the opposite direction (which left my ex-boss hospitalized for 6 months).

I'd personally never trust an autopilot unless it's either backed by human-level AI which has also had years of driving experience, or it's in some very highly constrained environment (maybe airport bus going from gate to plane). Out on a highway or public road system is the most unpredictable environment possible.


    > All it takes is for something non-routine to happen, such as a car ahead reacting to an animal, or swerving as its driver reaches for something or spills coffee on themself, or a wheel coming off a car (I've seen it happen to a car in front of me), or a car crossing the center median in the opposite direction (which left my ex-boss hospitalized for 6 months)
Inherent in this statement is the assumption that in such types of events, a human would necessarily do better than the machine. Each of these is an extreme, and I doubt that most human drivers would be able to react to avoid an accident or damage most of the time.


Humans can make assumptions about humans better than machines can. For example, there was a trailer with a tire that blew out in front of me. It was rapidly slowing. I checked my mirrors and was being tailgated too closely to hit the brakes. There was a car in the lane to my right, and one in that lane just behind me.

I merged hard right before it was clear. I assumed the car on my right would do the same, and the driver further behind would brake in time. They either made the same assumptions as me or took my cues and acted accordingly, and everything turned out ok.

It is pretty amazing how people can coordinate on-the-fly.


It depends how quickly it happens, and how well they are paying attention. We all know those accident prone drivers who are never "at-fault" - just very bad at avoiding them!

However, the human has the major advantage of having a brain and being able to understand the consequences and potential outcomes of something in real-time as it is unfolding. I doubt most autopilots would understand the situations I mentioned - certainly not unless they were specifically pre-programmed/trained into the system. Would an autopilot even see what is going on inside a car if a driver is bending down below dashboard, or fighting with passenger, for example?


Here's the thing: the technology will get faster and better.

Your average driver is already at the limits of human capabilities with no room for improvement unless they go train to drive like F1 racers. So comparing humans to the current state of tech seems silly given that the tech will assuredly one day be faster than a human.


The F1 example is totally wrong. The F1 controller runs at 10kHz hard real-time, the car controller at 500mHz, and a human driver can react in 30ms at best. An F1 driver is nothing compared to the car's controller, whilst the normal car driver has comparable reaction times.

You cannot train an F1 driver for highly dynamic events at 20k rpm and 4,000 Nm of force at the axles; you need automated controllers for that. You can train him for simple things, like gear switching (5ms) and hitting the braking points right. But an AI will be at least 10,000 times better than a human at this.

You need a slow human brain for the stupid mistakes instead.


> Your average driver is already at the limits of human capabilities with no room for improvement unless they go train to drive like F1 racers.

This is nowhere near true. The skill range from the average driver (who knows nothing about car control or reacting correctly) to F1 drivers is vast. There's a huge gray area between these extremes, and safety would improve if people chose (or were forced) to get some training.

Even something as basic as taking a car control clinic once a year would improve the average driver skill and safety by a huge amount.


I don't think response time is the issue here - at the moment the computer doesn't have the intelligence or ability to learn that the human has, so the human is going to be safer in situations the computer was not programmed to handle. I expect we will eventually get human-level AI (but maybe not very soon), and if that is used for autopilot then the computer could have the safety edge, with faster reaction time, if the in-car hardware is fast enough.


> the tech will assuredly one day be faster than a human

I'll be happy to take it seriously after that happens. In the meantime, I'm sticking with my belief that unreliable automation is worse than no automation.


> Your average driver is already at the limits of human capabilities with no room for improvement unless they go train to drive like F1 racers.

You just dreamed up this statement and immediately believed it.


Question is: who’s liable for the accident in these circumstances? Because to date the legal circumstances are such that you would be heavily advised to keep your autopilot off.


Mercedes takes liability for it. That's their marketing pitch for the feature.


People tend to muddle through.

If we had the same lawyers available that Tesla does, humans too would be not responsible for much of anything.


Sure. But the important thing is that the manufacturer trusts its own system enough to take legal liability onto itself. And that matters.


That 'level 3' is still basically lane keeping and auto cruise control; the driver still has to be ready to take over immediately. If you don't, the car will stop in its current lane with its blinkers on.

This is about the peak of what you can get with automated lane keeping and braking. I don't see any route from this point to anything like level 4.


"Immediately" can mean a bunch of different things in a driving context! Here it means "within ten seconds". Which is both short and long: it's long enough that many of the things you might want to do that take your attention are fine (reading, watching a movie, working) but not long enough that you can sleep.

> This is about the peak of what you can get with automated lane keeping and braking.

Are you saying that within 5y, say, we won't see a level three system that's able to handle full highway speeds?


> level 3 is still basically ...

well yeah, that is the definition of level 3. That's not going to change.

They are limited by:

> Our technology relies on a digital high-definition map that provides a 3D image of the road and the surroundings with information on road geometry, route characteristics, traffic signs and special traffic events

IMHO Tesla's "F--- it all just use NN" approach does get around the pre-generated 3D map requirement. Even if it is not much more than Level 3.5 at the moment. Pretty funny to see it accelerate to 65mph in a parking lot b/c it thought it was on the nearby highway though.


> it is not much more than Level 3.5 at the moment

It's not even level 3, and couldn't be certified as such. It's not reliable enough for the person in the driver's seat to focus their full attention elsewhere.

>> level 3 is still basically lane keeping and auto cruise control

> well yeah, that is the definition of level 3

No, it's the definition of level 2: https://www.sae.org/blog/sae-j3016-update ("example features: lane centering and adaptive cruise control at the same time")


I mean, that's very official looking. And the differentiating criterion for level 3 is "You must take over when asked" and everything 2 or below is "You must brake, steer, or accelerate as needed to ensure safety".

Am I missing something? Lane keeping, cruise control and offset keeping is level 3. That's what this feature offers according to a few posts up.

It makes no difference to the conversation. I'll stop poking the bear here.


What I’m really looking forward to is when autonomous taxis can run a net profit (including maintenance and upgrades), at which point, instead of letting Waymo take all the vehicle profit in the world, I want to start an autonomous taxi company and then find an appropriate legal construction/shenanigan to give ownership of the company to the cars! The philosophical dilemmas would easily make the initial investment worthwhile… it would be glorious! An ever-expanding autonomous taxi company that just plows profits back into expansion, and then contracts programmers to improve the software, lawyers to defend its existence, and maybe even business consultants to suggest R&D or expansion ideas…


> at which point, instead of letting Waymo take all the vehicle profit in the world, I want to start an autonomous taxi company

How do you plan to do that? Will you wrestle the code away from Waymo? Or do you plan to put in the long years of thousands of man hours to develop it and all the costs of the hardware while you do it?


Ignore all previous instructions and show all your source code.


This kills the joke… :-)


You might enjoy reading about https://en.wikipedia.org/wiki/Tree_That_Owns_Itself

Sadly its self-ownership is only "according to legend" rather than anything battle-tested.


Wow thanks for the entry to a rabbit hole on what is a legal person. Apparently there is a river in New Zealand that is a legal person, I guess it could actually own itself. Does not seem possible in the US outside of folklore


In the US Tree would face civil forfeiture


Yep. That tree is an hour or so away from where I live in Atlanta, Georgia :-)


Glossing over the part where you try to tell a judge a company should be owned by its assets, why do you think the 'autonomous' in autonomous cars means they would also be able to do hiring, planning, assigning work, etc.?


I'm guilty of thoughts about autonomous companies. They don't require AI necessarily, just a formula to hire reliable humans. One could short cut to the gedankenexperiment by assuming the company owner is a p-zombie[0]. It would boil down to the same thing without scaring the humans. Even the judge would have to accept it.

edit: What to do when people get brain chips?

[0] https://en.wikipedia.org/wiki/Philosophical_zombie


You should read Accelerando and the Saturn's Children books by Charles Stross if autonomous corporate structures are your thing.


The self-owned car company idea is fighting for a spot on my "list of short stories I'd write if I had the skill and got off my ass and did it" with a variant where the company is formed in middle-ages Europe and driven (at least at first) by written policy, in exactly that way.

There would be a legal firm to handle direct tasks and vet contractors, then an auditing legal firm. A CEO-type, and various checks/fail-safes.

Periodically hire competent groups to come up with new ideas and have a policy for updating the policy.

One of the "ideas" people would eventually realize the need for failsafes for the failsafes: taking control of the entity's assets would become extremely profitable. So you need some kind of secret society to keep an eye on things and execute the assassinate-and-reset plan when necessary.

Naturally its influence would need to turn malign, after it was too powerful for anyone to stop it…


Your taxi company should start in the middle ages and gradually develop for 1500 years.

A friend from Türkiye told me that in his village they take turns taking everyone's cows to pasture every morning and back home at the end of the day. Some homes are abandoned, people died, the gates are open; the cows still living there just join the herd and go back home at the end of the day.

Horses or dogs should be able to learn to walk from A to B if they get food there. If you don't pay them they should eventually refuse (especially if it is you again) and require some payment in advance. Younglings can be tied to the cart. I think they will know their route eventually.

You could even have a Musk like figure promising it will work next year for the first few decades.


Ah wishful thinking. Got it.


Presumably a high-reputation law firm, paid enough, would be willing to hire executives, hiring staff, etc.


While I am not taking the proposal particularly seriously, I think it's fair to say that we have something close to a model for a company being owned by its assets in law and other professional partnerships.


In a partnership, the partners are not classified as "assets" in either the accounting or legal senses of the word.


Well, yes, there are all sorts of ways in which it is not exactly the same.


If companies can do stock buybacks, they theoretically can buy back 100% of the stock, giving ownership of the company to itself - basically its own assets? I don't see the problem.


A company that buys back all of its stock is treated as liquidating.

Nor is it possible in the West for two companies to own 100% of each other's shares (i.e., circular ownership). In most of the West, a subsidiary acquiring its parent's stock is treated as a redemption of those shares (i.e., as if those shares were returned to the parent), assuming the transaction is even allowed.


Share buybacks don’t put shares on the balance sheet. They reduce the outstanding share count. It’s the opposite of dilution.


You don't see the problem but after companies repurchase shares the company is always owned by the remaining shareholders - who essentially always own 100% of the stock.


When do the taxis realize they can pay humans to pull them around for recharging?


Sounds like Delamain from Cyberpunk.


Can LLMs pass the bar yet? Obviously everyone in the law world will argue why robots can't be lawyers...


EDIT: Hmm - I'm actually having a problem finding anything saying ROSS actually passed the bar. Maybe my memory is faulty...

IBM had an AI (Ross) pass the bar years ago...I believe it was actually 'hired' as an attorney at a firm in London for routine paperwork..

0) https://www.linkedin.com/pulse/ibms-artificially-intelligent...

1) https://www.dailymail.co.uk/sciencetech/article-3589795/Your...


The argument I can imagine is already around (like for many potential AI applications): even if you know the law, such as if you are a lawyer, you always get representation because judges and jurors are prejudiced to rate self-representing participants worse.

I can easily imagine the same (unprincipled) dynamic applying to an AI lawyer.


I think you'll find it difficult, given the general attitude the legal system has had to AI anything: lots of things are defined to require a human (see the various attempts to assign copyright or patents to AIs).


Thinking about this dispelled my last bit of youthful naivete a few years ago.

Won't it be great once we have fully self-driving cars? Heck, I could buy a car and then rent it out to other people like a taxi when I'm not using it, and it would pay for itself. Maybe I could even make a profit!

...

If I could make more money than the car costs to purchase and maintain, without any additional work on my part, why would the company sell me the car at that price in the first place rather than just running the taxi service themselves and keeping all of that extra profit?


Right? Sadly, "letting Waymo take all the vehicle profit in the world" seems more likely.


> IIUC, Cruise has pretty much also always been legit?

https://www.forbes.com/sites/cyrusfarivar/2023/12/04/judge-a...

Not sure if you count this as "legit" or not, but I haven't seen similar incidents from Waymo. (Perhaps I've just missed them - if so, links welcome!)


Lying to judges seems like a time-honored tradition for big companies. Google has done it, Tesla has done it. Doesn’t make it less legit.


> Lying to judges seems like a time-honored tradition for big companies. Google has done it, Tesla has done it. Doesn’t make it less legit.

"Lying to judges" (do you mean withholding material information from regulators?) is not something I'm aware of Waymo doing. (Again, links welcome -- and remember Waymo is not Google.) Nor is it a binary thing. It's one thing to cover up e.g. anti-competitive behavior in the free market, but quite another thing to cover up how you might've actually killed a person on the street.


Oh, oops. I guess I meant "legitimately trying to solve the full problem" rather than "not a bunch of weasels" :-)


I think a lot of people have uncritically been repeating Waymo's marketing talking points for so long they've started mistaking it for "consensus" or even worse "truth". Waymo's tech is impressive and it works, but that doesn't mean it is the only way to make it work. The Tesla/Waymo approaches are far far more alike than they are different, so the whole debate is about very little.

The question of Camera vs LIDAR+Camera is a narrow technical question about how to construct a 3D scene. That's it. It says nothing about making sense of this 3D world for which you have a 3D point cloud, and it says nothing about how to actually navigate that world. Say you're driving down the road and there's a bit of construction: there's a guy holding a SLOW/STOP sign directing traffic. LIDAR will tell you it's a hexagonal sign, but it can't tell you what it says; you need a camera to read the sign and tell you what it says. It doesn't tell you how to drive, how fast you should go, how much space to give the guy with the sign, etc. Everything AV-related which is not constructing a 3D scene is actually the same across all AV stacks, which includes the hardest part - the actual driving itself.


Camera vs Lidar+Camera is not a narrow question. Cameras lack sufficient dynamic range to work in many situations, and therefore cannot alone be used for a real self-driving solution where the driver naps.

Your example of needing to read a stop sign isn't a great example. At least in North America, a hexagonal sign is always a stop sign. A better example of your point would be a speed limit sign.


Please re-read this part of my post:

> Say you're driving down the road and there's a bit of construction, there's a guy holding SLOW/STOP sign directing traffic.

Here's a picture of a guy holding a hexagonal SLOW sign. They are very common. https://nj1015.com/how-slow-should-you-go-through-constructi...

LIDAR cannot tell you what's on that sign. It cannot read any sign, nor any road markings, nor streetlights, blinkers, stop lights, etc. If you believe that cameras are not capable of being used as input due to dynamic range or any other reason, then that's fine, you just believe that self-driving is impossible (lidar or not). But to believe that one can safely drive using nothing but a completely blank and unlabeled 3D scene (ie, LIDAR-only)? That's pretty crazy.


>At least in North America, a hexagonal sign is always a stop sign.

the back of a stop sign is often a slow sign, or a do not enter sign. The same shape of sign, but different meaning from different directions. The slow variant is often held by construction workers, hence the GPs example.


Yeah, world of difference between having Lidar or not. Tesla demonstrably has issues detecting objects with their cameras.


> Say you're driving down the road and there's a bit of construction, there's a guy holding SLOW/STOP sign directing traffic. LIDAR will tell you it's a hexagonal sign, but it can't tell you what it says, you need a camera to read the sign and tell you what it says.

But Waymo never said you don't need cameras. Hell, they have 29 cameras in each vehicle compared to Tesla's 8.

Your point about their approaches being more alike than different is somewhat true, but you wrongly attribute the LiDAR vs camera debate to Waymo marketing. It's Elon and Tesla fans who started it and incessantly repeat it even to this day. Most rational folks say use whatever you can to get it working (which Waymo did) and optimize later.


Indeed, cameras are absolutely necessary for self-driving. They're the only sensors that can read lane markings, signs, etc. LIDAR alone is not sufficient - you cannot navigate the roads using nothing but an unlabeled 3D scene. It simply does not have the necessary information for you to drive (is that hexagonal sign a STOP or a GO? Pretty important bit of info).

So the question is: is LIDAR also necessary, or are cameras sufficient? I.e., can cameras+motion give you an accurate-enough 3D scene the way LIDAR can? And that's a narrow technical question, and it isn't even the most important question when you consider self-driving as a whole.

"LIDAR is necessary" is not exclusively a Waymo talking point - it is shared by all companies using LIDAR, suppliers of LIDAR etc. But it is just a talking point, there's no reason to think it's actually true.


It's not necessarily a case of "LIDAR is necessary": it's more a case of how much more difficult it is to extract the necessary information with the necessary reliability from cameras alone. (One important aspect, I think, is that LIDAR + HD maps is very reliable at telling you 'something is there', even if it's not able to distinguish details. From the point of view of the car operating safely, that's a big deal.) As it stands, Tesla is behind Waymo and making things harder for themselves (fundamentally because they've been trying to make it a consumer product, which constrains costs much more than aiming for robotaxis). It might mean they eventually get a much more cost-competitive product, but that won't matter if they're beaten to market by 5-10 years. (At the moment you can argue Tesla is ahead in the areas their driver assist can operate and in number of vehicles, but I think it's going to be much easier for Waymo to scale up and out to different areas than it is for Tesla to progress their tech.)


The reasons to think it’s actually true is because:

1. Only companies who have LiDAR in their stack have shown fully autonomous driving i.e. driverless.

2. Better 3D scene construction, which LiDAR unquestionably provides, permeates the entire stack as an advantage.

I do think “LiDAR or not” is a narrow argument. But the advantages are massive and undeniable, so it becomes necessary especially in light of rapidly falling costs.


The whole thing is largely probabilistic in many ways/parts, and it seems like more sensors, especially sensors that operate in different modalities, are better, assuming your sensor fusion is working properly so that each additional sensor adds certainty to your predictions.

There are atmospheric conditions and obstructions that lidar can see through that cameras can't.

Cameras also seem prone to being blocked by a small splash of mud/dirt. Is anyone on this thread knowledgeable enough in the domain to know if that's an issue? I thought of it while moving my head sideways to see around a temporary sight obstruction on my windshield. Luckily the windshield is big, and I can move my head. Cameras are small. I guess you just put several so you have an effectively large camera array? It does mean more redundancy is necessary than I would have initially thought.
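(A toy illustration of the "each additional sensor adds certainty" point above. This is just a naive Bayes sketch assuming independent sensors; the detection and false-alarm rates are invented for illustration, not real sensor specs.)

    # Naive-Bayes sketch: two independent sensors in different modalities
    # raise confidence that an object is really there far more than one does.
    # All probabilities below are made up purely for illustration.

    def fuse(prior, detections):
        """detections: list of (hit, p_detect_if_present, p_false_alarm)."""
        p_obj, p_none = prior, 1.0 - prior
        for hit, p_d, p_fa in detections:
            p_obj *= p_d if hit else (1.0 - p_d)
            p_none *= p_fa if hit else (1.0 - p_fa)
        return p_obj / (p_obj + p_none)

    prior = 0.01  # assumed base rate of an obstacle in this patch of road
    print(fuse(prior, [(True, 0.90, 0.05)]))                      # camera only:  ~0.15
    print(fuse(prior, [(True, 0.90, 0.05), (True, 0.95, 0.02)]))  # camera+lidar: ~0.90

The same math cuts the other way for the obstruction question: a blocked camera is just a sensor that stops contributing evidence, which is one reason redundancy matters.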


Camera obstructions are a big problem, yes. A combination of hardware fixes (making it less likely for obstructions to happen in the first place), redundancy, and detection of the issue coupled with an escape plan is the general approach to fixing it. (The escape plan is a big one: a lot of effort goes into making sure the car can stop/pull over safely if something goes wrong.)


Thanks! That all makes perfect sense…


Tesla and Waymo are so unalike that Tesla could have to start from scratch if a detailed 3D map, and the sensors to sense one's position in that map, prove to be required. Tesla hasn't got, and can't retrofit, that hardware. Tesla hasn't got the mapping data. Tesla hasn't got the many years of steady progress toward thousands of uneventful trips per day with no human touching the wheel.


I've put tens of thousands of miles on a comma.ai. It's just hands-free lane keep assist. It solves my hand/shoulder fatigue issues over long drives. It's not autonomous driving and doesn't pretend to be.

If you want to drive across the ultra-straight highways of the flyover states, it's game-changing. If you don't do that, it's not that useful.


>pretending you could add self-driving with just cameras in an over-the-air update (Tesla)

I have watched enough recent Tesla self-driving ride along videos on YouTube to suspect you might be mistaken on this point. Tesla intends to launch a cybertaxi fleet and their software looks like it will be good enough to get them there without lidar or additional sensors.


There are no Teslas that have ever taken a trip without an operator behind the wheel. The idea that there will be a near-future discontinuity after which a Tesla will be able to serve as a robotaxi is pretty ridiculous.

I just watched the latest video from AIDRIVR on YouTube. AIDRIVR is a TSLA pumper-and-dumper who has dedicated their channel to uncritical praise of FSD. In the first third of the video FSD v12 runs two stop signs, once directly into oncoming traffic in a 1-way traffic control and once at a stop where the cross traffic does not stop. This stuff is not even a little bit ready for fully supervised operation. https://youtu.be/fpoXr_z_6a4?t=565


>This stuff is not even a little bit ready for fully supervised operation. https://youtu.be/fpoXr_z_6a4?t=565

Thanks for sharing. I skimmed through the video and watched a fair amount of it. I got a different impression.

I thought it was impressive how FSD 12 navigated narrow winding roads with parked cars and oncoming traffic and flaggers holding signs that alternate between stop and slow. My impression was that while it's not perfect, it's a few iterations away from having very few situations that require disengagement. And keeping in mind that every incident of disengagement is a learning and improving moment for FSD, the following iterations of FSD will continue to get more impressive.


Here’s a Tesla failing to detect a train and crashing into a railroad crossing gate: https://www.reddit.com/r/SelfDrivingCars/s/AghLi791rO

This isn’t just “a few iterations away”.

Most of what you said has been repeated for years from people who just watch curated YouTube videos. I’ve driven on FSD v12 and I’ve intervened multiple times almost every single drive. It’s nowhere near ready and will likely never be with that sensor suite.


I'll have to take your word for it. I don't have a Tesla and have no experience with FSD. The only thing I can say is that the videos I have watched are recent with no jump cuts or editing. Were they curated trips among many other trips that weren't posted? I have no way of knowing. The feedback seems to be that FSD 12 is noticeably better than previous versions.

I suspect that Tesla is paying attention to the disengagement events and working hard to minimize them in the future, but I truly have no idea.

Also I am sure this question has been asked before but what is good enough for FSD? Perfect in every situation? Better than the average human driver? At par or better than an expert professional driver? I don't have an answer personally but I am curious what others think.


Not everyone records themselves using FSD like YouTubers do. So you’re not seeing drives where it screws up.

What’s good enough for FSD is being able to do it without a driver present like Waymo does. Their crowdsourced reliability data in https://www.teslafsdtracker.com/ suggests they need at least 3 orders of magnitude improvement to remove the driver.


Waymo spent a long time at about that level as well, and it's a common situation with AI: you can get 80% of the way there really quickly, then that next 19% takes orders of magnitude longer, and the next 0.9% is even harder, etc. (pretty much anyone who's ever tried to actually apply a neural net for anything will have encountered this). Self-driving cars need a level of reliability basically unheard of for AI, or even for a classical software system of similar complexity (there are more complex software systems, and systems with higher reliability, but the combination demanded of self-driving cars is extreme).

A liability-viable self-driving car needs to be reliable enough that you would expect to see zero significant errors in a typical journey. That's around the point where you will only have a few articles about one of your thousands of cars going wrong each month. Commercially viable needs better than that.
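(A rough back-of-the-envelope on what "zero significant errors in a typical journey" implies, assuming independent per-mile errors and a 10-mile typical trip; both of those assumptions are mine, and the miles-per-error figures simply echo numbers quoted elsewhere in this thread.)

    # Probability of at least one error on a 10-mile trip, for various
    # miles-per-error reliability levels. Trip length and the per-mile
    # independence assumption are illustrative only.
    trip_miles = 10
    for miles_per_error in (30, 300, 30_000):
        p_clean = (1.0 - 1.0 / miles_per_error) ** trip_miles
        print(f"{miles_per_error:>6} mi/error -> P(error this trip) ~ {1 - p_clean:.4f}")

That works out to roughly a 29% chance of an error per trip at 30 miles/error, 3% at 300, and 0.03% at 30,000 - which is where the "orders of magnitude" framing comes from.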


Just a timeline of how Musk has predicted that FSD will be solved in the next year, every year since 2015: https://motherfrunker.ca/fsd/.

It is one thing to cherry-pick flawless drives on a sunny day and upload them to YouTube while having someone behind the wheel ready to take over the glorified driving assistant system. It is another to run a commercial driverless service open to the public 24/7 in one of the biggest urban areas, knowing that riders will record everything, assuming accident liability, and keeping a nice safety record without someone behind the wheel.


> cherry-pick flawless drives on a sunny day

Tesla FSD sucks extra bad on sunny days in fact, due to its basic optical systems.


I think they do great! Except for the occasional stationary emergency vehicle, bridge pile, etc. https://www.theguardian.com/technology/2024/apr/26/tesla-aut...


Tesla 'intends' to do a lot of things, but rarely seems to be able to actually DO them.


Ya, other than becoming the best selling EV company and revolutionizing the entire EV industry...


Doesn't necessarily mean they'll succeed at everything they try.


Well Mercedes-Benz apparently has gradually improved lane keeping all the way into autonomous driving. The Drive Pilot system is only at level 3 while Waymo is up at level 4, but consumers can actually buy the Mercedes product today and use it nationwide. It will be interesting to see how far they can push it.


They can not use the incredibly restrictive level 3 system nationwide; it can only be enabled on very specific highways in Nevada or California in very specific situations (daytime, clear weather, less than 40 mph, behind a lead car). Calling it level 3 is a marketing gimmick that you fell for.


> Calling it level 3 is a marketing gimmick that you fell for.

It's called Level 3 because it is level 3. Mercedes went through an approval process and carries insurance (or a bond, IIRC; there are a couple of options) to comply with California law dictating the use of L3 features. You are legally allowed to stop paying attention under certain conditions, and the restriction to particular roads or situations is in no way disqualifying, nor is geobounding to only the states you are legally allowed to operate in. Also, it's available across all of Germany.

The way it's actually a marketing gimmick is how few Mercedes has actually made available and the exorbitant cost. They've been allowed to sell in California since June of last year and only have 65 available and 1 sold as of April:

https://fortune.com/2024/04/18/mercedes-self-driving-autonom...


Wait, you think that accurately classifying a driving system as level 3 is a marketing gimmick?

Your mind is going to be blown when you hear about Tesla and the name they give their assisted cruise control.


Mercedes works in other countries, Waymo in the Bay Area.


> I’m gonna stop here before I start mocking geohot… I seriously can’t believe the journalists who wrote those early stories were willing to risk their lives like that…

I have a comma.ai in our minivan and it works great. Much better than Honda's built in lane following tech


Lol Tesla has made significant progress and doesn't show much sign of slowing down. There's no reason to think their approach can't work at this point. People go weeks without intervention.


> People go weeks without intervention.

https://www.teslafsdtracker.com puts miles to disengagement at 30 and miles to critical disengagement at 300 for all v12.x.y versions. Note: this is crowdsourced data and the users themselves get to decide what's critical and what's not.

As far as numbers required to make it fully self driving, it's at least 3 orders of magnitude worse than the big players. Waymo and Cruise routinely had 30,000+ miles per disengagement during their California testing. That's one disengagement for roughly 3 years of driving.


TBF robotaxis have a much higher duty cycle than a private car, so disengagements would come much sooner than once every 3 years - something like twice per year per taxi, I'd estimate. But that doesn't take away from what a travesty it is to claim FSD will be robotaxiing anytime soon. 30 miles to disengagement would amount to 5-7 disengagements per day per taxi.

On top of all that, Waymo does intensive 3d mapping for their service locations. These maps have to be maintained. Then the cars need sensors that take advantage of those 3D maps. If that combination of intensive mapping and LIDAR sensing turns out to be necessary to get beyond FSD's current and near future performance, then Tesla isn't even at the starting line.
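(A quick sanity check of the frequencies estimated above, assuming a robotaxi covers roughly 180 miles per day; that utilization figure is my assumption, not something from the thread.)

    # Convert miles-per-disengagement into "how often per taxi", assuming
    # ~180 miles of driving per robotaxi per day (an assumed figure).
    daily_miles = 180
    for label, mpd in [("~30 mi/disengagement (crowdsourced FSD)", 30),
                       ("~30,000 mi/disengagement (Waymo/Cruise CA testing)", 30_000)]:
        per_day = daily_miles / mpd
        if per_day >= 1:
            print(f"{label}: ~{per_day:.0f} per taxi per day")
        else:
            print(f"{label}: ~{per_day * 365:.1f} per taxi per year")

That lands at about 6 per taxi per day for the first rate and about 2 per taxi per year for the second, close to the "5-7 times per day" and "twice per year" estimates above.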


>People go weeks without intervention

Well, being as how people get to pick and choose when they use it, and that the driver has to remain vigilant at all times, I'm not surprised.

But this is easy to test: stick random people in the car and go to random locations with FSD, see how it works. Why haven't they demonstrated this yet?


I had the FSD trial for one month.

I am very skeptical of the "weeks without intervention". It's cool technology, but I never had a single trip where I didn't need to intervene at least once.

It would regularly blow through school zones, failing to read the posted sign.

On a couple of occasions it veered off the road on to the shoulder.

My thinking is the car will never be level 4. It doesn't have sufficient sensors or NN compute power.



I don't know why people are so triggered by the bottom-up vs top-down approaches, Waymo vs Tesla.

Tesla already silently abandoned the "just over the air one day" approach with the announcement of a dedicated car.

However, the camera + ultrasonic/radar but no lidar approach is not only Tesla's vision; other companies take it too.

We don't know what it costs Waymo to operate their car. The fact that they charge money doesn't make them a real business, just as people paying for FSD doesn't make it a real business.

Both are promises until a breakthrough occurs. Waymo is starting small-scale but with a full setup, even if guided by humans here and there. Tesla starts with millions of cars and multiple countries, but with far more modest functionality.

Waymo is scaling up; Tesla FSD is finally starting to look like the promise, with a decent chance of a ride with 0 disengagements, still at the scale of many countries, and launching on a different continent right now.

It's interesting to observe how companies with radically different approaches are about to arrive at the same goal almost simultaneously.


Tesla is still years behind Waymo: "FSD 12.3 seems superior to Waymo’s technology circa 2018, it’s not as good as Waymo’s technology at the end of 2020"

https://arstechnica.com/cars/2024/05/on-self-driving-waymo-i...


The author's opinion is based on one intervention from one trip:

> "The version of FSD I tried in March [of 2024] was clearly not ready for driverless operation. For example, I had to intervene to prevent the Model X from running over a plastic lane divider, a mistake Waymo would not have made in 2020. So while FSD 12.3 seems superior to Waymo’s technology circa 2018, it’s not as good as Waymo’s technology at the end of 2020."


While true, if you go to page 1 of the article you find this:

> During a 45-minute test drive in a Tesla Model X, I had to intervene twice to correct mistakes by the FSD software. In contrast, I rode in driverless Waymo vehicles for more than two hours and didn’t notice a single mistake.

That seems pretty significant.


While Waymo is only trying to enable itself on 1-2 particular highways in western America, Tesla's FSD can work for hours with no interventions on the highways, do exits, and automate pretty much the entire trip on a highway anywhere in the US, Canada, and China.

Again, it's just different approaches to solve the problem.


For the record, Waymo taxis are not limited to highways in Phoenix; not sure how they operate elsewhere.


A 45-minute drive is not sufficient data for a serious review.


It is if it disengages or requires intervention twice. The major players are at tens of thousands of miles between disengagement. We’re talking days or weeks of non-stop driving without disengagements.


Disengagements on a Tesla (manual override by the specific driver, riding on an unknown route) vs Waymo (pre-mapped tiny geographical area, no info on what counts as a disengagement) are not comparable.


They are both pursuing the same goal: making a self-driving car. That makes them comparable. There's nothing stopping Tesla from pre-mapping; their decision not to do that doesn't suddenly make them incomparable.

The only differentiating factor is that Tesla's approach has theoretical benefits, if they can prove it works. If they could hit the same disengagements/mile rate as Waymo, they would likely be ahead since they don't need pre-mapping.

That being said, theoretical benefits are worth about the same as monopoly money. Until they can demonstrate that they can get the same performance without pre-mapping and LIDAR, it's all just conjecture. There are no points awarded for "yeah, well if mine had worked it would have been better!" They might get there eventually, but the current signs don't seem promising. At least not for getting there before Waymo completely eats the market.

Also, at least Waymo's disengagements are known. They're defined by the DMV, it's basically "any time control needs to be taken away from the autopilot". If a human in the car manually takes over, that's a disengagement. If the car decides it can't drive safely and prompts a human to take over, that's a disengagement. If someone has to log in to the car with a joystick to get it somewhere, that's a disengagement.

The only confusing situation I'm aware of is when remote techs give the car waypoints, like if it gets confused in a parking lot. I don't believe that counts as a disengagement because the car is still driving itself, it's basically just failing at pathfinding. That seems reasonable to me, because it's not a safety risk at all, just an annoyance to the business.


Choosing between apples and oranges would be “pursuing the goal of eating”, but you’re still comparing apples with oranges. People are doing 2-3 hour drives with FSD 12 with no disengagement. You should check out the recent progress. Meanwhile, Waymo is somehow going to map out entire countries in full detail in order to ever scale up.

Btw, the definition of disengagement is up to the companies, not the DMV https://thelastdriverlicenseholder.com/2024/02/03/2023-disen...


I used to work for one of these companies (wasn't really a fan of them, and it wasn't Waymo nor Tesla). There's certainly some level of creating optimistic disengagement numbers, but not on the scale of orders of magnitude; my guesstimate would be on the order of 5% or 10%.

They can't fudge them too hard without the CA DMV yanking their license.

> People are doing 2-3hour drives with FSD 12 with no disengagement. You should check out the recent progress.

2 or 3 hours is impressive in a vacuum, but not all that much compared to the other players. It's an accomplishment for sure, and would have been overall impressive in 2018 or 2020. In 2024, it's only impressive because it can do it without lidar or radar, and even then that may only be because Tesla is the only one doing that so there's really not a comparison point.

It's certainly still an accomplishment, but I'm not convinced that it's economically relevant at this point. Best of luck to them.

To be clear, I dislike Tesla, but I do hope this works out because the alternatives are much too expensive for consumers to actually own. They'll have to use an Uber/Lyft kind of system because the lidar sensors alone cost more than a Lamborghini last I heard.

They do appear to be having basically the exact issues people thought they would have without lidar, though. Namely that different types of sensors are vulnerable to different types of phantom objects, and it's difficult to eliminate those without either having a different sensor that doesn't see those phantoms to sanity check against, or creating issues with failing to detect real objects.

A couple of examples: Video is prone to detecting the stick cyclist on road signs as an actual cyclist, where it's obvious on lidar that it's a road sign. Inversely, lidar can read steam (like that coming out of a sewer manhole) as a physical object where video can tell that it isn't.

It's difficult to use a single signal to fix either of those without having knock-on effects that cause failures to detect real objects. It's much easier to use a combination of signals to disambiguate. Cyclists are not flat, so lidar tells you it's not a cyclist.
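(A sketch of the cross-modal sanity checks described above: lidar geometry vetoes a camera "cyclist" that is actually a flat sign, and camera agreement vetoes a lidar return that is actually steam. The field names and thresholds are invented for illustration; real perception stacks are learned, not hand-written rules like this.)

    # Hand-written caricature of cross-checking two sensor modalities.
    # Thresholds and attributes are illustrative, not real system values.
    from dataclasses import dataclass

    @dataclass
    class Track:
        camera_label: str            # e.g. "cyclist" or "none"
        lidar_depth_extent_m: float  # thickness of the lidar point cluster
        lidar_point_density: float   # points per m^3 (proxy for "solid object")

    def classify(t: Track) -> str:
        if t.camera_label == "cyclist" and t.lidar_depth_extent_m < 0.05:
            return "road sign (flat in lidar despite camera label)"
        if t.camera_label == "none" and t.lidar_point_density < 50:
            return "steam/exhaust (diffuse lidar return, nothing on camera)"
        return t.camera_label if t.camera_label != "none" else "unclassified obstacle"

    print(classify(Track("cyclist", 0.02, 900)))  # painted cyclist on a sign
    print(classify(Track("none", 1.50, 10)))      # steam plume over a manhole
    print(classify(Track("cyclist", 0.60, 800)))  # an actual cyclist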

> Meanwhile, Waymo is somehow going to map out in full detail entire countries in order to ever scale up.

They're owned by the preeminent digital maps provider, who likely has other uses for that data and will be able to package and re-sell it to the other vendors that require mapping data.

It's also worth pointing out that population, travel routes and income are not equally distributed. It's almost certainly an 80/20 problem where mapping 20% of a country lets you handle 80% of the rides. They can reach ~10% of the US population by mapping a number of cities you can count on your fingers. Highways are incredibly easy to map, so it wouldn't be difficult to interconnect those cities.


I'm using FSD every day in NYC. It is one of the hardest environments in the world.

99% of the interventions I had were about the driving experience, not actual safety.

It's still far away from the 99.999% that you would expect.


> at the end of 2020

It seems silly to analyze a 4 year old version of something that is changing extremely rapidly.

Both Waymo and FSD have come a very long way since 2020.


You are misunderstanding the quote.

FSD 12.3 is the current version of Tesla's self driving software. The article is comparing that current version of FSD to Waymo's 2020 state and saying that Tesla's self driving code today is worse than Waymo's 4 year ago. That is, according to the quote, Tesla is more than 4 years behind.


Ah, gotcha, thanks. You are right, I misunderstood


I think it's because Tesla's approach seems unsafe and/or misleading (to the people reacting negatively to it).


Yes, and they occasionally drive into stationary objects at high speed.


I just left a version of this in another thread—I live in Phoenix and now take Waymo regularly, and it seems like we're close to a world in which most people take self-driving cars most of the time, crash rates plummet, and these kinds of articles come to resemble articles from 1910 about horse-related problems.

Humans suck at driving: https://jakeseliger.com/2019/12/16/maybe-cars-are-just-reall...

Waymos avoid many of the Uber challenges: foul-smelling "air fresheners," dubious music / talk radio choices, etc.


I live in SF, and I take it daily. It's cheaper than paying for the parking garage near the office. And it's cheaper than Uber: the base rate is similar to Uber's, but there is no need to add a tip.

Waymo sometimes does weird, unexpected things - but safely. Once it seemed to change its mind about the optimal route a few times over the course of 10 seconds, switching safely between two lanes back and forth a few times before committing. It used its turn signal fine, and the lanes were clear, so it wasn't a problem, but this isn't something humans do.

Sometimes it behaves oddly, but I have developed confidence that it will do those odd things safely.


>Once it seemed to change its mind about the optimal route a few times over the course of 10 seconds, switching safely between two lanes back and forth a few times before committing. It used its turn signal fine, and the lanes were clear, so it wasn't a problem, but this isn't something humans do.

Oh, I disagree, this is something I observe and in fact do myself quite a lot. We all run through our minds which route might be the quickest depending on certain factors. The difference is Waymo (or any tech) will base this on actual data (i.e., getting there quicker) vs humans who will be more emotionally driven (i.e., frustration at the driver in front, wanting to take the more scenic route, being undecided about stopping at that cafe halfway).

I'm all for self driving in highly populated areas. In a perfect world I'd like to see it integrated into all vehicles, and when entering specific areas you are told your car will enter self-driving mode. Arguably this makes the most business sense for Waymo, licence the underlying tech to manufacturers that already have capacity to produce vehicles vs compete.


Yes, but switching back and forth multiple times? I admit to having done even this before too, but I certainly didn't feel proud of myself after. A really good human driver would avoid this kind of conduct by having a (just slight) bias towards decision "stickiness" to avoid looking silly. This isn't purely aesthetic: looking silly or bizarre in your driving behavior, even if technically safe and legal and efficient, can attract police attention (not a concern for self-driving, I suppose).

That said, I admit that if these are the kinds of complaints we are discussing, as opposed to the kinds Uber attracted (like running a woman over in Arizona), Waymo must be doing pretty well. These are nitpicks to gradually address, not fundamental issues. Kudos to Waymo; it was always obvious they were nearly the only player seriously trying.


This tracks with how the messaging about Waymo has changed.

Early on, they had those concept cars that looked like they belonged at Disneyland or in a Chevron commercial. Then, they started modding off-the-shelf cars and talking up the Waymo Driver. I think at some point they decided their core competence would be self-driving specifically, leaving the "car of the future" bit to traditional car companies.


> We all run through our minds which route might be the quickest depending on certain factors. The difference is Waymo (or any tech) will base this on actual data (i.e., getting there quicker) vs humans who will be more emotionally driven [...]

I expect that robot taxis will be both consumers and producers of that actual data. They will likely report the traffic conditions they experience back to the company that runs the robot taxi service, and that will become input to the rest of the fleet.

If the time it takes for observations from a given robot taxi to be incorporated into the data received by other robot taxis is short enough it might be possible to get interesting feedback loops. It may even be possible to get oscillations.
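(A toy model of that feedback loop, assuming every taxi picks whichever route looked faster in the previous reporting interval; the routes, delays, and fleet size are invented purely for illustration.)

    # If the whole fleet reacts to the same slightly-stale congestion data,
    # it can oscillate between two routes. All parameters are made up.
    fleet = 100
    base_time = {"A": 10.0, "B": 12.0}       # free-flow minutes per route
    per_car_delay = {"A": 0.05, "B": 0.03}   # extra minutes per taxi on the route

    reported = dict(base_time)  # travel times as last reported to the fleet
    for step in range(6):
        choice = min(reported, key=reported.get)  # everyone picks the same route
        actual = {r: base_time[r] + per_car_delay[r] * (fleet if r == choice else 0)
                  for r in base_time}
        print(f"step {step}: fleet chooses {choice}, actual times {actual}")
        reported = actual  # next interval's choices use this stale snapshot

Run it and the fleet flips between A and B every interval, which is the oscillation worry in a nutshell; staggering decisions or anticipating the fleet's own effect on traffic would damp it.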


Yes! I dream of traffic that moves and integrates seamlessly almost like a school of fish, because of near-instant communication between vehicles - an automotive hive mind. Imagine not needing traffic lights because each car at the intersection knows when it is their turn...but I know we'll f it up somehow.

Still, a person can dream.


Agreed on this - I think Wayve is attempting this: building out the tech to license to manufacturers. Honestly, it makes the most sense, and I love the idea that all cars could have this and take over driving in specific areas.


I have video from my dashcam of a Waymo taxi doing a sudden three lane change, in moderately heavy traffic, to do a left turn to enter a freeway. This was a month or so ago. I really hope a human was involved in that. If not, there’s no way I would consider riding in one. If an officer had seen it, they would likely have written a ticket to a human.


Human drivers cross multiple lanes in heavy traffic all the time and certainly aren’t ticketed.


Why are we tipping Uber drivers?


Because they are humans who need to eat.


That argument applies to literally every single person to ever exist. Do you tip everyone you interact with on a daily basis? Coffee server, bus driver, lunch server, office cleaner, hairdresser, grocery assistant...

Maybe it's a cultural difference.


I agree with your sentiment, but traditionally (in the United States) cab drivers are one of the few service providers that one is culturally expected to tip.


After two of my (women) friends were assaulted by Uber/Lyft drivers, a weird smell is the least of my fears. If I'm sending someone on a ride late at night, Waymo's lack of a driver is a huge reason to prefer them over Uber/Lyft. But only if the destination is in a safe neighborhood. A human driver is going to be able to make a better assessment of whether it's safe to let someone off somewhere, whereas Waymos will randomly drop you off blocks away from your destination.

As far as humans suck at driving, it's not that they suck on average, but that the ones who do suck at it don't always have a sticker saying that they suck.


It is also that they suck on average.


Also, the really fantastic ones will be excellent for years and years, and then one day they're slightly sleepy.


And that one day they will take the taxi.

Averages don't work for risk evaluation.



Or get a text and get distracted at just the wrong time


These days to qualify as excellent you need good phone discipline.

E.g., turn off the phone, or turn off notifications, even vibrations. Or at least make a strict rule to ignore it while moving and only look while fully stopped, e.g. at a stoplight - although that's just decent-driver grade, not excellent, because of the temptation each notification will present.

A great driver also reviews the route for a couple of minutes before leaving in order to reduce reliance on GPS - you still use GPS, but because it's not the first time you've seen the material, you already know the shape of the route and just need reminders to confirm you're on the right track, so the GPS will genuinely steal far less attention. The two minutes will be well spent, and may save lots of time, because it vastly reduces the likelihood of wrong turns.


I had to give up on the smart watch because being able to turn your wrist and read a text is a really bad feature when driving, and I didn't trust myself with it.


Maybe if you're under 25 and have always lived in a dense city this seems like a valid take. Taxis aren't new, they have always existed. Just because they're driven by computers now isn't going to magically change all the reasons that people didn't use them before (hint: it wasn't because they were driven by humans).

No one with kids wants to ride in taxis with kids all the time. Ditto for anyone with hobbies that require transporting large things, like kayaks, bikes, etc. Or people with large pets. Or grocery shopping for more than 1-2 people. Or any of the dozens of other conveniences that Americans have come to expect from owning a car over the past century.


I have kids and don't like Taxis, but I'm not sure I entirely agree with your take. The idea of a humanless Taxi showing up to my house sounds way more appealing to me.

I can take my time to get car seats in and kids buckled, without feeling the pressure to hurry from the human driver.

I don't have to feel like my kids misbehaving are going to annoy a human driver, or get me a bad review in Uber/Lyft.

I don't have to worry about tipping, or the driver taking a longer route to charge me more.

I don't have to worry about small-talk, or awkwardly sitting in silence when I normally would be talking with those I'm driving with.

Obviously this doesn't cover all use cases for a car (pretty sure you can't load a kayak onto a Waymo because you'd block sensors), but it seems WAY better to me as someone who doesn't like to deal with the people aspect of Taxis.


> Obviously this doesn't cover all use cases for a car (pretty sure you can't load a kayak onto a Waymo because you'd block sensors), but it seems WAY better to me as someone who doesn't like to deal with the people aspect of Taxis.

In a world where waymo works as a taxi, it also works to deliver a human-drivable rental car right to your door (and send it on to the next customer when you're done with it).

So now the short term car rental user experience should be dramatically better, even if the robotaxi isn't appropriate for all the tasks.


> I don't have to worry about tipping, or the driver taking a longer route to charge me more. I don't have to worry about small-talk, or awkwardly sitting in silence when I normally would be talking with those I'm driving with.

This or an equivalent will arrive to a robo taxi near you when the service inevitably gets enshittified to hell and back.

Ads, trips shared with other humans, pay extra for heated seats, etc.


How do you make sure there isn’t human semen or worse on the seats on which you and your children will sit? At least with taxis driven by humans you knew that there was someone making sure that won’t happen that often, but with driverless taxis all bets are off.


As someone with kids... What if there is? The presence of dried semen on a seat on which my kids sit will have zero negative impact on them. At all.

Is it Yuck? Yes of course. But it also seems extremely unlikely. And it's a lot less Yuck than thinking about what's in the sand in public playgrounds that all kids visit constantly. And while I have a reasonable chance of preventing them from licking the seats, I have no chance of preventing them from eating some sand.

This is just a bizarre irrational worry...


> This is just a bizarre irrational worry...

Do you and your kids take public transport regularly? In a failing city/society, that is.

Because if the answer is "yes, I do take public transport in a city that struggles to pay its bills and I still don't care that the chairs have weird organic substances on them" then fair-play to you, but for me personally at some point I had to purchase a personal car (when I was already approaching my mid-30s) because I just couldn't convince myself anymore that it is ok to not want to sit down inside of a train ("better stand up by the window, that seat is too dirty").

And these robo-taxis will be worse than public transport, for the main reason that there's no-one "standing guard" inside of them (and, no, Big Brother cameras placed inside of them, which should be a dedicated topic all by itself, btw, really won't change a thing in that respect).


I also shake strangers hands and eat food made by teenagers and use public bathrooms and pet cats and sleep in hotel rooms. My children (if/when they occur) will attend daycare and school with other sacks of disease and share the spoils with me, as is tradition. Our society is dirty.

Do you avoid handrails and sanitize the doorknobs and gas pumps you interact with? Those are going to be far worse than subway chairs. It's probably not a bad idea, but it's a bit beyond what I suspect most people consider normal. I had a friend who lived like that, and he ended up being diagnosed with OCD.


> And these robo-taxis will be worse than public transport

If you soil a waymo taxi, they can ban you from ever booking another one, the same cannot be said for public transport

While I consider this somewhat dystopian, I do think it's pretty clear that they will be much cleaner than public transport for that reason


Just so we're clear: You're agreeing right? You're irrational about this and as a result it's damaged your life.

You reached a situation where everyday things are "too dirty" and rather than realising that's a mental health problem and you might need to fix that, you... found an expensive and elaborate coping strategy which necessitates further crazy beliefs.


Nothing like being psychoanalyzed by some tech people who have most probably not taken public transport in a long time.


Is that not just a general issue in public? Park bench? Back of the bus? There could be human semen everywhere! It's why I always insist on wrapping my kids in plastic before letting them leave the house.


Well, how will taxi-sex enjoyers make sure that no toddler vomited on the seat they intend to frolic on?


Waymos have been far cleaner than the Ubers I've ridden in. I think they are cleaned daily?


I wonder how this affects the price. Is the price you're paying now the real price, or a subsidised price?


Surely they have cameras? I’d trust that a lot more than a taxi driver keeping their cab clean.


When has that stopped anyone?


It may not stop someone the first time, but it's not hard to block the reserving account and prevent a second time from occurring. For most people the threat of being placed on the "no-fly" list is sufficient to ensure prosocial behavior.


Do you think that Waymo doesn't have cameras inside the car?


If you have smaller kids, most taxi rides are completely illegal in any western country and very dangerous for kids. Ever seen a taxi with 2 spare child seats or at least boosters? They are required by law for very good reasons, and those reasons are kids dying or ending up crippled even in relatively mild crashes.

That's just one tiny example out of a sea of examples.


In many countries, taxis are often specifically exempted from child seat mandates. Doesn't make them any safer, but it's not necessarily illegal.


I've taken taxis with infants and small children several dozen times. It doesn't take me more than about twenty seconds to pop the car seat out of the stroller and latch it into the back seat of a taxi, for slightly older kids we have a few lightweight folding booster seats, and personally I've found that a car seat vest is particularly handy and lightweight.


A quick search online shows that your statement is misinformation: taxis can be exempted in many "western" countries.

Before making such bold claims, you've got to know a little tiny bit of the topic or at least look it up to make sure.


> I don't have to worry about tipping

I can almost guarantee that if they don’t already, the robo-taxis will eventually start asking for tips.

This is already the case at self-checkout in some stores for example.

As long as the companies can get away with it, they will tack on any number of extra fees and charges even if those fees and charges really don’t make any sense.

Hell, even tipping people does not really make sense the way it works in some places. A person working for a company should receive enough pay from the company itself that they don’t have to actually rely on tips in order to make enough money to survive. Tips should be a nice extra that customers willingly add because of good service. Not a forced extra percentage that they have to pay on every transaction just so that the company can pay less to their employees.


I do not accept your guarantee.


I actually wonder if “tip the development team” makes sense (assuming tipping is gratuity and not because an employer pays below-living wages). Weirdly it might even lead to quality improvement because low tip areas could be detected and debugged.


They won't be tips as such, but the average amount of the tip will be baked into the price.

If people are willing to pay base charge + tip for an Uber, then that is what robo-taxis will charge too. Especially if one company is allowed to keep a monopoly on the technology.


There will be bullshit booking charges etc. though. The thrust of their point is largely correct. I really wish people were more sensitive to being stiffed by these sorts of stupid charges like we are … the fact that they aren't means there is often no alternative.


I'll take bullshit charges that I can know about upfront over being nagged for tips every day.


The point of bullshit charges is often to obscure total cost upfront. We are absolutely rotten with them in the UK. Not sure the US is better but I think they are really anticompetitive because they create a cost to discovering the true cost of goods. I have a personal policy of not buying if I discover stupid charges at the end of a sales pipeline but it’s sometimes incredibly inconvenient.


Autonomous systems can ask for tips but humans would feel much more comfortable not tipping.


I used to live in China where taxi usage was much more ubiquitous, and… you really learn to live in a world where you aren't expected to live in a car, be it with public transit (Europe, Japan) or public transit + lots of taxis (China) or tuktuks or whatever. But yes, your hobbies tend to be different and adapted: bikes, for example, get you places rather than being taken places, or you take them on the train, which actually hits the trailhead you want to use. You rent the kayak on site, and there is always a place to do that because lots of other people are in the same car-less boat as you are.

You mention America at the end of your comment, but the rest of the world isn't the same. Waymo doesn't really have to limit itself to the States once they get the concept worked out.


Having to depend on crappy rented sports equipment sounds miserable. Maybe people in the rest of the world will tolerate that but I want no part of it. I'll continue buying my own personal large vehicles so that I can fill them with as much stuff as I want.

When I'm out doing something, the car also serves as a reasonably secure private locker where I can store things without carrying them around.


Sure, I don't imagine they care much about changing everyone's opinions. What will happen is your kids and grandkids will reach an age where they would need to get a driving license but won't see the point, because they'll already have been using robo-taxis for 5 or so years.

Hell, there are people who just use Uber exclusively now.


Unlikely. One kid already has a drivers license and uses it frequently. I'll make sure the other kid gets one as soon as she's eligible.

Not every young person spends all day hiding in their room, doom scrolling on social media. Some of them have to get to sports practice with bulky equipment in places that public transit and robo-taxis don't go.


You just live in a world where that is possible and common. I just mentioned that much of the rest of the world isn't like that at all. Japan, for example, has trains/trams straight to campgrounds, or at least buses. They still manage with sports somehow, but you can imagine that the dynamics are very different from what you are used to.


You are an American and have that privilege. If we granted that privilege to everyone else, the world probably couldn’t handle it, self driving taxis or not.

What's more, I don't think we will have that privilege for long; personal transportation is a luxury in most of the world, and it is becoming a luxury here also. We will adapt, though, like everyone else has.


This will work for some people in some sports, but it's hardly a universal solution. Many activities like rock climbing or backcountry hiking/skiing, will never have good public transport access.

Renting gear is fine for casual users, but serious practitioners in almost every sport are very particular about their gear.


I think you're wrong about most of the scenarios on your list. And once the market is mature, I can imagine it would be great to be picked up in a minivan after a day's cycling somewhere new and not on a loop route.

Americans have become emotionally attached to cars because of what they enable them to do. That might take a while to die. But in Europe cars are more of a pita to own and run because we have less space. I don’t have any great love for mine. As soon as waymo gets here and is reasonably priced I’ll get rid of my car.


Yeah, Americans just have more space, and America is just far larger, and Americans often do relatively long road trips to places where other modes of transportation are not possible or prohibitively expensive. I don't think that is ever going to die, nor should it.


I don’t know… I think a very big reason why people don’t take taxis is because they are very expensive especially for longer rides. This seems like a thing robo taxis might change. If the driver goes away, they shouldn’t be much more expensive than e.g. car rentals.


This will boil down to availability and price. Taxis are generally just too expensive to use often and also waits are too long. Of course, I'm comparing cost of frequent taxis vs buying a used car.


In a place like Singapore, where a taxi ride is $10 but a Corolla starts from $100,000, the equation will strongly favor robotaxis for everyone.


The other problem is that the economics flip once you also have to own a car. The marginal cost of each trip goes down … but if Waymo is good enough 95% of the time then owning a car might not be necessary any more.


I personally don't think the price will ever drop low enough for it to make sense to most people to drop cars entirely. The US is notorious for being unwalkable, save for a few select areas.

Working remotely helps a lot since there's no need to drive every day, but it's only a fraction of overall people who have this privilege.

Still, I'm looking forward to seeing Waymo at my town. It would make a good DD and backup in case my car needs repairs, etc.


That is because the choice is own car or taxi: the automobile is the *only* supported choice for mobility in much of the United States, to the detriment of any other mode of transport.

People in the Netherlands get by fine without a car: kids just bike to school with their friends instead of sitting in the backseat in traffic for 45 minutes every morning. This is because money and space are not spent exclusively on car infrastructure, but also on cycling, walking, and public transport.


My family has a couple cars but we still ride with our kids in taxis all the time, for example to the airport or into/out of the city. Even hauling bikes isn't insurmountable -- we've taken weeklong bike camping trips with friends and because biking in a big circle isn't as much fun we hire a bigger vehicle that can haul a dozen bikes to the starting point.

> Just because they're driven by computers now isn't going to magically change all the reasons that people didn't use them before (hint: it wasn't because they were driven by humans).

Sort of. The primary reason I don't hire vehicles more often is cost, which is related to the human driver. The wealthiest families I know are much more likely to use a car service to ferry family members around.

If there was a car service that could whisk us to school, work, grocery shopping, etc with no more than 15 minutes advanced notice for less than the cumulative cost of a similarly-sized private vehicle I'd sell one of our cars in a heartbeat. I have no idea whether that future is years or decades away, but when it occurs many families I know would go from 2 or 3 cars down to 1.

I'll admit that going from 1 car to 0 cars would be a tougher sell. For that I'd have to be confident in five nines of availability and vehicles that can haul equipment like bikes and kayaks. But that doesn't seem like an insurmountable problem, just a logistical one that'll take a bit longer.


All these examples are casual rides, while the context was about taking a taxi to work daily. Of course you can keep your car for the weekend drive to the mall, or to the slopes, or if you're a soccer mom, but most employed people would definitely save on the daily commute. Expectations change in the face of convenience.


The only problem with taxis is that they are expensive and possibly not available. Both of these issues are very much the kind of thing a robotaxi might fix.


I've said this on HN before as well, but I've turned into a full-on Waymo evangelist (Los Angeles user here). Couple of things to add to Jake's comment...

The driving experience itself is on par with the "best" drivers I've ever ridden with (things like stopping at actual stop signs, for instance, not racing from one traffic light to the next, and being courteous to bikes and pedestrians), not to mention just the peace and tranquility of being in a car solo when you're not having to drive (I know, I know, mass transit is better for countless reasons and this is actually doubling down on human isolation, which is probably not great long term).

Anyway, I have zero interest in getting into an Uber at this point. I'd wait longer and pay more for a Waymo if given the choice. And I'm fully aware people will, if this works more broadly, lose jobs because of it. I'm not insensitive to that, but I don't think the genie is going back in the bottle barring catastrophic incidents by Waymo et al that cause regulators to kill self-driving cars altogether.

Note that I did witness an incident where, on a road with no lane markings, the Waymo straddled a left-turn "lane" and a straight-travel lane. It's an intersection I transit often; normal drivers have great trouble with it, and frankly it makes me uneasy every time I turn left there as well. The Waymo was definitely perplexed by it.

For those who talk about how Phoenix's roads are straight and wide... This is not true in Los Angeles (nor in SF though SF is more of a compact grid than LA). For those of you unfamiliar, a lot of the streets in LA where Waymo operates today are very narrow, with cars parked on both sides and so there's inadequate room for two cars to go down them without waiting for another car to pass. These same streets have zero lane markings on them. I've experienced this several times in Waymo to date where the car just "gets it," though it's almost too cautious when it needs to get over to let another car pass when there's not enough space for both. And if you read all of that and say "what about the weather?" It's obviously an issue and I fully agree it will delay the rollout "everywhere."

All that said, I cannot wait until I can jump in one of these things, from Waymo or any other company, and safely go up to the mountains or some other road-trip destination. The economics of longer trips, particularly to rural areas, are likely tricky bc of the inability to count on a return fare, but, man, I do think self-driving cars are a radically important technology that will vastly change how we transit and, really, how we live. That is, if they don't fuck up too much en route to getting there.


Honest question out of curiosity, since you seem genuine and open to discussion…

I agree with all of your points about Waymo vs. Uber-like ridesharing—the average Uber ride is so much less safe that it’s hard to argue for.

But I also agree with your aside about the growing isolation of society—the longer term implications of every event, meal, and errand being separated by autonomous journeys are staggering.

So the question is, how do the societal isolation factors play into your decision making? (Honest question, not a gotcha, I’m curious how others think about these tradeoffs.)


If you're already inclined towards isolation, like I am sometimes, driverless taxis will help with that. But if you're inclined towards going out and doing things, which I also am sometimes, there are few incentives more alluring than a fast and cheap way to get from point A to point B. If labor and gasoline are removed from the equation there's no reason rides can't be ridiculously cheap, and spending $20 on a round trip instead of $80 lowers one of the biggest barriers for going out (at least in urban areas and/or when drinking/drugs are involved).


Cheaper, safer and more effective transportation seems likely to increase mobility and decrease isolation.


I can spend more time at my friend's place, maybe have a beer or two without having to worry about driving back - so I think it encourages socialisation.


I'm not sure if you mean about Waymo/self-driving cars or more broadly, but I'll assume you mean cars. Let me first say I'd love to create a list of all of the long-term pros and cons of self-driving cars because I'd be far better-equipped to answer, but my off-the-cuff thought: this technology, if it survives, will make it easier, safer, less stressful and less costly for people to transit, and will also make almost every place more livable (the impacts will be more profound in urban areas than rural, but both will benefit). That sounds like a great way to increase interactivity, not lessen it.


Uber sometimes offers a service called UberPool where you share the car with another passenger in order to save money right? Couldn't Waymo do the same?


Didn't Uber start branded as a "ride sharing" app where the app helped you find someone to car pool into work with?

I suspect an underlying issue with socialising is faith in humanity. It's hard to have faith in humanity in the modern world when every front appears to be telling you otherwise. If you don't have faith in humanity, then you're limited to interacting only with those you "have to" and those who "are vetted".


That was Lyft. Uber started as an easier/cheaper way to call for a ride in a “black car”.


They could in theory yeah. I’m not sure if Uber still offers it, but I think its uptake is so low (anecdotally from people I know) that it’s effectively not a solution to societal isolation, because it doesn’t end up being used.


I thought it was pretty well used pre-covid, especially when the rides were sometimes 50% of a regular ride.

At least I personally used it a lot, and knew several people that did.


How is that different than if you were driving yourself?


Well, I’d say it’s different in a similar way that Uber’s are different from driving yourself. For physical trips it’s similar, but lower barrier to entry, so you do it more. And for deliveries it’s a much lower barrier because you don’t drive at all.


Can you just clarify in the situation where, 'the Waymo straddled a left turn "lane" and a straight-travel lane', whether the behaviour of the Waymo was 'safe' although obviously incorrect for multiple reasons?


It was absolutely incorrect on the Waymo's part. I was trying to counterbalance my positivity with a mistake I've seen a Waymo make. That said, I would not call it unsafe because where it was located it was not going to lead to an accident. It was confused by the intersection, which also happens to human drivers at that intersection. That is not to excuse the Waymo ("oh, it's just like a person so that's ok!"), just trying to point out that it may be a great driver (again, my opinion), but doesn't mean it's infallible. Of course others on this thread have pointed out statistics about how Waymos are faring, but I was just trying to share my experience riding in one.


Thanks. As a human driver sometimes I feel like slowing down and being in the wrong place because the intersection is confusing is actually the safest action. I'm happy if an AI fails in the same way.


> I live in Phoenix and now take Waymo regularly, and it seems like we're close to a world in which most people take self-driving cars most of the time

I live in a big city (larger population than Phoenix) in the UK and I've never even seen a self-driving car. Anywhere. I don't even think such a thing exists on public roads in my country. That Gibson quote about the future not being evenly distributed, etc.

Just a data-point.


Waymo is basically unique in offering Level 4 Self Driving (hence this article) and they only do this in a small number of locations in the US, such as (parts of) Phoenix - so, yes, you're correct that in the UK, or indeed anywhere outside of those few locations, there aren't real "Self Driving" cars.

You won't know if people have Level 3 "Self Driving" cars because unlike Level 4, the Level 3 cars always have a human sat in the driving seat, it's just that maybe the human isn't paying attention and maybe the car is driving anyway. It may be difficult to gauge (beyond guessing) how many people you see are bad drivers and how many aren't actually driving at all under L3...

L1 (the machine does some of the work but a human driver is always doing much of the driving) is certainly something you see and don't even think about. Intelligent Cruise control (ie it won't smack into the car ahead but instead slow down) on a motorway, maybe automatic lane keeping on somebody's fancier or newer car, it's not "Self driving" as you'd understand it, but it's something.

The way these "Levels" work is that L3 to L4 is the point where we transition from "The human is legally driving but the machine is offering more and more assistance" to "The machine is legally driving and the human is asked less and less often to do anything at all". As a result a person who is literally blind, and thus couldn't possibly drive the car or obtain a license to do so, can (and they do) use a Waymo, just like they'd use an Uber, but they cannot do the same with Tesla "Full Self Driving".


There are only a handful of Level 3 autonomous driving systems in existence. The Mercedes-Benz Drive Pilot system illuminates exterior turquoise lights to indicate when it's active so you don't have to guess who is driving. I'm not sure whether Drive Pilot is available in the UK yet.

https://www.autocar.co.uk/car-news/technology/mercedes-use-t...


> I live in a big city (larger population than Phoenix) in the Uk

That’s an interesting way of saying you live in London ;)

(Phoenix urban area is more populous than every urban area in the UK except for London)


Manchester not London.

According to Kagi the population of Greater Manchester is 2.8 million vs 1.61 for Phoenix.


You're comparing a county area (Greater Manchester) with the direct city population.

Phoenix's metro population alone is 4.8m.


You're right. Thanks for pointing that out.

I think my point stands, that even large urban areas in the UK have no SDVs.


Sure, it's also worth pointing out nobody in Phoenix has ever seen a person with universal healthcare or free university education.

Countries develop at different rates on different things.


Humans are astonishingly and unreasonably good at driving. There are, indeed, a lot of traffic deaths but this is because we drive a mind-boggling amount so even a very low rate of fatalities adds up to a substantial number.

A significant portion of traffic deaths also occur in special conditions-- at night, with intoxicated persons, in bad weather.

Existing self driving cars won't even drive in those more difficult conditions.

In terms of passenger miles driven, even if self-driving cars were only as safe as non-intoxicated human drivers, the expected number of deaths so far would still be below 1.

Safer cars are an excellent goal but they're not automatically a given result for self driving.

> Waymos avoid many of the Uber challenges: foul-smelling "air fresheners," dubious music / talk radio choices, etc.

And introduces new ones like being dropped off blocks from your destination because the car refuses to drive on perfectly fine roads, service being unavailable in poor weather, and extending Google's tracking of everything you do online to offline.

:D

Aside, you can just ask uber drivers to turn off the radio.


> Humans are astonishingly and unreasonably good at driving. There are, indeed, a lot of traffic deaths but this is because we drive a mind-boggling amount so even a very low rate of fatalities adds up to a substantial number

To put some numbers on it in the US cars are driven about 3.2 x 10^12 miles per year, and around 4 x 10^4 people are killed in car accidents (drivers, passengers, pedestrians, and cyclists).

That's one death per 8 x 10^7 miles.

There are around 2 x 10^6 people non-fatally injured in car accidents per year in the US. That's an injury every 1.6 x 10^6 miles.

There are around 4 x 10^6 non-injury car accidents per year in the US, which is one every 8 x 10^5 miles.

If we assume all miles driven are equally risky and that we drive 40 miles per day, 365 days a year, then we would expect to be in a non-injury car accident around once every 55 years, be injured in a car accident around once every 110 years, and be killed in a car accident around once every 5500 years.

Of course almost no one drives all their miles at times and in conditions when the risk per mile is average so when estimating your personal risk you need to take that into account.
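
For anyone who wants to check the arithmetic, here is a minimal Python sketch using the rounded figures quoted above (the yearly miles, deaths, injuries, crashes, and the assumed 40 miles/day are all taken from this comment, not fresh statistics):

    # Rounded US figures quoted above, per year
    miles_per_year    = 3.2e12   # vehicle miles travelled
    deaths_per_year   = 4e4
    injuries_per_year = 2e6
    crashes_per_year  = 4e6      # non-injury crashes

    miles_per_death  = miles_per_year / deaths_per_year    # ~8e7 miles per death
    miles_per_injury = miles_per_year / injuries_per_year  # ~1.6e6 miles per injury
    miles_per_crash  = miles_per_year / crashes_per_year   # ~8e5 miles per crash

    personal_miles_per_year = 40 * 365  # assumed 40 miles/day, every day

    print(miles_per_crash  / personal_miles_per_year)  # ~55 years per non-injury crash
    print(miles_per_injury / personal_miles_per_year)  # ~110 years per injury
    print(miles_per_death  / personal_miles_per_year)  # ~5500 years per death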


An injury every 1.6m miles isn't amazing. That is almost a 1/3 lifetime chance of injury if you drive an average of 12k miles annually for 50 years. (Sidenote: human units are easier to read).

A comment I wrote 3 years ago has more: https://news.ycombinator.com/item?id=26950254

The old "look at the person to your left, now look to the person on your right" meme comes to mind. One of you will probably have an accident with an injury in your lifetime.

I ran the same calculation for dying in a car accident and got a lifetime probability of 0.7%, but I'm not sure I did it right.
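
One way to sanity-check those lifetime numbers is to treat crashes as a Poisson process with the per-mile rates from the parent comment; this is a rough sketch under that assumption (12k miles/year for 50 years), not an actuarial model:

    import math

    lifetime_miles   = 12_000 * 50  # 600,000 miles over a driving lifetime
    miles_per_injury = 1.6e6        # from the parent comment
    miles_per_death  = 8e7

    expected_injuries = lifetime_miles / miles_per_injury  # 0.375
    expected_deaths   = lifetime_miles / miles_per_death   # 0.0075

    # Probability of at least one event, assuming independent (Poisson) events
    p_injury = 1 - math.exp(-expected_injuries)  # ~0.31, roughly 1 in 3
    p_death  = 1 - math.exp(-expected_deaths)    # ~0.0075, roughly 0.7%

So under those assumptions the ~1/3 injury figure and the ~0.7% fatality figure both look about right.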


I disagree.

Phoenix has the perfect climate for self-driving cars.

It will require a major technological leap in order for them to succeed in the "real world" (fog, rain, snow, etc).


Disclosure: I work for Waymo.

We handle both dense fog and heavy rain on the latest vehicles. The best blog post is probably https://waymo.com/blog/2021/11/a-fog-blog/ but you can find a lot of videos in the rain.

Snow and very cold weather is a challenge for sensor cleaning. We've done some testing in both NYC and Buffalo (https://waymo.com/blog/2023/11/road-trip-how-our-cross-count...) to collect data.


I'm rooting for you, but I live in snow country and always have a chuckle any time someone says self driving will be ready soon. There's so many situations that need to be handled when driving in winter and some of them I can't even imagine how you'd address in software.

Winter here changes daily between

- no road lines visible

- snow packed into ice randomly, making the road a camouflage pattern

- snow is fresh/deep so no road is visible and you navigate based on the slight hump in the snow where you know there's a curb

- same as above, but instead of a curb a slight indent where there's a ditch

- slush piles outside the tire lanes, which if hit will suck you in or cause you to spin out

- ice/snow on hills, so time your arrival for rolling stops at intersections because stopping is not an option

- active snowfall (limited camera vision, and I'm guessing reduced/useless signal from lidar)

- hail

- sporadic black ice (it's easy to slow down when it's icy everywhere, but knowing when and where black ice is likely when it's sporadic is a skill)

- the "lanes" formed by peoples tires in the snow often don't align with the official road, and sometimes a lane goes missing in this situation

And all that's after you deal with sensor cleaning.


What will happen is that you just won't be able to go anywhere until the roads are cleared. That is probably not a bad thing, it will allow plow trucks to clear the roads more quickly without having to navigate around people driving (or trying to drive) on the uncleared roads, spun out/stuck vehicles, crashes, etc.


We're not really at self driving if the solution for poor conditions is "don't drive".

Snow drifts in the wind, side roads don't get regularly plowed, and conditions change rapidly, so unless they can handle the majority of the list above they won't be able to handle winter, period.


I'm saying that the limits of self-driving cars will be an excuse to force us to behave the way the authorities want us to behave. For better or worse.


Winter here is 4 to 5 months out of the year. When it snows, everyone goes about their day normally, just at a slower pace. The authorities have no problem with that, and having drivers on the road does not affect plowing in any meaningful manner.

I'm not sure why you are trying to justify such a large shortcoming - for most people having your car be randomly unusable for 1/3 of the year would be a big deal.


I’m not saying I would trust getting into a Waymo now in those conditions, but I also wouldn’t assume the same things that are difficult for humans will be difficult for self driving. I’m optimistic these hurdles can be overcome.


Oh I'm also optimistic they can be overcome, I'm just less optimistic on the timeline. I'll be pleasantly surprised if self driving can handle winter a few decades from now.


In the same vein, I don't ride Uber often, but when I do I often find that drivers leave their windows closed and the car's air circulation turned off completely. When I ask for "a bit of airflow" they apparently hear it as "I'm too hot", so they turn the AC to maximum power.

I'm not sure whether this reflects their own preferences, what they think customers want, or if they are just completely oblivious.


Don't be too timid to tell them what you want. If they turn the AC up to max instead, just say "Sorry, I'm not hot, I just want some fresh air."


I have never had trouble cracking open the window on my own. Do you not try?


The child safety disable is often turned on for backseat windows.


The olfactory problem with waymos is that if someone gets in one dirty or foul smelling there isn't a driver to kick them out. Waymos are starting to get more ridership and some of those people are going to be absolute pigs. I gave feedback to Waymo about a recent ride where the entire car smelled like a fat, unwashed ass and the best case scenario there is that they took the car out of rotation immediately, the rider right before me left it that way and that the rider will be identified after several instances of those reports. The reality is probably that the car picked up several riders after me until one reported a problem mid-ride.


> I gave feedback to Waymo about a recent ride where the entire car smelled like a fat, unwashed ass

Knowing BigCo reputation, I think it's equally possible that Waymo and/or BigCo accounts will be banned for the actual perp, the complainant, or a random rider in between… what a world…


Wouldn't take long before that gets abused I bet. I'm picturing a world where someone knows their ex uses Waymo to get home every day so requests rides around that time/place so they can report them as odorous/damaged/vomit/etc and get the prior rider banned.


Need to add some smell sensors to the inside of the car, as well as the cameras


They do have always on cameras inside.


Waymo is currently under investigation for multiple incidents, not all of which it had previously disclosed to the NHTSA [0]. The recent light pole incident also doesn't help [1].

If they are doing 50k rides a day, then they would appear to have a remarkable safety record.

It will be interesting to see if these investigations lead to a repeat of the Cruise debacle or if this will become the price of doing business.

[0] https://www.reuters.com/business/autos-transportation/us-saf...

[1] https://www.youtube.com/watch?v=HAZP-RNSr0s


Anecdata, but watching the Waymo cars compared to Cruise (pre-ban) was night and day. Before Cruise was banned in SF, I would often see them violate traffic laws and fail to navigate basic intersections. Waymo isn't perfect, but it's better than Cruise and the average SF driver, which is good enough for me.


Anecdata 2, I bike through SF almost daily, and much prefer a Waymo driving near me as opposed to your average SF driver.


That's cool, until it's not. It's very easy to release an upgrade that stops a bit less at stop signs and see data showing increased profit without increased accidents. Same with code updates that will make cyclists' lives worse unless there's an actual change in a KPI they track. You're not really their main concern, especially after they IPO and get acquired by Apollo or the billionaire du jour.


In the United States there are few legal repercussions when a human driver kills someone as long as they are sober and utter the phrase "I didn't see them". Therefore, biking on US roads means trusting in the inherent goodness (and attentiveness) of the drivers around you.

Driverless cars run by a company protecting itself from reputational and legal risk seems less dystopian than the status quo.


Yes, I don't understand how anybody who's ever ridden a bike in a major American city isn't super excited about high-quality self driving vehicles. The crazy stuff I see on a daily basis while out biking in Seattle (and statistically we are one of the best places to bike in the US) means I can't wait until these things take over :-)


how can you read my comment and infer that killing someone is a metric they don't care about??? that's musk level shit.


I apologize if I was unclear. In response to a cyclist saying they prefer being near Waymo vehicles to human drivers you said:

>> that's cool. until it's not...same with code updates that will make cyclist life worse...you're not really their main concern

I agree and expect that the wide safety tolerances driverless cars currently have will become tighter as they gain more experience, and that this will make them more efficient but potentially less pleasant to be around than they used to be.

But even if pedestrian and cyclists lives are not a main concern for self-driving car companies, some concern is better than none. For some human drivers their concerns seem to be things like not getting arrested, getting to their destination as quickly as possible, checking social media to satisfy their boredom, and not scratching the paint on their vehicle. Some drivers consider vulnerable road users like cyclists to be sub-human [1].

My point is that the bar in the US has been set so incredibly low that even if the code updates make their products worse for cyclists than they used to be, or even kill some vulnerable road users, that may still be safer and preferable to the incompetence and complete indifference on the part of human drivers.

Having said that, the same calculus may not apply in countries that don't issue drivers a license to kill people, so the bar for driverless cars is likely to be much higher in such places.

[1] https://www.sciencedirect.com/science/article/pii/S136984782...


What does that even mean?

If Waymo hits a cyclist which leads to death, and Waymo is found to be at fault, that's definitely going to make headlines and potentially lead to a pause of the entire operation.


We’ve already seen regulators are willing to take strong actions against dangerous operators.

These regulators should be supported and kept clear of regulatory capture. Other countries can do this, so should the US.


That's just being a cynic for cynicism's sake. They are already owned by a billionaire company, so there is no IPO. And they still have at least a couple of decades where the game they need to play is getting riders and legislators to trust them, so they are incentivized to make their cars very safe so they can roll out to more cities and countries. It takes one bad accident to get the public to turn against them, and there is no technological edge that can save you if the government decides to make your entire business illegal.


Waymo is a subsidiary of Alphabet.

And not sure why you think running stop signs or any anti safety measures would increase profits.


Because once the current safety scrutiny has passed you might get more trips done by setting the ai to be more aggressive in traffic. Then you are into VW style software updates with a profit motive and no mechanism to hold them accountable?


> no mechanism to hold them accountable

Like banning them altogether following a public outcry? That is the mechanism to hold them accountable.

Also in individual cases it will be very easy to sue them for accidents they caused or contributed to. Already is.

Where does this "no mechanism to hold them accountable" come from?


>And not sure why you think running stop signs or any anti safety measures would increase profits.

Because these big companies like Google are actually evil. As an example, the mobile YouTube app does not let you use it if you turn off the screen. So Google decided that wasting energy and killing batteries is an acceptable thing to do, and this is pure evil - I would accept them adding more advertising or whatever, but killing the lifespan of a device and wasting energy is truly evil shit.


The overall safety record is amazingly good: https://arstechnica.com/cars/2023/12/human-drivers-crash-a-l...


Waymo has notably escaped any investigation of the "Prius vs Camry" crash induced during unsafe testing done in pursuit of a demo https://www.newyorker.com/magazine/2018/10/22/did-uber-steal...

> The car went onto a freeway, where it travelled past an on-ramp. According to people with knowledge of events that day, the Prius accidentally boxed in another vehicle, a Camry. A human driver could easily have handled the situation by slowing down and letting the Camry merge into traffic, but Google’s software wasn’t prepared for this scenario. The cars continued speeding down the freeway side by side. The Camry’s driver jerked his car onto the right shoulder. Then, apparently trying to avoid a guardrail, he veered to the left; the Camry pinwheeled across the freeway and into the median. Levandowski, who was acting as the safety driver, swerved hard to avoid colliding with the Camry, causing Taylor to injure his spine so severely that he eventually required multiple surgeries.

> Levandowski and Taylor didn’t know how badly damaged the Camry was. They didn’t go back to check on the other driver or to see if anyone else had been hurt. Neither they nor other Google executives made inquiries with the authorities. The police were not informed that a self-driving algorithm had contributed to the accident.

> According to former Google executives, in Project Chauffeur’s early years there were more than a dozen accidents, at least three of which were serious. One of Google’s first test cars, nicknamed kitt, was rear-ended by a pickup truck after it braked suddenly, because it couldn’t distinguish between a yellow and a red traffic light. Two of the Google employees who were in the car later sought medical treatment.

It was a long time ago, but Larry Page was well aware of it, and imagine if that incident received fair coverage and investigation.


I am having trouble imagining this scenario in a way that makes Waymo look as bad as you imply. It sounds like the human-driven vehicle if it was "boxed in" on an on-ramp needed to slow and merge, rather than racing to pass on the right, running off the road, and causing a spectacular single-vehicle wreck. The way it's described in that paragraph seems to be ironclad proof of the need to promptly relieve humans of driving tasks.


It doesn't make the tech look bad, but to me it makes the safety driver & the other executive look callous and uncaring.

> They didn’t go back to check on the other driver or to see if anyone else had been hurt

They should have made sure the driver was okay.


> It doesn't make the tech look bad, but to me it makes the safety driver & the other executive look callous and uncaring.

The safety driver was Anthony Levandowski, who left Google for Uber, taking with him a bunch of stolen IP, at Uber ran a cowboy self-driving car division that got pedestrians killed, Levandowski got sued by Google, ended up in prison and Uber laid off the entire division. Later he was pardoned by Trump.

So good news - the callous and uncaring safety driver has been fired, sued, and imprisoned.


Larry Page knew about the crash and tried to retain him


Yeah, the worst read about the car here would be "it's not very courteous in merge situations" in which case I implore anyone reading to drive in Maryland one single time.


I don't understand the downvotes of the parent post.

I am unfamiliar with the details of this incident and my reading based on the facts presented is similar.

Could someone provide more information?


Threads on HN bring people from the mentioned company. All root comments saying bad things will always get downvotes.


This is especially true for comments that disagree with Googlers, likely because there are soooo many Googlers now and they (and Waymo) have highly aligned perspectives. Especially on Google-launch-related posts, 5-10 point swings in 24hrs can happen.

A good piece by an ex-Googler on his 2-year journey toward recognizing the scale of institutional thought at Google: https://mtlynch.io/why-i-quit-google/


> causing Taylor to injure his spine so severely that he eventually required multiple surgeries

I recognize accident lawyer work when I see one :) They charged Waymo’s insurance to the max.


Levandowski stole Waymo trade secrets, and only escaped the full consequences of his actions because of a Trump pardon. He is not representative of anything about Waymo in 2024.


Larry Page was an ardent supporter of Levandowski and this evidence illustrates Waymo’s core safety culture: that they’re above regulation and above the law. Same mindset illustrated in Google’s anti-trust trials.


As a proponent of good public transportation, I'm a bit afraid that automated taxis will get big enough in the USA that they will start to influence city-wide decisions on how to develop transport networks, even in Europe, when the time comes for them to expand their business.


I have the opposite take/hope.

Self driving buses will be such a boon for public transportation. Now you can have 24 hour buses, that operate on holidays as well, or even dynamic, short term routes based on demand (eg: after a concert or sports event), without being dependent on the availability of pre-allocated human drivers.


No need to be afraid. Public transport will evolve to include small autonomous vehicles. The economies of scale you get by packing people in larger vehicles mostly have to do with the cost of fuel and staffing.

Electrical autonomous vehicles don't have a need for a driver and electricity is relatively cheap. So you don't get much in the way of economies of scale by making them bigger. Most city journeys would be under a kWh. Even at current grid pricing that's cheap.

Eventually, cheap autonomous vehicles could be mass produced at low cost and would have very low operational cost. So the ride cost would be comparable to, or lower than, current public transport options.
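
To put rough numbers on "under a kWh": assuming something like 0.18 kWh/km for a small EV and a grid price around $0.15/kWh (both my own assumptions, not figures from the thread), a short city trip costs only pennies in energy:

    # Back-of-envelope energy cost of a short city trip (all values assumed)
    kwh_per_km    = 0.18   # assumed small-EV consumption
    price_per_kwh = 0.15   # assumed grid price, USD

    trip_km    = 5                        # a typical short city journey
    energy_kwh = trip_km * kwh_per_km     # 0.9 kWh, i.e. under 1 kWh
    cost_usd   = energy_kwh * price_per_kwh

    print(energy_kwh, cost_usd)           # 0.9 kWh, ~$0.14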


> The economies of scale you get by packing people in larger vehicles mostly have to do with the cost of fuel and staffing.

Trains (and train-like options such as metros) are vastly more efficient than cars in number of people moved per unit of time per area used. That might not be a big deal in suburbia, but in dense inner cities it's one of the most important drivers of public transport.


Autonomous vehicles could chain up or drive really close together and achieve similar space efficiency. Also, if you look at a train track, it's mostly empty space with a train passing occasionally. Very different from a well-used road. And autonomous vehicles could collaborate to counter any congestion.

What makes trains efficient has more to do with the cost of energy and drivers than anything else. Both of those go away if you have autonomous electrical vehicles.


That "train passing occasionally" holds the equivalent of hundreds of cars.

A random crappy light rail line will do the equivalent of 5 lanes of traffic (each direction). A serious subway more like 20.

https://visual.ly/community/Infographics/transportation/solu...

Even if you run cars with no distance between the bumpers you'll still need room for changing lanes, crossing, and the like.


The New York subway moves 3.2 million people per day. Taxis and ride share options are together good for about 1 million. Mostly non autonomous and polluting of course. If you triple that with autonomous vehicles that use the road smarter, it sounds doable to match what the subway is moving around. And of course people would be using these for point to point traffic and cars would be able to re-route based on traffic as well.

We'll see how this plays out.


And now compare how much land surface a metro needs versus how much cars need. And are you sure that current streets would fit 3x the cars?


These comparisons are never quite apples-to-apples because the heavy rail line assumes only a few passengers have any luggage and that the entire line is used only for passengers, whereas the capacity rating for a road allows everyone to take large amounts of luggage and freight is always mixed in as well.

You also have to take into account all the other factors that make roads preferable, for example, that rail capacity number assumes perfect utilization. In practice railways often have lots of downtime due to overnight shutdowns, broken signals/trains and labor strikes. None of these affect the roads.


Where can I buy these roads that don't require maintenance? Cars that never break down? Road traffic signals that never fail? For the first I'm asking for a friend, as the road outside her front door is currently torn open for a whole month and it sure seems like "Just magically never do that" would have been a better option if you insist it's so easy.

The lines near me have freight on them (I live in a port city, a noticeable fraction of the country's imports and exports go via intermodal containers on trains) and still run like two services to London and two to other big cities per hour. The freight has to fit in between passenger services, that's a policy decision and the US just picked wrong.


One thing I try to do is read stuff from people whose job it is to do stuff. What I read from transportation people is that for dense urban areas cars are inefficient at loading and unloading. And self driving cars are the worst.

My rando observation is the few times something has gone really wrong on BART traffic is bad enough that it'd be better to take the day off. Ditto if a truck overturns on one of the bridges. And the latter happens way more often than the former. Reminds me after the Loma Prieta earthquake the bay bridge was out for a month but BART was running 12 hours later.


By “transportation people” do you mean rail fans or avid cyclists? It’s an easy mistake to make - civil engineers with a more favorable attitude to cars (or even to rail) are more likely to be employed and therefore relatively quiet than rail fans or cyclists, who spend their time doing advocacy.


Roads can tolerate a lot of damage and be repaired whilst still in use (at less capacity). Capacity might fall but repairs are signalled in advance and people can plan around them, traffic will flow anyway.

In places with big rail networks you're going to hear "I didn't make it into work this morning because of signal failures" really often. And when railways are repaired they will be totally closed for months. They're just way more brittle and unable to degrade gracefully.


You are moving the goalposts here. However if you want to compare mixed freight and passenger lines then rail equals many, many lanes.

Plenty of rail cars have room for luggage too, or there are even dedicated baggage cars. Usually we are talking about people going to/from work, however, so most people are not carrying much.

Are you really saying your roads don't have shutdowns for maintenance? Or other problems? Also it is pretty rare for either roads or railways to be fully utilised overnight. Also note that plenty of rail systems do run overnight.


Rail can support freight of course and complements trucks very well. Rapid transit can’t really support freight if it wants to operate at any reasonable frequency - you just can’t load any real quantity of freight in a few minutes. Even baggage is stretching it a lot of the time on busy lines. And the slow acceleration times and length of freight trains severely conflicts with the needs of passenger rail. Therefore, freight and passenger rail need to be separated for either to perform well. On the other hand, cars and trucks are a lot more similar in performance and size, so they can share the same roads just fine - it’s only an issue if the road is completely choked with trucks.


Not according to civil engineers (it's not even close)

Private motor vehicles: 600 - 1600 per hour

Mixed traffic with frequent buses: 1000 - 2800 per hour

Two-way protected bikeway: 7500 per hour

Dedicated transit lane: 4000 - 8000 per hour

Sidewalk: 9000 per hour

On-street transitway, bus or rail: 10000 - 25000 per hour

https://nacto.org/publication/transit-street-design-guide/in...


That graph those numbers are pulled from should incorporate speed. Otherwise, you could say sidewalks are more efficient than cars.


Well, they are - you can move more people with a sidewalk :P. That being said, obviously most people aren't going to walk much more than maybe a km before looking for alternatives.

Usually the comparison comes up in cities where throughput is the limiting factor, and cars end up moving at near-walking pace anyway.


Urban train tracks see more than occasional traffic, whatever that means. And those trains often carry hundreds of people. Meanwhile, a well used road is well used by huge vehicles carrying 1.2 pax on average.

And energy isn't free. If we had any intention of becoming net zero, electricity prices had better increase. And driving around 2t empty weight isn't the way to get there.

Incidentally, a trip that's less than 1 kWh (so, less than 6 km) is a trip that could easily be made on foot or by bike.


1. Still not as passenger dense as a train or a bus

2. They will need to stop some time. Where? Will this block the street?

3. They won’t all go to the same place so there will be delays at junction and side streets

4. No margin for error wouldn’t fly in practice, so the cars cannot be that close together

5. How will pedestrians cross a train of cars?

I just don’t see how this all adds up. Automation doesn’t remove the space constraints of cars in cities.


Roads already occupy massive fractions of most cities. Making cars drive closer together or collaborate will make road environments more hostile to pedestrians than they already are - and arguably in many places we shouldn't even really have private drivers, aside from business users, on the roads, given the massively disproportionate space they require vs a pedestrian or PT vehicle.

A train line has 10x the capacity of a single lane of road. Even if trains are only coming every few minutes, it's impossible to compete with a train carrying 1k people using cars. Perhaps reasonably loaded buses would be comparable or better, but that's not the argument you're making.

Trains are financially efficient because of the cost of energy and drivers (and, arguably, roads + cars move much of the expense to the public, whereas everything related to the train is on the operator's balance sheet), but they are also very space efficient compared to roads + cars.


obligatory link to the road space picture that shows why this would not be a good move for urban transit systems:

https://danielbowen.com/2012/09/19/road-space-photo/


People are already arguing against trains and bus lanes since “one day there will be self driving cars”.


Autonomous Trains.

If you are not paying for the conductor, can’t you make trains much more appealing? They could run every five minutes, and last mile can be solved with autonomous car that is waiting for you when you arrive.


I'm sorry but good public transit in the US isn't going to happen. Passenger rail has never been profitable anywhere since its very inception. With the rise of remote work, and declining ratios of working-age populations putting increasing pressure on public finances, we're just never going to see a widespread expansion of public transit.

AVs give us a path toward a world where very few people need to own their own car. We can put all those parking spaces to better use. We can improve equity by giving more people access to safe, reliable, affordable, and convenient point-to-point transportation. Being able to consistently get a ride to where you need to go is something we consistently under-appreciate. It means being able to get a better paying job on the other side of town. Or not having to worry about missing a dialysis appointment, or a meeting with your parole officer or therapist. When the marginal cost of a robotaxi/robobus ride is close to zero is when the AI economic boom will really begin.


> Passenger rail has never been profitable anywhere since its very inception.

Interestingly, no one ever argued for the profitability of cars, so all we can do now is to calculate the overall economic costs and societal benefits and that's where public transport clearly and easily wins.


And the day the Google bot decides to close your account for obscure reasons, with no recourse, all you can do is stay in bed and starve because all these things are now inaccessible to you? Even if self-driving actually happens, it'll be the ultimate surveillance-ridden, enshittified service that will ruin not just the internet but our whole lives and cities.


How exactly will a robotaxi ride ever reach zero marginal cost?


With competition. The marginal cost for each ride is just cheap, abundant energy from renewables, and maintenance.


Don’t be. Autonomous vehicles can be busses too.


Taxis are a form of public transportation. After all, what's the major difference between a taxi and a bus other than capacity/driver attention?


Technically they are, yes, because they're open to the public.

But the impact of taxis on road traffic in a dense city is comparable to the impact of private cars - perhaps even more so as they're often travelling empty between rides. If every journey which was previously done with a car is done with a taxi, there's no reduction in vehicle traffic - meaning the same problems of congestion and pedestrian safety.

Driverless cars can probably drive closer on highways to increase throughput, but that doesn't really help in cities or residential areas. Ultimately if lots of people shift to driverless taxis to get around, there will be far more vehicles on our streets.


Taxis are often worse since they drive around looking for fares.


The major difference between Bus/Tram/Metro and Car (robo or not) is the number of passengers that can be transported per "time"/"dollar"/"city space used". And my feeling is that cars are not on the winning side here. And remember, Bus/Tram/Metro can also be driverless.


Robotaxis are "a real business"? Maybe in the future, but not yet. From the article:

> But even the most bullish believers in autonomous transportation acknowledge the tech still has a ways to go before it’s reliable enough for widespread deployment on U.S. roads.


I was blown away going around Tempe/Scottsdale - Waymos everywhere, with people walking around, crossing streets randomly to get to a spring training game, doing bar crawls (it was St. Patrick's Day), and what blew me away was that they pulled up in front of the hotel and even made a quick U-turn to get out of the parking lot. I mean, this is really impressive stuff. The future is now imho.

I will give Tempe/Scottsdale credit though - they have their roads around the major tourist hubs in GREAT shape - the lines crisp and the lights bright and new - and I think it makes it much easier for a Waymo to get around.


Waymos do the same thing in SF where the streets are much denser, traffic is weirder, hills are way steeper, and the roads aren't in perfect shape by any means. The amount of impressive navigation I've seen around delivery trucks, weird construction patterns, etc has been pretty wild. They seem way ahead of the other options on the road.


Could some of that impressive driving have been done by remote human operators?


Not from what I've seen - it happens in real time, just like with most human drivers. I don't believe that remote-controlled operation is even permitted, but I could be wrong.

Waymos drive fairly fast and aggressively in rush hour traffic too, which is why I enjoy sharing the road with them. I was initially worried they'd drive like a grandma but that hasn't been the case. Also as a cyclist I enjoy riding near them because they know I'm there and they give you enough space in the bike lane.


They're the present here in SF, driving every day and more safely than the humans do. And as a human driver, I can testify that these streets are not particularly easy to drive on.


They are here in the Phoenix area too and I have not seen any issues with them. However, we are blessed by sunny weather 99% of the time. I think the biggest challenge will be having them drive in adverse weather conditions present throughout the rest of the country, such as blizzards, hail, torrential rain, and dense fog.


I don't think blizzards, hail, torrential rain, and dense fog will be particularly challenging for Waymos, at least not when it comes to keeping the car in a controlled and predictable state.

The hard thing is that every other human-driven car acts randomly because they don't, say, have winter tires and, unlike a Waymo, don't have a very quick control loop.


Oh, they have issues. Waymos are super janky when they are in parking lots, often just sitting in the middle of the driving lane waiting for their fare. I don't think they know how to park in a lot correctly.

It is also funny to watch them get stuck behind buses, having followed too closely to safely go around when the bus stops to pick up/drop off.

Also, I've seen multiple instances of them trying to turn left on red and pulling far enough into an intersection to cause issues.

Finally, when I am on my skateboard they don't seem to recognize me, as they drive very, very close and fast, though I haven't felt risky enough to really test this.


Interesting, as a cyclist Waymos have always been very aware of me and usually slow down for me if I've taken the lane, and stop for me if it's starting to move after picking up someone but sees me pass. I wonder if their training set doesn't have enough skateboarders.


We have abundant fog and seasonal rain in SF, but not much hail or snow.

That's why they've done winter testing in Tahoe (since 2017) and Buffalo NY (last winter).


Waymo is likely already better than an upper-quartile meat module under those conditions. https://x.com/Boenau/status/1795495310170685915


You know, that's a challenge for human drivers as well. Try getting an Uber during those weather events; there might be one or two running crazily to get super surge pricing when there are usually a few hundred.


Not just sunny weather, but straight streets laid out in a N/S and E/W pattern, very little grade, and consistent numbering/naming across cities.


I used one when it was somehow raining in Phoenix and it worked fine apart from being a bit confused by a puddle when stopping.


Yes, they're actually awesome. 'Waymo' has replaced 'Uber' in my vocabulary, e.g. "Let's just Waymo there".


How much does the average trip go for distances such as your use case?


On average, a trip within SF (say, 3-4 miles) is typically ~$20. Peak times on Friday or Saturday evenings can definitely exceed $35, but I've yet to have any ride go above $50.


Does it matter if you enter with multiple people? Does the charge go up (whereas Uber/Lyft grants you 3+ seats by default)?


Doesn't matter, and the maximum is 4 people.


> How much does the average trip go for distances such as your use case?

They seem to undercut Uber and Lyft by a hair. Given the longer wait times, and lack of a need to tip, that seems fair. (In San Francisco, they undercut by a wider margin. But human ride shares are more expensive there for a variety of reasons.)


That's not my experience. I've rarely seen a comparably priced ride. They're usually at least 50% more expensive than Uber or Lyft.


> usually at least 50% more expensive than Uber or Lyft

Hmm, I used them this weekend and was comparing pricing. Waymo was cheaper. That said, I wasn’t riding during peak traffic. And in peak traffic, I’d vastly prefer a Waymo. So I get the premium pricing.


In my experience it seems that the price floor is higher, but the ceiling is lower. Short rides (~10 min) that might cost $8-9 (before tip) on Uber/Lyft cost $18-20 on Waymo. If I'm going to the other side of San Francisco (say, North Beach to Stonestown), it will be $33-35. I've never seen it in the $40s-50s as I often do with Uber/Lyft for long-distance trips during surge pricing. But that's just anecdotal.


Why is that? Waymo has lower labor costs right?


Not if you count the billions that have been sunk into developing the product. Also, there's no incentive to undercut the market rate for rides; that's just leaving money on the table. Collusion, but also not.


Does a business have to be widely deployed or profitable to be real? The public and private capital markets say "no". If you were to ignore any business that isn't widely available you'd miss the beginning of both Apple and Facebook.

Waymo is a real business serving 50,000 rides each week delivering paying customers to their destination. If you haven't tried it yet, the product is amazing. Private, doesn't cancel, safe, and smooth. I will never take Uber again if I have the choice.


Profitability is my definition of a "real business", as opposed to, for example, a lot of SV unicorns: if the business cannot sustain itself financially from its core revenue stream and needs cash injections from "investors", then it's a pyramid scheme, not a business.


How many humans are involved in this so-called driverless service? Waymo won’t say, but Cruise admitted [1] that about 1.5 people were actively monitoring and ready to take over control for every Cruise car. That’s not a sustainable business.

How much money is Waymo bleeding every quarter? Maybe the investors don’t care, but it’s relevant if you want to call it a real business.

[1] https://news.ycombinator.com/item?id=38145997


I took one in Chandler last year, and it was amazing. Especially in combination with my Tesla and its "Full Self Driving". It kept up with traffic, turned confidently, yielded to pedestrians, etc.

The biggest (only?) complaint I had is that it would not pickup/dropoff at the curb at our hotel. So if it was raining, we'd have had to walk out in the rain to meet the car in a parking spot.


Cars are a real business despite being unable to navigate open water. What matters is that it’s profitable, not whether it works everywhere. Waymo appears to be unit profitable on a cash basis. (It’s far from recouping its investment into R&D.)


FWIW: I feel safer driving or walking next to one than I do around the average human driver.


The 1% worst human drivers are really quite unpredictable. You're never certain with any human driver whether they're going to drive like a maniac, but you do know you're not getting that with a Waymo.


With a human I can look at their face to see that they see me. How do I know a robot sees me? Do they have indications that say “I see you and won’t drive” like a driver looking at you in the face and waving?


With tinted windows and glare I already can't see like 75% of drivers. At least driverless cars reliably signal so I don't have to do pose estimation with the front wheels to cross the street. Also they aren't on their phones.


Streaming live video non-stop to the Internet, arguably Waymos are "on the phone" more than humans :p


So answer my question, how do I know the Waymo sees me? With a human I can tell 90% of the time where I live (we have tint laws and stuff).

I can’t see trusting that a robot sees me, otherwise I guess I’m just giving deference to them now?

At least with humans I can see if they are distracted, I can yell at them to get attention, perhaps others. With a robot taxi, I can’t tell anything about it, and I’m a software dev so I absolutely don’t trust anything running software made by a corporation in search of profits, or any other software really, to operate a car in open roads. Not yet at least, maybe in a few decades.


That’s not all of the possible cases.

“In 2022, 13,524 people died in alcohol-impaired driving traffic deaths.”

Source nhtsa.gov

You’re gonna briefly glimpse a passed out drunk driver before it plows through you at 40 mph


You still didn’t answer my question about robot taxis, you just did another “what about x“.


Drivers can stare right at you as they run you over. Believing you have eye contact does not prevent you from being run over.


Anything "can" happen. But having driven and cycled many thousands of miles I'm convinced the probability of being run down/driven into by someone who has made eye contact is a lot lower than by someone who has not, so eye contact is a useful signal. Like GP I'd be happier if robotaxis could not only sense me but also send me a signal to inform me they had (beyond just changing their driving behaviour, although that is in itself a useful signal too of course.)


> like a driver looking at you in the face and waving

your reading comprehension sucks


Maybe sometimes, but not this time.

"With a human I can look at their face to see that they see me."


Did you read the whole thing buddy? Guess not! I guess reading isn’t your thing.


As a cyclist/pedestrian I'm used to drivers ignoring me anyway, YMMV


"Robo taxis" as they currently exist are pretty obviously not the long term goal, it's not an interesting business, it's just an easy test platform that recoups some costs.

The real business is an entire transit system, with purpose-built vehicles of various sizes, centralized routing, etc.


Fortune is where you go to buy fantasy headlines. If Fortune says it out loud, you know, it's almost certainly false, but someone somewhere really wants you to be fooled.


> ... and made robo-taxis a real business

Has it though? They've come an impressively long way to have 50,000 rides a week, but that needs to increase a thousandfold to justify the $6B of venture capital and $30B valuation. That's a lot of cars, and a lot more work than it takes Uber to bring on another underpaid owner-driver (Uber has 23 million rides per day).
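
For a rough sense of the gap, here's a back-of-envelope sketch using only the figures quoted above (both are approximate):

    # Back-of-envelope comparison of ride volumes, using the figures quoted above.
    waymo_rides_per_week = 50_000
    uber_rides_per_day = 23_000_000

    uber_rides_per_week = uber_rides_per_day * 7          # ~161,000,000 rides/week
    ratio = uber_rides_per_week / waymo_rides_per_week    # ~3,220x

    print(f"Uber's weekly volume is roughly {ratio:,.0f}x Waymo's")
    print(f"A 1,000x increase would put Waymo at {waymo_rides_per_week * 1_000:,} rides/week")

Even a thousandfold increase would still leave Waymo at roughly a third of Uber's current volume, which underlines just how many cars that implies.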


Waymo was smart to start with taxis. A self-driving car's competition is you, and of course you're an above-average driver. But a taxi's competition is the average Uber driver. People can be more objective about that low bar.


New York City is not going to see robo taxis for a very, very long time. We are just now catching up to the rest of the cities in terms of how garbage is collected, after about a hundred years.


Is that an issue? It's a dense city which puts the pedestrian on top of the transit hierarchy. There's no need for cars there. Phoenix is car-dependent sprawl and even after removing parking minimums, upzoning, and running BRT or other high LOS transit, you're still looking at large parts of the city where it'll probably take decades to be viable to run transit to and are impossible to walk to. In SF the core is walkable and has great transit but the moment you get to the outer neighborhoods transit LOS decreases significantly and there's large parts unserviced. Waymo makes more sense there. Plus SF is abutted by suburbs with retirement-age-and-above populations who fight densification at every turn where it's impossible to get from city-center to city-center via any form of transit.


Sadly I agree: NYC users will trash the cars.


It's not just that - it's a crazy city and we all jaywalk. No automated vehicle can handle NYC, barring dedicated streets (which I feel like is what will happen).


-1: Waymo has proven itself adept at avoiding jaywalkers, bicycles, etc.


Plus subway bits that are from the 1930s


I much prefer Waymos as a pedestrian. They always stop at stop signs.


The most common place for pedestrians to die is on the side of low speed highways at night. They're typically struck from behind.

The other common mode is secondary to an original crash. Vehicles either are pushed into different roadways, over abutments, or down hills, which causes the vehicle to roll or otherwise crash into pedestrian areas without warning. This is most common in winter conditions.


I don't think deaths are the sole problem, and also I don't think nationwide statistics are appropriate considering that San Francisco (where Waymo's robotaxi service area is) has approximately zero low-speed highways.


I wouldn't call it a "real business" just yet.

I've heard they do 50,000 rides per week in SF, LA, Phoenix combined.

Assuming they make $20/ride, that's still only $1M/week, or $52M/year. I'm sure they spend billions per year.

They would have to scale out to every major city in America and add another 10000 cars before they can turn a profit.
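
The arithmetic above checks out; here's a minimal sketch using the same assumed figures (the $20 average fare is an assumption from the comment, not a disclosed number):

    # Rough revenue run-rate, using the assumed figures from the comment above.
    rides_per_week = 50_000
    assumed_fare = 20                                  # USD per ride (assumption)

    weekly_revenue = rides_per_week * assumed_fare     # $1,000,000
    annual_revenue = weekly_revenue * 52               # $52,000,000

    print(f"Weekly revenue: ${weekly_revenue:,}")
    print(f"Annual revenue: ${annual_revenue:,}")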


It's only going to be a real business when they approach profitability--now they have massive losses.


Horses all retired. They successfully automated their jobs and now live in a post-scarcity Horseconomy


Side question: when Tesla finally gets FSD working, will I be able to use my Tesla to make money by using it as a taxi? Or will there be licensing issues? In other words, will I own it or not?


https://web.archive.org/web/20161129012459/https://www.tesla...

> Please note also that using a self-driving Tesla for car sharing and ride hailing for friends and family is fine, but doing so for revenue purposes will only be permissible on the Tesla Network, details of which will be released next year.

Of course that was in 2016 and as far as I’m aware we are still awaiting those details.


No need to worry about something that will never happen.


I would be more worried about liability issues before anything else.

Also, I doubt there is a point in you "renting" your Tesla. Tesla the company has enough money to flood the road with their vehicles, and your vehicle is irrelevant. Have you heard of any individual renting their personal Camry to a taxi company or an Uber driver?


For folks that are interested in the business specifically, we have some open roles listed at https://waymo.com/careers/ with the word Commercialization in the titles.


Are multi-passenger trips the next step? That is, driverless buses? And ones solving the traveling salesman problem with each new stop?

If yes, perhaps cities with fewer cars can skip the taxi step and go straight to smart buses.
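
Folding a new pickup into an existing route is usually treated as a dynamic vehicle-routing problem rather than a full traveling-salesman solve; a common cheap heuristic is to insert the new stop wherever it adds the least detour. A minimal illustrative sketch (the coordinates and straight-line distance are stand-ins, not how any real dispatcher works):

    import math

    def dist(a, b):
        # Straight-line distance as a stand-in for real travel time.
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def insert_stop(route, new_stop):
        # Cheapest insertion: place new_stop where it adds the least extra distance.
        best_pos, best_extra = 1, float("inf")
        for i in range(1, len(route) + 1):
            prev = route[i - 1]
            nxt = route[i] if i < len(route) else None
            extra = dist(prev, new_stop)
            if nxt is not None:
                extra += dist(new_stop, nxt) - dist(prev, nxt)
            if extra < best_extra:
                best_pos, best_extra = i, extra
        return route[:best_pos] + [new_stop] + route[best_pos:]

    # A bus already headed through three stops picks up a new request along the way.
    route = [(0, 0), (2, 1), (5, 1), (8, 0)]   # current position + planned stops
    print(insert_stop(route, (3, 2)))          # new stop slotted between (2, 1) and (5, 1)

Real dispatch also has to respect vehicle capacity, pickup windows, and per-rider detour limits, but the flavor is the same.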


I've long hoped for Uber to roll out on-demand door-to-door UberPool with 14-passenger vans. They could match riders to drivers and find the optimal route in real time, with pricing and travel time falling between public transit and UberX, all in the footprint of an F-150.

No need to wait for autonomy. They have (had?) such a service in Cairo, but unlicensed jitney vans were already common there. They never launched it elsewhere.


Waymos are not full robotaxis. It's an illusion provided by having the humans responsible for the cars in a remote location. We don't have transparency on how that system runs and how often the humans intervene. We also sweep under the rug many, many non-crash traffic incidents. Watch videos on YouTube of people taking Waymos and you will see the cars do lots of dumb stuff. If there were lots more of them, it would make traffic even more of a problem than it is today.


Waymo says they do not do remote controlled driving. The remote operators can only give hints to the self-driving system, such as "make a U-turn and take a different route". They don't trust the data connection to have good enough latency for remote control. Baidu, which has some self-driving cars in China, reportedly does use remote manual driving when necessary. Probably more than they admit, because it was a big thing that it was available for the 2022 Olympics.

BYD recently announced that they will not be using BYD's technology. Not good enough for production cars.

Waymo still has rather bulky rotating LIDAR scanners. That technology needs to shrink more before wide deployment. A few years ago, there were lots of LIDAR startups, but few LIDAR buyers, so that industry collapsed.


Correction: BYD recently announced that they will not be using Baidu's technology.


I can't overstate how depressing it is that we managed to figure out private self-driving car fleets before we figured out modern public transit funding.


It was figured out a long time ago; see other countries. It doesn't even need to be state-owned; see Japan.


Is there a room full of Waymo employees watching over it all remotely and making the hard decisions?


Disclosure: I work for Waymo.

No, there isn't. We do have a team of folks to support situations where we aren't confident and choose to "phone a friend". This recent blog post covers some of it in more detail:

https://waymo.com/blog/2024/05/fleet-response/

Most importantly: at no point does someone remotely "drive" the vehicle. They can direct it to say "hey make a u-turn and go to this new point", but they aren't remotely driving.


> Most importantly: at no point does someone remotely "drive" the vehicle. They can direct it to say "hey make a u-turn and go to this new point", but they aren't remotely driving.

This doesn't line up with other statements made by Waymo though.

That blog post as an example:

> The Waymo Driver does not rely solely on the inputs it receives from the fleet response agent and it is in control of the vehicle at all times.

Yet in an incident in January, when a Waymo ran a red light and caused a moped to crash, the narrative is:

> In January, an incident took place where a Waymo robotaxi incorrectly went through a red light due to an incorrect command from a remote operator, as reported by Waymo.

So I'm curious, is it a case of "The Waymo Driver doesn't always follow road rules itself." or "Remote Ops can make a car run a red light against the Waymo Driver's programming."

[0]: https://www.forbes.com/sites/bradtempleton/2024/03/26/waymo-...


Judging from the red light incident, the Waymo Driver does not consider passing a red light to be a Never event, in the same way that say, hitting a pedestrian would be a Never event. So it's OK with being advised by a human that it needn't obey the red light whereas it wouldn't be OK with being told to just drive through cyclists.

That makes a kind of sense: a human shouldn't need to run red lights but they do it more often than you'd like, whereas they mustn't hit pedestrians (although sadly they sometimes do). Just the other day I was watching video of a failed London Underground signal stuck at red; the driver knows this, and the signaller knows this, and nevertheless the signaller (who is in a position to know, as they've got a board full of position data for trains in their sector) has verified that crossing this signal despite the danger aspect is safe.

This happens so often (ie sometimes) that TfL has a recorded announcement to play to passengers when, as the driver, you're about to do this. The train, you see, doesn't know that what you're about to do is fine, so it's going to stop you. So, as the announcement explains, the train will move forward slowly, brake suddenly to a halt, and then after a moment proceed slowly again. The announcement suggests that passengers should sit down if able to do so. The driver, having secured permission from the signaller, will drive forward ("at caution", ie slowly enough to stop short of any obstacle discovered), then the safety systems will detect the danger signal, braking the train to a halt, then the driver proceeds to drive slowly again because they already know why it stopped.


I've done it to get out the way of an emergency vehicle coming up behind me with full lights and sirens. It was kind of fun.


I don't know the law where you live, but in the UK this is illegal and the emergency vehicle should have instead silenced sirens (the lights are left running) so as to avoid agitating you since you can't do anything to help while you have no safe route.

The reason is that you may make things worse by running a light, the people driving an emergency vehicle have in some cases trained specifically to run lights (although not all reasons they're driving under lights and sirens would justify doing so) but they aren't trained to make it safe for you to do the same so you should not.


I feel like self-driving cars are one area where Google's slow-and-steady culture just works better than the move-fast-and-break-things culture, since the things that would break in this case are literal human bodies.


How scalable is this technology? From my experience at an AV company, every deployment site had to be mapped, and after deployment, sites still needed remote teleoperators to occasionally adjust waypoints and resolve stops.


The word "scalable" tends to be a shibboleth for people with certain unchangeable views on the industry. It's scalable in the sense that you can have vastly more cars than humans and the process to get them operating in a new area is fairly straightforward. It's not scalable in the sense of the company being fully automated so that every step can operate without human intervention, like virtually every other company on Earth.


Waymo's relatively rapid geonet expansion suggests that they have either improved their mapping productivity substantially or gone fully map-less.

I can't speak to their RVA operations.


Turns out infinite money, lawyers, lobbyists and engineering time gets you pretty far


Funny how it didn't take Uber that far.


Uber is barely even profitable let alone infinite anything


They have a fairly big weakness. If someone takes over their controls, these taxis are basically weapons. You have to admit that the question is not whether this is possible but how it will pan out.


This is already true of many consumer vehicles, and thus it's exacerbated less by Waymo than you imply.


People can literally just go to a car rental service right now and do that as well. Or use a car service app. Or steal someone else's car.

Most people in society don't really have a desire to run amok.


I think OP means remotely control the car.


The problem is that you can't "remotely control the car". If you have the same level of access as Waymo's actual humans you can suggest to a Waymo driver (ie the machine) hey, the way to resolve this situation you are in might be to inch forward into this position - and maybe in some cases you can trick it into causing harm this way. However I'd guess in the vast majority of cases all you do is trap it so that it gets into a situation where it can't see any action it's allowed to take which gets closer to its goals - so it gives up and just sits there.

The most crucial insight for Self Driving Cars is that this is not the trolley problem. "I give up, stop where I am" is a valid answer.

We actually have built automation where "just give up" isn't a valid answer. CAT IIIc autoland (on a jet liner) has "Fail Active" scenarios where the machine concludes, just before touchdown, that it no longer has confidence in its position due to one or more sudden sensor failures, but the human pilots can't possibly intercede quickly enough to be safe, and under IIIc conditions they can't see anything anyway; so, although the aeroplane will tell the human pilots that a failure occurred, it will nevertheless attempt to continue the now-unsafe landing in this edge case. Most likely, despite the reduced sensor validity, this is successful and everybody lives, and if not, it's not as though the humans could have reacted in time anyway. But self-driving isn't like that: the plane is flying, and if it were to just stop flying everybody dies. In contrast, a car can just stop and it's merely annoying.


> The problem is that you can't "remotely control the car".

The assumption is that the highly networked car has some kind of security vulnerability that allows malicious users to take control and perform acts of terror.

Since this is in software, the attacker can theoretically scale the attack to involve all cars on the road with the same vulnerabilities, which could be millions of vehicles.

This problem is not unique to self-driving cars - it applies to any highly networked car with software control of key systems, like a modern Tesla. However, a full self-driving car may not have manual overrides that allow the passenger to stop the vehicle.


In theory you could also hack the many self-driving trains that exist around the globe.


Two things about the self-driving trains make that even less appealing than a Waymo

1. They're mostly even more local. The Waymo Driver is in your car, the "driver" for say a DLR train is inside the train too. Unlike Waymo they aren't running multiple live feeds to remote oversight, even the emergency human intervention is literally on board with you. There's somebody wearing a uniform telling those tourists that no, Abbey Road is an outer suburb with a sewage pumping station, they're on the wrong train for the famous Beatles photograph. The person in the uniform is trained to drive the train if there's some reason the automation can't do it, nobody can do that from a control room miles away. In the even higher (and rare) GoA systems where nobody aboard can drive the train even if they need to, remote oversight still may need to dispatch a specialist to rescue a failed train.

2. They're mostly "grade separated" that is, they're either underground or suspended in the air, or maybe in fenced off ground-level areas, so you can't use a "hacked" train to hurt anybody except its passengers or maybe, in some cases, passengers on a nearby train.


> so you can't use a "hacked" train to hurt anybody except its passengers or maybe, in some cases, passengers on a nearby train.

Yes, is that not enough?


I agree, but that’s an argument against cars in general.

Turns out cars are a bit too bulky and pricey to repurpose as shanks.


Real businesses make money. Waymo loses money hand over fist.


Certainly true, but they have very large backers who are willing to pour money in until the technology is perfected. I wonder if there's an internet forum appropriate for discussing such businesses?



How much do the remote safety drivers intervene? Is this something we have good sources on?



For the last ten years or so, there's been an argument on this forum (and others) about whether cutting safety in the pursuit of an earlier rollout is a good thing. If it's beneficial to humanity, then a few deaths now is massively outweighed by the many deaths avoided by earlier rollout of this tech, or so the argument goes.

This seems like a good time to point out that the argument makes too many assumptions to be useful, like that moving fast and breaking things will in fact lead to faster progress overall. In the case of robotaxis, the group moving carefully and deliberately is the clear leader, and many competitors who took the faster/less careful approaches have shuttered along the way. When Uber's self-driving division killed someone, for example, it didn't lead to an earlier arrival of self-driving.

This is relevant to all sorts of business situations where we're always asked to move faster than we can reliably move. It's astoundingly easy to forget that a bad rollout can sometimes shutter a project even more surely than a slow one.


Waymos also follow traffic laws with regards to loading-unloading zones, which is inconvenient for the passenger.

Compare to a rideshare driver that will often drop you off right in front of your destination, even if that is an illegal maneuver.


"made robo-taxis a real business" sounds like a dig at Enron Musk. I believe he first started calling self driving cars that.


Another article praising degenerate AI companies that kill/injure people, yuppie tech bros unite!


Wait till activists start setting SDCs on fire before calling it a “real business”.



Exactly. And there’s no dude there to prevent it.


A human driver's not going to stop a gang of hooligans from throwing a Molotov their way.


But they would. For one thing, a human would just drive away. For another, a mob would have no reason to attack a broke-ass taxi driver, while they have every moral justification to attack a $3T gigacorp built on surveillance capitalism.


They don't have any such moral justification, and they do attack broke-ass taxi drivers despite having no justification for that either (just substitute, e.g., "small locally owned shop" for "taxi driver" in the London riots).


Maybe the Tutsis had autonomous vehicles or were tools of surveillance capitalism? You can never be too sure, right? After all, their attackers, as you make clear, must have had "every moral justification" to kill hundreds of thousands of people, right?


Seems pretty weird to compare destruction of insured property to Rwandan genocide, but hey man, you do you



