Waymo is a taxi company. As you'd expect the customer isn't driving, isn't necessarily licensed to drive, certainly isn't authorised to drive, so necessarily the customer is not liable.
I'd actually anticipate that it makes sense in principle for them to self-insure, because that way they aren't paying a premium to a for-profit insurer, and they have enough vehicles that the statistical risk evens out well enough. I believe that self-insurance† is still legal in the places where Waymo operates (I might be wrong).
† Self-insurance is a strategy where you can and will just pay claims out of pocket. This obviously isn't viable for any but the richest individuals, but it makes sense for, say, FedEx.
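The "statistical risk evens out" part is just the law of large numbers. A quick simulation (crash rate and claim cost are made-up figures, purely for illustration) shows how per-vehicle cost variance collapses as the fleet grows, which is why self-insurance works for a FedEx but not for an individual:

```python
import random

def yearly_cost_per_vehicle(fleet_size, crash_prob=0.05, avg_claim=20_000, seed=0):
    """Simulate one year of claims; return the resulting cost per vehicle."""
    rng = random.Random(seed)
    crashes = sum(rng.random() < crash_prob for _ in range(fleet_size))
    return crashes * avg_claim / fleet_size

# A single owner faces an all-or-nothing outcome; a large fleet's
# per-vehicle cost converges on the expected value (0.05 * 20000 = 1000),
# so the fleet can budget for it instead of buying insurance.
for n in (1, 100, 10_000):
    print(n, yearly_cost_per_vehicle(n))
```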
Ngl I'm still waiting for delivery trucks that are essentially just mobile sets of parcel delivery boxes that you could schedule to be at your address at completely arbitrary times, unlock the right compartment and collect the package.
Then they just need to fill them up with a robot at the warehouse and off they go, zero human shenanigans in the process.
It’s worth noting this research is a collaboration between Swiss Re and Waymo. That doesn’t necessarily invalidate the research, but it warrants more skepticism than if it were independent research by Swiss Re alone, as the article implies.
Any sharing of information or data could be construed as a collaboration, but this seems to go beyond that, as the research paper includes multiple Waymo employees as coauthors.
How is that worse than what I described? It seems like exactly what I said. It got in trouble and support had to step in. It was not being monitored by support all the time.
There are so many things to control for…
- compare to taxi / professional drivers
- locations
- times
Still a great nominal achievement, but any time a sponsored research study doesn’t even attempt to control for basic things, it raises a lot of red flags.
> Swiss Re has more than 500,000 liability claims and more than 200 billion miles of exposure in its data bank. Waymo has logged 25.3 million fully autonomous miles available for analysis as well.
So looks like they took human-driven miles in all sorts of conditions, and compared it to Waymo's operation in a handful of southern states.
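Taking the quoted figures at face value, a quick back-of-envelope shows what the human baseline would predict for Waymo's mileage (this ignores the road-type and location mismatch, which is exactly the problem):

```python
# Back-of-envelope from the quoted figures: 500,000 claims over
# 200 billion human-driven miles, vs Waymo's 25.3 million miles.
human_claims = 500_000
human_miles = 200e9
waymo_miles = 25.3e6

claims_per_million_miles = human_claims / human_miles * 1e6
expected_waymo_claims = claims_per_million_miles * waymo_miles / 1e6

print(f"{claims_per_million_miles:.2f} claims per million miles")   # 2.50
print(f"~{expected_waymo_claims:.0f} claims expected at the human rate")  # ~63
```

So the human baseline predicts roughly 63 claims over Waymo's mileage; the headline comparison is about how far below that Waymo actually lands.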
I would actually expect not using highways to be detrimental to Waymo's numbers: statistically, highways (especially controlled-access highways) are the road types with the highest safety levels, thanks to having no intersections, more consistent speeds, and traffic separation.
The title is incorrect, and makes the mistake of applying a statistical result to each individual case.
The average improvements across miles driven by Waymo are impressive, and definitely indicate that they are, on average, significantly safer than a human-driven vehicle.
None of this means that there can't be a human-driven vehicle that is safer than a Waymo, only that on average a Waymo is much safer.
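A toy example (the rates are entirely made up) of how both statements hold at once: the human average can be worse than Waymo even while some individual humans beat it.

```python
# Hypothetical incidents per million miles.
waymo_rate = 1.0
human_rates = [0.5, 0.8, 2.0, 3.5, 4.2]  # a few hypothetical drivers

avg_human = sum(human_rates) / len(human_rates)  # 2.2

print(avg_human > waymo_rate)                    # True: the average human is worse
print(any(r < waymo_rate for r in human_rates))  # True: some humans are still safer
```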
Maybe it'll end up like riding an elevator / lift where you just push a button and there is almost no risk. Not like a proper Paternoster
https://youtu.be/Ro3Fc_yG3p0?t=16
Grade-separated metro public transit is often likened to a horizontal elevator: you step in through some doors, the doors close, stuff happens, the doors open, and you step back out somewhere else.
A taxi is never going to be quite like that because the acceleration profile is too different and because you have to explicitly summon the taxi, it doesn't just turn up periodically like an elevator or metro.
Also: The paternoster isn't actually dangerous. It just looks scary and it's not as accessible because it won't wait.
> likened to a horizontal elevator, you step in through some doors, doors close, stuff happens, doors open, you step back out later and you're somewhere else.
I like this analogy.
> it doesn't just turn up periodically like an elevator
Aren't elevators summoned by a button and then have their destination set by different button? That seems very similar to tapping in an app to "come get me" and tapping again to "take me there".
It depends. In some tall shared buildings (the sort of place a company might rent a single floor because owning a building is eye-wateringly expensive and unnecessary) the elevators are intelligent and hooked to gate ID, so, if somebody arrives at the gate with an ID for floor 12, the next available elevator picks floor 12 and opens its doors. After all you're not allowed on the other floors so it's pretty obvious where you're going.
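That badge-to-floor behavior is commonly called destination dispatch. A toy sketch (badge IDs, floors, and the deliberately naive "closest car" rule are all made up):

```python
# Minimal destination-dispatch sketch: the gate reads a badge, looks up
# the holder's authorized floor, and assigns a car; the rider never
# presses a floor button.

badge_floor = {"alice": 12, "bob": 7}   # badge ID -> authorized floor
elevator_position = {"A": 1, "B": 9}    # car -> current floor

def dispatch(badge, lobby=1):
    floor = badge_floor[badge]
    # Naive rule: pick the car currently closest to the lobby.
    car = min(elevator_position, key=lambda c: abs(elevator_position[c] - lobby))
    return car, floor

print(dispatch("alice"))  # ('A', 12)
```

Real systems also batch riders heading to nearby floors into the same car, which is the main efficiency win over button-summoned elevators.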
Waymo spends a lot on human oversight, where remote operators make a lot of "common sense" decisions that don't require an immediate response and which AI is not yet capable of solving. Humans actually do a lot of the navigation, suggesting paths that the car can drive along. An example I saw was a fire truck parked at an odd angle, poking out into the street, and the software didn't know what it was or what to do. The operator drew a path around the truck for the car to follow. This only works for taxis: it would be impossible for Tesla to do this since there aren't enough human operators to hire.
But I suspect this means Waymo's software is ultimately more risk averse. If a Tesla stops in the middle of the road then the customer has to take action, which is frustrating and makes the technology look bad, so there is strong incentive to remove that frustration even at the cost of safety. If a Waymo stops, the remote operator has to take action, and the customer can keep staring at their phone without being particularly affected - it just seems like the Waymo is "thinking."
Honestly, this is what a self-driving car should be. No interaction from the passenger required. Maybe eventually we'll be able to replace the human operator, but until then, risk averse AI where somebody remotely solves any unforeseen or unexpected issue is a decent compromise.
1. They've been doing it for ages. They had cars on the street fifteen years ago.
2. They bet on hardware beyond just cameras. Cameras, in practice, are still not the best tool for the job: they see in 2D, they get dirty and obscured, they are easily blinded by glare, etc.
3. They have data from every Google Street View and mapping car ever deployed. They have the most data and the most current data. Every Tesla on the road would need to be maxing out its LTE connection all the time, and they still wouldn't have the breadth and quality of data that Google has.
4. Google is throwing money at Waymo. They can see the potential profit if they win. They're not going to get dumped like Cruise.
Any background info on the betting on cameras alone? It sounds as silly as betting on an artificial version of our proprioception to be implemented in cars to measure acceleration. I also don't think they went all the way regarding neuromorphic engineering with spiking neural nets and artificial retinas. It's just so random to me what was decided to be good enough for autonomous navigation.
Tesla went from very expensive cars down to cheaper ones. It would make so much more sense to do the same for perception: first go overboard with high-bandwidth input and lots of processing power, then optimize later.
The betting on cameras alone is basically an Elon Musk thing. His reasoning is basically that if humans can do it AI should be able to do it. So far the software isn't really up to it but time will tell. Some stuff - https://www.engineering.com/now-revealed-why-teslas-have-onl...
I used to regularly have to make a left turn onto a rural highway on foggy mornings. Sometimes people drive faster than they should in fog. Sometimes fast enough that by the time they could see I'm in the intersection turning they would be too close to stop.
Cars going fast enough to have that problem made enough sound that they could be heard quite a bit farther away than they could be seen. I'd open my windows at the intersection and listen until I couldn't hear any highway traffic. Then I'd know that any approaching cars are far enough away that I should have time to turn onto the highway and get up to speed before they arrive.
Yeah. Also I don't know how good the Tesla cameras are but my car has a reversing camera and it's ok for going back 2m at 2mph but kind of terrible compared to looking forward through the windscreen.
IIRC I think it’s the section (1:23:25) – Camera vision
The TL;DR is that sensor fusion is really hard, and their bet was that keeping the training pipelines simpler would let them scale faster/easier, and human vision is the existence proof that it can be done without lidar.
One of the big flaws in Karpathy's logic is that it implies human vision is acceptable and sufficient for an AV. The reality, as Cruise found out, seems to be that society will demand AVs are much safer than humans.
Human vision is an existence proof for human-level performance without lidar, but Waymo is an existence proof for 10x human performance WITH lidar. Right now the latter is where the bar is, and it'll keep being raised. I don't think at this point one could get away with deploying AVs at scale that are significantly less safe than Waymo.
Also: if sensor fusion is so hard, why is Waymo able to solve it but not Tesla?
> Also: if sensor fusion is so hard, why is Waymo able to solve it but not Tesla?
I think Karpathy's point is that Tesla wants to try to avoid the "entropy" that comes from adding a sensor (senior software engineers and higher understand this concept). Every sensor (and every version of it -- sensor hardware does get updated) you add requires recalibrating the software stack and the hardware design, which introduces points of failure every time you roll it out.
According to Karpathy, Tesla does use Lidar -- but only at training time, as a source of truth. Once the weights are learned, they operate without the Lidar.
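That training-time-only use of lidar is essentially supervised depth estimation (sometimes described as distillation). Here's a toy sketch where a one-parameter "model" learns depth from lidar labels and then runs camera-only; everything here (the linear model, the fake data) is illustrative, nothing like Tesla's actual stack:

```python
# Toy sketch of "lidar as training-time ground truth": fit a camera-only
# depth model against lidar depth labels, then deploy without lidar.

def train_depth_model(camera_features, lidar_depths, lr=0.01, steps=500):
    """Learn depth = w * feature by gradient descent on squared error."""
    w = 0.0
    n = len(camera_features)
    for _ in range(steps):
        grad = sum(2 * (w * x - d) * x
                   for x, d in zip(camera_features, lidar_depths)) / n
        w -= lr * grad
    return w

# Training time: lidar supplies the depth labels.
features = [1.0, 2.0, 3.0, 4.0]       # stand-in for camera-derived features
lidar = [3.1, 5.9, 9.2, 11.8]         # roughly depth = 3 * feature

w = train_depth_model(features, lidar)
print(round(w, 2))  # ~3, matching the simulated lidar data

# Inference time: camera features only, no lidar on the car.
print(w * 5.0)  # predicted depth for an unseen feature
```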
Having a full sensor suite may work for Waymo at the current scale (limited cities), but scaling beyond that poses problems.
Whereas Tesla has to work with a different set of scaling economics -- that of a mass market vehicle already deployed globally.
Not skimping on sensors and having invested much more time and money and compute than their competitors.
They were doing it first, by years, and have been spending the most on it throughout, so it's not surprising they'd be ahead. Even now they've probably got several times more employees working on autonomy than Tesla does.
I don’t know, but it’s impressive. I’ve only ridden in one a few times, but the one at night tremendously impressed me. It successfully managed to navigate around various obstacles like people wandering into the street, a parked cop car blocking a lane and then some, etc.
At least in SF, the sensor suite on these must cost $$$. Tesla is like 6 cameras; these things have sensor and camera bumps everywhere. Tesla also struggled with route selection, especially at the end of the trip; Google has very strong mapping and Street View info.
Compared to people, who mostly drive pretty poorly, these will probably do better.
I call BS on this. The data is biased, first and foremost. There's absolutely no way Waymo or any robotaxi can drive in NYC. For a robotaxi to be viable, it needs to have near-perfect safety error bands.
They drive in downtown SF, downtown LA and soon Tokyo. I've been to all these cities and besides occasional snow, I don't see what's so special about NYC. Shibuya Crossing is even busier than Times Square.
What's the difference in your eyes that makes these things the absolute best drivers on the road in SF, but not competent to drive in NYC?
The data doesn’t have to be unbiased to be useful. For anyone who’s actually ridden in a Waymo this confirms the intuition of our experience that it’s much less likely to do something stupid than your average human driver.
No activity has perfect safety. That’s not the bar and never has been for new technology.
Does your company accept all legal liability for actions taken by your self-driving car? Then congratulations, you have a self-driving car.
Does your company avoid legal liability for actions taken by your self-driving car? Then congratulations, you do not have a self-driving car.