I do not get why your comment is so controversial - you are absolutely right.
Conventional cameras alone are not trustworthy for self-driving, and this is part of the reason every respectable company venturing into self-driving is incorporating technologies like LIDAR.
I find it sketchy that Tesla markets "future full-self-driving" when they are unlikely to have the hardware to make that a safe experience.
I think it's important to recognize that Tesla is (partially) a technology deflation play (it's also a financial engineering play, but that is out of scope for this thread).
Their ability to deliver is a function of the costs of their basic technology supplies (batteries for the Roadster were expensive, less so for the S, X, and energy storage, and now even less so for the 3). This is very similar to the cost decline curve of solar, wind, and storage in the utility energy space (project developers provide bids based on where prices will be in 3-5 years).
Tesla can do this because technology and batteries rapidly decrease in cost every year. Need new autopilot compute hardware? It should be cheaper by the time they need to perform the swap to realize the capability. Need LIDAR? They'll find a way to install it, and the cost per vehicle should be fairly reasonable, instead of the thousands or tens of thousands it would have been if they had gone all in while it was expensive.
This is not unlike a technology startup, where technical debt you're going to pay off in the future is an acceptable tradeoff, so you push off the decision and/or work until the last possible moment (but no further).
> Also I never said these cars were being sold today.
Then you're moving the goal posts. Tesla is selling cars today to people on the premise that those cars are (EDIT) targeted to support full autonomy, or will have their hardware swapped to do so. Everything else is an experiment in a lab or on a track.
> Federal regulators "did not identify any defects" in Tesla's autopilot feature after a lengthy investigation of the technology, officials announced Thursday.
> A six-month investigation failed to uncover any flaws with the autopilot's emergency braking technology and other advanced features linked to a deadly accident last year, according to a National Highway Traffic Safety Administration (NHTSA) report released Thursday.
> "NHTSA’s examination did not identify any defects in design or performance of the AEB or Autopilot systems of the subject vehicles nor any incidents in which the systems did not perform as designed," the NHTSA report said.
Ever wondered why, in millions of years, nature didn't evolve a LIDAR equivalent and most species use passive vision?
Because it compensated for poor sensor hardware with a processing unit and software that we have thus far been unable to even come close to replicating? Granted, solve that, and we’re just a few years away!
The problem with that is that LIDAR is a sensor so unlike human perception that humans completely misjudge what it's capable of. After an accident, humans refuse to believe that the sensor really didn't see it coming, which of course is a big problem later in court. Sonar has the same problem.
There are things those sensors can see that humans just can't, but in the case of LIDAR it's only in specific planes (so, for example, it just doesn't see things that "point" at the sensor; it doesn't see stairs or even an abyss, even at close range).
Sonar has similar problems. It sees everything, everything, everything ... as long as there is a whole lot of consistent nothing surrounding it. When there is structure on the sea floor, sonar is useless near it. When a ship on the surface is accelerating, the eddy currents create a region around, behind, and below the ship where sonar is blind. And near the surface, sonar is useless; the more wind, the deeper the problem goes. In a storm it can be 10 meters or more. People with decades of experience for some reason seem to outright refuse to believe that.
People seriously misjudge the limitations of these sensors, and this leads to accidents.
Better to use cameras, which have almost exactly the same issues humans have (e.g. bad vision in low light, limited view, "blind" angles near corners, bad optical performance near the edges, ...), which will lead to "understandable" mistakes.
Nobody will understand if a LIDAR misses a beam sticking out of a truck in front of you (which would be expected behavior: such a thing is essentially invisible to LIDAR) and impales the person sitting in the passenger seat on it.
Or the Tesla fuckup. Failing to see the difference between front and back wheels of a large truck and 2 cars. Then decapitating the driver by driving in between the 2 sets of wheels at high speed. That's a typical LIDAR issue.
Sooner or later LIDAR will decide that just driving off an abyss is the best solution to a simple traffic situation (because LIDARs see abysses everywhere, so they use algorithms that assume abysses don't exist).
I've seen LIDAR-controlled robots drive into tables, "decapitating" (sort of) themselves, because all they saw was the table's legs; looking at the data afterwards, the conclusion was ... yep ... a perfectly understandable mistake. That robot also threw itself down the stairs. Again, tough to fault it for that, as it saw the stairs as pretty much the same thing as a stick lying on the floor. Looking at the data afterwards, that was a perfectly reasonable conclusion.
We lost the robot to the stairs. I was watching it make that decision. Why? You see it move, and you automatically assume "surely it's not going to go for the hole in the staircase". And then it decides on a solution. Boom. And yes, I pressed the emergency stop button. That doesn't help much if the robot is already falling.
Ever wondered why with millions of years of evolution, human drivers with depth perception-enabled HDR cameras in their heads still get into accidents all the time? If that's your bar for "good enough" then we're in big trouble with autonomous vehicles.
> Hint: it’s not due to the sensor package being used.
But it is. The package is pretty limited, and the firmware running it is full of hacks that compensate. Hiding the blind spot, saccade masking, not paying attention to stationary things...
The vast majority of accidents are due to distracted driving. The millions of humans successfully operating a car more or less prove that the problem isn't our sensors. Alternative sensors may help, but marginally.
It at least partially is. For example, humans' blind angles get blamed for quite a few accidents, and attention pointed in the wrong direction for another decent batch.
Eagles can see 4-8x further than humans can, and many varieties of birds can see in additional spectra beyond ours. And of course SONAR exists in whales, dolphins, etc., which is a close analogue.
He neglects to mention that those sensors eagles use have their own limitations. They have zero peripheral vision. It's like having strong binoculars glued to your eyes.
This has consequences. It takes them forever to find anything, even if they can do it from great distances. They are blind for several seconds when they change position (such as when they just caught prey), or just generally at short range.
And you can try this with cats: this makes it very easy to sneak up on them. No peripheral vision, directional hearing ... if they're focused on something in front of them, you can almost just walk up to them from behind and they won't notice.
These things are tradeoffs. Humans are a prey species and pack hunters. If a human is paying attention, you can't really hope to sneak up on them. One human is easy to take down, but 10 humans will defend each other effectively.
There is no such thing as "the top of the evolutionary ladder." There is well fit to an environment and not well fit. To say "top of the ladder" implies that evolution encodes the concept of "progress". It does not. This notion of "top of the ladder" has been used to promote all sorts of pseudo-scientific racism and used to justify all sorts of bad conduct up to and including genocide.
I'm not accusing you of any of this, just pointing out that a seemingly innocuous, almost cliched term like "evolutionary ladder" can carry a lot of unwelcome baggage.
So basically you simply predefined what technology makes people 'respectable' and what not. Seems to me that you are just assuming knowledge that actually nobody has.
They aren't assuming. If you aren't radiating and measuring the return, you're at a massive disadvantage in terms of your sensor package. It's a bit like trusting a computer to drive at 70 mph on a pitch black stretch of highway it has never seen before. Enjoy overdriving your headlights and hitting a deer or something else.
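Back-of-the-envelope, with assumed generic numbers (0.7 g braking, 1.5 s human-ish reaction time, nothing specific to any particular car):

    # rough stopping-distance check at 70 mph (all values assumed/typical)
    v = 70 * 0.44704                       # 70 mph ~ 31.3 m/s
    reaction = v * 1.5                     # ~47 m covered before braking starts
    braking = v**2 / (2 * 0.7 * 9.81)      # ~71 m of hard braking at 0.7 g
    print(reaction + braking)              # ~118 m total

Typical low beams light up something like 50-60 m of road, so even with instant reaction the braking distance alone overruns what you can see; whatever shows up at the edge of the lit area is already inside your stopping distance.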
Tesla is cutting corners by eschewing a well-known technology that will increase the fidelity of their autonomous driving system. That is irresponsible.
It's only irresponsible if they actually sell anything that doesn't work. So far, what they offer is fine for what it is. People who don't want to wait can get their money back at any time.
If you define working as "encouraging drivers to drive in a more disengaged state, while at the same time adding additional sources of hazard or malfunction for both the disengaged driver, and other motorists to be alert for", then I got nothing that'll seemingly convince you, as it appears we'd be arguing right past each other.
Do. No. Harm. It isn't just for Doctors. You can't sit on the positivist side of the fence and say, "Bah, it'll never happen."
It always does. When the cost is lives, you don't mess around or skimp. Choosing closer-to-perfection over pretty-good is 100% justifiable in safety-critical applications. In many cases, critical systems are redundantly reinforced.
Look up risk compensation to understand why a half-baked solution is almost guaranteed to be worse. If you are still relying on the human to compensate for failures of the system, in an environment where the time available to respond and stay alive is measured in seconds, you might as well just have them alert and engaged at all times, with as few malfunctioning things to compensate for as possible.
Another, somewhat tangential, demonstration of this is observable in nuclear reactor design, which relies on delayed fission-product neutrons to attain criticality. This stretches the timescale over which things can go horribly wrong from seconds to minutes; without the delayed neutrons, avoiding prompt criticality would be highly problematic.
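To put rough numbers on that (a back-of-the-envelope sketch using generic textbook values for a U-235 thermal reactor, not tied to any particular design):

    # effective neutron generation time once delayed neutrons are included
    # (all three values below are assumed textbook figures)
    prompt_lifetime = 1e-4      # s, prompt-neutron generation time
    beta = 0.0065               # delayed-neutron fraction
    precursor_lifetime = 13.0   # s, mean delayed-precursor lifetime
    effective = (1 - beta) * prompt_lifetime + beta * precursor_lifetime
    print(effective)            # ~0.085 s, vs 1e-4 s for prompt neutrons alone

That factor of nearly a thousand in the effective generation time is what turns "milliseconds to disaster" into a timescale operators and control systems can actually act on.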
>Inception V1 is a four year old architecture that Tesla is scaling to a degree that I imagine Inception's creators could not have expected. Indeed, I would guess that four years ago most people in the field would not have expected that scaling would work this well. Scaling computational power, training data, and industrial resources plays to Tesla’s strengths and involves less uncertainty than potentially more powerful but less mature techniques.
”When you increase the number of parameters (weights) in an NN by a factor of 5 you don’t just get 5 times the capacity and need 5 times as much training data. In terms of expressive capacity increase it’s more akin to a number with 5 times as many digits. So if V8’s expressive capacity was 10, V9’s capacity is more like 100,000.”
I find it very, very hard to believe that. I know ‘expressive power’ is a fairly vague concept, but if things scale that well, there must be papers out there that at least hint at such (IMHO) insane scaling laws.
I think it also must mean that it is fairly easy for those with huge budgets to build a system that’s way better, except for the fact that it is too slow or takes too much power (just as 3D graphics in movies show what will be on our desks/phones in a decade or two)
I’ve asked it before, but does anybody know of papers that describe an offline self-driving system that’s as good as perfect?
It seems that there is stronger evidence that model architecture is more important than sheer number of parameters/computational complexity. Compare VGG-16 with something like MobileNet or ResNet-18.
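As a rough illustration (this assumes torchvision is installed; exact counts shift a little between versions, but the ratios are what matter):

    # parameter counts for the architectures mentioned above
    import torchvision.models as models

    for name, ctor in [("VGG-16", models.vgg16),
                       ("ResNet-18", models.resnet18),
                       ("MobileNet v2", models.mobilenet_v2)]:
        net = ctor()  # randomly initialized, no pretrained download needed
        n_params = sum(p.numel() for p in net.parameters())
        print(f"{name}: {n_params / 1e6:.1f}M parameters")

Roughly 138M vs 12M vs 3.5M parameters, yet the two smaller networks reach ImageNet accuracy comparable to VGG-16.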
Thanks for bringing this to my attention. In that article, however, it says for neural nets:
V is the set of nodes. Each node is a simple computation cell.
E is the set of edges. Each edge has a weight.
....
If the activation function is the sigmoid function and the weights are general, then the VC dimension is ... at most O(|E|^2 * |V|^2)
While Someone's quote from the article seems to be suggesting something exponential in the number of edges.
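To make the gap concrete, here's a toy comparison (the sizes are made up and purely illustrative, nothing to do with Tesla's actual network): multiply the edge count by 5 and see how each claim scales.

    # how each "capacity" claim scales when |E| is multiplied by 5
    V, E = 1e5, 1e7                            # made-up node and edge counts
    poly_bound = lambda e, v: e**2 * v**2      # sigmoid bound, O(|E|^2 * |V|^2)
    linear_bound = lambda e: e                 # finite-precision bound, O(|E|)
    print(poly_bound(5 * E, V) / poly_bound(E, V))   # 25x
    print(linear_bound(5 * E) / linear_bound(E))     # 5x
    # the article's claim is capacity -> capacity**5, i.e. 10 -> 100,000

Both bounds quoted above grow polynomially with network size; neither supports the exponential-style jump the article describes.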
I haven't even tried to hunt down the book referenced on Wikipedia, but I think it's worse than that. The Wikipedia page says "The VC dimension of a neural network is bounded as follows". "Is bounded" is an expression in mathematics that says more about what we know about a problem than about the problem itself (as a classical example, see https://en.wikipedia.org/wiki/Graham's_number#Context: Graham's number 'bounds' a number whose value we know to be at least 13).
”If the weights come from a finite family (e.g. the weights are real numbers that can be represented by at most 32 bits in a computer), then, for both activation functions, the VC dimension is at most O(|E|)”
Of course, they may use a different activation function, in which case that mathematical statement doesn't apply, but I would think that's even less likely than the claim made in the article we're discussing. For example, it would hugely surprise me if an activation function that isn't increasing, or that has many large discontinuities, behaved a lot better than the sigmoid that was surely used.
Strongly agreed. I'd love to see a paper backing up this claim, but I'm pretty sure it's just wrong.
Likewise, I would be quite surprised if Tesla is really pushing state of the art for the size of their vision models with what they've deployed in cars. Researchers have built some pretty big models...
I wish that articles like this would make it clear which features apply to which cars. It would be nice to know if there are any improvements to cars with Autopilot version 1 for instance.
This is the blog/site where the editor got a Tesla Roadster through affiliate points. So take that into account when you read their articles about otherwise fantastic Tesla cars.
Good on you that you found value in the article, but I found it to be biased in favor of Tesla, and also find it sleazy that they didn't disclose their affiliation to Tesla.
Every Tesla owner is in the referral program. If you get something like 20 referrals at various times, you could get a car. These guys own Teslas, I think two between them, and so could eventually earn a car. Even I am a craven person who gets something if you use my referral code to buy a Tesla. You get $100 a year for at least 4 years in Supercharger credits (that's like $400 a year in a regular gas car, because EVs get the equivalent of around 100 miles a gallon in gas :-)).
I get virtually nothing unless I get 20 of them. Not sure of the current program, but so far my lifetime total of referrals is: 0. If you have questions and want to ask someone who has owned two different Teslas since 2012, ask me. If you want to buy a Tesla, I have a referral link too; contact me via the email on my account here, since Hacker News probably wouldn't like me to just put it out here.
When someone makes a broad critical remark about an article, but provides no specifics, I tend to assume it is because they know the article was in fact accurate, but don't want to admit it.
That's great, but when are they going to properly address and mitigate the potentially massively lethal cyber- and national-security issues made possible by the tech?
Or do we have to talk only about what they want us to talk about?
Great, let's evaluate the deaths caused by regular cars. Let's evaluate the risks posed by burning fossil fuels. Let's evaluate pretty much every single little detail about other auto brands and present our findings in a fashion where we can compare and rank best to worst. Having just done this, I'm completely satisfied with Tesla's approach in comparison to what other brands have been doing.
Tesla absolutely launched their over-the-air capabilities without sufficient planning. If you look at the early Keen Labs presentations[0] it's insane that it was launched the way it was. At least they added code signing later...
1. No it won't; please show me where this is happening.
2. No it won't; please show me where this is happening.
3. No it won't; please show me where this is happening.
There are concerns, and then there are crazy questions and comments.
Systems are built in isolation, communication between them is encrypted.
Disabling wipers or turning off the lights will kill us now? Never mind that this hasn't happened, hasn't been proved to be possible, and is all theory.
Theories are fine, but acting as if Tesla is actively ignoring this is funny.
Systems being isolated and containing encrypted communication does not mean that there are no vulnerabilities. Otherwise, TLS would be all that’s needed to secure a website.
They're clearly not in isolation. We're not talking about storing data, but sending and receiving it, so if the messages are encrypted, you just need to feed your own data to the code that is doing the encrypting, and voila.
Disabling all the lights and the wipers at high speed, in conditions where the wipers are running full blast, can be quickly fatal.
That wasn't a question, nor does anyone think you shouldn't try to prevent security flaws. The fact is it won't happen, because of the way these systems are built within the car.
Security flaws will always exist, but there are limits, and just blatantly claiming we're all gonna die because Tesla doesn't care about security is a joke.
Even if it's not reckless, their competitors will offer low-light and night-time autonomous driving, which will be a major advantage.