Hacker News
Tesla deploys massive new Autopilot neural net in v9 (electrek.co)
125 points by skuzins on Oct 15, 2018 | hide | past | favorite | 82 comments


Still completely reckless in my opinion to deploy any autonomous system without LIDAR or some other long range, night time capable sensor.

Even if it's not reckless their competitors will offer low light and night time autonomous driving which will be a major advantage.


I do not get why your comment is so controversial - you are absolutely right.

Conventional cameras alone are not trustworthy for self-driving, and this is part of the reason every respectable company venturing into self-driving is incorporating technologies like LIDAR.

I find it sketchy that Tesla markets "future full-self-driving" when they are unlikely to have the hardware to make that a safe experience.


I think it's important to recognize that Tesla is (partially) a technology-deflation play (it's also a financial-engineering play, but that is out of scope for this thread).

Their ability to deliver is a function of the costs of their basic technology supplies (batteries for the Roadster were expensive, less so for the S, X, and energy storage, and now even less so for the 3). This is very similar to the cost decline curve of solar, wind, and storage in the utility energy space (project developers provide bids based on where prices will be in 3-5 years).

Tesla can do this because technology and batteries rapidly decrease in cost every year. Need new Autopilot compute hardware? It should be cheaper by the time they need to perform the swap to realize the capability. Need LIDAR? They'll find a way to install it, and the cost per vehicle should be fairly reasonable, instead of the thousands, or tens of thousands, they'd have paid if they had gone all in when it was expensive.

This is not unlike a technology startup, where technical debt you're going to pay off in the future is an acceptable tradeoff, so you push off the decision and/or work until the last possible moment (but no further).


ALL of the other OEMs are building autonomous cars using LIDAR so cost isn't a factor.

And relating it to technical debt makes no sense as you have to design with LIDAR in the beginning.


Can you provide a citation to an OEM that is putting LIDAR in vehicles being sold to customers today?


GM and Honda are partnering with Cruise. Fiat Chrysler is partnering with Waymo.

Everyone is partnering with someone or organising supply deals. And everyone other than Tesla uses LIDAR.

Also I never said these cars were being sold today.


> Also I never said these cars were being sold today.

Then you're moving the goal posts. Tesla is selling cars today to people that it is (EDIT) targeting to support full autonomy, or to have their hardware swapped to do so. Everything else is an experiment in a lab or on a track.


> Tesla is selling cars today to people that will eventually support full autonomy, or have their hardware swapped to do so.

Well, Tesla claims that. It's quite possible that neither will happen.


Some of us think that Tesla's technology is an experiment.

An incredibly dangerous one.


Federal regulators do not agree with your assessment.

https://thehill.com/policy/transportation/automobiles/315133...

> Federal regulators "did not identify any defects" in Tesla's autopilot feature after a lengthy investigation of the technology, officials announced Thursday.

> A six-month investigation failed to uncover any flaws with the autopilot's emergency braking technology and other advanced features linked to a deadly accident last year, according to a National Highway Traffic Safety Administration (NHTSA) report released Thursday.

> "NHTSA’s examination did not identify any defects in design or performance of the AEB or Autopilot systems of the subject vehicles nor any incidents in which the systems did not perform as designed," the NHTSA report said.


Sounds dubious. They seem to be focused on Autopilot working as intended. What if that is far from good enough?


Ever wondered why, in millions of years, nature didn't evolve a LIDAR equivalent, and most species use passive vision?

I think Tesla is bang on the money going only with cameras. They are already better than the human eye.


> Ever wondered why, in millions of years, nature didn't evolve a LIDAR equivalent, and most species use passive vision?

Because it compensated for poor sensor hardware with a processing unit and software that we have thus far been unable to even come close to replicating? Granted, solve that, and we’re just a few years away!


The problem with that is that LIDAR is a sensor so unlike human perception that humans completely misjudge what it's capable of. After an accident, humans refuse to believe that the sensor really didn't see it coming, which of course is a big problem later in court. There are things those sensors can see that humans just can't, but in the case of LIDAR, only in specific planes: for example, it just doesn't see things that "point" at the sensor. It doesn't see stairs, or even an abyss, even at close range.

Sonar has similar problems. It sees everything, everything, everything ... as long as there is a whole lot of consistent nothing surrounding it. When there is structure on the sea floor, sonar is useless near it. When a ship on the surface accelerates, the eddy currents create a region around, behind, and below the ship where the sonar is blind. And near the surface, sonar is useless; the more wind, the deeper the problem goes. In a storm, it can reach 10 meters and more. People with decades of experience for some reason seem to outright refuse to believe that.

People seriously misjudge the limitations of these sensors, and this leads to accidents.

Better to use cameras, which have almost exactly the same issues humans have (e.g. bad vision in low light, limited view, "blind" angles near corners, bad optical performance near the edges, ...), which will lead to "understandable" mistakes.

Nobody will understand if a LIDAR misses a beam sticking out of a truck in front of you (which would be expected behavior: such a thing is essentially invisible to LIDAR) and impales the person sitting in the passenger seat on it.

Or the Tesla fuckup: failing to see the difference between the front and back wheels of a large truck and two cars, then decapitating the driver by driving between the two sets of wheels at high speed. That's a typical LIDAR issue.

Sooner or later a LIDAR will decide that just driving off an abyss is the best solution to a simple traffic situation (because LIDARs see abysses everywhere, so they use algorithms that assume abysses don't exist).

I've seen a LIDAR-controlled robot drive into a table, "decapitating" (sort of) itself, because it only saw the table's feet. Looking at the data afterwards ... yep ... it was a perfectly understandable mistake. That robot also threw itself off the stairs. Again, it's tough to fault it for that, as it saw the stairs as pretty much the same thing as a stick lying on the floor. Looking at the data afterwards, that was a perfectly reasonable conclusion.

We lost the robot to the stairs. I was watching it make that decision. Why? You see it move, and you automatically assume "surely it's not going to go for the hole in the staircase". And then it decides on a solution. Boom. And yes, I pressed the emergency stop button. That doesn't help much if the robot is already falling.


> Better to use cameras, which have almost exactly the same issues humans have

Surely, he cries, the argument is to use both, and simply apply some kind of likelihood/confidence weighting to the outputs to combine them?
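For what it's worth, the simplest version of that combination is inverse-variance weighting of independent estimates. Here is a toy sketch (all numbers invented, not anything from an actual Tesla or LIDAR stack) of fusing a camera range estimate with a LIDAR one:

```python
# Toy confidence-weighted fusion of two independent range estimates,
# using inverse-variance weighting (the standard rule for combining
# independent Gaussian measurements). All numbers are made up.

def fuse(est_a: float, var_a: float, est_b: float, var_b: float):
    """Return the fused estimate and its variance."""
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    return fused, 1.0 / (w_a + w_b)

# Camera says 50 m with high uncertainty; LIDAR says 48 m with low
# uncertainty. The fused estimate leans toward the more confident sensor.
est, var = fuse(50.0, 4.0, 48.0, 1.0)
print(round(est, 2), round(var, 2))  # 48.4 0.8
```

Real systems fuse at the object or track level with Kalman filters rather than on raw ranges, but the weighting idea is the same.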


You say this ironically, and yet every indicator points toward this happening in the next 5-10 years!


Ever wondered why with millions of years of evolution, human drivers with depth perception-enabled HDR cameras in their heads still get into accidents all the time? If that's your bar for "good enough" then we're in big trouble with autonomous vehicles.


You need to look at why humans have accidents then.

Hint: it’s not due to the sensor package being used.


> Hint: it’s not due to the sensor package being used.

But it is. The package is pretty limited, and the firmware running it is full of hacks that compensate. Hiding the blind spot, saccade masking, not paying attention to stationary things...


The vast majority of accidents are due to distracted driving. The millions of humans successfully operating a car more or less prove that the problem isn't our sensors. Alternative sensors may help, but marginally.


It at least partially is. For example, humans' blind angles get blamed for quite a few accidents, and pointing attention in the wrong direction accounts for another decent batch.


Eagles can see 4-8x further than humans can, and many varieties of birds can see in spectrums additional to ours. And of course SONAR exists in whales, dolphins, etc., which is an analogous case.


And yet here we are with limited spectrum stereo vision, at the top of the evolutionary ladder.


He neglects to mention that those sensors eagles use have their own limitations. They have zero peripheral vision. It's like having strong binoculars glued to your eyes.

This has consequences. It takes them forever to find anything, even if they can do it from great distances. They are blind for several seconds when they change position (such as when they just caught prey), or just generally at short range.

And you can try this with cats: this makes it very easy to sneak up on them. No peripheral vision, directional hearing ... if they're focused on something in front of them, you can almost just walk up to them from behind; they won't notice.

These things are tradeoffs. Humans are a prey species and pack hunters. If a human is paying attention, you can't really hope to sneak up on them. One human is easy to take down, but 10 humans will defend each other effectively.


There is no such thing as "the top of the evolutionary ladder." There is only well fit to an environment and not well fit. To say "top of the ladder" implies that evolution encodes the concept of "progress". It does not. This notion of "top of the ladder" has been used to promote all sorts of pseudo-scientific racism and to justify all sorts of bad conduct, up to and including genocide.

I'm not accusing you of any of this, just pointing out that a seemingly innocuous, almost cliched term like "evolutionary ladder" can carry a lot of unwelcome baggage.


> Eagles can see 4-8x further than humans can and many varieties of birds can see in additional spectrums to ours.

So can digital cameras.

> And of course SONAR exists in whales, dolphins etc which is a corollary.

The reason for (active) SONAR is the lack of ambient sound. An alternative for a car would be headlights. Or passive infrared cameras.


So does Tesla have cameras that see 4-8x better than a human then? Or do they use potato cameras that are useless at night like Uber did?


Teslas have 12 cameras, I believe. So ... yes, it does.


Humans also didn't evolve wheels and the ability to run at 200 km/h.


No, but cheetahs can run at the speeds vehicles would be operating at in autonomous mode, and various birds fly even faster.


Even at night? And in snow?


So basically you simply predefined what technology makes people 'respectable' and what not. Seems to me that you are just assuming knowledge that actually nobody has.


They aren't assuming. If you aren't radiating and measuring the return, you're at a massive disadvantage in terms of your sensor package. It's a bit like trusting a computer to drive at 70 mph on a pitch-black stretch of highway it has never seen before. Enjoy overdriving your headlights and hitting a deer or something else.

Tesla is cutting corners by eschewing a well-known technology that will increase the fidelity of their autonomous driving system. That is irresponsible.


It's only irresponsible if they actually sell something that doesn't work. So far, what they offer is fine for what it is. People who don't want to wait can get their money back at any time.


If you define working as "encouraging drivers to drive in a more disengaged state, while at the same time adding additional sources of hazard or malfunction for both the disengaged driver and other motorists to be alert for", then I've got nothing that will convince you, as it appears we'd be arguing right past each other.

Do. No. Harm. It isn't just for Doctors. You can't sit on the positivist side of the fence and say, "Bah, it'll never happen."

It always does. When the cost is lives, you don't mess around or skimp. Getting closer to perfection, rather than settling for pretty good, is 100% justifiable in safety-critical applications. In many cases, critical systems are redundantly reinforced.

Look up risk compensation to understand why a half-baked solution is almost guaranteed to be worse. If you are still relying on the human to compensate for failures of a system in an environment where the response time to stay alive is measured in seconds, you might as well just have them be alert and engaged at all times, with as few malfunctioning things to compensate for as possible.

Another somewhat tangential demonstration of this is observable in nuclear reactor design, which utilises delayed fission-product neutrons to attain criticality. This slows the timescale over which things can go horribly wrong to minutes, instead of the seconds that would make avoiding prompt criticality highly problematic.


Teslas have a long-range radar on the front.


Yes, that's very useful in day-to-day driving, but still not as good as LIDAR; radar doesn't have enough resolution.


The original forum post this article is about has better info IMO:

https://teslamotorsclub.com/tmc/threads/neural-networks.1014...


Whoa back up - he claims it was built off Inception V1? Truly impressive if that's the case...


Care to expand on that?


Right from the end of that forum post:

> Inception V1 is a four-year-old architecture that Tesla is scaling to a degree that I imagine Inception's creators could not have expected. Indeed, I would guess that four years ago most people in the field would not have expected that scaling would work this well. Scaling computational power, training data, and industrial resources plays to Tesla's strengths and involves less uncertainty than potentially more powerful but less mature techniques.


”When you increase the number of parameters (weights) in an NN by a factor of 5 you don’t just get 5 times the capacity and need 5 times as much training data. In terms of expressive capacity increase it’s more akin to a number with 5 times as many digits. So if V8’s expressive capacity was 10, V9’s capacity is more like 100,000.”

I find it very, very hard to believe that. I know ‘expressive power’ is a fairly vague concept, but if things scale that well, there must be papers out there that at least hint at such (IMHO) insane scaling laws.
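To spell out the arithmetic being claimed (my reading of the quote, not anything stated in the forum post): a d-digit number is roughly 10^d, so multiplying the digit count by 5 means raising the value to the 5th power.

```python
# "5 times as many digits" reading of the quote: a d-digit number is
# about 10^d, and 10^(5d) = (10^d)^5, i.e. capacity raised to the 5th power.

def claimed_capacity(old_capacity: int, digit_factor: int) -> int:
    return old_capacity ** digit_factor

print(claimed_capacity(10, 5))  # 100000, matching the quote's 10 -> 100,000
```

That is exponential-style growth in the parameter factor, which is exactly what makes the claim hard to believe.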

I think it also must mean that it is fairly easy for those with huge budgets to build a system that’s way better, except for the fact that it is too slow or takes too much power (just as 3D graphics in movies show what will be on our desks/phones in a decade or two)

I’ve asked it before, but does anybody know of papers that describe an offline self-driving system that’s as good as perfect?


It seems that there is stronger evidence that model architecture is more important than the sheer number of parameters or the computational complexity. Compare VGG-16 with something like MobileNet or ResNet-18.
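Rough numbers make the point (figures are from memory and rounded; treat them as illustrative rather than authoritative):

```python
# Approximate ImageNet classifier stats, illustrating that a ~30x smaller
# model can roughly match a much larger one. Figures are rounded and from
# memory -- check the original papers for exact values.
models = {
    # name: (parameters in millions, approx. top-1 ImageNet accuracy %)
    "VGG-16":      (138.0, 71.5),
    "ResNet-18":   ( 11.7, 69.8),
    "MobileNetV1": (  4.2, 70.6),
}

for name, (params_m, top1) in models.items():
    print(f"{name:12s} {params_m:6.1f}M params  ~{top1:.1f}% top-1")
```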


Here is a highly specific definition of "expressive power" https://en.m.wikipedia.org/wiki/VC_dimension

To me the statement doesn't seem that unreasonable


Thanks for bringing this to my attention. In that article, however, it says for neural nets:

> V is the set of nodes; each node is a simple computation cell. E is the set of edges; each edge has a weight. [...] If the activation function is the sigmoid function and the weights are general, then the VC dimension is [...] at most O(|E|²·|V|²).

While Someone's quote from the article seems to suggest something exponential in the number of edges.


I haven’t even tried to hunt down the book referenced on Wikipedia, but I think it’s worse than that. The Wikipedia page says ”The VC dimension of a neural network is bounded as follows”. “Is bounded” is an expression in mathematics that is more about what we know about a problem, than about the problem itself (as a classical example, see https://en.wikipedia.org/wiki/Graham's_number#Context. Graham’s number ‘bounds’ a number whose value we know to be at least 13)

Given the huge range between that upper bound and the lower bound of Ω(|E|²), chances are that upper bound is far from tight (https://en.wikipedia.org/wiki/Upper_and_lower_bounds#Tight_b...).

Also, one line below the O(|E|².|V|²) you quoted:

”If the weights come from a finite family (e.g. the weights are real numbers that can be represented by at most 32 bits in a computer), then, for both activation functions, the VC dimension is at most O(|E|)”

Of course, they may use a different activation function, in which case that mathematical statement doesn’t apply, but I would think it’s more unlikely that applies than the claim made on the article we’re discussing.

For example, it would hugely surprise me if an activation function that isn't increasing, or that has many large discontinuities, behaved a lot better than the sigmoid that was surely used.
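Plugging a 5x edge-count increase into the two bounds quoted from the Wikipedia page (with the hidden O(.) constants dropped, so only the growth rates are meaningful) shows how far both are from exponential:

```python
# Upper bounds on VC dimension as quoted from the Wikipedia page, with
# the O(.) constants ignored -- only the growth rates matter here.

def vc_bound_sigmoid(num_edges: int, num_nodes: int) -> int:
    # O(|E|^2 * |V|^2): sigmoid activation, general real weights
    return num_edges ** 2 * num_nodes ** 2

def vc_bound_finite(num_edges: int) -> int:
    # O(|E|): weights drawn from a finite family (e.g. 32-bit floats)
    return num_edges

# A 5x increase in edges (node count fixed) grows the bounds polynomially:
print(vc_bound_sigmoid(5_000, 100) // vc_bound_sigmoid(1_000, 100))  # 25
print(vc_bound_finite(5_000) // vc_bound_finite(1_000))              # 5
```

Either way, nothing like the 10 -> 100,000 jump claimed in the forum post.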


Good point


Strongly agreed. I'd love to see a paper a backing up this claim but I'm pretty sure it's just wrong.

Likewise, I would be quite surprised if Tesla is really pushing state of the art for the size of their vision models with what they've deployed in cars. Researchers have built some pretty big models...


I wish that articles like this would make it clear which features apply to which cars. It would be nice to know if there are any improvements to cars with Autopilot version 1 for instance.


Not a single pedestrian in the whole video.


This is the blog/site whose editor got a Tesla Roadster through affiliate points. So take that into account when you read its articles about otherwise fantastic Tesla cars.


Good on them for working hard to get affiliate points for a brand new Tesla. It took them working their butts off to get one.

I found the article to be full of unbiased information and also more personal info in the "ELECTREK’S TAKE" section.


Good on you that you found value in the article, but I found it to be biased in favor of Tesla, and also find it sleazy that they didn't disclose their affiliation to Tesla.


Their participation in the affiliate scheme is very often mentioned in the articles.


But certainly not in this one.


Every Tesla owner is in the referral program. If you get something like 20 referrals at various times, you could get a car. These guys own a Tesla (I think two between them), and so could eventually earn a car. Even I, craven person that I am, get something if you use my referral code to buy a Tesla. You get $100 a year for at least 4 years in Supercharger credits (that's like $400 a year in a regular gas car, because EVs get the equivalent of around 100 miles a gallon of gas :-)).

I get virtually nothing unless I get 20 of them; I'm not sure of the current program, but so far my lifetime total of referrals is: 0. If you have questions and want to ask someone who has owned two different ones since 2012, ask me. If you want to buy a Tesla, I have a referral link too; contact me via the email on my account here, since maybe Hacker News wouldn't like me to just put it out here.


>I found it to be biased in favor of Tesla

When someone makes a broad critical remark about an article, but provides no specifics, I tend to assume it is because they know the article was in fact accurate, but don't want to admit it.


That's great, but when are they going to properly address and mitigate the potentially massively lethal cyber- and national-security issues made possible by the tech?

Or do we have to talk only about what they want us to talk about?


Great, let's evaluate the deaths caused by regular cars. Let's evaluate the risks posed by burning fossil fuels. Let's evaluate pretty much every single little detail about other auto brands and present our findings in a fashion where we can compare and rank best to worst. Having just done this, I'm completely satisfied with Tesla's approach in comparison to what other brands have been doing.


Like what? Other car manufacturers already do remote software updates.


People's safety isn't something to be iterated on.

Any security patch will already come too late -- when it comes to cars, it must not have security problems to begin with.

The cybersecurity model just isn't safe for this. It's up to them to fix this, not me.


> People's safety isn't something to be iterated on.

Look into the history of automotive safety to understand how ridiculous this sounds. Cars were total death traps for many, many decades.


"It must not have security problems to begin with"

No product is bulletproof from a security standpoint, and there's no way to make something bulletproof. What are you suggesting they do?


Tesla absolutely launched their over-the-air capabilities without sufficient planning. If you look at the early Keen Labs presentations[0] it's insane that it was launched the way it was. At least they added code signing later...

[0] https://www.blackhat.com/docs/us-17/thursday/us-17-Nie-Free-...


They did OTAs without code signing? That's an unprecedented level of incompetency! What were they thinking?


Everything has the potential to cause issues; that won't stop progress. Let's not act like they are ignoring this.


Do users have the ability to stop updates without repercussions?

Is it really a choice that cannot be overridden by the update?

If X declares war on California, will California shut down every Tesla in X?

Will I be allowed to export my Tesla in 15 years to whatever developing country I like?


What?

Did you think before you typed? Two of those questions have nothing to do with this.


Will my car be taken over and kill me?

Will other people's cars be taken over and kill me?

Will my car be taken over and be used to drive me to someone who will harm me?

These are life-or-death concerns, and let's not act like they're not ignoring this.


1. No it won't; please show me where this is happening.
2. No it won't; please show me where this is happening.
3. No it won't; please show me where this is happening.

There are concerns, and then there are crazy questions and comments.


So no one should try to prevent security flaws because that particular flaw hasn't happened and only fix them when they do?

The question shouldn't be "has it happened." The question should be how realistic is the chance of this happening in the future?


As if technology has never been held hostage until a ransom is paid.

#1 is easy: “pay us or we will accelerate your car in a random direction in 24h”. The first round will be scareware, the future may not be.


> #1 is easy: “pay us or we will accelerate your car in a random direction in 24h”. The first round will be scareware, the future may not be.

Can't happen in a Tesla: “so even if somebody would gain access to the car, they cannot gain access to the powertrain or to the braking system.”

And how is this any different than “we'll kill you in 24h if you don't pay”?


How can that be? You can remotely update the system that does autopilot, and autopilot must have control over throttle/brake/steering.

Even shutting off the lights or disabling the wipers can create big problems.


Systems are built in isolation; communication between them is encrypted.

Disabling wipers or turning off the lights will kill us now? Not forgetting that this hasn't happened, hasn't been proved to be possible, and is all theory.

Theories are fine, but acting as if Tesla is actively ignoring this is funny.


Systems being isolated and containing encrypted communication does not mean that there are no vulnerabilities. Otherwise, TLS would be all that’s needed to secure a website.


They’re clearly not in isolation. We’re not talking about storing data, but sending and receiving it, so if the messages are encrypted, you just need to get at the code that is doing the encrypting, and voila.

Disabling the wipers and all the lights while driving at high speed, in rain heavy enough to need the wipers at full blast, can quickly be fatal.


If they were, we wouldn't be debating this.

Sorry, you are wrong.


Not sure which point you were referring to by "they", but we must have different definitions of "isolation".

If one system absolutely requires communication with the other to function, I would never call them isolated.


That wasn't a question, nor does anyone think you shouldn't try to prevent security flaws. The fact is, it won't happen, because of the way these systems are built within the car.

Security flaws will always exist, but there are limits, and just blatantly claiming we're all gonna die because Tesla doesn't care about security is a joke.


Everything has the potential to cause issues, and that's no reason to dismiss life-or-death concerns.

What are they doing about it that's effective? They are ignoring this.


No one is dismissing life-or-death concerns; they are dismissing your comments.



