Personally, I find it astonishing that the manufacturer hasn't caught this.
Here's my (probably) unpopular opinion: In the very near future, you might have to become 'certified' as a programmer for anything that touches the internet EDIT: AND has the possibility of killing someone.
Much in the same way that a regular Joe can't build a bridge over a river without a few Civil Engineering certifications, a programmer couldn't make an internet-connected program that also controls a 2-ton moving vehicle.
There are rookie mistakes, and there are systemic failures; a LOT of these recent issues seem to be systemic in nature. The easiest fix in that case is to fix the system.
Certifying the people only really allows for punishing the people responsible for not anticipating problems in their systems. To make the certification worth anything you'll need a set of guidelines against which these kinds of systems should be engineered. Formally verifying the system as a whole against those guidelines seems like a far more effective approach, as it (theoretically) results in the system itself being verified as safe.
When Ford released the Pinto, they determined the cost of simply paying out potential lawsuits for driver deaths would be less than the cost of actually fixing the cars' tendency to explode when rear-ended, so they just let people die until the numbers balanced out[0].
Granted, that was a hardware issue and not a software issue, but if the risk of a class-action lawsuit is involved, and that risk is magnified by a recall or public awareness of the problem, then the motivation may be the same.
>In the very near future, you might have to become 'certified' as a programmer for anything that touches the internet.
I think a better option would be to mandate that certain things can't touch the internet. It would be absurd if I, someone who writes PHP/SQL and C# CRUD apps for a living, had to be certified as a systems programmer (or the other way around).
The problem to me is the insidious way we as a society have sort of collectively accepted that connecting things to the internet is inevitable and always a good idea. Instead of making sure the engineers who connect the car's brakes and steering to the internet through the radio are competent, how about not making that decision to begin with?
The Pinto analogy is dangerous. The flaw in the Pinto was that when it got into a certain, common type of collision, there was a higher-than-normal chance that its fuel tank might explode.
The risk was NOT that because of this vulnerability, there was a chance that bad guys might go around ramming into Pintos to make them explode. Who would even do that? Or maybe that if you rammed a knitting needle into the back of a Pinto it would trigger a fuel leak. Well so what? Probably similar issues exist in any number of cars, but it's not something people would actually do (and if they did it would obviously be criminal), so... that's not really an engineering flaw.
So the insecurity of infotainment systems in cars is NOT like the Pinto issue. Nobody is suggesting that when the infotainment system plays a Mariah Carey song it will, because of a bug, shut off the brakes. Or that during a common kind of collision, the infotainment system causes the airbags to fail to fire. No, this is only a risk if someone actively attacks the infotainment system and deliberately shuts off the brakes, or triggers an acceleration, or whatever.
There are already ways to physically sabotage a car to make it dangerous to drive - cut the brake lines, puncture the fuel tank, whatever. Those are, to a certain way of thinking, security vulnerabilities. If you're someone who is concerned about people trying to kill you, you probably take extra precautions to secure your car to mitigate those risks. Ordinary people just park their car on the street and assume assholes are not going to trigger a slow leak in their brake fluid.
So why should ordinary people be concerned that someone might install a trojan in their infotainment system which is going to shut off their brakes on the highway?
The chance may be low, but I think people should be concerned because it exists at all. It doesn't have to be possible to install a trojan that shuts off a car's brakes on the highway - that possibility is the result of engineering choices and compromises which could have, and still could, be avoided. It doesn't have to be an acceptable level of risk for drivers to take on themselves, either. That there's little chance of someone taking advantage of it is irrelevant.
My point is that we're used to thinking of computer security as an absolute necessity because, for example, it is of value to a bad guy to simply take control of a network-connected computer and make it part of a botnet, or to use it to obtain valuable personal data that passes through the computer. But I think that leads us to overestimate the risk created by the possibility that hacker activity could extend to the physical realm, because if a hacker crosses that line, it has real-world consequences for them.
My car's infotainment system doesn't know my credit card number. It probably makes a pretty poor botnet member (but even if someone does hack it and turn it to churning out spam, they're not going to be trying to get to the CAN bus). And there's little value to a bad guy in just breaking my car for no reason. If someone just wants to break cars and hurt people, they can go throw rocks off an overpass. People do do that, admittedly, but they're not revealing an engineering flaw in cars by doing so.
If you can use this hack to pop the door locks and override the ignition, then maybe we should be talking.
You can build a bridge over a river if you own the property it traverses.
The civil engineering certifications are only to absolve you of personal liability if any person crossing the bridge is injured due to its failure. That is sort of important, but the same could also be achieved by the owner posting it against trespass, and requiring that any authorized users sign a liability waiver beforehand.
If you own the rights to code that ends up causing injury due to a known flaw in it, you're liable for the harm. That's all there is to it.
The certification-based "out" only applies if the auto manufacturer hired exclusively certified software professionals and allowed them to do their jobs without coercing them to cut corners. This is something that will clearly not happen until well after the first huge damage award for software malpractice occurs.
I am emphatically against any prior licensing restraint on individuals seeking to enter or innovate in this industry. That's just protectionist bullshit. It doesn't make the code better. A legal regime that rewards following software best practices by limiting product liability, on the other hand, that seems rather useful.
NCEES has introduced a software engineering PE exam.[1] Not many people are taking it yet (18 total test takers for April 2015) [2] but that isn't surprising since it's brand new, many "software engineers" don't qualify for licensure, and only a bit more than half the states even license for software PEs. Personally, I don't see how much of a difference licensure will make for safety-critical projects given that regulated industries are already, well, regulated. Nevertheless, forcing someone to take professional and personal liability for their work, as well as establishing a common body of knowledge, both strike me as good in general for safety-critical work.
Respectfully, I think programmer licensure is an oversimplification of the problem. Far more goes into the design of software systems than just the software. Product Management often wants things done a certain way. Hardware guys want things done their way.
Take this hacking example. Couldn't you imagine a situation in which a licensed software engineer is pleading with PM to separate the entertainment system from the controls? But PM pushes back, and wins, on account of cost savings?
> Take this hacking example. Couldn't you imagine a situation in which a licensed software engineer is pleading with PM to separate the entertainment system from the controls? But PM pushes back, and wins, on account of cost savings?
Then the engineer, being professionally and personally liable for their work product, notes that they are ethically obligated to "hold paramount the safety, health, and welfare of the public" and that "if engineers' judgment is overruled under circumstances that endanger life or property, they shall notify their employer or client and such other authority as may be appropriate."[1]
> I have no trouble requiring a certification for developing software that runs on systems upon which lives depend, but... the internet?
Yeah, you're right. I suppose I was a bit overzealous with my first comment. Systems that can cause immediate harm should require the certifications as I mentioned.
The internet in general would be very difficult to manage.
These stories probably need to be reframed as (e.g.) "Infotainment Flaw Allows Complete Vehicle Takeover", rather than being hacker-centric. It may be difficult, ego-wise, to step out of the spotlight (like the giddy hackers in the Wired video [1]), but it's necessary.
Personally, I would blame both. The manufacturers should be criminally liable if anyone gets injured, and should be required to do a recall for defects. Security research should be legal and encouraged. However, anyone who maliciously uses a defect to injure someone (or attempts to) should be treated as if they had used a physical weapon.
> Former U.S. National Coordinator for Security, Infrastructure Protection, and Counter-terrorism Richard A. Clarke said that what is known about the crash is "consistent with a car cyber attack".
Sadly, accidents are probably going to happen before infotainment gets decoupled from the CAN bus. You don't just need a firewall; those need to be physically separate networks.
By 'accidents', do you mean 'deliberate acts of sabotage'?
Or are you thinking that just a bug in the infotainment system might cause it to shut off someone's brakes when you play a song which contains a null byte in the wrong place?
No. It's de facto impossible. If all I had to worry about was the software alone I would despair of ever correcting this in a reasonable period of time; to also have to overcome fundamental hardware challenges? Not a chance.
For a long time I've wondered what would prove to be Security-Pocalypse that finally convinces everybody that this is a real problem. This is not yet it. But if somebody takes one of these attacks and gets... "creative"... in those ways that aren't really that hard to come up with but I hate to actually spell out online (it's scary to think too hard about this... there's no possible way these vulns could be closed before a bad actor could... be very bad...)... that could become the moment the 21st century finally realized that secure code is no longer optional.
It is scary to think about because automated exploits could perhaps be scaled up and cause real mayhem. Hopefully it doesn't have to get that far before some elementary redesign of the network architecture inside vehicles. It's not rocket science. Just separate the safety-critical network (which includes brakes, steering, etc.) from the non-critical information and entertainment network.
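To make the "not rocket science" point concrete, here's a rough sketch of the kind of deny-by-default, one-way gateway I mean (Python with the python-can library on Linux/socketcan; the channel names and message IDs are invented for illustration, not taken from any real vehicle):

    import can

    # Hypothetical setup: can0 = safety-critical bus, can1 = infotainment bus.
    critical_bus = can.interface.Bus(channel='can0', bustype='socketcan')
    infotainment_bus = can.interface.Bus(channel='can1', bustype='socketcan')

    # Whitelist of read-only status frames the infotainment side may see
    # (say, speed, RPM, fuel level -- IDs made up for this example).
    FORWARD_IDS = {0x110, 0x120, 0x130}

    while True:
        msg = critical_bus.recv()  # block until a frame arrives on the critical bus
        if msg is not None and msg.arbitration_id in FORWARD_IDS:
            # Relay selected status frames one way only. Nothing received on
            # the infotainment bus is ever forwarded back to the critical bus.
            infotainment_bus.send(msg)

The key property is that there is no code path from can1 back to can0 at all, so a compromised infotainment unit can listen to whitelisted telemetry but can't inject commands.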
The auto industry will very quickly have to learn from the aviation industry in terms of integrating all these diverse information systems safely and securely. One difference is that cars are way more accessible than planes to be tampered with and hence the security systems will most likely have to be seriously hardened.
EDIT: please take "integrate" to mean to properly place each system on its own space and then surface the necessary APIs in a secure fashion. Of course physical separation of entertainment and car systems is the logical step, and it comes from the aviation industry.
There is no justifiable reason for these things to be integrated at all. Yes, it saves the mfr some $$, but people can DIE because of this stupidity.
As recent events have shown, the aviation industry hasn't integrated these systems securely - they're just harder to get to, so it took a while before people started discovering their flaws - http://www.cnn.com/2015/05/17/us/fbi-hacker-flight-computer-...
Secure integration is not possible - the only right way for companies as inept as legacy auto makers to do this is for the systems to be separated.
Unfortunately the aviation industry has apparently been making the same mistake.
It has been obvious for at least a decade, to anyone paying attention, that if you create a path between a critical system and untrusted systems, you will not, in practice, be able to prove that the former is secure, nor anticipate all the sorts of attack it could be subject to. Why, then, does this mistake continue to be repeated, year after year? I imagine the explanation must include large doses of both ignorance and hubris - the Dunning-Kruger effect with teeth.
"NCC's work - which has been restricted to its labs..."
"Mr Davis said he had simulated his DAB-based attack only on equipment in his company's buildings..."
"But he added that he had previously compromised a real vehicle's automatic-braking system ... by modifying an infotainment system, and he believed this could be replicated via a DAB broadcast."
So basically this is one of those non-stories where a researcher looking for some press coverage broadcasts a custom radio station with a name like 'LOL I HAKD UR CAR', then shows it - in an office - to a zero-knowledge reporter who is immediately impressed as if it was a demonstration of anything at all. Whereas actually it's just some bloke in an office saying "well I'm sure this is possible, I've never done it, obviously."
I'm reminded of the words of Travis Goodspeed... Proof Of Concept or GTFO.
I'm tired of articles that explain absolutely nothing about the actual methodology of the hack.
Does Jeep Cherokee have separate buses for high-speed controller and low-speed infotainment CAN? (That's the usual setup.) What controller acts as the gateway between them? Is it read-write (which would be incredibly dumb) or did they actually hack that controller after hacking infotainment?
I've shown similar articles/research papers to my friends working in automotive. They pointed out several factual mistakes and said it's generic fluff with no meaningful technical details. (See the questions above.)
If realistically exploitable, these vulnerabilities are serious stuff. Hackers/journalists need to get their shit together and communicate these findings in an appropriate way. 99.9% of automotive engineers do not go to Black Hat conference.
I saw it. It doesn't answer any of the questions above. The fact that they were able to hack an infotainment system connected to the internet is not surprising. Anyone who knows how those things are written sort of assumed that this was possible.
The noteworthy part of this is being able to bridge the gap between the two CAN networks, and it's not explained in any way.
The thing is that your questions aren't really very applicable. For example, whether there is a high-speed and a low-speed CAN bus or just one doesn't really matter, because of the details of how it works. The same applies to the read/write/modify question - that's not really how the CAN bus works at all; it's more like commands, and there are some nifty ways to filter, like requiring codes. As for yesterday's article, it boils down to modules not doing careful enough verification, plus what was likely a really boneheaded-in-hindsight mistake in the Uconnect software. The talks will be great resources for a lot of people; I'm eagerly awaiting them.
> For example, whether there is a high-speed and a low-speed CAN bus or just one doesn't really matter, because of the details of how it works.
It matters a lot. Typically, there is a gateway (in the body or chassis controller) between the two (or three) buses. It puts some messages from the controller bus onto the infotainment bus, but not the other way around. It's done specifically to prevent infotainment systems from interfering with the workings of the car.
If Jeep has everything running on the same bus, it's incredibly stupid.
That is a bit oversimplified. In practice more is passed, but, for example, the ECU ignores commands based on rules like RPM above X, TCM not in P, and so on. There is pressure to move to a single high-speed bus with modules that pass messages based on priority and time.
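For illustration, the receiving-side rules being described might look something like this (a pure-Python sketch; the command names, state fields, and thresholds are all invented, not any actual ECU's logic):

    def accept_command(command, state):
        """Plausibility-check a bus command against current vehicle state."""
        if command == 'remote_start' and state['rpm'] > 0:
            return False  # engine already running; ignore
        if command == 'unlock_doors' and state['gear'] != 'P':
            return False  # only honor unlock when the TCM reports Park
        if command == 'apply_parking_brake' and state['speed_kph'] > 5:
            return False  # not plausible while the vehicle is moving
        return True

    # Example: a moving car in Drive should reject a remote unlock.
    print(accept_command('unlock_doors', {'rpm': 2400, 'gear': 'D', 'speed_kph': 65}))  # False

Checks like these mitigate bad messages, but they're no substitute for keeping attacker-reachable systems off the critical bus in the first place.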
> The UK's Society of Motor Manufacturers and Traders has responded by saying that car companies "invest billions of pounds to keep vehicles secure as possible".
Well they haven't done a particularly good job.
Anyone else spooked that this has come out of a security company based in Cheltenham (where GCHQ is...)?
>invest billions of pounds to keep vehicles secure as possible
ORLY? Can I ask them to point me to the billions in expenditures made purely to ensure security? Unless they're talking about physical security like crash safety, I don't think they've spent billions that they would otherwise have saved by not developing secure systems in their cars.
The article talks about taking control of a Jeep Cherokee, yet the caption shows a Jeep Grand Cherokee.
Does anyone know if the exploit is across the whole line with Uconnect? I plan on purchasing a Grand Cherokee, but I may wait and see how this plays out (or get an older model).
I owned a Grand Cherokee for a few years. Can I offer my unsolicited advice? Don't do it.
Things I had go wrong:
- Brake rotors, replaced THREE times at $600 each, all before 100k miles.
- Uneven wear on the tires due to incorrect braking (see above).
- Tail lamp that went out over a dozen times. Another known issue.
- Overheating in traffic after less than 30k miles with no real stress on the engine. The mechanic could never find that issue.
- I got out when the truck was running to close the back door because I didn't close it hard enough. The car locked me out and I had to call a locksmith to get it open. It was running the whole time.
And I'm sure I could think of more; those are just the ones that stand out. Seriously though, go read on an owners' forum about them and then decide. I don't usually trounce products, but this one burned me. A $30k waste of money and time. I ended up getting rid of it, at a huge loss.
If it's related to the article from the other day, then yes;
It appears that they're gaining access via Uconnect, which is connected to T-Mobile's cell phone network.
Allegedly, this exploit makes it vulnerable from anywhere the vehicle would have T-Mobile cell coverage.
Though you may be able to pull the SIM. My Audi came with T-Mobile coverage, and the SIM is behind a door in the center console (which I yanked; turns out Google Earth over 3G isn't a very good nav system).
Agreed with one of the comments: why do articles like this never share any sort of details about the hack? Was this not responsibly disclosed and patched?