When Ford released the Pinto, they determined that the cost of simply paying out potential lawsuits for driver deaths would be less than the cost of actually fixing the cars' tendency to explode when rear-ended, so they just let people die until the numbers balanced out[0].
Granted, that was a hardware issue and not a software issue, but if the risk of a class-action lawsuit is involved, and that risk is magnified by a recall or public awareness of the problem, then the motivation may be the same.
>In the very near future, you might have to become 'certified' as a programmer for anything that touches the internet.
I think a better option would be to mandate that certain things can't touch the internet. It would be absurd if I, someone who writes PHP/SQL and C# CRUD apps for a living, had to be certified as a systems programmer (or the other way around).
The problem, to me, is the insidious way we as a society have collectively accepted that connecting things to the internet is inevitable and always a good idea. Instead of making sure the engineers who connect the car's brakes and steering to the internet through the radio are competent, how about not making that decision to begin with?
The Pinto analogy is dangerous. The flaw in the Pinto was that when it got into a certain, common type of collision, there was a higher-than-normal chance that its fuel tank might explode.
The risk was NOT that, because of this vulnerability, there was a chance that bad guys might go around ramming into Pintos to make them explode. Who would even do that? Or maybe that if you rammed a knitting needle into the back of a Pinto it would trigger a fuel leak. Well, so what? Similar issues probably exist in any number of cars, but it's not something people would actually do (and if they did, it would obviously be criminal), so... that's not really an engineering flaw.
So the insecurity of infotainment systems in cars is NOT like the Pinto issue. Nobody is suggesting that when the infotainment system plays a Mariah Carey song it will, because of a bug, shut off the brakes. Or that during a common kind of collision, the infotainment system causes the airbags to fail to fire. No, this is only a risk if someone actively attacks the infotainment system and deliberately shuts off the brakes, or triggers an acceleration, or whatever.
There are already ways to physically sabotage a car to make it dangerous to drive - cut the brake lines, puncture the fuel tank, whatever. Those are, to a certain way of thinking, security vulnerabilities. If you're someone who is concerned about people trying to kill you, you probably take extra precautions to secure your car to mitigate those risks. Ordinary people just park their car on the street and assume assholes are not going to trigger a slow leak in their brake fluid.
So why should ordinary people be concerned that someone might install a trojan in their infotainment system which is going to shut off their brakes on the highway?
The chance may be low, but I think people should be concerned because it exists at all. It doesn't have to be possible to install a trojan that shuts off a car's brakes on the highway - that possibility is the result of engineering choices and compromises which could have been, and still could be, avoided. It doesn't have to be an acceptable level of risk for drivers to take on themselves, either. That there's little chance of someone taking advantage of it is irrelevant.
My point is that we're used to thinking of computer security as being an absolute necessity because, for example, it is of value to a bad guy to simply take control of a network-connected computer to become part of a botnet. Or to use it to obtain valuable personal data which passes through the computer. But I think that leads us to overestimate the risk created by the possibility that hacker activity could extend to the physical realm, because if a hacker crosses that line, the consequences for them become far more serious.
My car's infotainment system doesn't know my credit card number. It probably makes a pretty poor botnet member (but even if someone does hack it and turn it to churning out spam, they're not going to be trying to get to the CAN bus). And there's little value to a bad guy in just breaking my car for no reason. If someone just wants to break cars and hurt people, they can go throw rocks off an overpass. People do do that, admittedly, but they're not revealing an engineering flaw in cars by doing so.
If you can use this hack to pop the door locks and override the ignition, then maybe we should be talking.
[0] http://www.motherjones.com/politics/1977/09/pinto-madness