
> Things like traffic signals that actively communicate their status to nearby robot cars (more than just a red lamp that can be occluded by weather, other vehicles, or mud on the camera lens). Or lane markings that are more than just reflective paint, but can be sensed via RF. Rules around temporary construction that dictate the manner of signage and cone placement that the robot cars can understand. The cones might have little transponders in them, I don't know.

The problem you then face is that any of those could be forged or faked without some way of securely validating the message. You could cause absolute chaos by driving down the road broadcasting false messages. It's a little harder to hack and modify traffic light signals, for example. But we've also seen hackers screw up Tesla cars by sticking stuff on the back of their car to deliberately mislead its vision system.



> You could cause absolute chaos by driving down the road broadcasting false messages

Even without self-driving cars, an "attacker" can go into a theater and yell "fire" and cause a stampede.

They can get a high-viz vest and clipboard, and stand in intersections directing cars to take detours they don't need and holding up traffic.

My point here is that society has a lot of trust baked in. We trust people don't just yell "fire" without reason. Just because it's FSD cars doesn't mean people will start broadcasting the equivalent of "fire" constantly. It's already easy to cause accidents.


Those attacks don't scale. They're also tied to a person or people who need to be physically present, making it possible to arrest them.

The consequences of potential attacks on centrally-orchestrated traffic are a lot more severe. Hack the control node, and you can stop traffic nation-wide. Or cause mass accidents that overwhelm first responders. And they can be executed by anyone, anywhere in the world, for a cost within range of many medium-size corporations (let alone nation states).

I won't comment on the challenges of the approach Tesla et al. are currently taking, but I don't think central control is the panacea commenters in this thread are making it out to be (and I'm personally glad this isn't the route we're pursuing).


An attacker can already do this, at scale. Whether it be overriding traffic lights to show green in all directions, or taking down critical air traffic control systems.

It’s like arguing that we can’t possibly build autonomous cars because then someone might turn it into an autonomous bomb.

Keep in mind that solving this is “worth” about 40,000 lives a year in the US - nearly $1 trillion in economic damages a year in life and property.

Bad things can always be done with good tools. As always, you provide layers of protection that make sense and in the end must rely on the underlying fabric of civilization to persevere.


> You could cause absolute chaos by driving down the road broadcasting false messages.

I’ve personally tested creating a fake road sign, and my Tesla reads it as a real sign just fine.

I wonder what happens to autopilot on a 70mph freeway when it encounters a 5mph limit sign…


Bad, bad things. I've had my Tesla on a highway lose GPS precision and believe I was on an adjacent local road.

It immediately, and aggressively, reduced speed from 60mph to 25mph.

This falls into a pattern of Tesla autopilot/NoA where it just doesn't seem to have much memory or foresight.

For example the car is driving itself on the highway, it knows it's been on the highway, for 20 minutes. I am not even in the exit lane, it knows what lane I am in. How could it think I am suddenly on the local road below the highway based solely on the GPS pin movement in the span of a second, without having moved to the exit lane and gone down the exit ramp?

For an example of lack of foresight: the car will happily speed toward an obvious slowdown in the distance right up until it needs to aggressively brake from 60mph down to 30mph as it closes to following distance of the nearest car. I also find it gets really weird in stop-and-go traffic, not easing up to speed or down to a stop very well, as if it only has GO or STOP.


> For example the car is driving itself on the highway, it knows it's been on the highway, for 20 minutes. I am not even in the exit lane, it knows what lane I am in. How could it think I am suddenly on the local road below the highway based solely on the GPS pin movement in the span of a second, without having moved to the exit lane and gone down the exit ramp?

Oh wow. This happens often with a car-mounted GPS (or on a phone) and it's pretty annoying. Sometimes the GPS instructs you to do a U-turn at the next available fork in the road, and it takes a moment to understand what's going on.

But in a self-driving car it's terrifying! And absurd.


This is, of course, something that regular old meat intelligence is susceptible to as well. Remove stop signs, and you’ll cause accidents.


> any of those could be forged / faked without some kind of way of securely validating the message

Any reason routine crypto methods would not solve this? Seems like one of the easier parts to me.
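To make the "routine crypto" suggestion concrete, here's a minimal sketch of a signed traffic-signal broadcast in Python. It uses the stdlib `hmac` module as a stand-in for the asymmetric signatures a real deployment would need (a shared symmetric key wouldn't actually work at this scale); all field names and the key are illustrative:

```python
import hashlib
import hmac
import json

# Illustrative only: real infrastructure would use per-device asymmetric
# keys (e.g. Ed25519), not a single shared secret like this.
SHARED_KEY = b"demo-signing-key"

def sign_message(payload: dict) -> dict:
    """Attach an authentication tag to a broadcast payload."""
    body = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": tag}

def verify_message(msg: dict) -> bool:
    """Recompute the tag and compare in constant time."""
    body = json.dumps(msg["payload"], sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, msg["sig"])

# A legitimate broadcast verifies; a forged payload with a copied
# signature does not.
signal = sign_message({"signal_id": "tx-101-04", "state": "red"})
assert verify_message(signal)

forged = {"payload": {"signal_id": "tx-101-04", "state": "green"},
          "sig": signal["sig"]}
assert not verify_message(forged)
```

The crypto primitives really are the easy part, as the parent suggests; the hard part is everything around them, which the reply below gets into.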


Who controls the signing keys and the whole signing process? What about key revocation if someone steals the key? Will a municipality in Texas really be willing to not be allowed to create a new stoplight without approval from the federal agency in charge of the keys? What’s to stop someone stealing a “real” stoplight from bumfuck nowhere and putting it in the middle of the 101 at rush hour? What about replay attacks? What about signal jamming? Etc. etc.


You raise some genuine concerns, but generically I'd tend to just say "laws will stop them," just like laws already stop someone from deliberately endangering lives by placing a genuine (but unauthorized) stop light in the middle of the 101 at rush hour.

The real concern would be whether someone can engineer a terrorist-level mass-scale attack, but as long as it requires physical tampering, that adds up to a tremendous amount of work. So if the signalling is largely burned into fixed infrastructure, it eliminates a lot of that, or at least sets the bar high enough that it's probably more work than various other types of attack that would be just as impactful.


ADS-B (used everywhere for airplanes) already works this way, and there is no authentication or signing whatsoever. It's a single global broadcast frequency (1090MHz).


> The problem you then face is that any of those could be forged / faked without some kind of way of securely validating the message in some way. You could cause absolute chaos by driving down the road broadcasting false messages.

Sounds like a job for...blockchain.


What value would blockchain bring to solving this problem?



