
No, snow scatters the emitted light and scarcely any is returned at all. A wall covered in ice looks like an air gap.

(I know this because my very expensive 800kg autonomous vehicle almost drove into a lake under my software control due to this effect...)




Aaaah, a domain-knowledge expert, thaaaank god!

Actual information on this topic is sorely lacking....

....While we have you on the line, would you please contribute some more information on LIDAR w/r/t autonomous vehicles?

For example, how well does LIDAR work with rain? fog? How does LIDAR interact with other nearby LIDARs? How much power does a typical system emit? How sensitive is it to jamming? How many frames per second can be acquired? How much computing power is used to re-assemble a scene?


Generally LIDAR "works" reasonably well in light rain because raindrops scatter most of the beams that hit them, so you get no returns from the rain itself. Occasionally you'll hit a raindrop straight on and get a reflection back to the receiver, but your algorithms should filter this out.
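
A minimal sketch of that filtering step, assuming the scan is an N x 3 numpy array of points; the 0.3 m radius and the neighbour count are illustrative values, not tuned for any real sensor:

    # Drop isolated returns (e.g. stray raindrop hits) from a lidar scan.
    # `points` is an (N, 3) array of x, y, z in metres; solid surfaces give
    # dense clusters of returns, lone raindrop hits usually don't.
    import numpy as np
    from scipy.spatial import cKDTree

    def drop_isolated_returns(points, radius=0.3, min_neighbors=3):
        tree = cKDTree(points)
        # For each point: indices of neighbours within `radius`, itself included.
        neighbor_lists = tree.query_ball_point(points, r=radius)
        keep = np.array([len(n) - 1 >= min_neighbors for n in neighbor_lists])
        return points[keep]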

Fog is so much denser that you get heaps of reflected returns, and a naive algorithm would treat this as lots of solid stuff in the environment. Your best bet is to combine sensors that don't share the same EM bands, e.g. lidar plus camera, radar, sonar, etc. It's not just redundancy: perceiving across different frequencies means that things which scatter or absorb in one band don't do so in another, which lets you distinguish obscurants like fog from genuinely solid objects.
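
As a toy illustration of that cross-band idea (the function name, the per-bearing range representation and the 1 m tolerance are all made up for the example): only trust a lidar return as solid if radar, which largely ignores fog droplets, roughly corroborates it.

    # Per-bearing ranges from lidar and radar, same angular binning assumed,
    # with np.inf meaning "no return". Fog tends to give lidar returns without
    # matching radar returns, so those bins fail the check.
    import numpy as np

    def confirmed_by_radar(lidar_range_m, radar_range_m, tolerance_m=1.0):
        both_returned = np.isfinite(lidar_range_m) & np.isfinite(radar_range_m)
        agree = np.abs(lidar_range_m - radar_range_m) <= tolerance_m
        return both_returned & agree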

I don't think one LIDAR would interact with another to any great extent. Even if off-the-shelf models did, there'd ultimately be some way to uniquely identify or polarise the beams such that this wasn't a problem. I suspect it's reasonably easy to engineer a solution around this.
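
One way such a solution could look (a toy sketch, not any vendor's actual scheme) is to code your pulses and only accept returns that correlate with your own code; the code length and threshold here are arbitrary:

    # Tag outgoing pulses with a pseudorandom on/off pattern and accept a
    # return only if it matches that pattern closely; another lidar's
    # uncorrelated pulses should match ~50% of the bits and be rejected.
    import numpy as np

    rng = np.random.default_rng(0)
    my_code = rng.integers(0, 2, size=64)   # our transmitted pulse pattern

    def is_own_return(received_pattern, code=my_code, threshold=0.9):
        match_fraction = np.mean(received_pattern == code)
        return match_fraction >= threshold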

Power emission I'm not entirely sure about, but almost all are at least class 2 laser devices. You shouldn't point an SLR camera at Google's vehicles, for example, as you can destroy the CCD. The Velodyne they use draws 4-6 amps at 12 V, but a lot of that power goes to heating the emitter, the motors, etc., and isn't all emitted by any means.
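
From those figures, the total electrical draw (not the optical output) works out to:

    # 4-6 A at 12 V of total draw; almost none of this leaves as laser light.
    for amps in (4, 6):
        print(f"{amps} A x 12 V = {amps * 12} W")   # 48 W and 72 W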

Frames-per-second isn't really the right measure for Velodynes, but their max rotation is 3-4 revs per second IIRC. It's about a million points per second. (For something like the Kinect For Windows V2, which is a flash lidar, it should run at 30 fps, but with lower depth resolution.)
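
Back-of-the-envelope from those numbers, a single full sweep is a few hundred thousand points:

    # ~1M points/s at 3-4 revolutions per second.
    points_per_second = 1_000_000
    for revs_per_second in (3, 4):
        print(f"{revs_per_second} rev/s -> ~{points_per_second // revs_per_second:,} points per sweep")
    # 3 rev/s -> ~333,333 points per sweep
    # 4 rev/s -> ~250,000 points per sweep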

I can't think how you could 'jam' a lidar, but you can certainly confuse the crap out of it easily enough: scatter some corner reflectors on the road, for example, or use dust grenades, fog cannons, etc.

Computing power to reconstruct the scene is significant (many DARPA Grand Challenge teams had problems containing the power budget for their CPUs/GPUs) but manageable. It depends on the algorithms used and, in many cases, on how much "history" you infer over.

Google's approach is to log everything, post-process the logs into static world maps, then upload those maps back to the vehicles. When they're actually driving for real, those maps effectively let them take the delta between what they currently see and what the static map says should be there, so they only really have to handle the differences (i.e. people, cars, bikes, etc.). This is still hard, but it's much easier than e.g. the Mars Rover problem (more my area of experience).
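
A sketch of that "delta" step in its simplest form, on a 2-D occupancy grid; the grid representation and the 0.5 threshold are placeholders for illustration, not Google's actual pipeline:

    # Cells the vehicle sees as occupied now but which the prior static map
    # says should be free, i.e. the people, cars, bikes, etc. that still have
    # to be handled online. Both grids hold occupancy probabilities in [0, 1].
    import numpy as np

    def dynamic_obstacle_mask(live_grid, static_map, occupied=0.5):
        seen_now = live_grid >= occupied
        expected_free = static_map < occupied
        return seen_now & expected_free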



