3D Lightning (2013) (calculatedimages.blogspot.com)
184 points by fanf2 on March 8, 2018 | 17 comments



This is really cool.

When I first saw this, I re-did the 3D reconstruction as a bundle-adjustment problem, which resulted in this model:

http://misc.tomn.co.uk/lightning/out.gif

To do this, I found point correspondences between the two images, and set up a model of the two cameras looking at these points. The camera parameters and point positions were then optimised to minimise the reprojection error (the distance between the real positions of the points in the images and the 3D position projected through the camera models).
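
Concretely, each point correspondence contributes a residual something like this (a minimal sketch in the style of the Ceres bundle-adjustment tutorial; the exact camera parameterisation here is illustrative, not necessarily what I used):

    #include <ceres/ceres.h>
    #include <ceres/rotation.h>

    // Residual: a 3D point projected through a camera model, minus the
    // observed image position of that point.
    struct ReprojectionError {
      ReprojectionError(double observed_x, double observed_y)
          : observed_x(observed_x), observed_y(observed_y) {}

      template <typename T>
      bool operator()(const T* const camera,  // [0..2] angle-axis rotation,
                                              // [3..5] translation, [6] focal
                      const T* const point,   // 3D point position
                      T* residuals) const {
        T p[3];
        ceres::AngleAxisRotatePoint(camera, point, p);
        p[0] += camera[3];
        p[1] += camera[4];
        p[2] += camera[5];

        // Simple pinhole projection, ignoring lens distortion.
        const T xp = p[0] / p[2];
        const T yp = p[1] / p[2];

        residuals[0] = camera[6] * xp - observed_x;
        residuals[1] = camera[6] * yp - observed_y;
        return true;
      }

      double observed_x, observed_y;
    };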

This was implemented using Ceres, which is worth a look if you're interested in this kind of thing.
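
Wiring it up and solving is then only a few more lines (the observations/cameras/points containers are placeholders for however you store your data):

    // Inside main(), after setting up initial estimates:
    ceres::Problem problem;
    for (const auto& obs : observations) {
      problem.AddResidualBlock(
          new ceres::AutoDiffCostFunction<ReprojectionError, 2, 7, 3>(
              new ReprojectionError(obs.x, obs.y)),
          nullptr,                    // plain squared loss
          cameras[obs.camera_index],  // double[7], as above
          points[obs.point_index]);   // double[3]
    }

    ceres::Solver::Options options;
    options.linear_solver_type = ceres::DENSE_SCHUR;  // the usual choice for BA
    options.minimizer_progress_to_stdout = true;
    ceres::Solver::Summary summary;
    ceres::Solve(options, &problem, &summary);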

It took quite a bit of fiddling to make this work, mostly estimating the camera parameters manually by hunting around Google Maps for landmarks. There's not very much data to work with, so without doing this the optimisation tends to get lost quite easily.

The results are broadly similar to the original post's reconstruction, but it's impossible to say which is closer to the real thing. The reprojection error ended up pretty small (mostly a few pixels), but there were some errors that I was unable to reconcile, likely due to the difficulty of manually estimating point correspondences for a 3D line in space, possible errors in my constraints, lens distortion, etc.


This ceres, or something else? http://ceres-solver.org/


I'm one of the founders of Ceres Solver; let me know if you need any help.


Yeah, that one. It's completely overpowered for this project!



Also top comment in a thread from yesterday: https://news.ycombinator.com/item?id=16542395


Question: I was always told that lightning seeks the highest point on the ground. If that's the case, this path doesn't seem very efficient at getting there.

How does lightning actually choose its path?

Enlighten me. Sorry. Couldn’t resist.


The explanation I've seen is that multiple leaders actually do a random-walk search of sorts until one connects. https://earthscience.stackexchange.com/questions/580/why-doe...
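
As a toy illustration of that idea (very much not real physics, just a handful of walkers stepping down with random sideways jitter until one reaches the ground):

    #include <cstdio>
    #include <random>

    int main() {
      std::mt19937 rng(42);
      std::uniform_int_distribution<int> dx(-1, 1);  // sideways jitter
      std::uniform_int_distribution<int> dy(0, 1);   // uneven downward progress

      const int kLeaders = 5;    // competing leaders from the same flash
      const int kGroundY = 100;  // "ground" height in arbitrary units
      int x[kLeaders] = {0};
      int y[kLeaders] = {0};

      for (int step = 1;; ++step) {
        for (int i = 0; i < kLeaders; ++i) {
          x[i] += dx(rng);
          y[i] += dy(rng);
          if (y[i] >= kGroundY) {
            // The first leader to connect wins; the others fizzle out.
            std::printf("leader %d connected after %d steps, at x=%d\n",
                        i, step, x[i]);
            return 0;
          }
        }
      }
    }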


It takes the path of least (electrical) resistance.


There's also a good write-up in xkcd's What If[1]:

https://what-if.xkcd.com/16/


> This means the pair of images are roughly a stereo pair, but with a vertical shift instead of a horizontal. This is just like the pair of images you would see with your eyes if you had two eyes positioned vertically instead of horizontally on your head.

OK, so why can't you just rotate both images 90 deg, and view them as a stereo pair?
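
For anyone who wants to try it, something like this works (OpenCV; the filenames are placeholders, and it assumes both photos are the same size):

    #include <opencv2/opencv.hpp>

    int main() {
      // Placeholder filenames for the two photos.
      cv::Mat a = cv::imread("view1.jpg");
      cv::Mat b = cv::imread("view2.jpg");

      // Rotate 90 degrees so the vertical baseline becomes horizontal.
      cv::rotate(a, a, cv::ROTATE_90_CLOCKWISE);
      cv::rotate(b, b, cv::ROTATE_90_CLOCKWISE);

      // Put them side by side for parallel or cross-eyed viewing.
      cv::Mat pair;
      cv::hconcat(a, b, pair);
      cv::imwrite("stereo_pair.jpg", pair);
      return 0;
    }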


Like with stereoscopic glasses? Seeing the stereoscopic effect relies on the displacement between the photos being equal to the distance between human eyes.


It works with other distances; the perception of depth just scales accordingly. If you view a pair of images taken from widely separated positions, the depth will look small and the scene will look like a model.


This is called "hyper stereo", and is often used when the subject is very large[1].

To take a 3D photo of, say, a mountain, one needs to space the shots several meters apart to get a noticeable stereo effect (rough numbers sketched below).

[1] https://en.wikipedia.org/wiki/Stereo_photography_techniques#...
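
Back-of-the-envelope version of why the baseline has to grow with the distance to the subject (small-angle approximation; all the numbers here are illustrative):

    #include <cstdio>

    // Relative disparity (in pixels) between two points separated in
    // depth by dz, at distance z, with baseline b and focal length f:
    //   d ~= f * b * dz / z^2
    double disparity_px(double f, double b, double z, double dz) {
      return f * b * dz / (z * z);
    }

    int main() {
      const double f = 3000.0;  // focal length in pixels
      const double z = 1000.0;  // mountain 1 km away...
      const double dz = 100.0;  // ...with 100 m of depth relief

      // ~0.02 px with a human eye baseline: no visible depth.
      std::printf("eyes (6.5 cm): %.3f px\n", disparity_px(f, 0.065, z, dz));

      // ~1.5 px with a 5 m baseline: a visible stereo effect.
      std::printf("hyper (5 m):   %.3f px\n", disparity_px(f, 5.0, z, dz));
    }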


Yes. I've played with the images, and the angular displacement is too great.



The images seem to be only half working? I can't actually see the end result.



