The really impressive thing about Photosynth is how far it has come in the last year. It used to take hours to weeks to create synths from hundreds of pictures. The processing time is dominated by three steps:
1. Keypoint detection (SIFT or a similar algorithm)
2. Keypoint matching for each pair of images with approximate nearest neighbor (ANN) search on kd-trees
3. Structure from motion to recover rotation and the relative location of photos
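To make step 2 concrete, here's a minimal pure-Python kd-tree with exact nearest-neighbor search. This is only a 2-D sketch of the data structure; real matchers run approximate (best-bin-first) search over 128-dimensional SIFT descriptors, and the function names here are my own, not from any Photosynth code.

```python
# Minimal kd-tree sketch for the keypoint-matching step.
# 2-D points only; illustrative, not production matching code.

def dist2(a, b):
    """Squared Euclidean distance between two 2-D points."""
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

def build(points, depth=0):
    """Recursively split points on alternating axes at the median."""
    if not points:
        return None
    axis = depth % 2
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return (points[mid],
            build(points[:mid], depth + 1),
            build(points[mid + 1:], depth + 1))

def nearest(node, target, depth=0, best=None):
    """Exact nearest-neighbor query with branch-and-bound pruning."""
    if node is None:
        return best
    point, left, right = node
    if best is None or dist2(point, target) < dist2(best, target):
        best = point
    axis = depth % 2
    diff = target[axis] - point[axis]
    near, far = (left, right) if diff < 0 else (right, left)
    best = nearest(near, target, depth + 1, best)
    # Only descend the far side if the splitting plane could hide a
    # closer point; relaxing this test is what makes the search "ANN".
    if diff * diff < dist2(best, target):
        best = nearest(far, target, depth + 1, best)
    return best
```

Matching then amounts to building one tree per image's keypoints and querying it with each keypoint of the paired image.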
Any details available on how they're speeding up each of these steps? The only research publications I see are the original work and the new 'finding paths' paper presented last week at SIGGRAPH.
Yeah, the papers only discuss how they combine known algorithms[1]. They could be speeding up the process using any combination of these:
1. Using the order of the images to reduce matching from O(N^2) to something closer to O(N) since people are more likely to take pictures sequentially instead of randomly. I'm going to run some tests next time I boot into Windows.
2. Running keypoint detection as the images are uploaded and potentially slowing down the upload speed to make it appear more seamless.
3. New keypoint detection algorithm for constant-time speedup. SURF, for instance, is about 5x faster than SIFT.
4. Using GPUs. The keypoint detection and matching steps are perfect for running on parallel GPUs. A couple of recent papers showed 10-15x speedups for keypoint detection and for kd-tree construction and search on GPUs, compared to similarly priced CPUs.
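Idea 1 is easy to sketch: if photos arrive roughly in shooting order, match each image only against a sliding window of its predecessors instead of against every earlier image, cutting the pair count from O(N^2) to O(N*w). The function and parameter names below are my own illustration, not anything from Photosynth.

```python
# Sketch of idea 1: restrict pairwise matching to a temporal window.
# Assumes upload order roughly follows shooting order.

def candidate_pairs(n_images, window=5):
    """Yield (i, j) image-index pairs with j - window <= i < j."""
    for j in range(n_images):
        for i in range(max(0, j - window), j):
            yield (i, j)
```

For 100 images and a window of 5 this produces 485 pairs instead of the full 4,950, roughly a 10x reduction that grows linearly with N. A real system would still need a loop-closure or global-retrieval pass to catch matches outside the window.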
Any other ideas?
[1] There's also a new paper called Skeletal graphs for efficient structure from motion that discusses a neat graph algorithm for the SfM step.