Killer Apps & Life On Mars – Part 1
Going off-piste again this week, kinda. Randomly, I wondered just how easy it would be to capture and process stereo 3D timelapse with open-source tools. Having mostly ignored 3D, and being largely unimpressed by its cinema application, I was still wondering what the killer application for stereoscopic photography might be.
These days, a number of single-lens cameras have a “3D” function which stitches together a number of exposures into a navigable image that allows the point-of-view (POV) to be changed interactively. To my mind this is not really 3D; it’s more like the moving lenticular images we used to collect in the 1970s. What I am interested in is true stereoscopic imaging, which requires genuine binocular vision to give a convincing effect of depth.
Tim Dashwood has written an excellent introduction to stereo/3D photography that I do not intend to duplicate; what I am going to cover is the specifics of doing it with CHDK, FFmpeg and ImageMagick.
http://www.dashwood3d.com/blog/beginners-guide-to-shooting-stereoscopic-3d/
This is just an introductory blog post and I’m not going to get to any workflows just yet.
Stereo imaging has been around since 1840, almost as long as photography itself, and here are some amazing stereographs captured during the American Civil War.
http://www.theatlantic.com/infocus/2012/02/the-civil-war-part-3-the-stereographs/100243/
Some of these give away the fact that they were shot with one camera in two positions.
I was introduced to stereoscopic 3D in the 1970s via my sister’s View-Master, but this was not much more sophisticated than the widely available stereo viewers from the nineteenth and early twentieth centuries.
The documented optimum lens separation for human vision is 65mm, give or take, and my eyes are pretty much exactly that distance apart. However, according to various sources, larger separations work fine for some subjects, particularly landscapes and distant objects. I will be doing some tests with subjects of different sizes.
For reference, I am not interested in red/cyan anaglyph-type images, because of the weird colour effects, but I am going to use the technique for the sake of being able to display results on vanilla monitors. Not everyone can do it, but I can also manage the cross-eyed right-left trick, although it’s not practical for any length of time.
Recently I bought a 3D LED TV that supports passive polarised glasses, having seen an excellent demo of a Fujifilm W3 3D camera displaying media on LG monitors at a photography trade show back in 2010.
There are some excellent existing resources for shooting stereographs with CHDK, including the StereoDataMaker site and Gentles Ltd, and I’ll add more info about other resources soon.
I have mixed feelings about 3D and am not sure just what I really want to do with it, but how hard can it be? I was not sure how to prepare the media and assumed it would be more difficult than it is. It turns out processing pairs of images is very easy in ImageMagick, and as far as polarised-light monitors go, all the cleverness is done in the screen, so you just have to give it two images side-by-side.
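As a minimal sketch of what I mean (the filenames are just placeholders), ImageMagick’s +append joins a left/right pair into a single side-by-side frame:

    # join a stereo pair horizontally: left frame first, then right
    convert left.jpg right.jpg +append sbs.jpg

For a timelapse, this would simply be looped over each numbered pair of frames before handing the results to the video stage.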
It never occurred to me that it would be so easy.
So, side-by-side is my eventual destination format, but I will use red/cyan anaglyph for convenience and online dissemination.
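One way to do that in ImageMagick, again just a sketch with placeholder filenames, is to copy the red channel of the left image over the right image, so the result carries red from the left view and green/blue from the right:

    # red channel taken from the left image, green and blue kept from the right
    convert right.jpg left.jpg -compose CopyRed -composite anaglyph.jpg

Through red/cyan glasses the left eye then sees only the red channel and the right eye the rest, which is also where those weird colour effects come from.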
Anyway, I’m running out of time, but I might update this post later. In the meantime here are a few links, and I’ll post more soon with some code and practical tests.
Stereoscopy.com
http://www.stereoscopy.com/gallery/
History of Stereography
http://www.arts.rpi.edu/~ruiz/stereo_history/text/historystereog.html