Archive

Tag Archives: FFMpeg

3d stereo experiments.009

Yogi from Mars 3D

Going off-piste again this week, kinda. Randomly, I wondered just how easy it would be to capture and process stereo 3D timelapse with open-source tools. Having mostly ignored 3D, and being largely unimpressed by its cinema application, I was still wondering what the killer application for stereoscopic photography might be.

These days, a number of single lens cameras have a “3D” function which stitches together a number of exposures into a navigable image that allows the point-of-view (POV) to be changed, interactively. To my mind this is not really 3D, it’s like the moving lenticular images we used to collect in the 1970s. What I am interested in is true stereoscopic imaging, which requires genuine binocular vision to give a convincing effect of depth.

Tim Dashwood has written an excellent introduction to stereo/3D photography that I do not intend to duplicate, but what I am going to cover is the specifics of doing it with CHDK, FFMpeg and ImageMagick.
http://www.dashwood3d.com/blog/beginners-guide-to-shooting-stereoscopic-3d/

This is just an introductory blog post and I’m not going to get to any workflows just yet.

Stereo imaging has been around since 1840, almost as long as photography itself, and here are some amazing stereographs captured during the American Civil War.
http://www.theatlantic.com/infocus/2012/02/the-civil-war-part-3-the-stereographs/100243/

Some of these give away the fact that they were shot with one camera in two positions.

Landscape

I was introduced to stereoscopic 3D in the 1970s via my sister’s View-Master, but this was not much more sophisticated than the widely available stereo viewers from the nineteenth and early twentieth centuries.

Stereoscoop_VM

The documented optimum lens separation for human vision is 65mm, give or take, and my eyes are pretty much exactly that distance apart. However, according to various sources, larger separations work fine for some subjects, such as landscapes and distant objects. I will be doing some tests with different sized subjects.

For reference, I am not interested in red/cyan anaglyph-type images, because of the weird colour effects, but I am going to use the technique for the sake of being able to display them on vanilla monitors. Not everyone can do it, but I can also do the cross-eyed right-left trick too, although it’s not practical for any length of time.

Recently, I bought a 3D LED TV that supports passive polarised glasses, and back in 2010 I saw an excellent demo of a Fujifilm W3 3D camera displaying media on LG monitors at a photography trade show.

There are some excellent existing resources for shooting stereographs with CHDK, including the StereoDataMaker site, and Gentles Ltd, and I’ll add more info about other resources soon.

I have mixed feelings about 3D and am not sure just what I really want to do with it, but how hard can it be? I was not sure how to prepare the media and assumed it would be more difficult than it is. Turns out processing pairs of images is very easy in ImageMagick, and as far as the polarised light monitors go, all the cleverness is done in the screen so you just have to give it 2 images side-by-side.

It never occurred to me that it would be so easy.

So, side-by-side is my eventual destination format, but using red/cyan anaglyph for convenience and online dissemination.
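Since processing pairs turns out to be easy in ImageMagick, here is a quick sketch of both destination formats, assuming ImageMagick is installed. Tiny solid-colour frames stand in for a real left/right camera pair, and the eye order in the anaglyph step may need swapping depending on your glasses:

```shell
#!/bin/bash
set -e
cd "$(mktemp -d)"

# Stand-ins for a real left/right pair (assumption: real frames would
# come from the two cameras).
convert -size 8x8 xc:gray40 left.jpg
convert -size 8x8 xc:gray60 right.jpg

# Side-by-side: the two frames appended horizontally; the polarised
# screen's electronics do the rest.
convert left.jpg right.jpg +append sbs.jpg

# Red/cyan anaglyph for vanilla monitors, via composite's -stereo mode.
composite -stereo +0+0 right.jpg left.jpg anaglyph.jpg

identify -format "%f %wx%h\n" sbs.jpg anaglyph.jpg
```

The +append operator just butts the two frames together horizontally, so the side-by-side image ends up twice the width of a single frame.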

Anyway, I’m running out of time, but I might update this post later. In the meantime here are a few links, and I’ll post more soon with some code and practical tests.

Stereoscopy.com
http://www.stereoscopy.com/gallery/

History of Stereography
http://www.arts.rpi.edu/~ruiz/stereo_history/text/historystereog.html


Well, FFMpeg is a monster. Getting it to work with some default settings was easy, but the number of parameters is breathtaking. I work in professional video on a daily basis but can hardly understand any of the plethora of options. There is extensive documentation on ffmpeg.org but most of it is meaningless unless you already know what it means.

Below is the basic structure of a Bash script. It requires the #!/bin/bash shebang on the first line.

#!/bin/bash
clear
echo "Hello world."

This script clears the screen and prints the words “Hello world”.

An excellent beginners’ tutorial is here and I’m not going to duplicate it.
http://linuxcommand.org/wss0010.php

This process reminds me of teaching myself to program back in the 1980s, in a good way and a bad way. One frustratingly fatal error after another is a real test of application, and that’s the discipline. Teaching yourself may be very time-consuming, but it is also very educational.
After some head scratching I got this to work. It’s a bit of a Frankenstein lash-up with bits of code I got from various places, but also with a little bit of my own intuition. I discovered by trial and error how to change the target directory on-the-fly within the script and this made it much simpler.

It’s still a bit clunky and needs the download-from-camera stage adding but it works. I decided to include absolute paths to the work directories and point the script at them in turn, and so far I am running the script using the command ./myscript from the current directory.
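For the ./myscript invocation to work, the script file needs execute permission first. A minimal sketch (the filename is just an example):

```shell
#!/bin/bash
# Write a tiny script, grant execute permission, then run it from the
# current directory with the ./ prefix.
cd "$(mktemp -d)"
printf '#!/bin/bash\necho "It works."\n' > myscript
chmod +x myscript
./myscript
```

Without the chmod step the shell refuses to run it with “Permission denied”.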

#!/bin/bash
cd /home/richard/Desktop/RBTest/newtest
x=1; for i in $(ls -t -r *JPG); do counter=$(printf %04d $x); ln "$i" /home/richard/Desktop/RBTest/testtemp/img_"$counter".jpg; x=$(($x+1)); done
cd /home/richard/Desktop/RBTest/testtemp
ffmpeg -r 25 -i img_%04d.jpg -s 640x480 -qscale 1 -vcodec mjpeg movie.avi

The next bit I need to add is the download stage using gPhoto2 to get the images from the camera, so that I can have a single workflow to download and assemble the thousands of image files in separate directories into a single usable video file.
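Here is a hedged sketch of how that single workflow might hang together. The gPhoto2 commands are real but need a camera attached, so the download and encode stages stay commented out and empty stand-in files are used instead; the directory names are just examples:

```shell
#!/bin/bash
# Sketch of the eventual one-shot workflow; directory names are just
# examples, and two empty stand-in files replace a real camera dump.
SRC=incoming   # where the camera files would land
TMP=linked     # sequentially numbered hard links
mkdir -p "$SRC" "$TMP"
touch "$SRC/IMG_0001.JPG" "$SRC/IMG_0002.JPG"   # stand-ins for this dry run

# Stage 1: download. These gPhoto2 commands need a camera attached,
# so they stay commented out in this sketch:
# gphoto2 --auto-detect
# (cd "$SRC" && gphoto2 --get-all-files)

# Stage 2: hard-link into an unbroken img_0001.jpg... sequence in
# shooting (creation) order, as in the script above.
x=1
for i in $(cd "$SRC" && ls -t -r *.JPG); do
  counter=$(printf %04d $x)
  ln "$SRC/$i" "$TMP/img_$counter.jpg"
  x=$(($x+1))
done

# Stage 3: encode, exactly as before (commented out so the sketch runs
# without FFmpeg):
# (cd "$TMP" && ffmpeg -r 25 -i img_%04d.jpg -s 640x480 -qscale 1 -vcodec mjpeg movie.avi)
```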

Some of this coding documentation needs expanding and tidying up, but stay tuned for more soon.

Rockingham Street Q-Park car park, Sheffield, UK.

After a partially voluntary and partially involuntary hiatus, I am back in flight. After significant work commitments, then getting married with a honeymoon in Barcelona, ES, I am back in Sheffield, UK, the greenest city in England. As this project is currently unfunded, and solo, I’m updating things as-and-when.

I promised you some performance tests and here they are. I’m perfectly satisfied with the installations of Ubuntu Linux 8.04.1 on blue-and-white G3s and 10.04 on grey-and-white Macs. As suspected, the GUI is a major burden on the system, and opening a folder of about 3,000 jpeg images can take a full 5 minutes because of the overhead of loading all the thumbnails.

The solution is simple. Don’t.

For my purposes this is no problem. I want to use these machines as workhorses and don’t need the GUI.

I’m going to be using FFmpeg to convert and compress a folder full of images into a movie. However, I came across a well-documented problem immediately: FFmpeg will fail if the first file is not numbered xxxxxx1, or if there is a gap in the numbering.

Fortunately, I easily found a Bash script to sort this out, and I found it here:

http://stackoverflow.com/questions/2829113/ffmpeg-create-a-video-from-images

x=1; for i in *jpg; do counter=$(printf %03d $x); ln "$i" /tmp/img"$counter".jpg; x=$(($x+1)); done

(Also see the revision below).

This works, but note that the format of the filename extensions is case-sensitive. Helpfully (not), the script just fails if you do not get this correct. It links all the images in a folder into a sequence beginning at 1, so if your sequence does not start there, or is missing one or more images in the middle, it will no longer cause FFmpeg to fail. I’ll deconstruct this script in a later blog post.
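As a preview of that deconstruction, here is the one-liner unrolled with comments. For safety, this version runs in scratch directories rather than linking into /tmp:

```shell
#!/bin/bash
# The StackOverflow one-liner, unrolled. It hard-links every jpg into a
# numbered img001.jpg, img002.jpg... sequence so FFmpeg sees an
# unbroken, 1-based run of files.
WORK=$(mktemp -d); OUT=$(mktemp -d)
cd "$WORK"
touch file7.jpg file9.jpg           # stand-ins with a gap in the numbering

x=1                                 # running counter, starts at 1
for i in *jpg; do                   # every file ending in jpg (case-sensitive)
  counter=$(printf %03d $x)         # zero-pad: 1 -> 001
  ln "$i" "$OUT"/img"$counter".jpg  # hard link, not a copy
  x=$(($x+1))                       # next number
done
ls "$OUT"                           # img001.jpg img002.jpg - gap closed
```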

Even more usefully, Linux does not actually duplicate the binary files: ln creates hard links, which can be safely deleted afterwards without touching the originals.
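Those hard links are just extra directory entries pointing at the same data on disk. A quick way to verify this, using Bash’s -ef test:

```shell
#!/bin/bash
# ln without -s makes a hard link: a second directory entry pointing at
# the same data on disk, so no extra space is consumed.
cd "$(mktemp -d)"
echo "pixels" > original.jpg
ln original.jpg link.jpg

# The -ef test is true when two names refer to the same file (inode).
if [ original.jpg -ef link.jpg ]; then
  echo "same file on disk"
fi

# Deleting the link leaves the original untouched.
rm link.jpg
ls          # original.jpg
```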

These tests were done on a 1 GHz PowerMac G4 with 640 MB RAM and a 21 GB HD.

Rename 2933 files – 23 seconds

Using FFmpeg:

Encode jpeg files at 1600×1200 to mjpeg – approx 9 min 30 secs – 192.4 MB
Encode jpeg files at 1600×1200 to mjpeg – approx 8 min 11 secs – 192.4 MB
Encode jpeg files at 1600×1200 to mjpeg – approx 8 min 12 secs – 192.4 MB

Encode jpeg files at 640×480 to mjpeg AVI – approx 10 min 15 secs – 40.4 MB – atrocious quality
Encode jpeg files at 640×480 to mjpeg AVI – forgot to time it but similar – 199.5 MB – quality set to 1

This last set of tests was done whilst messing about within the GUI. I don’t really know enough about what is going on under the hood, but it seems that things slow down a lot and this might be due to caching thumbnails or, err, whatever. After a restart performance increased dramatically.

Rename 2,933 items – 1’05’’
Rename 2,933 items – 1’12’’
Rename 2,933 items – 0’20’’ (after restart)

For my purposes the performance seems acceptable at this stage.

One other revision I had to make was to compensate for the original file numbering restarting. Canon Powershot cameras create jpeg images with filenames of the format “IMG_xxxx.JPG” and shooting timelapse with them quickly reaches numbers in excess of 9,999. For consumer-level snap cameras, this number is probably well in excess of a normal user’s needs but I can easily shoot that many in one day. Consequently, the image numbers cannot solely be relied upon to reference the images in the correct chronological order, hence this revision which sorts them into creation order before renaming.

x=1; for i in $(ls -t -r *JPG); do counter=$(printf %04d $x); ln "$i" /Users/richardbolam/Desktop/testout/img_"$counter".jpg; x=$(($x+1)); done
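To see why the ls -t -r in the revision fixes the wrap-around, here is a small demonstration with backdated stand-in files:

```shell
#!/bin/bash
# Demo of why ls -t -r gives chronological rather than alphabetical
# order: here IMG_0001.JPG is the *newest* file, shot after the
# camera's counter wrapped past 9999.
cd "$(mktemp -d)"
touch -t 202001010900 IMG_9998.JPG   # shot first
touch -t 202001010901 IMG_9999.JPG
touch -t 202001010902 IMG_0001.JPG   # counter wrapped: newest file

ls *.JPG          # alphabetical: IMG_0001 first - wrong order
ls -t -r *.JPG    # -t sorts newest first, -r reverses: oldest first
```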

I’m still a bit of a noob when it comes to Bash scripting, but I will write another post soon that disassembles these scripts, partly for your benefit, but also to help me learn more about what I’m doing. This script was revised with the invaluable help of the technical staff at Access Space. Stay tuned.

access-space.org

This is how I imagine myself, but I think I would stand somewhere else. (Photo: archive.org)

Q. Why are we doing this?

Flying Monkey TV was conceived as a collaborative documentary filmmaking project using low-impact methods and available technology.

A. Because we can. Because it’s a great idea. Because we want to.

We have had a small amount of funding in 2010 from Arts Council England, via The Culture Company,  but the project is currently unfunded. However, I decided to embark upon the development stage of getting the FMTV software up and running so that we can use it to do more critical content tests.
http://www.artscouncil.org.uk/
http://www.theculturecompany.co.uk/

Having knocked about at Access Space for many years, I am acutely aware of how much redundant technology is lying around, unused or at least underused. So, I decided to try and press some of these sleeping monkeys into service in order to get more horsepower(?) for the unenviable task of post-processing.
http://access-space.org

As I keep saying, shooting is easy, and that’s the problem. I am developing ways of using old computers to post-process the hundreds of gigabytes of timelapse media that I can capture on the two-dozen or so CHDK-enabled Canon Powershot cameras that I have.
http://CHDK.wikia.com

My aim is to learn Bash shell commands and programming on Linux-based machines so that we can create a suite of software tools to compile, scale, crop etc. image files into movie files. Some of these tools may exist already, and our aim is to use what exists, and create what doesn’t.
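As a flavour of the kind of tool I mean, here is a hedged sketch of a scale-and-crop step in FFmpeg. It uses FFmpeg’s built-in test pattern in place of real timelapse frames, and the sizes are arbitrary examples:

```shell
#!/bin/bash
set -e
cd "$(mktemp -d)"

# Generate 10 test-pattern frames as stand-ins for a folder of
# timelapse jpegs.
ffmpeg -loglevel error -f lavfi -i testsrc=size=640x480:rate=25 \
       -frames:v 10 img_%04d.jpg

# Scale to half size, crop a centred 256x192 window, and encode to
# MJPEG as in the earlier tests.
ffmpeg -loglevel error -r 25 -i img_%04d.jpg \
       -vf "scale=320:240,crop=256:192" \
       -vcodec mjpeg -qscale 1 movie.avi
```

The crop filter centres its window by default, so no offsets are needed here.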

Q. Why run Linux on a Mac?

I do not necessarily need OS X at all for my purposes, but it’s very convenient to have a mature and stable GUI on any machine. After some research and advice from Access Space, it seems the way forward is the freely available Linux tools gPhoto2 (for accessing the on-camera files), FFMpeg (for assembling images into movies) and, amongst others, ImageMagick (for manipulating images).
http://gphoto.org/
http://ffmpeg.org/
http://www.imagemagick.org/

A. Because we’ve got some, and they’re not doing anything else.

Many of these libraries are also available for OS X Macs via the MacPorts project, and I will be doing some performance comparison tests to see which route is more efficient.
http://www.macports.org/

Q. So if I can run Shell scripts on OS X anyway, why use Linux at all?

Access Space has a very strict policy of using free, open-source and legal software. I am not as philosophically pure, but I like to remain legal. At home all my Macs run on a retail multi-license of Snow Leopard, but as far as I am aware, I cannot buy earlier versions of OS X, and in the absence of the original install disks, Linux is the only legal choice.

Actually, installing Ubuntu on a PowerMac G4 was pretty trivial, but not at first.

The first attempt was using the Debian 6.0.4 PowerPC net install image from a CD boot disk. This worked fine until reboot, and then left me with a black screen. The install seemed to have gone fine but I was unable to get it to drive the monitor correctly once the GUI started. After 4 hours of clutching at straws and unsuccessful editing of the Xorg file I decided to try another approach.

With the help of Access Space, the second attempt was much more successful, using a downloaded CD image of Ubuntu Linux 10.04 Lucid Lynx, and it worked first time. This was on a 450 MHz PowerMac G4 with 256 MB RAM, although it did feel a bit sluggish, probably due to the low memory.
http://cdimage.ubuntu.com/ports/releases/lucid/release/

As an additional test, I installed Ubuntu 10.10 on another G4 and then followed the upgrade path suggested by the installer to 11.04 Natty Narwhal, but this led to hundreds of error messages during install and failure to boot. So, for the time being I’m sticking to 10.04 on the PowerMac G4.

One other G4 failed to boot after install but I suspect it is a hardware error with the hard disk, rather than the software.

I also created a dual boot on my first-generation, and distinctly cranky, 1.8 GHz Core Duo MacBook, with OS X 10.6 Snow Leopard on the Mac partition and Ubuntu 11.10 Oneiric Ocelot on the other. The partition was created using the Boot Camp utility and the installation was performed from an Ubuntu CD boot disk.
http://releases.ubuntu.com/11.10/

I also installed the rEFIt boot menu as detailed in this how-to guide.
http://scottlinux.com/2011/06/14/how-to-dual-boot-os-x-and-linux/

I also installed MacPorts in order to use the same libraries, but it is unreliable on this machine. I believe the knackered old MacBook itself is the problem, as I have MacPorts working fine on other machines.

A. Macs are relatively expensive and my purpose is utility, so that I can use the Macs that I’ve got, but also press PCs into service as I stumble over them.