
Raspberry Pis with camera modules, ready for testing.

Okay, it’s a whole year since my last post on this blog. I’ve been busy.

Flying Monkey TV was selected for an Arts Council England (ACE) funded mentoring scheme in 2010, via The Culture Company’s Artimelt programme, aimed at incubating early-stage projects that might then go on to greater things.

Although the project has had some funding early on, it was not much and nothing ongoing. We also applied for funding from Umbro Industries, 4IP (now defunct) and Nesta’s Digital R&D programme, amongst others, but no joy so far. I suspect I am a whole generation too old for Umbro and we didn’t seem to tick the right boxes for Nesta. In the meantime, I’ve decided to stop applying for stuff and start applying stuff, if you see what I mean. Life is short and funding applications are a very onerous task best left to professional administrators.

At least to give ACE their money’s worth, I have been continuing to develop techniques and accumulate equipment to progress the project despite its lack of funding. The downside is that it gets done when it gets done. Anyway, out of my own pocket I’ve just acquired a little testbed of equipment that I can use to test my ideas.

If you were reading my Flying Monkey TV Missing Link blog from a couple of years ago, you will know that some of my technical research of the time, thanks to a small bursary from Access Space, Sheffield UK, was rendered entirely pointless by the launch of the Raspberry Pi platform. It’s taken me a while to get around to it, but late last year I got my first Pi, and this week I’ve accumulated three more, all four with camera modules. I have only just got together all the various bits and bobs to make them into a functional network, but stay tuned for more shenanigans.

I am also testing battery-powered setups and solar charging, so expect a few passes close to the sun.


Yogi from Mars 3D

Going off-piste again this week, kinda. Randomly, I wondered just how easy it would be to capture and process stereo 3D timelapse with open-source tools. Having mostly ignored 3D, and being largely unimpressed by its cinema application, I was still wondering what the killer application for stereoscopic photography might be.

These days, a number of single lens cameras have a “3D” function which stitches together a number of exposures into a navigable image that allows the point-of-view (POV) to be changed, interactively. To my mind this is not really 3D, it’s like the moving lenticular images we used to collect in the 1970s. What I am interested in is true stereoscopic imaging, which requires genuine binocular vision to give a convincing effect of depth.

Tim Dashwood has written an excellent introduction to stereo/3D photography that I do not intend to duplicate, but what I am going to cover is the specifics of doing it with CHDK, FFmpeg and ImageMagick.
http://www.dashwood3d.com/blog/beginners-guide-to-shooting-stereoscopic-3d/

This is just an introductory blog post and I’m not going to get to any workflows just yet.

Stereo imaging has been around since 1840, almost as long as photography itself, and here are some amazing stereographs captured during the American Civil War.
http://www.theatlantic.com/infocus/2012/02/the-civil-war-part-3-the-stereographs/100243/

Some of these give away the fact that they were shot with one camera in two positions.


I was introduced to stereoscopic 3D in the 1970s via my sister’s View-Master, but this was not much more sophisticated than the widely available stereo viewers from the nineteenth and early twentieth centuries.


The documented optimum lens separation for human vision is 65mm, give or take, and my eyes are pretty much exactly that distance apart. However, according to various sources, larger separations work fine for some subjects, particularly landscapes and distant objects. I will be doing some tests with different sized subjects.

For reference, I am not really interested in red/cyan anaglyph-type images, because of the weird colour effects, but I am going to use the technique for the sake of being able to display results on vanilla monitors. Not everyone can do it, but I can also do the cross-eyed right-left trick, although it’s not practical for any length of time.

Recently I bought a 3D LED TV that supports passive polarised glasses, and back in 2010 I saw an excellent demo of a Fujifilm W3 3D camera displaying media on LG monitors at a photography trade show.

There are some excellent existing resources for shooting stereographs with CHDK, including the StereoDataMaker site, and Gentles Ltd, and I’ll add more info about other resources soon.

I have mixed feelings about 3D and am not sure just what I really want to do with it, but how hard can it be? I was not sure how to prepare the media and assumed it would be more difficult than it is. It turns out that processing pairs of images is very easy in ImageMagick and, as far as the polarised-light monitors go, all the cleverness is done in the screen, so you just have to give it two images side-by-side.

It never occurred to me that it would be so easy.

So, side-by-side is my eventual destination format, with red/cyan anaglyph for convenience and online dissemination.
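To make that concrete, here is a minimal sketch of both formats in ImageMagick, assuming a stereo pair saved as left.jpg and right.jpg (placeholder names); the channel juggling for the anaglyph may need tweaking for your ImageMagick version:

# side-by-side pair for a passive polarised 3D monitor
convert left.jpg right.jpg +append sidebyside.jpg

# red/cyan anaglyph: red channel from the left eye, green and blue from the right
convert \( left.jpg -channel R -separate \) \
\( right.jpg -channel G -separate \) \
\( right.jpg -channel B -separate \) \
-channel RGB -combine anaglyph.jpg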

Anyway, I’m running out of time, but I might update this post later. In the meantime here are a few links, and I’ll post more soon with some code and practical tests.

Stereoscopy.com
http://www.stereoscopy.com/gallery/

History of Stereography
http://www.arts.rpi.edu/~ruiz/stereo_history/text/historystereog.html


Having been flying around the country over the past couple of weeks, I’ve shot a lot of timelapse out of the front windscreen. Just because I can.

It’s largely thanks to CHDK and I have managed to record myself getting lost on routes between Sheffield, Lancaster, Leeds, Newcastle, Nottingham and York. I have a few plans for this media, but one journey is already online here:

This was quite a successful Flying Monkey TV experiment and I had it edited and online within three hours of getting home.

These journeys were mostly shot on a Canon PowerShot A560, timelapse-enabled with CHDK and mounted on the inside of the front windscreen with a suction mount. I mounted the camera hanging from the top of the screen, upside down. This meant that it was in line with the passenger-side roof pillar and hence did not obscure my view.

I don’t use the auto-rotate feature found in most cameras because it sometimes gets confused and can give you a few incorrectly rotated frames here and there. Consequently, shooting this way, I end up with an upside-down video that then needs rotating.


I used to drop them into Final Cut Pro, render them and export them to a new movie file, which works fine, but I think it’s time to write a Bash script to do it.

No sooner said than done! Well, not quite, but it was much easier than I imagined. As usual, Google managed to find some very helpful resources for me to cannibalise. Writing the code is much faster than documenting it in a usable blog post.

I started by reminding myself how to create a basic for loop (detail included here for other noobs). Here is a bit of basic code which lists the files in the present working directory ending in “.JPG” (don’t forget this is case-sensitive) and echoes them to the screen. The semicolons separate the statements and the “done” terminates the loop.

for i in $(ls *.JPG); do echo $i; done

For the next step, instead of just listing the file names to the screen, I added the ImageMagick step to rotate the image and write it over the original.

for i in $(ls *.JPG); do convert $i -rotate 180 $i; done

As you might have already gathered, I like some progress feedback so added an echo with the file count and file name.

for i in $(ls *.JPG); do convert $i -rotate 180 $i; x=$(($x+1)); echo "$i $x"; done

However, a current file number is only of use if you know how many more there are to go. A bit of googling revealed this forum thread and the code:

ls -l | wc -l

This lists all the files in the current directory and then pipes the listing to wc to give an integer count of the lines. (Note that a plain ls -l also prints a “total” header line, so the count comes out one high; giving it a filename pattern, as below, avoids that.) The filter isn’t necessary if this is just a temporary folder that’s cleared between uses, but if you want to restrict the file types you can add a wildcard search like this:

ls -l *.JPG | wc -l

I found information about the two commands on linux.about.com and a forum thread which combines the two on unix.com.
http://linux.about.com/od/commands/l/blcmdl1_ls.htm
http://linux.about.com/library/cmd/blcmdl1_wc.htm
http://www.unix.com/unix-dummies-questions-answers/36490-ls-command-listing-number-files.html

So my final piece of code (for this iteration at least) is here:

x=0; c=$(ls -l *.JPG | wc -l); for i in $(ls *.JPG); do convert $i -rotate 180 $i; x=$(($x+1)); echo "$x /$c $i"; done

This sets x to 0 and c to the number of files with filenames ending in “.JPG” in the current directory, then rotates each one of them by 180 degrees, overwrites the original file and echoes the file number, the file count and the filename. It’s pretty basic and I’ll roll it into a script soon.

There may well be better ways of doing this, but it worked well as an exercise to reinforce my learning.
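For what it’s worth, here is how the one-liner might look rolled into script form; the same behaviour, just spread out and with the usual #!/bin/bash header:

#!/bin/bash
# rotate every .JPG in the current directory by 180 degrees, in place
x=0
c=$(ls -l *.JPG | wc -l)
for i in $(ls *.JPG)
do
convert $i -rotate 180 $i
x=$(($x+1))
echo "$x /$c $i"
done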

I’m going a bit off-piste with this post, looking at the hidden-in-plain-view potential of some of the libraries that are very familiar to Linux geeks but pretty much unknown to the outside world.

Although I talk about the past a lot, I’m not nostalgic. I don’t miss the frustrations of the past. I just like to remind myself how good things are now. By the way, I am by no means a computer historian, and my opinions and experience are subjective. If I post anything that is factually incorrect, please let me know.

I realise some of this might be bleeding obvious to the Linux world, but for those of us cloistered for years in the walled garden of the Mac OS X GUI, it’s quite a revelation.

As I’ve said before, I’m a lapsed programmer and used to write commercial software using framework applications, primarily FileMaker Pro 3.0. FileMaker has developed into a very sophisticated rapid application development (RAD) platform, now that FileMaker Inc have got around to including the features that we developers were clamouring for in the 1990s.

Given a choice, I am very happy tinkering with nice GUI-based visual programming / scripting tools, but the compelling reason for me to get my hands dirty at the command line is the fact that nothing gets in the way.

One of my most enduring love / hate relationships has been with HyperCard / SuperCard. I remember being excited, almost to the point of peeing myself, when I first got wind of HyperCard. My programming days started with BBC BASIC, then Spectrum BASIC, then the woefully under-implemented Commodore 64 BASIC. After that things just got worse and worse, and Atari ST BASIC was absolutely useless. I can’t even remember if there was an implementation for the Amiga, but I did try CanDo, which was promising but way ahead of its time. As the years passed it became harder and harder to get to a command line, and I rather missed the direct simplicity of BBC or Spectrum BASIC, which were at least broadly complete implementations that you could actually do stuff with.

Update: I completely forgot about STOS on the ST and AMOS on the Amiga, which were both brilliant.
http://en.wikipedia.org/wiki/STOS_BASIC
http://en.wikipedia.org/wiki/AMOS_(programming_language)

HyperCard was a revelation to me: it lived on the desktop, you could easily create GUI-based applications with it, and it had a pretty decent plain-English programming language. You could actually do stuff with it.
http://en.wikipedia.org/wiki/HyperCard

As desktop computers “improved”, programming became more and more remote from the user, and there was a very frustrating period in the late 80s and early 90s where application software was not mature enough, and programming not accessible enough, to plug all the gaps in our productivity.

Fortunately those days are over.

There were many good things about HyperCard, but it was lacking some fundamental functions, and for me they were colour and structured graphics. HyperCard was strictly monochrome and bitmap-only, and although Apple eventually did include plugins that supported colour graphics, it was an afterthought and not adequately implemented. Apple neglected HyperCard for years and it is now a minority-interest tool. I still use it for programming on old compact Macs as it is one of the few programming tools that will work on machines with no more than 4 MB of RAM.

Silicon Beach SuperCard seemed to be the obvious successor, the so-called “HyperCard on steroids”. Its own language, SuperTalk, was an extended and mostly compatible development of HyperCard’s HyperTalk, and it supported 8 bit colour and script-controllable vector graphics.
http://en.wikipedia.org/wiki/Silicon_Beach_Software

SuperCard, whilst not neglected in the same way, passed from one owner to another for years, developing only gradually. I cannot tell you just how closely related it is to the original, but SuperCard appears to have evolved into something called LiveCode by Runtime Revolution.
http://www.runrev.com/

LiveCode deploys on a number of platforms including Windows, Linux, OS X, iOS and Android, and looks very promising. However, it seems to be lacking one of the fundamental features that I need, and that is to append single images to a movie file. Please correct me if I’m wrong, but I can’t find any mention of it, although I’m still looking into it.

And this is my big frustration when using very-high-level development tools. I tried using Apple Automator and QuickTime Pro player to assemble digital stills into movie files. Again, maybe it’s there but I couldn’t find it without having to operate the menu items in the player, and I really don’t want to go back to the days of using marionette software like Keyquencer (remember that?) or similar to operate programs via their menus. In my experience, automated clicking of buttons and selecting menu items is just not reliable or fast enough.

Adobe Photoshop’s Actions is a very powerful tool but it’s quite slow (although things may well have changed since CS3).

And this is when we get back to Linux, GIMP and ImageMagick. Whilst I have been banging my head against a brick wall for years, looking for the ideal development platform, Linux tools and libraries have been quietly maturing under my nose. I recently looked into using ImageMagick, which can do, well, everything. For the uninitiated, ImageMagick is a “software suite to create, edit, compose, or convert bitmap images” and can also procedurally create images by manipulating graphics or drawing shapes.

There are some very coherent and complete help files, written by Anthony Thyssen, and they seem to go on forever, documenting more and more features available via the command line. And the command-line bit is the real killer, because it means I can call just the single function I want and add it to a Bash script.
http://www.imagemagick.org/Usage/

My own interest is in batch-converting, montaging of multiple images and format conversion of data sets, often comprising tens of thousands of images. I have managed to use gPhoto2 and FFmpeg to assemble the images into movies, and ImageMagick / GIMP will help me to manipulate the images if need be.
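As a small taste of the montaging side, ImageMagick’s montage tool will tile frames into a single image; a minimal sketch with placeholder filenames:

# tile four frames into a 2x2 contact sheet with no padding between tiles
montage img_0001.jpg img_0002.jpg img_0003.jpg img_0004.jpg -tile 2x2 -geometry +0+0 contact.jpg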

More soon…

The script here is only a slightly cleaned-up version of the previous one, but with one major addition: a timer.

It may be just that I’m new to Linux, but getting the syntax correct was very unintuitive. It took a lot of trial-and-error to get the spaces in the right place, and it reminds me a lot of programming in the 1980s, where you just get a “syntax error” message and nothing else.

This timer function uses the classic technique of storing the system time at the start of the script and again once the script has finished, then subtracting the former from the latter; the difference is the time taken.

This only returns the number of seconds as an integer, and a result of “1350575350” might as well be in grains of sand. So, I have reformatted the output into more human-readable minutes and seconds by dividing the number of seconds elapsed by 60 to get the minutes and deriving the modulus (what’s left over) for the seconds. I could do a similar thing with the value 3600 if I wanted to format it in hours, minutes and seconds, as sketched below.
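For example, the same arithmetic extended to hours might look like this (an untested sketch along the lines of the script below):

fulltimehrs=$(expr $fulltime / 3600)
fulltimemins=$(expr $fulltime % 3600 / 60)
fulltimesecs=$(expr $fulltime % 60)
echo "$fulltimehrs hours $fulltimemins minutes $fulltimesecs seconds"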

Strangely, when I tried date +%2s (which I thought would return the current system time in seconds) at the command line in OS X it failed, but not on the same machine whilst booted into the Linux partition. It turns out it was a typo (it should be date +%s), but Linux is happy to ignore the glaring mistake whilst OS X is not. Strange, when it’s so particular about other syntax.

However, all complaining aside, I have got some actual useful working code up and running much more easily than I had expected.

#!/bin/bash

clear
echo "Hello Monkey Planet"
starttime=$(date +%s)

# wipe both work folders
cd /home/richard/Desktop/RBTest/newtest
rm *.JPG
rm *.jpg

cd /home/richard/Desktop/RBTest/testtemp
rm *.JPG
rm *.jpg

cd /home/richard/Desktop/RBTest/newtest
#gphoto2 --get-all-files

# hard-link the images into testtemp with sequentially numbered names
x=1; for i in $(find $(pwd) -name \*.JPG | xargs ls -t -r); do counter=$(printf %04d $x); ln "$i" /home/richard/Desktop/RBTest/testtemp/img_"$counter".jpg; x=$(($x+1)); done

# assemble the numbered frames into a movie, then tidy up
cd /home/richard/Desktop/RBTest/testtemp
ffmpeg -r 25 -i img_%04d.jpg -s 640x480 -qscale 1 -vcodec mjpeg movie.mov
rm *.jpg

stoptime=$(date +%s)
fulltime=$(expr $stoptime - $starttime)
fulltimesecs=$(expr $fulltime % 60)
fulltimemins=$(expr $fulltime / 60)

echo "$fulltimemins minutes $fulltimesecs seconds"

I suppose it would be good to structure this code at some point, turning things like the timer into functions that I could call from any script. There is probably an existing utility in Linux already.
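By way of illustration, the timer could be wrapped into functions in a separate file, say timer.sh (a hypothetical name); this is a sketch rather than anything I have battle-tested:

#!/bin/bash
# timer.sh - reusable timer functions
timer_start() {
starttime=$(date +%s)
}
timer_stop() {
stoptime=$(date +%s)
fulltime=$(expr $stoptime - $starttime)
echo "$(expr $fulltime / 60) minutes $(expr $fulltime % 60) seconds"
}

Another script could then pull this in with source timer.sh, call timer_start at the top and timer_stop at the end.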

More soon…

THE CODE PUBLISHED HERE COMES WITH ABSOLUTELY NO WARRANTY WHATSOEVER, SO PLEASE USE AT YOUR OWN RISK.

I am basing my code on existing bits of open source stuff scavenged from various places and cobbled together. At some point in the future I hope to make some sort of point-and-click front end for it, but in the meantime it’s just command-line code, and at the moment, pretty crude.

Here is a slightly updated version of my previous script, which now clears the work folders and downloads from the camera using gPhoto2.

#!/bin/bash
clear
echo "Hello Monkey world"
#gphoto2 --

cd /home/richard/Desktop/RBTest/newtest
rm *.JPG
rm *.jpg
cd /home/richard/Desktop/RBTest/testtemp
rm *.JPG
rm *.jpg

cd /home/richard/Desktop/RBTest/newtest
gphoto2 --get-all-files

x=1; for i in $(ls -t -r *JPG); do counter=$(printf %04d $x); ln "$i" /home/richard/Desktop/RBTest/testtemp/img_"$counter".jpg; x=$(($x+1)); done

cd /home/richard/Desktop/RBTest/testtemp
ffmpeg -r 25 -i img_%04d.jpg -s 640x480 -qscale 1 -vcodec mjpeg movie.avi

This is pretty clunky but at least it’s getting somewhere. At some point I’ll find out how to reference files case-insensitively.
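For the record, one way that should work is find’s -iname switch, which matches regardless of case; a sketch I have not yet folded into the script:

# -iname matches .JPG, .jpg, .Jpg and so on
find $(pwd) -iname \*.jpg | xargs ls -t -r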

On some of the older cameras, such as my A620, the camera saves images in discrete folders of only 200 images each, which means a lot of effort in post-production if it has to be done by hand.

I think the next logical steps are to compile multiple folders of images and also to give the destination movie file a unique name. The name could be built from some format information, a date/time stamp and the camera model, which can be derived from gPhoto2.
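Something along these lines ought to do it, although it is untested and the awk field-splitting assumes gPhoto2’s --summary output has a “Model:” line in the format I remember:

stamp=$(date +%Y%m%d_%H%M%S)
model=$(gphoto2 --summary | awk -F': ' '/Model/ {print $2; exit}' | tr ' ' '_')
ffmpeg -r 25 -i img_%04d.jpg -s 640x480 -qscale 1 -vcodec mjpeg "${model}_${stamp}.avi"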

Googling something about recursive folders I found this:

http://stackoverflow.com/questions/245698/list-files-recursively-in-linux-with-path-relative-to-the-current-directory

However, this does not return them in creation time order and, as far as I could google, there doesn’t seem to be an argument to produce that.

After googling something about Linux pipeline commands I found a reference to xargs, and Bob’s your uncle. I can hardly believe I found it so quickly, but I remember Martyn at Access Space telling me about pipelining so I used a bit of intuition and worked out how to take the results from the find command and sort them afterwards.

http://www.cyberciti.biz/faq/linux-unix-bsd-xargs-construct-argument-lists-utility/

Here is my Bride of Frankenstein, which takes a folder full of folders (full of images), lists them in creation order, renames them and compiles them into a movie file. Simples!

#!/bin/bash
clear
echo "Hello Monkey Planet"

cd /home/richard/Desktop/RBTest/testtemp
rm *.JPG
rm *.jpg

cd /home/richard/Desktop/RBTest/newtest

x=1; for i in $(find $(pwd) -name \*.JPG | xargs ls -t -r); do counter=$(printf %04d $x); ln "$i" /home/richard/Desktop/RBTest/testtemp/img_"$counter".jpg; x=$(($x+1)); done

cd /home/richard/Desktop/RBTest/testtemp
ffmpeg -r 25 -i img_%04d.jpg -s 640x480 -qscale 1 -vcodec mjpeg movie.avi

Well, FFmpeg is a monster. Getting it to work with some default settings was easy, but the number of parameters is breathtaking. I work in professional video on a daily basis but can hardly understand any of the plethora of options. There is extensive documentation on ffmpeg.org, but most of it is meaningless unless you already know what it means.
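For my own notes, here is the command I ended up with (it also appears in the script further down), with the handful of parameters I do understand annotated; these are my readings of them, so treat with caution:

# -r 25 : frame rate of 25 frames per second
# -i img_%04d.jpg : input files numbered img_0001.jpg, img_0002.jpg and so on
# -s 640x480 : output frame size
# -qscale 1 : quality scale, where 1 is the best quality (and the biggest files)
# -vcodec mjpeg : encode with the Motion JPEG codec
ffmpeg -r 25 -i img_%04d.jpg -s 640x480 -qscale 1 -vcodec mjpeg movie.avi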

Below is the basic structure of a Bash script. It requires the #!/bin/bash line at the beginning.

#!/bin/bash
clear
echo "Hello world."

This script clears the screen and prints the words “Hello world”.

An excellent beginners’ tutorial is here and I’m not going to duplicate it.
http://linuxcommand.org/wss0010.php

This process reminds me of teaching myself to program back in the 1980s, in a good way and a bad way. One frustratingly fatal error after another is a real test of application, and that’s the discipline; teaching yourself may be very time-consuming, but it is also very educational.

After some head scratching I got this to work. It’s a bit of a Frankenstein lash-up with bits of code I got from various places, but also with a little bit of my own intuition. I discovered by trial and error how to change the target directory on-the-fly within the script, and this made it much simpler.

It’s still a bit clunky and needs the download-from-camera stage adding, but it works. I decided to include absolute paths to the work directories and point the script at them in turn, and so far I am running the script using the command ./myscript from the current directory.
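One gotcha worth noting: the script has to be made executable first, otherwise ./myscript just refuses with a permission error:

chmod +x myscript
./myscript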

#!/bin/bash
cd /home/richard/Desktop/RBTest/newtest
x=1; for i in $(ls -t -r *JPG); do counter=$(printf %04d $x); ln "$i" /home/richard/Desktop/RBTest/testtemp/img_"$counter".jpg; x=$(($x+1)); done
cd /home/richard/Desktop/RBTest/testtemp
ffmpeg -r 25 -i img_%04d.jpg -s 640x480 -qscale 1 -vcodec mjpeg movie.avi

The next bit I need to add is the download stage, using gPhoto2 to get the images from the camera, so that I can have a single workflow to download and assemble the thousands of image files in separate directories into a single usable video file.
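The download step itself should just be a case of changing into the destination directory and letting gPhoto2 pull everything off the camera over USB, something like this:

cd /home/richard/Desktop/RBTest/newtest
gphoto2 --get-all-files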

Some of this coding documentation needs expanding and tidying up, but stay tuned for more soon.
