Thursday, August 22, 2013

Averaging stack of identical photos

This article in other languages: Усреднение стека одинаковых кадров (Russian version)

...or how to spend a lot of time to get from this picture

snowflake unprocessed jpeg

this:

snowflake averaged

It was a joke. The first picture, a macro photo of a real snowflake, does not contain enough information to extract the details visible in the second one. The first shot is a straight-out-of-camera JPEG (one of seven identical shots of this snowflake, captured in a series by a steady camera). It shows clearly visible effects of the camera's built-in denoising and sharpening algorithms (sharpening can be turned off in the camera menu, but denoising cannot). This shot was taken at the minimum ISO of the Canon PowerShot A650 (ISO 80), but it still contains noticeable remains of suppressed noise, and blurring instead of small, subtle details. The second version of this picture is the result of averaging all seven RAW shots, followed by software sharpening and noise reduction, and assembled with masks from four variants of the picture processed with different denoising strength.

Signal-to-noise ratio, or why averaging works better than noise removal plugins


All these small, subtle and hardly noticeable details (like the circular pattern of tiny spots around the snowflake's center, or the smooth transitions of dim colors in the unfocused background) are hidden in the RAW files, untouched by the in-camera denoising algorithms.



The problem is: all those details are masked by luminosity and color noise in any single shot. But we have 7 identical serial shots, which (in the ideal case) differ from each other only in the camera's noise pattern, because of its random nature. If we average these shots (i.e. take the arithmetic mean of the luminosity values of each pixel in each color channel across all 7 shots), we get a dramatic reduction of the noise level while preserving all details in the picture; in other words, we raise the picture's signal-to-noise ratio (SNR). If, instead of averaging, we applied the world's best noise reduction software to any single shot from this series, we would lose picture details (denoising strength and detail quality are always a trade-off). But we still need noise removal software too: seven shots are not enough to lower a compact camera's noise to invisible levels. If we average 50 shots or more (as in the following photo), we can easily skip the noise reduction stage:
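The effect is easy to verify numerically. Here is a small Python sketch (not part of the original Photoshop workflow, just an illustration with a synthetic flat "scene"): averaging N frames of the same scene with independent random noise lowers the noise standard deviation by roughly a factor of the square root of N, while the signal itself is unchanged.

```python
import numpy as np

rng = np.random.default_rng(42)
signal = np.full((200, 200), 0.5)      # a flat gray "scene"
noise_sigma = 0.1                      # per-shot sensor noise level

def average_of(n_frames):
    """Simulate n_frames noisy exposures of the same scene and average them."""
    frames = signal + rng.normal(0.0, noise_sigma,
                                 size=(n_frames,) + signal.shape)
    return frames.mean(axis=0)

single = average_of(1)
averaged_7 = average_of(7)
averaged_49 = average_of(49)

# Noise drops roughly as 1/sqrt(N); the mean brightness stays at 0.5.
print(round(single.std(), 3),
      round(averaged_7.std(), 3),
      round(averaged_49.std(), 3))
```

With 7 frames the residual noise is about 0.1 / sqrt(7) ≈ 0.038, and with 49 frames about 0.014, which matches the author's observation that around 50 shots make separate denoising unnecessary.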

full moon photo

For the shot of the Pleiades star cluster (below), I averaged 142 shots. This made it possible to capture visible dim stars and correct star colors (blue and orange) in very bad shooting conditions: strong light pollution inside Moscow and ISO 800 on a compact camera sensor (because I used 6x optical zoom, I was forced to use a short exposure to avoid star trails, and to gather the maximum possible light during that short exposure, I set the aperture to the smallest F-number and the sensor sensitivity to maximum).

Pleiades M45 star cluster photo

Here are 3 crops at 1:1 scale, without any noise removal or sharpening: one shot from the series, an average of 16 shots, and an average of all 142 shots:




Please pay attention to the color of the sky: I applied an identical contrast curve to all three fragments, but correct black is achieved only by averaging. Even with the best noise removal software, I cannot see the small, dim stars in any single shot of this series: their signal level lies below the camera's noise level. If I average 16 shots, things are better, but many dim stars are still invisible, and star colors are distorted by color noise (because of their tiny size in the picture). If I had only 16 shots in this series, I would need to apply noise removal after averaging, and this would distort star colors even more.

Used software


Adobe Photoshop CS4, the Noiseware denoising plugin, and optionally the Hugin panorama stitcher (from this package we only need the command-line utility align_image_stack.exe).

This table shows a snowflake fragment at 1:1 scale; sharpening (unsharp mask 300%, radius 1, threshold 0) is applied to the "single RAW" and "7 RAWs" versions. The bottom row shows the same crops with Noiseware denoising at identical settings for all fragments.

JPEG | single RAW | 7 RAWs
JPEG + Noiseware | single RAW + Noiseware | 7 RAWs + Noiseware

Start of workflow: RAW converter


I will use Adobe Camera Raw. I load the first shot of the series into the converter and make all needed adjustments (white balance, exposure, brightness, etc.). Then I set a linear contrast curve and disable noise removal and sharpening. I import the first RAW into Photoshop and save it as an uncompressed TIFF with 16-bit depth per channel; then I open all the other RAWs, apply the settings of the first shot to them (load presets - previous conversion), and save them as TIFFs as well.

Aligning


Now we face the first problem. In an ideal world, all shots of a static scene captured by a steady camera would be perfectly identical. In reality, there will most likely be small shifts and/or rotations of the shots and/or the photographed object (especially in the case of a snowflake captured outdoors).

Our shots must be aligned, and (important!) with subpixel precision, if we want good sharpness in the resulting picture. Fortunately, subpixel precision is easily achieved by a simple trick: the aligning software works with shots enlarged several times, and after averaging we shrink the picture back to its original size.



Let's load all saved TIFFs into Photoshop as layers of one document (file - scripts - load files into stack). If the shifts between frames are serious and easily noticeable, I first do a rough manual alignment relative to the bottom layer of the stack (for example, by the snowflake's center). Manual alignment can be simplified: set the blending mode of a shot to "difference" instead of "normal", move the shot until the image becomes darkest, which indicates minimal difference from the underlying layer, and then restore the blending mode back to "normal". After alignment I save a .PSD with all layers, then crop the snowflake, leaving some space around it in all directions. I enlarge this crop to 5x (500%) with the "image size" command, using bicubic interpolation.
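The "difference" blending trick can also be expressed numerically. Below is a NumPy sketch (the images and shift values are made up for illustration; note that np.roll wraps around the edges, which a real photo shift would not): for every candidate shift we measure the mean brightness of the difference image and keep the darkest one, exactly what the eye does in Photoshop.

```python
import numpy as np

rng = np.random.default_rng(0)
base = rng.random((100, 100))                        # the bottom layer
layer = np.roll(base, shift=(3, -2), axis=(0, 1))    # the same frame, shifted

def difference_darkness(a, b, dy, dx):
    """Mean brightness of the 'difference' blend after shifting b by (dy, dx)."""
    return np.abs(a - np.roll(b, shift=(dy, dx), axis=(0, 1))).mean()

# Try every shift in a small search window and keep the darkest result.
best = min(((dy, dx) for dy in range(-5, 6) for dx in range(-5, 6)),
           key=lambda s: difference_darkness(base, layer, *s))
print(best)   # (-3, 2): exactly undoes the original (3, -2) shift
```

This only finds integer-pixel shifts, which is why the article enlarges the crop to 500% first: a one-pixel step in the enlarged image corresponds to a 1/5-pixel step in the original, giving the subpixel precision the final sharpness needs.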

The alignment itself can be done in Photoshop or Hugin. They use different algorithms, and in some cases one works better than the other.

1. For Photoshop: select all layers and run "Edit - Auto-Align Layers" with the "Auto" option.

2. For Hugin: after enlarging, export all layers to disk as 16-bit TIFFs (file - scripts - export layers to files), then start the alignment with the command "align_image_stack.exe -a aligned -v -v file1.tiff ... file7.tiff". When it's done, the source TIFFs can be deleted, and the aligned*.tiff files loaded back into Photoshop as a stack of layers in one document.

Averaging


The aligned layers are now ready for averaging (if shifts or rotations still remain between them, we need to try the other alignment method). In Photoshop, averaging can be performed by one of 2 methods:

Method 1

Simpler and more effective (available in Photoshop CS2 and newer): use smart objects. We just select all source layers and apply Layer > Smart Objects > Convert to Smart Object. Then, in Layer > Smart Objects > Stack Mode, we choose "Mean" and wait until the averaging is done. After that we apply Layer > Smart Objects > Rasterize and get a single layer with the averaged picture. This method is especially useful when we have a big (or odd) number of source layers.

Method 2

Suitable for older Photoshop versions or other graphics editors that support layers with adjustable opacity. This method is also useful if some source layers contain unwanted details (for example, moving objects): we can easily exclude them by adding white masks to these layers and painting over such objects in the masks with a black brush. All we need to do now is set the opacity of every layer in the stack so that all of them contribute equally to the final result. For all layers, from the bottom up, the opacity will be: 100%, 50%, 33%, 25%, 20%, 17%, 14% and so on; i.e. the opacity of each layer = 100% divided by its number in the stack (counted from the bottom). If we have too many layers, this method becomes ineffective because opacity is rounded to an integer percentage. A simple way to avoid this is to divide all shots into equal groups, average within the groups, and then average the group results (for example, 4 groups of 5 shots for a series of 20). For the seven shots in this example, I make two groups (4 and 3 shots), and merge the average of the smaller group onto the bigger one at 43% opacity (100 / 7 * 3). When it's done, all layers are merged into one (layers - flatten).
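The opacity sequence of Method 2 is just a running arithmetic mean in disguise. A short NumPy sketch (illustrative only, with random arrays standing in for the seven aligned shots) confirms that compositing layer i, counted from the bottom, at 100/i percent opacity over the previous result gives exactly the same picture as averaging all layers at once:

```python
import numpy as np

rng = np.random.default_rng(1)
layers = [rng.random((50, 50)) for _ in range(7)]   # stand-ins for 7 aligned shots

result = layers[0]                     # bottom layer, 100% opacity
for i, layer in enumerate(layers[1:], start=2):
    opacity = 1.0 / i                  # 50%, 33.3%, 25%, 20%, ...
    result = result * (1.0 - opacity) + layer * opacity

# The composite is exactly the arithmetic mean of all seven layers.
print(np.allclose(result, np.mean(layers, axis=0)))   # prints True
```

Photoshop only accepts integer opacity percentages, which is where the rounding errors mentioned above come from; the grouping trick keeps each individual opacity value close to a round number.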

After that, we shrink the picture back to its original size (20%). This can be done in the "Image - Image size" menu with the bicubic method, but I recommend doing it in several steps (by 70 - 80% at each step). When we do such a serious shrink in one step, too many source pixels are merged into each resulting pixel, and bicubic interpolation likely uses only part of the source pixels; the unused ones are simply discarded and contribute nothing to the resulting pixel's color and luminosity. If we carefully compare pictures shrunk by both methods, with sharpening applied and at 300-400% zoom, we can notice a small advantage in detail for the variant shrunk in several steps.

Now we can do the same averaging for the background (opening the saved .PSD with the manually aligned layers). Since background sharpness is, as a rule, not as important as the sharpness of the photographed object, we can skip precise alignment and average all layers as they are, then flatten them, load our aligned object on top, move it into place, draw a mask along the object's contour, and merge it with the background.

Now we can do any usual post-processing. We need to apply sharpening to our picture (unsharp mask, smart sharpen, a layer copy with a high-pass filter in "overlay" mode, or something else). Because we dramatically reduced the noise level, we can apply quite strong sharpening if needed. We could finish processing here, but I prefer to make 3 or 4 layer copies, apply denoising with different settings to each of them (aggressive for the background, medium or delicate for the object), and assemble all layers with hand-drawn or semi-automatic masks (created with the help of the "find edges" and "glowing edges" filters), preserving good object detail while getting a clean background without any traces of noise.
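For readers curious what unsharp masking actually does to pixel values, here is a minimal NumPy sketch. It is not Photoshop's implementation: the blur is a simple box filter instead of a Gaussian, the amount=3.0 parameter only loosely mimics the 300% setting used earlier, and the test image is a synthetic step edge.

```python
import numpy as np

def box_blur(img, radius=1):
    """Average each pixel with its neighbors (a crude stand-in for Gaussian blur)."""
    k = 2 * radius + 1
    padded = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def unsharp_mask(img, amount=3.0, radius=1):
    """Add back the difference between the image and its blurred copy."""
    return img + amount * (img - box_blur(img, radius))

# A synthetic test image: dark left half, bright right half.
edge = np.tile(np.where(np.arange(20) >= 10, 1.0, 0.0), (20, 1))
sharpened = unsharp_mask(edge)
# Values now overshoot beyond [0, 1] on both sides of the edge; a real
# pipeline clips back to the valid range before saving.
```

The overshoot on each side of the edge is exactly the halo that makes strong sharpening visible, and it is amplified by any noise in the image; this is why the heavy sharpening above is only safe after averaging has suppressed the noise.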

masks

snowflake macro photo

Another image, showing averaging technique benefits:

averaging technique benefits - gif animation

This averaging technique is most often used in astrophotography, but it works for any static scene where it's difficult to achieve good quality with just a single shot. Last year I used this technique to enhance the quality of night HDR sources. For this picture, I averaged 2 shots for the brightest exposure and 8 shots for each of the 5 other exposures, 42 source shots in total:

Moscow Kremlin at night, HDR photo from bridge, with city center illumination and reflections in the river

Prints available at Artist website, RedBubble.com.
Licenses for commercial use - at Shutterstock.com, Marketplace.500px.com.

If you would like, you can subscribe to my blog by email. I constantly work with snowflake, HDR and light painting photos, and you'll see new pictures and wallpapers right after publication:


Emails are delivered by Google's FeedBurner service, and you can unsubscribe at any time.

Author: Alexey Kljatov (E-Mail: chaoticmind75@gmail.com)


6 comments:

  1. Marvellous, inventive stuff. Love the results!

  2. I just love the work that you do! Fantastic images!!!
    Thanks so very much on sharing your technique.. This inspires me to experiment :)

  3. I do averaging in a different way, with pretty nice results however. In short: I align each layer against the background layer manually, by setting the blending mode of the layer to be aligned to 'difference'. Then I put all the layers in a group, set all of them to 'lighten', make a copy of that group and set all copied layers to 'darken'. Then I make a rasterized version of the 'lighten' group and use it as a mask for that group. Finally I set the groups to 'multiply' and 'normal'. I use that for recovering pics made in low-light conditions. Example pic is here https://www.flickr.com/photos/two_face/14426319211/in/photostream/lightbox/

  4. You did a great job of explaining your process and the steps taken. Thanks for taking the time to explain it to the rest of us.
