16 bit Images for Medium Dynamic Range

JD Smith

Traditionally, 16 bit imaging has been employed with source images of limited dynamic range to increase color and intensity sampling density, primarily to avoid artifacts like banding and posterization which can appear under strong processing. These effects are most readily seen by stretching contrast in smooth areas, like blue skies - see e.g. this article for some demonstrations. However, the degree to which a full 16 bit workflow is required to avoid banding artifacts is widely debated. Many common image editing and compositing tools work directly with 16 bit images.

Though not its traditional use, a 16 bit image format could also in principle be used to hold image data captured over a larger range of physical light intensity, without substantially increasing the sampling density over lower range images. In particular, since most digital cameras which can output RAW file formats utilize 12 bit analog-to-digital converters, and since these cameras also make it easy to shoot a set of 3 bracketed exposures separated by 2 stops (at -2,0,2EV), it is interesting to think of shoehorning the full available range of 3 RAW images, shot in a bracketed series over a factor of 16 in shutter speed, into a single 16 bit image. Such an image, which might contain at most 10-14 stops of available dynamic range, is dubbed a Medium Dynamic Range (MDR) image. It cannot accommodate the full dynamic range available in common outdoor scenes (up to 17 stops, see this article), but it could provide a useful and easily accessible extension of available range in certain types of photography.
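
As a rough check on the bit budget: a 12 bit converter spans 2^12 = 4096 levels, and the -2EV and +2EV frames differ by 4 stops of exposure, i.e. a factor of 2^4 = 16 in recorded signal. The combined data therefore covers on the order of 4096 x 16 = 65536 = 2^16 levels, which is just what a 16 bit integer can hold.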

A required assumption implicit in this method is the linearity of the CCD and CMOS detectors used in digital cameras. This issue is explored here, in the context of the example images presented below. It should also be pointed out that the vast majority of 16 bit workflows offer no additional dynamic range over their 8 bit counterparts, instead just providing protection against numerical artifacts which result from insufficient sampling. To actually achieve larger dynamic range in an MDR 16 bit image, you must

  1. Have a scene with large enough intrinsic dynamic range (ratio of brightest to darkest intensity) to be of interest.
  2. Capture expanded dynamic range within the scene by bracketing exposures (which requires an unmoving camera).

You may be able to obtain expanded range directly from single RAW files, but that application is not pursued here.

Advantages

Why would you want to do this?

Disadvantages

Why wouldn't you want to do this?

An Example 16 bit MDR image

This example builds on the RAW comparison performed by Flemming Larson, using his set of 3 bracketed RAW exposures in Nikon's NEF format, from a D70. Shown below are a variety of images derived from these 3 RAW files: each exposure individually, tonemapped versions of the normally exposed image and of an HDR created directly from all 3 RAW files, and the MDR 16 bit image described here.

All RAW NEF images were converted with Dave Coffin's dcraw v8.54, using (for the Dark, Normal, and Bright images):

% dcraw -T -w file.nef

The 16 bit MDR TIFF was created by combining the dcraw linear output from all 3 files as described below.

Tonemapped images were created with Photomatix v2.3.3, using these settings (Photomatix set file), and as follows:

  • Normal (Toned): by tonemapping directly from the normally exposed (0EV) RAW NEF file.
  • Bracket HDR (Toned): by assembling all 3 bracketed RAW images (-2EV, 0EV, +2EV) into an HDR image, then tonemapping this image.
  • MDR 16 bit: by tonemapping directly from a single MDR 16 bit TIFF file (assembled as described below).

Thoughts

It's quite clear that three bracketed RAW images expand the available dynamic range over a single RAW exposure. While this scene has few regions with strong shadows, comparing the tonemapped dark RAW with the tonemapped HDR created from all 3 RAW images shows this clearly. There is strong additional detail in the trees out the window, and in the dark shadow regions. Yes, bracketing is useful. Had this scene had more dark shadow regions, or had the exposure been shifted downwards to better accommodate the window view, this difference would have been even more pronounced.

Comparing the last two images, it's also evident that the 16 bit MDR image contains much the same amount of detail at both ends of the intensity scale as the full HDR created from the 3 bracketed RAW exposures (leaving aside issues of white balance and overall tone). To give an idea of the range, the tops of the trees out the window have (linear) luminance values of ~60,000, whereas the black book in the middle crop has a linear luminance value of ~15, and is not yet dominated by noise. This gives a very rough real world dynamic range of ~4000 (cf. roughly 1000 in a single RAW image), or about 12 stops. The actual number may be slightly higher than this, but this is a reasonable estimate.
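
(For scale: 2^10 ≈ 1000 and 2^12 = 4096, so a contrast ratio of ~1000 corresponds to roughly 10 stops, and ~4000 to just under 12.)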

How did I do it?

Creating the MDR 16 bit image was straightforward:
  1. Use dcraw to extract linear 16 bit data from each of the three RAW files. Note that the linear output of dcraw is proportional to the true light intensity at that position (see my notes on the linearity of the DSLR detectors). The command used was:
    % dcraw -4 -w -T file.nef
  2. Mark pixels to exclude from each of the three linear TIFF images as follows:
    • Dark image pixels below a lower cutoff were excluded.
    • Normal image pixels above an upper or below a lower cutoff were excluded.
    • Bright image pixels above an upper cutoff were excluded.
    • If a pixel is excluded in any color channel, it is excluded in all color channels.
    • If a given pixel was excluded from all three images, the exclusion is lifted for all three images.
  3. The upper and lower cutoffs used were the same for all 3 images, and all three color channels (see this page on linearity). In this case, I used cutoffs of [1000,60000].
  4. Shift the bright image down by dividing by a factor of 16, and the normal image down by a factor of 4 (in these 12 bit RAWs, the lowest 4 bits contain no useful detail, only interpolator artifacts, so nothing real is lost).
  5. Average the remaining (non-excluded) pixels at each position and in each color channel.
  6. (Optionally) gamma-correct the image (I used gamma=2.2 in the above example).
  7. Write out a 16-bit TIFF file, packed to the brim with usable dynamic range.

An IDL routine which implements this algorithm is provided: merg16.pro. Equivalent C routines are being investigated.
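
For those who don't use IDL, here is a minimal sketch of the same merging steps in Python with NumPy. It is not the routine used above (that is merg16.pro); the file names and the use of the tifffile package for 16 bit TIFF I/O are assumptions made for illustration, and the cutoffs are the [1000,60000] values quoted earlier.

  # Sketch only: assumes numpy and tifffile are installed, and that dark.tiff,
  # normal.tiff, and bright.tiff are the "dcraw -4 -w -T" linear conversions
  # of the -2EV, 0EV, and +2EV frames (file names are placeholders).
  import numpy as np
  import tifffile

  LOW, HIGH = 1000, 60000        # lower/upper cutoffs, same for all images and channels

  dark   = tifffile.imread('dark.tiff').astype(np.float64)
  normal = tifffile.imread('normal.tiff').astype(np.float64)
  bright = tifffile.imread('bright.tiff').astype(np.float64)

  def excluded(img, low=None, high=None):
      # A pixel outside the cutoffs in any channel is excluded in all channels
      bad = np.zeros(img.shape[:2], dtype=bool)
      if low  is not None: bad |= (img < low).any(axis=-1)
      if high is not None: bad |= (img > high).any(axis=-1)
      return bad

  bad_d = excluded(dark,   low=LOW)              # dark: drop pixels below the lower cutoff
  bad_n = excluded(normal, low=LOW, high=HIGH)   # normal: drop pixels outside either cutoff
  bad_b = excluded(bright, high=HIGH)            # bright: drop pixels above the upper cutoff

  # If a pixel is excluded from all three images, lift the exclusion everywhere
  all_bad = bad_d & bad_n & bad_b
  bad_d &= ~all_bad;  bad_n &= ~all_bad;  bad_b &= ~all_bad

  # Shift the normal and bright frames down onto the dark frame's intensity scale
  normal /= 4.0
  bright /= 16.0

  # Average the non-excluded values at each pixel, per color channel
  w_d = (~bad_d)[..., None].astype(np.float64)
  w_n = (~bad_n)[..., None].astype(np.float64)
  w_b = (~bad_b)[..., None].astype(np.float64)
  mdr = (dark*w_d + normal*w_n + bright*w_b) / (w_d + w_n + w_b)

  # Optional gamma correction (gamma = 2.2 in the example above), then write a 16 bit TIFF
  mdr = 65535.0 * (mdr / 65535.0) ** (1.0 / 2.2)
  tifffile.imwrite('mdr16.tiff', np.clip(mdr, 0, 65535).astype(np.uint16))

Note that lifting the exclusion when a pixel fails the cutoffs in all three exposures guarantees every pixel has at least one contributing value to average.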

Note that aside from dcraw's white balance correction, which uses the camera's measured white balance stored in the NEF file, no other color corrections were applied. Since ICC profile color spaces are tailored to low dynamic range, a color managed workflow isn't truly possible here (except after tonemapping or range compression). The same limitation applies to all the various HDR applications (e.g. Photomatix).

Download

You can download this example 16 bit MDR image, encoded as a TIFF and gzip-compressed:

 mdr16.tiff.gz [30MB]

This is a bare-bones TIFF image without much header material, and with no attempt made to correct white balance (beyond dcraw's efforts). Comments welcome.

JD Smith | Last modified: Mon Feb 19 13:36:44 MST 2007