Interpreting OpenEXR Deep Pixels¶
Overview¶
Starting with version 2.0, the OpenEXR image file format supports deep images. In a regular, or flat image, every pixel stores at most one value per channel. In contrast, each pixel in a deep image can store an arbitrary number of values or samples per channel. Each of those samples is associated with a depth, or distance from the viewer. Together with the two-dimensional pixel raster, the samples at different depths form a three-dimensional data set.
The open-source OpenEXR file I/O library defines the file format for deep images, and it provides convenient methods for reading and writing deep image files. However, the library does not define how deep images are meant to be interpreted. In order to encourage compatibility among application programs and image processing libraries, this document describes a standard way to represent point and volume samples in deep images, and it defines basic compositing operations such as merging two deep images or converting a deep image into a flat image.
Definitions¶
Flat and Deep Images, Samples¶
For a single-part OpenEXR file, an image is the set of all channels in the file. For a multi-part file, an image is the set of all channels in the same part of the file.
A flat image has at most one stored value or sample per pixel per channel. The most common case is an RGB image, which contains three channels, and every pixel has exactly one \(R\), one \(G\) and one \(B\) sample. Some channels in a flat image may be subsampled, as is the case with luminance-chroma images, where the luminance channel has a sample at every pixel, but the chroma channels have samples only at every second pixel of every second scan line.
A deep image can store an unlimited number of samples per pixel, and each of those samples is associated with a depth, or distance from the viewer.
A pixel at pixel space location \((x,y)\) in a deep image has \(n(x,y)\) samples in each channel. The number of samples varies from pixel to pixel, and any non-negative number of samples, including zero, is allowed. However, all channels in a single pixel have the same number of samples.
The samples in each channel are numbered from \(0\) to \(n(x,y) - 1\), and the expression \(S_{i}(c,x,y)\) refers to sample number \(i\) in channel \(c\) of the pixel at location \((x,y)\).
In the following we will for the most part discuss a single pixel. For readability we will omit the coordinates of the pixel; expressions such as \(n\) and \(S_{i}(c)\) are to be understood as \(n(x,y)\) and \(S_{i}(c,x,y)\) respectively.
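The per-pixel sample lists described above can be modeled in memory as one variable-length list per channel. The sketch below is illustrative only; the type `DeepPixel` and its members are hypothetical and not part of the OpenEXR library:

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// Hypothetical in-memory model of one deep pixel: every channel holds
// its own sample list, and all lists must have the same length n(x,y).
struct DeepPixel
{
    std::map<std::string, std::vector<float>> channels;

    // n(x,y): number of samples; zero if the pixel has no channels yet.
    size_t sampleCount () const
    {
        return channels.empty () ? 0 : channels.begin ()->second.size ();
    }

    // True if all channels agree on the number of samples,
    // as the definition above requires.
    bool consistent () const
    {
        for (const auto& ch : channels)
            if (ch.second.size () != sampleCount ()) return false;
        return true;
    }
};
```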
Channel Names and Layers¶
The channels in an image have names that serve two purposes: specifying the intended interpretation of each channel, and grouping the channels into layers.
If a channel name contains one or more periods, then the part of the channel name that follows the last period is the base name. If a channel name contains no periods, then the entire channel name is the base name.
Examples:

- the base name of channel R is R
- the base name of channel L1.L2.R is R
If a channel name contains one or more periods, then the part of the channel name before the last period is the channel’s layer name. If a channel name contains no periods, then the layer name is an empty string.
Examples:

- the layer name of channel R is the empty string
- the layer name of channel L1.L2.R is L1.L2
The set of all channels in an image that share the same layer name is called a layer.
The set of all channels in an image whose layer name is the empty string is called the base layer.
If the name of one layer is a prefix of the name of another layer, then the first layer encloses the second layer, and the second layer is nested in the first layer. Since the empty string is a prefix of any other string, the base layer encloses all other layers.
A layer directly encloses a second layer if there is no third layer that is nested in the first layer and encloses the second layer.
Examples:

- Layer L1 encloses layers L1.L2 and L1.L2.L3
- Layer L1 directly encloses layer L1.L2, but L1 does not directly enclose L1.L2.L3
Alpha, Color, Depth and Auxiliary Channels¶
A channel whose base name is A, AR, AG or AB is an alpha channel. All samples must be greater than or equal to zero, and less than or equal to one.
A channel whose base name is R, G, B, or Y is a color channel.

A channel whose full name is Z or ZBack is a depth channel. All samples in a depth channel must be greater than or equal to zero.
A channel that is not an alpha, color or depth channel is an auxiliary channel.
Required Depth Channels¶
The base layer of a deep image must include a depth channel called Z.

The base layer of a deep image may include a depth channel called ZBack. If the base layer does not include one, then a ZBack channel can be generated by copying the Z channel.

Layers other than the base layer may include channels called Z or ZBack, but those channels are auxiliary channels and do not determine the positions of any samples in the image.
Sample Locations, Point and Volume Samples¶
The depth samples \(S_{i}\left( Z \right)\) and \(S_{i}\left( \text{ZBack} \right)\) determine the positions of the front and the back of sample number \(i\) in all other channels in the same pixel.

If \(S_{i}\left( Z \right) \geq S_{i}\left( \text{ZBack} \right)\), then sample number \(i\) in all other channels covers the single depth value \(z = S_{i}\left( Z \right)\), where \(z\) is the distance of the sample from the viewer. Sample number \(i\) is called a point sample.

If \(S_{i}\left( Z \right) < S_{i}\left( \text{ZBack} \right)\), then sample number \(i\) in all other channels covers the half-open interval \(S_{i}\left( Z \right) \leq z < S_{i}\left( \text{ZBack} \right)\). Sample number \(i\) is called a volume sample. \(S_{i}\left( Z \right)\) is the sample's front and \(S_{i}\left( \text{ZBack} \right)\) is the sample's back.
Point samples are used to represent the intersections of surfaces with a pixel. A surface intersects a pixel at a well-defined distance from the viewer, but the surface has zero thickness. Volume samples are used to represent the intersections of volumes with a pixel.
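The point/volume distinction depends only on the two depth samples; a minimal sketch with hypothetical function names:

```cpp
#include <cassert>

// Sample i is a volume sample if its front is strictly closer than its
// back; otherwise (Z >= ZBack) it is a point sample at depth Z.
bool isVolumeSample (float z, float zBack)
{
    return z < zBack;
}

// Does the sample cover depth d?  A point sample covers only d == z;
// a volume sample covers the half-open interval [z, zBack).
bool covers (float z, float zBack, float d)
{
    if (isVolumeSample (z, zBack)) return z <= d && d < zBack;
    return d == z;
}
```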
Required Alpha Channels¶
Every color or auxiliary channel in a deep image must have an associated alpha channel.
The associated alpha channel for a given color or auxiliary channel, c, is found by looking for a matching alpha channel (see below), first in the layer that contains c, then in the directly enclosing layer, then in the layer that directly encloses that layer, and so on, until the base layer is reached. The first matching alpha channel found this way becomes the alpha channel that is associated with c.
Each color or auxiliary channel matches an alpha channel, as shown in the following table:
Color or auxiliary channel base name | Matching alpha channel base name
------------------------------------ | --------------------------------
R                                    | AR, if it exists, otherwise A
G                                    | AG, if it exists, otherwise A
B                                    | AB, if it exists, otherwise A
Y                                    | A
(any auxiliary channel)              | A
Example: The following table shows the list of channels in a deep image, and the associated alpha channel for each color or auxiliary channel.
Channel name | Associated alpha channel
------------ | ------------------------
R            | AR
G            | A
B            | A
A            | (none; A is an alpha channel)
AR           | (none; AR is an alpha channel)
Z            | (none; Z is a depth channel)
L1.R         | L1.A
L1.G         | L1.A
L1.A         | (none; L1.A is an alpha channel)
L2.Y         | A
L2.Z         | A
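The search for an associated alpha channel can be sketched as follows. This is an illustrative implementation, not the OpenEXR API: it assumes the matching rule R→AR, G→AG, B→AB with everything else matching A, falls back to A within each layer, and approximates "directly enclosing layer" by dropping the last dotted component of the layer name:

```cpp
#include <cassert>
#include <set>
#include <string>

// Split "L1.L2.R" into layer "L1.L2" and base name "R".
static std::string layerOf (const std::string& ch)
{
    size_t p = ch.rfind ('.');
    return (p == std::string::npos) ? std::string () : ch.substr (0, p);
}

static std::string baseOf (const std::string& ch)
{
    size_t p = ch.rfind ('.');
    return (p == std::string::npos) ? ch : ch.substr (p + 1);
}

// Find the associated alpha channel for color/auxiliary channel ch:
// look for the matching alpha base name in ch's own layer, then in
// each enclosing layer, ending with the base layer.  Returns the empty
// string if no associated alpha channel exists.
std::string associatedAlpha (
    const std::set<std::string>& channels, const std::string& ch)
{
    std::string base = baseOf (ch);
    std::string alphaBase =
        (base == "R") ? "AR" : (base == "G") ? "AG" :
        (base == "B") ? "AB" : "A";

    std::string layer = layerOf (ch);

    while (true)
    {
        std::string prefix = layer.empty () ? std::string () : layer + ".";

        if (channels.count (prefix + alphaBase)) return prefix + alphaBase;
        if (channels.count (prefix + "A")) return prefix + "A";

        if (layer.empty ()) break; // base layer reached, no match found

        size_t p = layer.rfind ('.');
        layer = (p == std::string::npos) ? std::string ()
                                         : layer.substr (0, p);
    }

    return std::string ();
}
```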
Sorted, Non-Overlapping and Tidy Images¶
The samples in a pixel may or may not be sorted according to depth, and the sample depths or depth ranges may or may not overlap each other.
A pixel in a deep image is sorted if for every \(i\) and \(j\) with \(i < j\),

\(S_{i}\left( Z \right) < S_{j}\left( Z \right)\), or \(S_{i}\left( Z \right) = S_{j}\left( Z \right)\) and \(S_{i}\left( \text{ZBack} \right) \leq S_{j}\left( \text{ZBack} \right)\).

A pixel in a deep image is non-overlapping if for every \(i\) and \(j\) with \(i \neq j\), the set of depth values covered by sample \(i\) does not intersect the set of depth values covered by sample \(j\): the half-open intervals of any two volume samples are disjoint, no two point samples share the same depth, and no point sample lies within the half-open interval of a volume sample.
A pixel in a deep image is tidy if it is sorted and non-overlapping.

A deep image is sorted if all of its pixels are sorted; it is non-overlapping if all of its pixels are non-overlapping; and it is tidy if all of its pixels are tidy.
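Under these definitions, sortedness and overlap can be checked mechanically. The sketch below orders samples by (Z, ZBack) and treats volume samples as half-open intervals; the `Sample` type and function names are illustrative:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

struct Sample { float z, zBack; };

// Sorted: samples appear in order of increasing (Z, ZBack).
bool isSorted (const std::vector<Sample>& s)
{
    for (size_t i = 1; i < s.size (); ++i)
        if (s[i - 1].z > s[i].z ||
            (s[i - 1].z == s[i].z && s[i - 1].zBack > s[i].zBack))
            return false;
    return true;
}

// Non-overlapping: the depth values / half-open depth intervals covered
// by any two samples do not intersect.
bool isNonOverlapping (std::vector<Sample> s)
{
    std::sort (s.begin (), s.end (),
               [] (const Sample& a, const Sample& b)
               { return a.z < b.z || (a.z == b.z && a.zBack < b.zBack); });

    for (size_t i = 1; i < s.size (); ++i)
    {
        bool prevIsPoint = s[i - 1].zBack <= s[i - 1].z;

        if (prevIsPoint)
        {
            // A later sample may not share the point sample's depth.
            if (s[i].z <= s[i - 1].z) return false;
        }
        else
        {
            // A later sample may not start inside [Z, ZBack).
            if (s[i].z < s[i - 1].zBack) return false;
        }
    }
    return true;
}

bool isTidy (const std::vector<Sample>& s)
{
    return isSorted (s) && isNonOverlapping (s);
}
```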
The images stored in an OpenEXR file are not required to be tidy. Some deep image processing operations, for example, flattening a deep image, require tidy input images. However, making an image tidy loses information, and some kinds of data cannot be represented with tidy images, for example, object identifiers or motion vectors for volume objects that pass through each other.
Some application programs that read deep images can run more efficiently with tidy images. For example, in a 3D renderer that uses deep images as shadow maps, shadow lookups are faster if the samples in each pixel are sorted and non-overlapping.
Application programs that write deep OpenEXR files can add a deepImageState attribute to the header to let file readers know whether the pixels in the image are tidy. The attribute is of type DeepImageState, and can have the following values:
Value           | Interpretation
--------------- | ------------------------------------------------------
MESSY           | Samples may not be sorted, and overlaps are possible.
SORTED          | Samples are sorted, but overlaps are possible.
NON_OVERLAPPING | Samples do not overlap, but may not be sorted.
TIDY            | Samples are sorted and do not overlap.
If the header does not contain a deepImageState attribute, then file
readers should assume that the image is MESSY
. The OpenEXR file I/O
library does not verify that the samples in the pixels are consistent
with the deepImageState attribute. Application software that handles
deep images may assume that the attribute value is valid, as long as the
software will not crash or lock up if any pixels are inconsistent with
the deepImageState.
Alpha and Color as Functions of Depth¶
Given a color channel, c
, and its associated alpha channel,
\(\alpha\), the samples \(S_{i}\left( c \right)\),
\(S_{i}\left( \alpha \right)\), \(S_{i}\left( Z \right)\) and
\(S_{i}\left( \text{ZBack} \right)\) together represent the
intersection of an object with a pixel. The color of the object is
\(S_{i}\left( c \right)\), its opacity is
\(S_{i}\left( \alpha \right)\), and the distances of its front and
back from the viewer are indicated by \(S_{i}\left( Z \right)\) and
\(S_{i}\left( \text{ZBack} \right)\) respectively.
One Sample¶
We now define two functions, \(z \longmapsto \alpha_{i}(z)\), and
\(z \longmapsto c_{i}(z)\), that represent the opacity and color of
the part of the object whose distance from the viewer is no more than
z
. In other words, we divide the object into two parts by
splitting it at distance \(z\); \(\alpha_{i}(z)\) and
\(c_{i}(z)\) are the opacity and color of the part that is closer to
the viewer.
For a point sample, \(\alpha_{i}(z)\) and \(c_{i}(z)\) are step functions:

\(\alpha_{i}\left( z \right) = \begin{cases} 0 & z < S_{i}\left( Z \right) \\ S_{i}\left( \alpha \right) & z \geq S_{i}\left( Z \right) \end{cases}\)

\(c_{i}\left( z \right) = \begin{cases} 0 & z < S_{i}\left( Z \right) \\ S_{i}\left( c \right) & z \geq S_{i}\left( Z \right) \end{cases}\)
For a volume sample, we define a helper function \(x(z)\) that consists of two constant segments and a linear ramp:

\(x\left( z \right) = \begin{cases} 0 & z \leq S_{i}\left( Z \right) \\ \dfrac{z - S_{i}\left( Z \right)}{S_{i}\left( \text{ZBack} \right) - S_{i}\left( Z \right)} & S_{i}\left( Z \right) < z < S_{i}\left( \text{ZBack} \right) \\ 1 & z \geq S_{i}\left( \text{ZBack} \right) \end{cases}\)
With this helper function, \(\alpha_{i}(z)\) and \(c_{i}(z)\) are defined as follows:

\(\alpha_{i}\left( z \right) = 1 - \left( 1 - S_{i}\left( \alpha \right) \right)^{x\left( z \right)}\)

\(c_{i}\left( z \right) = \begin{cases} S_{i}\left( c \right) \cdot \dfrac{\alpha_{i}\left( z \right)}{S_{i}\left( \alpha \right)} & S_{i}\left( \alpha \right) > 0 \\ S_{i}\left( c \right) \cdot x\left( z \right) & S_{i}\left( \alpha \right) = 0 \end{cases}\)
Note that the second case in the definition of \(c_{i}\left( z \right)\) is the limit of the first case as \(S_{i}\left( \alpha \right)\) approaches zero.
The figure below shows an example of \(\alpha_{i}\left( z \right)\) and \(c_{i}\left( z \right)\) for a volume sample. Alpha and color are zero up to \(Z\), increase gradually between \(Z\) and \(\text{ZBack}\), and then remain constant.
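For a single volume sample, these functions can be computed with the same numerically stable log1p/expm1 formulation that the appendix uses for splitting. A sketch with hypothetical names, assuming premultiplied color:

```cpp
#include <cassert>
#include <cmath>

// x(z): 0 before the sample's front, a linear ramp inside the sample,
// 1 behind its back.
float rampX (float z, float zf, float zb)
{
    if (z <= zf) return 0.0f;
    if (z >= zb) return 1.0f;
    return (z - zf) / (zb - zf);
}

// alpha_i(z) for a volume sample with opacity a: the opacity of the
// part of the sample that is no further from the viewer than z.
// 1 - (1-a)^x, computed stably with log1p/expm1.
float alphaAt (float z, float zf, float zb, float a)
{
    return -std::expm1 (rampX (z, zf, zb) * std::log1p (-a));
}

// c_i(z): the corresponding (premultiplied) color; the a == 0 branch
// is the limit of the first branch as a approaches zero.
float colorAt (float z, float zf, float zb, float a, float c)
{
    if (a > 0.0f) return c * (alphaAt (z, zf, zb, a) / a);
    return c * rampX (z, zf, zb);
}
```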
Whole Pixel¶
If a pixel is tidy, then we can define two functions, \(z \longmapsto A(z)\), and \(z \longmapsto C(z)\), that represent the total opacity and color of all objects whose distance from the viewer is no more than \(z\): if the distance \(z\) is inside a volume object, we split the object at \(z\). Then we use “over” operations to composite all objects that are no further away than \(z\).
Given a foreground object with opacity \(\alpha_{f}\) and color \(c_{f}\), and a background object with opacity \(\alpha_{b}\) and color \(c_{b}\), an “over” operation computes the total opacity and color, \(\alpha\) and \(c\), that result from placing the foreground object in front of the background object:

\(\alpha = \alpha_{f} + \alpha_{b} \cdot \left( 1 - \alpha_{f} \right)\)

\(c = c_{f} + c_{b} \cdot \left( 1 - \alpha_{f} \right)\)
We define two sets of helper functions that composite the first \(i + 1\) samples of a tidy pixel, front to back:

\(A_{i}\left( z \right) = \begin{cases} 0 & i < 0 \\ A_{i - 1}\left( z \right) + \left( 1 - A_{i - 1}\left( z \right) \right) \cdot \alpha_{i}\left( z \right) & i \geq 0 \end{cases}\)

\(C_{i}\left( z \right) = \begin{cases} 0 & i < 0 \\ C_{i - 1}\left( z \right) + \left( 1 - A_{i - 1}\left( z \right) \right) \cdot c_{i}\left( z \right) & i \geq 0 \end{cases}\)

With these helper functions, \(A\left( z \right)\) and \(C(z)\) look like this:

\(A\left( z \right) = A_{n - 1}\left( z \right)\)

\(C\left( z \right) = C_{n - 1}\left( z \right)\)
The figure below shows an example of \(A(z)\) and \(C(z)\). Sample number \(i\) is a volume sample; its \(\text{ZBack}\) is greater than its \(Z\). Alpha and color increase gradually between \(Z\) and \(\text{ZBack}\) and then remain constant. Sample number \(i + 1\), whose \(Z\) and \(\text{ZBack}\) are equal, is a point sample where alpha and color discontinuously jump to a new value.
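For a tidy pixel, \(A(z)\) and \(C(z)\) can be sketched as a front-to-back loop of "over" operations applied to the per-sample functions. The types and names below are illustrative, and colors are assumed premultiplied:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Sample { float z, zBack, alpha, color; };

// alpha_i(z) and c_i(z) for one sample, point or volume.
static void sampleAt (const Sample& s, float z, float& a, float& c)
{
    if (s.zBack <= s.z) // point sample: step function at depth Z
    {
        bool past = z >= s.z;
        a = past ? s.alpha : 0.0f;
        c = past ? s.color : 0.0f;
        return;
    }

    // Volume sample: constant/ramp/constant helper x(z), then the
    // stable 1 - (1-a)^x formulation.
    float x = (z <= s.z) ? 0.0f
            : (z >= s.zBack) ? 1.0f
            : (z - s.z) / (s.zBack - s.z);

    a = -std::expm1 (x * std::log1p (-s.alpha));
    c = (s.alpha > 0.0f) ? s.color * (a / s.alpha) : s.color * x;
}

// A(z) and C(z) for a tidy pixel: composite the per-sample functions
// front to back with "over" operations.
void accumulateAt (
    const std::vector<Sample>& tidy, float z, float& A, float& C)
{
    A = 0.0f;
    C = 0.0f;

    for (const Sample& s : tidy)
    {
        float a, c;
        sampleAt (s, z, a, c);
        C += (1.0f - A) * c; // background attenuated by 1 - alpha_f
        A += (1.0f - A) * a;
    }
}
```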
Basic Deep Image Operations¶
Given the definitions above, we can now construct a few basic deep image processing operations.
Splitting a Volume Sample¶
Our first operation is splitting volume sample number \(i\) of a pixel at a given depth, \(z\), where

\(S_{i}\left( Z \right) < z < S_{i}\left( \text{ZBack} \right).\)
The operation replaces the original sample with two new samples. If the first of those new samples is composited over the second one, then the total opacity and color are the same as in the original sample.
For the depth channels, the new samples are:

\(S_{f}\left( Z \right) = S_{i}\left( Z \right), \quad S_{f}\left( \text{ZBack} \right) = z\) for the front part, and

\(S_{b}\left( Z \right) = z, \quad S_{b}\left( \text{ZBack} \right) = S_{i}\left( \text{ZBack} \right)\) for the back part.
For a color channel, c, and its associated alpha channel, \(\alpha\), the new samples are:

\(x_{f} = \frac{z - S_{i}\left( Z \right)}{S_{i}\left( \text{ZBack} \right) - S_{i}\left( Z \right)}, \quad x_{b} = \frac{S_{i}\left( \text{ZBack} \right) - z}{S_{i}\left( \text{ZBack} \right) - S_{i}\left( Z \right)}\)

\(S_{f}\left( \alpha \right) = 1 - \left( 1 - S_{i}\left( \alpha \right) \right)^{x_{f}}, \quad S_{b}\left( \alpha \right) = 1 - \left( 1 - S_{i}\left( \alpha \right) \right)^{x_{b}}\)

\(S_{f}\left( c \right) = S_{i}\left( c \right) \cdot \frac{S_{f}\left( \alpha \right)}{S_{i}\left( \alpha \right)}, \quad S_{b}\left( c \right) = S_{i}\left( c \right) \cdot \frac{S_{b}\left( \alpha \right)}{S_{i}\left( \alpha \right)}\)
If it is not done exactly right, splitting a sample can lead to large rounding errors for the colors of the new samples when the opacity of the original sample is very small. For C++ code that splits a volume sample in a numerically stable way, see Example: Splitting a Volume Sample.
Merging Overlapping Samples¶
In order to make a deep image tidy, we need a procedure for merging two samples that perfectly overlap each other. Given two samples, \(i\) and \(j\), with

\(S_{i}\left( Z \right) = S_{j}\left( Z \right)\)

and

\(S_{i}\left( \text{ZBack} \right) = S_{j}\left( \text{ZBack} \right),\)
we want to replace those samples with a single new sample that has an appropriate opacity and color.
For two overlapping volume samples, the opacity and color of the new sample should be the same as what one would get from splitting the original samples into a very large number of shorter subsamples, interleaving the subsamples, and compositing them back together with a series of “over” operations.
For a color channel, c, and its associated alpha channel, \(\alpha\), we can compute the opacity and color of the new sample as follows:

\(S_{i,new}\left( \alpha \right) = S_{i}\left( \alpha \right) + S_{j}\left( \alpha \right) - S_{i}\left( \alpha \right) \cdot S_{j}\left( \alpha \right)\)

\(S_{i,new}\left( c \right) = \left( S_{i}\left( c \right) \cdot v_{i} + S_{j}\left( c \right) \cdot v_{j} \right) \cdot w\)

where

\(u_{k} = -\log\left( 1 - S_{k}\left( \alpha \right) \right), \quad v_{k} = \frac{u_{k}}{S_{k}\left( \alpha \right)}\)

with \(k = i\) or \(k = j\), and

\(w = \frac{S_{i,new}\left( \alpha \right)}{u_{i} + u_{j}}\)
Evaluating the expressions above directly can lead to large rounding errors when the opacity of one or both of the input samples is very small. For C++ code that computes \(S_{i,new}\left( \alpha \right)\) and \(S_{i,new}\left( c \right)\) in a numerically robust way, see Example: Merging Two Overlapping Samples.
For details on how the expressions for \(S_{i,new}\left( \alpha \right)\) and \(S_{i,new}\left( c \right)\) can be derived, see Peter Hillman’s paper, “The Theory of OpenEXR Deep Samples”.
Note that the expressions for computing \(S_{i,new}\left( \alpha \right)\) and \(S_{i,new}\left( c \right)\) do not refer to depth at all. This allows us to reuse the same expressions for merging two perfectly overlapping (that is, coincident) point samples.
A point sample cannot perfectly overlap a volume sample; therefore point samples are never merged with volume samples.
Making an Image Tidy¶
An image is made tidy by making each of its pixels tidy. A pixel is made tidy in three steps:
Split partially overlapping samples: if there are indices \(i\) and \(j\) such that sample \(i\) is either a point or a volume sample, sample \(j\) is a volume sample, and \(S_{j}\left( Z \right) < S_{i}\left( Z \right) < S_{j}\left( \text{ZBack} \right)\), then split sample \(j\) at \(S_{i}\left( Z \right)\) as shown in Splitting a Volume Sample above. Otherwise, if there are indices \(i\) and \(j\) such that samples \(i\) and \(j\) are volume samples, and \(S_{j}\left( Z \right) < S_{i}\left( \text{ZBack} \right) < S_{j}\left( \text{ZBack} \right)\), then split sample \(j\) at \(S_{i}\left( \text{ZBack} \right)\). Repeat this until there are no more partially overlapping samples.
Merge overlapping samples: if there are indices \(i\) and \(j\) such that samples \(i\) and \(j\) overlap perfectly, then merge those two samples as shown in Merging Overlapping Samples above. Repeat this until there are no more perfectly overlapping samples.
Sort the samples according to Z and ZBack (see Sorted, Non-Overlapping and Tidy Images).
Note that this procedure can be made more efficient by first sorting the samples, and then splitting and merging overlapping samples in a single front-to-back sweep through the sample list.
Merging Two Images¶
Merging two deep images forms a new deep image that represents all of the objects contained in both of the original images. Conceptually, the deep image “merge” operation is similar to the “over” operation for flat images, except that the “merge” operation does not distinguish between a foreground and a background image.
Since deep images are not required to be tidy, the “merge” operation is trivial: for each output pixel, concatenate the sample lists of the corresponding input pixels.
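A sketch of the per-pixel "merge" operation; the `Sample` type is illustrative:

```cpp
#include <cassert>
#include <vector>

struct Sample { float z, zBack, alpha, color; };

// Deep "merge": for each output pixel, concatenate the sample lists of
// the two corresponding input pixels.  Order within the result does not
// matter, because deep images are not required to be sorted.
std::vector<Sample> mergePixels (
    const std::vector<Sample>& p1, const std::vector<Sample>& p2)
{
    std::vector<Sample> out (p1);
    out.insert (out.end (), p2.begin (), p2.end ());
    return out;
}
```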
Flattening an Image¶
Flattening produces a flat image from a deep image by performing a front-to-back composite of the deep image samples. The “flatten” operation has two steps:
Make the deep image tidy.
For each pixel, composite sample 0 over sample 1. Composite the result over sample 2, and so on, until sample \(n - 1\) is reached. Note that this is equivalent to computing \(A\left( max\left( S_{n - 1}\left( Z \right),S_{n - 1}\left( \text{ZBack} \right) \right) \right)\) for each alpha channel and \(C\left( max\left( S_{n - 1}\left( Z \right),S_{n - 1}\left( \text{ZBack} \right) \right) \right)\) for each color or auxiliary channel.
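Step 2 can be sketched as a single front-to-back "over" loop over one color channel and its alpha. This assumes the pixel has already been made tidy and that colors are premultiplied; the names are hypothetical:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct FlatSample { float alpha, color; }; // one channel, tidy order

// Composite sample 0 over sample 1, the result over sample 2, and so
// on, until the last sample is reached.
void flattenPixel (const std::vector<FlatSample>& tidy, float& A, float& C)
{
    A = 0.0f;
    C = 0.0f;

    for (const FlatSample& s : tidy)
    {
        // "over" with the running result as the foreground:
        C += (1.0f - A) * s.color;
        A += (1.0f - A) * s.alpha;
    }
}
```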
There is no single “correct” way to flatten the depth channels. The most
useful way to handle Z
and ZBack
depends on how
the flat image will be used. Possibilities include, among others:
- Flatten the Z channel as if it was a color channel, using A as the associated alpha channel. For volume samples, replace Z with the average of Z and ZBack before flattening. Either discard the ZBack channel, or use the back of the last sample, \(max\left( S_{n - 1}\left( Z \right),S_{n - 1}\left( \text{ZBack} \right) \right)\), as the ZBack value for the flat image.
- Treating A as the alpha channel associated with Z, find the depth where \(A(z)\) becomes 1.0 and store that depth in the Z channel of the flat image. If \(A(z)\) never reaches 1.0, then store either infinity or the maximum possible finite value in the flat image.
- Treating A as the alpha channel associated with Z, copy the front of the first sample with non-zero alpha and the front of the first opaque sample into the Z and ZBack channels of the flat image.
Opaque Volume Samples¶
Volume samples represent regions along the \(z\) axis of a pixel that are filled with a medium that absorbs light and also emits light towards the camera. The intensity of light traveling through the medium falls off exponentially with the distance traveled. For example, if a one-unit-thick layer of fog absorbs half of the light and transmits the rest, then a two-unit-thick layer of the same fog absorbs three quarters of the light and transmits only one quarter. Volume samples representing these two layers would have alpha 0.5 and 0.75 respectively. As the thickness of a layer increases, the layer quickly becomes nearly opaque. A fog layer that is twenty units thick transmits less than one millionth of the light entering it, and its alpha is 0.99999905. If alpha is represented using 16-bit floating-point numbers, then the exact value will be rounded to 1.0, making the corresponding volume sample completely opaque. With 32-bit floating-point numbers, the alpha value for a 20-unit-thick layer can still be distinguished from 1.0, but for a 25-unit layer, alpha rounds to 1.0. At 55 units, alpha rounds to 1.0 even with 64-bit floating-point numbers.
Once a sample effectively becomes opaque, the true density of the light-absorbing medium is lost. A one-unit layer of a light fog might absorb half of the light while a one-unit layer of a dense fog might absorb three quarters of the light, but the representation of a 60-unit layer as a volume sample is exactly the same for the light fog, the dense fog and a gray brick. For a sample that extends from Z to ZBack, the function \(\alpha(z)\) evaluates to 1.0 for any \(z > Z\). Any object within this layer would be completely hidden, no matter how close it was to the front of the layer.
Application software that writes deep images should avoid generating very deep volume samples. If the program is about to generate a sample with alpha close to 1.0, then it should split the sample into multiple subsamples with a lower opacity before storing the data in a deep image file. This assumes, of course, that the software has an internal volume sample representation that can distinguish very nearly opaque samples from completely opaque ones, so that splitting will produce subsamples with alpha significantly below 1.0.
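One way to follow this advice is to cut a dense volume sample into the smallest number of equal subsamples whose individual alpha stays below a chosen threshold. The formulas follow from the exponential falloff described above (each of \(k\) equal subsamples has alpha \(1 - (1 - a)^{1/k}\)); the function names and the threshold are illustrative, not part of any OpenEXR API:

```cpp
#include <cassert>
#include <cmath>

// Number of equal-length subsamples needed so that each subsample of a
// volume sample with opacity a has opacity at most t (0 < t < 1).
// Works only while a is still distinguishable from 1.0 in the chosen
// floating-point format.
int subsampleCount (double a, double t)
{
    if (a <= t) return 1;

    // Each of k subsamples has alpha 1 - (1-a)^(1/k); solving
    // 1 - (1-a)^(1/k) <= t for k gives k >= log(1-a) / log(1-t).
    return (int) std::ceil (std::log1p (-a) / std::log1p (-t));
}

// Opacity of each of the k equal subsamples: 1 - (1-a)^(1/k),
// computed stably with log1p/expm1.
double subsampleAlpha (double a, int k)
{
    return -std::expm1 (std::log1p (-a) / k);
}
```

Compositing the `k` subsamples back together with "over" operations recovers the original alpha, so the split loses no total opacity; it only restores headroom below 1.0.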
Appendix: C++ Code¶
Example: Splitting a Volume Sample¶
#include <algorithm>
#include <cassert>
#include <cmath>
#include <limits>

using namespace std;

void
splitVolumeSample (
    float a,
    float c,   // Opacity and color of original sample
    float zf,
    float zb,  // Front and back of original sample
    float z,   // Position of split
    float& af,
    float& cf, // Opacity and color of part closer than z
    float& ab,
    float& cb) // Opacity and color of part further away than z
{
    //
    // Given a volume sample whose front and back are at depths zf and
    // zb respectively, split the sample at depth z. Return the opacities
    // and colors of the two parts that result from the split.
    //
    // The code below is written to avoid excessive rounding errors when
    // the opacity of the original sample is very small:
    //
    // The straightforward computation of the opacity of either part
    // requires evaluating an expression of the form
    //
    //     1 - pow (1-a, x).
    //
    // However, if a is very small, then 1-a evaluates to 1.0 exactly,
    // and the entire expression evaluates to 0.0.
    //
    // We can avoid this by rewriting the expression as
    //
    //     1 - exp (x * log (1-a)),
    //
    // and replacing the call to log() with a call to the function log1p(),
    // which computes the logarithm of 1+x without attempting to evaluate
    // the expression 1+x when x is very small.
    //
    // Now we have
    //
    //     1 - exp (x * log1p (-a)).
    //
    // However, if a is very small then the call to exp() returns 1.0, and
    // the overall expression still evaluates to 0.0. We can avoid that
    // by replacing the call to exp() with a call to expm1():
    //
    //     -expm1 (x * log1p (-a))
    //
    // expm1(x) computes exp(x) - 1 in such a way that the result is
    // accurate even if x is very small.
    //

    assert (zb > zf && z >= zf && z <= zb);

    a = max (0.0f, min (a, 1.0f));

    if (a == 1)
    {
        af = ab = 1;
        cf = cb = c;
    }
    else
    {
        float xf = (z - zf) / (zb - zf);
        float xb = (zb - z) / (zb - zf);

        if (a > numeric_limits<float>::min ())
        {
            af = -expm1 (xf * log1p (-a));
            cf = (af / a) * c;

            ab = -expm1 (xb * log1p (-a));
            cb = (ab / a) * c;
        }
        else
        {
            af = a * xf;
            cf = c * xf;

            ab = a * xb;
            cb = c * xb;
        }
    }
}
Example: Merging Two Overlapping Samples¶
#include <algorithm>
#include <cassert>
#include <cmath>
#include <limits>

using namespace std;

void
mergeOverlappingSamples (
    float a1,
    float c1,  // Opacity and color of first sample
    float a2,
    float c2,  // Opacity and color of second sample
    float& am,
    float& cm) // Opacity and color of merged sample
{
    //
    // This function merges two perfectly overlapping volume or point
    // samples. Given the color and opacity of two samples, it returns
    // the color and opacity of the merged sample.
    //
    // The code below is written to avoid very large rounding errors when
    // the opacity of one or both samples is very small:
    //
    // * The merged opacity must not be computed as 1 - (1-a1) * (1-a2).
    //   If a1 and a2 are less than about half a floating-point epsilon,
    //   the expressions (1-a1) and (1-a2) evaluate to 1.0 exactly, and
    //   the merged opacity becomes 0.0. The error is amplified later in
    //   the calculation of the merged color.
    //
    //   Changing the calculation of the merged opacity to a1 + a2 - a1*a2
    //   avoids the excessive rounding error.
    //
    // * For small x, the logarithm of 1+x is approximately equal to x,
    //   but log(1+x) returns 0 because 1+x evaluates to 1.0 exactly.
    //   This can lead to large errors in the calculation of the merged
    //   color if a1 or a2 is very small.
    //
    //   The math library function log1p(x) returns the logarithm of
    //   1+x, but without attempting to evaluate the expression 1+x
    //   when x is very small.
    //

    a1 = max (0.0f, min (a1, 1.0f));
    a2 = max (0.0f, min (a2, 1.0f));

    am = a1 + a2 - a1 * a2;

    if (a1 == 1 && a2 == 1)
    {
        cm = (c1 + c2) / 2;
    }
    else if (a1 == 1)
    {
        cm = c1;
    }
    else if (a2 == 1)
    {
        cm = c2;
    }
    else
    {
        static const float MAX = numeric_limits<float>::max ();

        float u1 = -log1p (-a1);
        float v1 = (u1 < a1 * MAX) ? u1 / a1 : 1;

        float u2 = -log1p (-a2);
        float v2 = (u2 < a2 * MAX) ? u2 / a2 : 1;

        float u = u1 + u2;
        float w = (u > 1 || am < u * MAX) ? am / u : 1;

        cm = (c1 * v1 + c2 * v2) * w;
    }
}