Separate your filters! Separability, SVD and low-rank approximation of 2D image processing filters

In this blog post, I explore concepts around separable convolutional image filters: how can we check if a 2D filter (like convolution, blur, sharpening, feature detector) is separable, and how to compute separable approximations to any arbitrary 2D filter represented in a numerical / matrix form. I’m not covering any genuinely new research, but think it’s a really cool, fun, visual, interesting, and very practical topic, while being mostly unknown in the computer graphics community.

Throughout this post, we will explore some basic use of Singular Value Decomposition, one of the most common and powerful linear algebra concepts (used in all sub-fields of computer science, from said image processing, through recommendation systems and ML, to ranking web pages in search engines). We will see how it can be used to analyze 2D image processing filters – check if they are separable, approximate the non-separable ones – and will demonstrate how to use those for separable, juicy bokeh.

Note: this post turned out to be a part one of a mini-series, and has a follow-up post here – be sure to check it out!

Post update: Using separable filters for bokeh approximation is not a new idea – Olli Niemitalo pointed out the paper “Fast Bokeh effects using low-rank linear filters” to me, which doesn’t necessarily feature any more details on the technique, but has some valuable timings/performance/quality comparisons to stochastic sampling. If you are interested in using it in practice, check it out – along with my follow-up post.

Intro / motivation

Separable filters are one of the most useful tools in image processing, and they can turn algorithms from “theoretical and too expensive” to practical under the same computational constraints. In other cases, the ability to use a separable filter can be the tipping point that makes some “interactive” (or offline) technique real-time instead.

Let’s say we want to filter an image – sharpen it, blur it, maybe detect edges or other features. Given a 2D image filter of size MxN, computing the filter would require MxN independent, sequential memory accesses (often called “taps”), accompanied by MxN multiply-add operations. For large filters, this can easily get prohibitively expensive, and we get quadratic scaling with the filter’s spatial extent… This is where separable filters can come to the rescue.

If a filter is separable, we can decompose it into a sequence of two 1D filters in different directions (usually horizontal, and then vertical). The first pass filters with an M-tap 1D filter, the second with N taps – M+N operations in total. This requires storing the intermediate results – either in memory, or locally (line buffering, tiled local memory optimizations). While you pay the cost of storing the intermediate results and synchronizing the passes, you get linear rather than quadratic scaling. Therefore, typically for any filter size larger than ~4×4 (depending on the hardware, implementation etc.) using separable filters is going to be significantly faster than the naive, non-separable approach.
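As a quick sanity check of the claim above, here is a minimal numpy/scipy sketch (the image and the 5-tap binomial kernel are arbitrary placeholders of my choosing) showing that two 1D passes reproduce the full 2D convolution of a separable kernel:

```python
import numpy as np
from scipy.ndimage import convolve

rng = np.random.default_rng(0)
image = rng.random((64, 64))

# A separable 5x5 kernel built as an outer product of two 1D kernels.
k1d = np.array([1.0, 4.0, 6.0, 4.0, 1.0])
k1d /= k1d.sum()
k2d = np.outer(k1d, k1d)  # full 2D kernel: 25 taps per pixel

# Naive 2D pass: M*N taps per pixel.
naive = convolve(image, k2d)

# Separable: one vertical pass (M taps), then one horizontal pass (N taps).
separable = convolve(convolve(image, k1d[:, np.newaxis]), k1d[np.newaxis, :])

assert np.allclose(naive, separable)
```

The intermediate image after the first `convolve` call is exactly the “stored intermediate result” the paragraph above mentions.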

But how do we find separable filters?

Simplest analytical filters

The two simplest filters that are the bread and butter of any image processing are the box filter and the Gaussian filter. Both of them are separable, but why?

The box filter seems relatively straightforward – there are two ways of thinking about it: a) if you intuitively think about what happens if you just “smear” a value across the horizontal, and then the vertical axis, you will get a box; b) mathematically, the filter value is defined by two conditions, one per dimension, that are independent:

f(x, y) = select(abs(x) < extent_x && abs(y) < extent_y, v, 0)
f(x, y) = select(abs(x) < extent_x, v, 0) * select(abs(y) < extent_y, 1, 0)

This last product is the equation of separability – our 2D function is a product of a horizontal, and a vertical function.
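This product form is easy to verify numerically – a tiny sketch (the extent of 0.7 and value of 1 are arbitrary choices matching the code used later in the post):

```python
import numpy as np

x, y = np.meshgrid(np.linspace(-1, 1, 50), np.linspace(-1, 1, 50))

# 2D box: both conditions applied at once...
box_2d = np.where(np.logical_and(np.abs(x) < 0.7, np.abs(y) < 0.7), 1.0, 0.0)
# ...equals the product of the two independent 1D conditions.
box_x = np.where(np.abs(x) < 0.7, 1.0, 0.0)
box_y = np.where(np.abs(y) < 0.7, 1.0, 0.0)
assert np.array_equal(box_2d, box_x * box_y)
```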

Gaussian is a more interesting one, and it’s not immediately obvious (“what does the normal distribution have to do with separability?”), so let’s write out the maths.

For simplicity, I will use a non-normalized Gaussian function (usually we need to re-normalize after the discretization anyway) and assume standard deviation of 1/sqrt(2) so that it “disappears” from the denominator.

f(x, y) = exp(-sqrt(x^2 + y^2)^2) // sqrt(x^2 + y^2) is distance of filtered pixel
f(x, y) = exp(-(x^2 + y^2))       // from the filter center.
f(x, y) = exp(-x^2 - y^2)
f(x, y) = exp(-x^2) * exp(-y^2)

The last step follows from the “definition” of exponential (exponential of sum is a product of exponentials), and afterwards we can see that again, our 2D filter is product of two 1D Gaussian filters.

Gaussian filter matrix
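This factorization can also be verified numerically – below is a small sketch checking that the 2D Gaussian equals the outer product of two 1D Gaussians (grid size and range are arbitrary):

```python
import numpy as np

coords = np.linspace(-1, 1, 50)
x, y = np.meshgrid(coords, coords)

gauss_2d = np.exp(-(x * x + y * y))
# Product of two 1D Gaussians, one per axis, expressed as an outer product:
gauss_1d = np.exp(-coords * coords)
assert np.allclose(gauss_2d, np.outer(gauss_1d, gauss_1d))
```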

“Numerical” / arbitrary filters

We started with two trivial, analytical filters, but what do we do if we have an arbitrary filter given to us in a numerical form – just a matrix full of numbers?

We can plot them and analyze them, but how do we check if a given filter is separable? Can we try to separate (maybe at least approximately) any filter?

Can you make this filter separable? Spoiler: yes, it’s just the Gaussian above, but how do we tell?

Linear algebra to the rescue

Let’s rephrase our problem in terms of linear algebra. Given an MxN matrix, we would like to express it as a product of two matrices, Mx1 and 1xN. Let’s re-draw our initial diagram:

There are a few ways of looking at this problem (they are roughly equivalent – but writing them all out should help build some intuition for it):

  • We would like to find an Mx1 column that can be replicated N times with different multipliers, giving us the original matrix,
  • Another way of saying the above is that we would like our MxN filter matrix rows/columns to be linearly dependent,
  • We would like our source matrix to be a matrix with rank of one,
  • If it’s impossible, then we would like to get a low-rank approximation of the matrix MxN.

To solve this problem, we will use one of the most common and useful linear algebra tools, Singular Value Decomposition.

Proving that it gives “good” results (optimal in the least-squares sense, at least for some norms) is beyond the scope of this post, but if you’re curious, check out the proofs on the wikipedia page.

Singular Value Decomposition

Specifically, the singular value decomposition of an m×n real or complex matrix M is a factorization of the form M = UΣV*, where U is an m×m real or complex unitary matrix, Σ is an m×n rectangular diagonal matrix with non-negative real numbers on the diagonal, and V is an n×n real or complex unitary matrix. If M is real, U and V are real orthonormal matrices.


How does SVD help us here? We can look at the final matrix as a sum of many matrix multiplications, each an outer product of a single column of U and row of V, multiplied by the corresponding diagonal entry of the matrix E. The SVD diagonal matrix E contains entries (called “singular values”) on the diagonal only, and they are sorted from the highest value to the lowest. If we look at the final matrix as a sum of different rank 1 matrices weighted by their singular values, the singular values correspond to the proportion of the “energy”/“information” of the original matrix contents.

SVD decomposes any matrix into three matrices. The middle one is purely diagonal, and products of the values on the diagonal with columns/rows of the other matrices create multiple rank 1 matrices. If we add all of them together, we are going to get our source matrix back.

If our original matrix is separable, then it is rank 1, and we will have only a single non-zero singular value. Even if the others are not zero (but significantly smaller), and we truncate all components except for the first one, we will get a separable approximation of the original filter matrix!

Note: SVD is a concept and a type of matrix factorization / decomposition, not an algorithm to compute one. Computing SVD efficiently and in a numerically stable way is generally difficult (requires lots of careful consideration) and I am not going to cover it – but almost every linear algebra package or library has it implemented, and I assume we just use one.
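For example, with numpy’s np.linalg.svd (note that the returned V is already transposed – its rows are the right singular vectors), summing the singular-value-weighted rank 1 outer products recovers the input exactly. A sketch with an arbitrary random matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
m = rng.random((6, 4))

U, E, V = np.linalg.svd(m)
reconstruction = np.zeros_like(m)
for i in range(len(E)):
    # Each term is a rank 1 matrix weighted by its singular value.
    reconstruction += E[i] * np.outer(U[:, i], V[i, :])

assert np.allclose(reconstruction, m)
```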

Simple, separable examples

Let’s start with our two simple examples that we know are separable – the box and Gaussian filters. We will be using Python and numpy / matplotlib. This is just a warm-up, so feel free to skip those two trivial cases and jump to the next section.

import numpy as np
import matplotlib.pyplot as plt
import scipy.ndimage
plt.style.use('seaborn-whitegrid')

x, y = np.meshgrid(np.linspace(-1, 1, 50), np.linspace(-1, 1, 50))
box = np.array(np.logical_and(np.abs(x) < 0.7, np.abs(y) < 0.7),dtype='float64') 
gauss = np.exp(-5 * (x * x + y * y))
plt.matshow(np.hstack((gauss, box)), cmap='plasma')

Let’s see what SVD will do with the box filter matrix and print out all the singular values, and the U column and V row associated with the first singular value:

U, E, V = np.linalg.svd(box)
print(E)
print(U[:, 0])
print(V[0, :])
[34.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00]
[0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 -0.17 -0.17 -0.17 -0.17 -0.17
 -0.17 -0.17 -0.17 -0.17 -0.17 -0.17 -0.17 -0.17 -0.17 -0.17 -0.17 -0.17
 -0.17 -0.17 -0.17 -0.17 -0.17 -0.17 -0.17 -0.17 -0.17 -0.17 -0.17 -0.17
 -0.17 -0.17 -0.17 -0.17 -0.17 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00]
[0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 -0.17 -0.17 -0.17 -0.17 -0.17
 -0.17 -0.17 -0.17 -0.17 -0.17 -0.17 -0.17 -0.17 -0.17 -0.17 -0.17 -0.17
 -0.17 -0.17 -0.17 -0.17 -0.17 -0.17 -0.17 -0.17 -0.17 -0.17 -0.17 -0.17
 -0.17 -0.17 -0.17 -0.17 -0.17 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00]

Great, we get only one non-zero singular value – which means that it is a rank 1 matrix and fully separable.

On the other hand, the fact that our column / row vectors are negative is a bit peculiar (the sign is implementation dependent), but the signs cancel each other out, so we can simply multiply both of them by -1. We can also get rid of the singular value and embed it into our filters, for example by multiplying both by its square root to make them comparable in value:

[-0.00 -0.00 -0.00 -0.00 -0.00 -0.00 -0.00 -0.00 1.00 1.00 1.00 1.00 1.00
 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
 1.00 -0.00 -0.00 -0.00 -0.00 -0.00 -0.00 -0.00 -0.00]
[-0.00 -0.00 -0.00 -0.00 -0.00 -0.00 -0.00 -0.00 1.00 1.00 1.00 1.00 1.00
 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
 1.00 -0.00 -0.00 -0.00 -0.00 -0.00 -0.00 -0.00 -0.00]

Apart from cute -0.00s, we get exactly the separable filter we would expect. We can do the same with the Gaussian filter matrix and arrive at the following:

[0.01 0.01 0.01 0.02 0.03 0.04 0.06 0.08 0.10 0.14 0.17 0.22 0.27 0.33
 0.40 0.47 0.55 0.63 0.70 0.78 0.84 0.90 0.95 0.98 1.00 1.00 0.98 0.95
 0.90 0.84 0.78 0.70 0.63 0.55 0.47 0.40 0.33 0.27 0.22 0.17 0.14 0.10
 0.08 0.06 0.04 0.03 0.02 0.01 0.01 0.01]
[0.01 0.01 0.01 0.02 0.03 0.04 0.06 0.08 0.10 0.14 0.17 0.22 0.27 0.33
 0.40 0.47 0.55 0.63 0.70 0.78 0.84 0.90 0.95 0.98 1.00 1.00 0.98 0.95
 0.90 0.84 0.78 0.70 0.63 0.55 0.47 0.40 0.33 0.27 0.22 0.17 0.14 0.10
 0.08 0.06 0.04 0.03 0.02 0.01 0.01 0.01]

Now if we take an outer product (using np.outer()) of those two vectors, we arrive at filters almost exactly the same as the original (+/- numerical imprecision).

Gaussian and its rank1 approximation – unsurprisingly they look exactly the same.
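Putting the pieces together for the box filter – the sign flip, folding the square root of the singular value into both vectors, and the outer-product reconstruction – could look like this sketch:

```python
import numpy as np

x, y = np.meshgrid(np.linspace(-1, 1, 50), np.linspace(-1, 1, 50))
box = np.array(np.logical_and(np.abs(x) < 0.7, np.abs(y) < 0.7), dtype='float64')

U, E, V = np.linalg.svd(box)
# Flip the (implementation dependent) signs and fold sqrt(E[0]) into both filters.
sign = -1.0 if np.sum(U[:, 0]) < 0 else 1.0
vertical = sign * np.sqrt(E[0]) * U[:, 0]
horizontal = sign * np.sqrt(E[0]) * V[0, :]

# The outer product recovers the original separable filter.
assert np.allclose(np.outer(vertical, horizontal), box)
```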

Non-separable filters

“D’oh, you took filters that you know are separable, fed them through some complicated linear algebra machinery and verified that yes, they are separable – what a waste of time!” you might think – and you are right, it was a demonstration of the concept, but so far of not much practical value. So let’s look at a few more interesting filters – ones that we know are not separable, but maybe are almost separable?

I picked four examples: circular, hexagon, exponential/Laplace distribution, and a difference of Gaussians. Circular and hexagon are interesting as they are very useful for bokeh; exponential/Laplace is somewhat similar to long-tailed BRDFs like GGX and can be used for some “glare”/veil type of effects; and the difference of Gaussians (approximating the Laplacian of Gaussian) is both a commonly used band-pass filter and usable for a busy, “donut”-type bokeh.

circle = np.array(x*x+y*y < 0.8, dtype='float64')
exponential = np.exp(-3 * np.sqrt(x*x+y*y))

hexagon = np.array((x + y < 0.9 * np.sqrt(2)) & (x + y > -0.9 * np.sqrt(2)) &
                   (x - y < 0.9 * np.sqrt(2)) & (-x + y < 0.9 * np.sqrt(2)) &
                   (x < 0.9) & (x > -0.9) & (y < 0.9) & (y > -0.9), dtype='float64')

difference_of_gaussians = np.exp(-5 * (x*x+y*y))-np.exp(-6 * (x*x+y*y))
difference_of_gaussians /= np.max(difference_of_gaussians)

plt.matshow(np.vstack((np.hstack((circle, hexagon)), np.hstack((exponential, difference_of_gaussians)))), cmap='plasma')

Now, if we try to do a rank 1 approximation by taking just the first singular value, we end up with results looking like this:

They look terrible, nothing like the original ones! Ok, so now we know they are not separable, which concludes my blog post…

I’m obviously kidding, let’s explore how “good” are those approximations and how to make them better. 🙂

Low~ish rank approximations

I mentioned that singular values represent percentage of “energy” or “information” in the original filter. If the filter is separable, all energy is in the first singular value. If it’s not, then it will be distributed among multiple singular values.

We can plot the magnitude of the singular values (I normalized all the plots so they integrate to 1, like a density function, and don’t depend on how bright a filter is) to see what is going on. Here are two plots: the singular values themselves, and their “cumulative” sum (reaching 1 means 100% of the original filter preserved). There are 50 singular values in a 50×50 matrix, but after some point they reach zero, so I truncated the plots.
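A sketch of how such plots can be produced for one of the filters (using the normalization choice described above; the truncation to 20 values is arbitrary):

```python
import numpy as np
import matplotlib.pyplot as plt

x, y = np.meshgrid(np.linspace(-1, 1, 50), np.linspace(-1, 1, 50))
circle = np.array(x * x + y * y < 0.8, dtype='float64')

# Singular values only, normalized to sum to 1 (brightness independent).
E = np.linalg.svd(circle, compute_uv=False)
E /= np.sum(E)

plt.plot(E[:20], label='normalized singular values')
plt.plot(np.cumsum(E)[:20], label='cumulative sum')
plt.legend()
plt.show()
```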

The second plot is very informative and tells us what we can expect from those low rank approximations.

To fully reconstruct the original filter, for the difference of Gaussians we need just two rank 1 matrices (this is not surprising at all – a DoG is a difference of two separable / rank 1 Gaussians), and for the exponential/Laplace we get very close with just 2-3 rank 1 matrices. On the other hand, for the circular and hexagonal filters we need many more: just to reach 75% accuracy we need 3-4 components, and 14 to get a 100% accurate representation…

What do such higher rank approximations look like? You can think of them as a sum of N independent separable filters (independent vertical + horizontal passes). In practice, you would probably implement them as a single horizontal pass that produces N outputs, followed by a vertical one that consumes those, multiplies them with the appropriate vertical filters, and produces a single output image.
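This multi-pass structure can be sketched as plain convolutions (without the tonemapping used later in the bokeh section; boundary handling is left at scipy’s default, and `rank_n_filter` is a name of my choosing). With the rank equal to the full matrix size it reproduces the original 2D filter exactly:

```python
import numpy as np
from scipy.ndimage import convolve

def rank_n_filter(image, kernel, rank):
    # Decompose the 2D kernel; each (U column, V row) pair is one separable filter.
    U, E, V = np.linalg.svd(kernel)
    # "Horizontal" pass: N intermediate images, one per kept component.
    intermediates = [convolve(image, V[i, np.newaxis, :]) for i in range(rank)]
    # "Vertical" pass: filter each intermediate and accumulate, weighted by E[i].
    result = np.zeros_like(image)
    for i in range(rank):
        result += E[i] * convolve(intermediates[i], U[:, i, np.newaxis])
    return result
```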

Note 1: In general, the “sharper” (like the immediate 0-1 transition of a hex or a circle) or less symmetric a filter is, the more difficult it will be to approximate. In a way, this is similar to other common decompositions, like the Fourier transform.

Note 2: You might wonder – if a DoG can be expressed as a sum of two trivially separable Gaussians, why SVD rank 1 approximation doesn’t look like one of the Gaussians? The answer is that with just a single Gaussian, the resulting filter will be more different from the source than our separable weird 4-dot pattern. SVD decomposes so that each progressive low rank decomposition is optimal (in L2 sense), in this case it means adding two non-intuitive, “weird” separable patterns together, where the second one contributes 4x less than the first one.

Low-rank approximations in practice

A rank N approximation of an MxM filter has a performance cost of O(2M*N), but an additional memory cost of N times the original image storage (if not doing any common optimizations like line buffering or local/tiled storage). If we need to approximate a 50×50 filter, then even taking 10 components could be “theoretically” worth it; and for much larger filters it would also be worth it in practice (and still faster and less problematic than FFT-based approaches). One thing worth noting is that in the case of the presented filters, we reach 100% exact accuracy with just 14 components (as opposed to the full 50 singular values!).

But let’s look at the more practical 1, 2, 3, and 4 component approximations.

def low_rank_approx(m, rank = 1):
  U,E,V = np.linalg.svd(m)
  mn = np.zeros_like(m)
  score = 0.0
  for i in range(rank):
    mn += E[i] * np.outer(U[:,i], V[i,:])
    score += E[i]
  print('Approximation percentage, ', score / np.sum(E))
  return mn

Even with those maximum four components, we see a clear “diminishing returns” effect: the differences between 3 and 4 components are smaller than between 2 and 3. This is not surprising, since our singular values are sorted in descending order, so every next component adds less information. Looking at the results, they might actually be not too bad. We see some “blockiness”, and worse, negative values (“ringing”) – but how would that perform on real images?

Visual effect of low-rank approximation – boookeh!

To demonstrate how well/badly this works in practice, I will use the filters to blur an image (from the Kodak dataset). On LDR images the effects are not very visible, so I will convert it to fake HDR by applying a strong gamma function of 7, doing the filtering, and applying an inverse gamma of 1/7 to emulate tonemapping. It’s not the same as HDR, but it does the job. 🙂 This will also greatly amplify any of the filter problems/artifacts, which is what we want for the analysis – with approximations we often want to understand not just the average, but also the worst case.

Left: source image, middle: image blurred with a circular kernel and no gamma, right: the same image blurred with the same kernel, but in fake-HDR space to emphasize the filter shape.

def fake_tonemap(x):
  return np.power(np.clip(x, 0, 1), 1.0/7.0)
def fake_inv_tonemap(x):
  return np.power(np.clip(x, 0, 1), 7.0)
def filter(img, x):
  res = np.zeros_like(img)
  for c in range(3):
    # Note that we normalize the filter to preserve the image brightness.
    res[:,:,c] = fake_tonemap(scipy.ndimage.convolve(fake_inv_tonemap(img[:,:,c]), x/np.sum(x)))
  return res
def low_rank_filter(img, x, rank = 1):
  final_res = np.zeros_like(img)
  U, E, V = np.linalg.svd(x / np.sum(x))
  for c in range(3):
    tmd = fake_inv_tonemap(img[:,:,c])
    for i in range(rank):
      final_res[:,:,c] += E[i] * scipy.ndimage.convolve(scipy.ndimage.convolve(tmd, U[:, i, np.newaxis]), np.transpose(V[i,:, np.newaxis]))
    final_res[:,:,c] = fake_tonemap(final_res[:,:,c])
  return final_res

Without further ado, here are the results.

Unsurprisingly, for the difference of Gaussians and the exponential distribution, two components are enough to get a filtered result identical to the reference, or very close to it.

For the hexagon and circle, even 4 components show some of the negative “ringing” artifacts (bottom of the image), and one needs 8 to get an artifact-free approximation. At the same time, we are using extreme gamma to show those shapes and emphasize the artifacts. Would this matter in practice, and would such approximations be useful? As usual, it depends on the context of the filter, image content, further processing etc. I will put together some thoughts / recommendations in the Conclusions section.

A careful reader might ask – why not clamp the filters to be non-negative? The reason is that low rank approximations will often locally “overshoot” and add more energy in certain regions, so the next rank approximation component needs to remove it. And since we filter separably and accumulate the filtered results (the whole point of low rank approximation!), we don’t know if the sum will be locally negative or not. We could try to find some different, guaranteed non-negative decomposition, but that is an NP-hard problem and only approximations / heuristics / optimization-based solutions exist. Note: I actually wrote a blog post on using optimization to solve (or at least minimize) those issues as a follow-up, be sure to check it out! 🙂

Limitation – only horizontal/vertical separability

Before I conclude my post, I wanted to touch on two related topics – one being a disadvantage of the SVD-based decomposition, the other a beneficial side-effect. Starting with the disadvantage – low-rank approximation in this context is limited to filters represented as matrices and strictly horizontal-vertical separability. This is quite limiting, as some real, practical, and important applications are not covered by it.

Let’s analyze two use-cases – oriented, orthogonal separability, and non-orthogonal separability. Anisotropic Gaussians are trivially separable with two orthogonal filters – one along the principal direction, and one perpendicular to it. Similarly, skewed box blur filters are separable, but in this case we need to blur in two non-orthogonal directions. (Note: this separability is the basis of the classic fast hexagonal blur technique developed by John White and Colin Barre-Brisebois.)

Such filtering can be done super-fast on the GPU, often almost as fast as the horizontal-vertical separation (depending on whether one uses the texture sampler or compute shaders / local memory), yet the SVD factorization will not be able to find it… So let’s look at how the low rank horizontal-vertical approximations and the accumulation of singular values look.

From left to right: original, rank1, rank2, rank4, rank8.

Overall those approximations are pretty bad if you consider that the filters are trivially separable in those different directions. You could significantly improve them by finding the principal components of the filter covariance and rotating the filter to be axis aligned, but this is an approximation, requires filter resampling, and complicates the whole process… Something to always keep in mind – automatic linear algebra machinery (or any ML) is not a replacement for careful engineering and using domain knowledge. 🙂

Bonus – filter denoising

My post is already a bit too long, so I will only hint at an interesting side-effect / advantage of SVD – it is mostly robust to noise, and can be used for filter denoising. If we take a pretty noisy Gaussian (Gaussian maximum of 1.0, noise standard deviation of 0.1), and do a simple rank 1 decomposition, we get a significantly cleaner representation of it (albeit with horizontal and vertical “streaks”).

This denoising efficacy will depend on the filter type; for example, if we take our original circular filter (which, as we saw, is pretty difficult to approximate), the results are not going to be as good – the further components that would normally add more detail will also add more noise back.

Approximations of noisy circular filter – original, rank1, rank2, rank4, rank8. The more components we add, the more noise comes back.

Therefore, there exists a point at which higher rank approximations only add noise, as opposed to adding back information/details. This is closely related to the bias-variance trade-off concept in machine learning, and we can see this behavior on the following plot, showing the normalized sum of squared errors of consecutive rank approximations of the noisy circular filter (the error is computed against the ground truth, non-noisy filter).

As we add successive ranks of approximation to the noisy filter, we initially observe the error decreasing, but then it starts to increase, as we no longer add information and original structure, only noise.
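This experiment can be reproduced with a sketch along these lines (the noise standard deviation of 0.1 matches the text above; the random seed is an arbitrary choice, so the exact cutoff rank may vary):

```python
import numpy as np

x, y = np.meshgrid(np.linspace(-1, 1, 50), np.linspace(-1, 1, 50))
clean = np.array(x * x + y * y < 0.8, dtype='float64')
rng = np.random.default_rng(0)
noisy = clean + rng.normal(scale=0.1, size=clean.shape)

U, E, V = np.linalg.svd(noisy)
approx = np.zeros_like(noisy)
errors = []
for i in range(len(E)):
    approx += E[i] * np.outer(U[:, i], V[i, :])
    # Error is measured against the clean, ground-truth filter.
    errors.append(np.sum((approx - clean) ** 2))
# The error first drops, then rises again as later components add mostly noise.
```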

It is interesting to see on this plot that the cutoff point in this case is around rank 12-13 – while 14 components were enough to perfectly reconstruct the non-noisy version of the filter.

Left: clean circular filter, middle: circular filter with noise added, right: rank 12 approximation of the noisy circular filter – notice the perfect circle shape with significantly less noise than the middle one!

Overall, this property of SVD is used in some denoising algorithms (however, the decomposed matrices are not 2D patches of an image; instead they are usually MxN matrices where M is a flattened image patch, representing all its pixels as a vector, and N corresponds to N different, often neighboring or similar image patches), which is a super interesting technique, but way outside the topic of this post.


To conclude, in this blog post we have looked at image filter/kernel separability – its computational benefits, simple and analytical examples, and shown a way of analyzing if a filter can be converted into a separable (rank 1) representation through Singular Value Decomposition.

As we encountered some examples of non-separable filters, we analyzed what higher rank approximations are, how we can compute them through a sum of many separable filters, what they look like, and whether we can apply them for image filtering, using bokeh as an example.

I love the scientific and analytical approach, but I’m also an engineer at heart, so I cannot leave such a blog post without answering – is it practical? When would someone want to use it?

My take on that is – this is just another tool in your toolbox, but it has some immediate, practical uses. Those include analyzing and understanding your data, checking if some non-trivial filters can be separable, whether they can be approximated despite noise / numerical imprecision, and finally – evaluating how good the approximation is.

For non-separable filters, low rank approximations with rank higher than one can often be pretty good with just a few components, and I think they are practical and usable. For example, our rank 4 approximation of the bokeh filters was not perfect (artifacts), but it is computationally way cheaper than the non-separable filter, parallelizes perfectly/trivially, and is very simple implementation-wise. In the past I referenced a very smart solution from Olli Niemitalo that approximates circular bokeh using complex phasors – and the low-rank approximation in the real domain covered in this blog post is a simpler, interesting alternative. Similarly, it might not produce a perfect hexagonal blur like the one constructed from multiple skewed box filters, but the cost would be similar, and the implementation is so much simpler that it’s worth giving it a try.

I hope that this post was inspiring for some future explorations, understanding of the problem of separability, and you will be able to find some interesting, practical use-cases!

Posted in Code / Graphics | 10 Comments

Analyze your own activity data using Google Takeout – music listening stats example

The goal of this post is to show how to download your own data stored and used by internet services, and to demonstrate step-by-step how to use it to generate personalized stats / charts like the ones below, using colab, Python, pandas, and matplotlib.

Disclaimer: I wrote this post as a private person, not representing my employer in any way – and from a 100% outside perspective.

Intro / motivation

When we decide to use one particular internet service over the other, it is based on many factors and compromises – we might love our choices, but still find ourselves missing certain features.

Seeing “year in review” from Spotify shared by friends made me envious. Google Play Music that I personally use doesn’t have such a cool feature and didn’t give me an opportunity to ponder and reflect on the music I listened to in the past year (music is a very important part of my life).

Luckily, the internet is changing, and users, governments, as well as internet giants are becoming more and more aware of what it means for a user to “own their data” and request it, and of the need for increased transparency (and control/agency over the data we generate while using those services). Thanks to those changes (materializing in improved company terms of service, and regulatory acts like GDPR or the California Consumer Privacy Act), on most websites and services we can request or simply download the data they collect to operate.

Google offers such a possibility in a very convenient form (just a few clicks), and I used my own music playing activity data together with Python, pyplot, and pandas to create personalized “year in review” chart(s); given how cool and powerful, yet easy it is, I decided to share it in a short blog post.

This blog post comes with code in the form of a colab, and you should be able to directly analyze your own data there, but more importantly, I hope it will inspire you to check out what other kinds of user data you might have a creative use for!

Downloading the data

The first step is trivial – just visit

Gotcha no 1: The number of different Google services is impressive (if not a bit overwhelming), and it took me a while to realize that the data I am looking for is not under “Google Play Music”, but under “My Activity”. Therefore I suggest clicking “Deselect all”, and then selecting “My Activity”. Under it, click on “All activity data included” and filter to just the Google Play Music “My Activity” content:

The next step here would be to change the data format from HTML to JSON – which is super easy to load and parse with any data analysis framework or environment.

Finally, proceed to download your data as “one-time archive” and a format that works well for you (I picked a zip archive).

I got a “scary” message that it can take up to a few days to get a download link to this archive, but then got it in less than a minute. YMMV – I suspect it depends on how much data you are requesting at a given time.

Downloaded data

Inside the downloaded archive, my data was in the following file:

Takeout\My Activity\Google Play Music\MyActivity.json

Having a quick look at this JSON file, it looks like it contains all I need! For every activity/interaction with GPM, there is an entry with the title of the activity (e.g. listened to, searched for), a description (which is in fact the artist name), time, as well as some metadata:

One interesting thing is that “subtitles” contain information about the “activity” that GPM associated with the given time, the temperature, and the weather outside. This kind of data was introduced between 2019-03-21 (it is not included in my entries on that date or before) and 2019-03-23 (the first entry with such information). The activity isn’t super accurate – most of them are “Leisure” for me, often even when I was at work or commuting – but some are quite interesting, e.g. it often got “Shopping” right. Anyway, it’s more of a curiosity and I haven’t found a great use for it myself, but maybe you will. 🙂

Gotcha no 2: all the times there seem to be in GMT – at least for me. Therefore, if you listened to music in a different timezone, you will have to convert it. I simplified my analysis to use Pacific time. This results in some incorrect data – e.g. I spent a month this year in Europe, and music that I listened to during that time will be marked incorrectly as played during some “weird hours”.
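For the conversion itself, pandas handles timezones conveniently – a small sketch (the single-entry frame below is a hypothetical stand-in for the real MyActivity.json contents; the 'time' column name matches the JSON, and 'US/Pacific' is my choice of target zone):

```python
import pandas as pd

# Hypothetical single entry mimicking the MyActivity.json 'time' field (GMT/UTC).
df = pd.DataFrame({'time': ['2019-12-31T23:33:55.865Z']})
# Parse as UTC, then convert to Pacific time before bucketing by hour or weekday.
local = pd.to_datetime(df['time'], utc=True).dt.tz_convert('US/Pacific')
print(local.dt.hour[0])  # 15 – an 11:33 PM GMT play becomes 3:33 PM PST
```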

Analyzing the data

Ok, finally it’s time for something more interesting – doing data analysis on the downloaded data. I am going to use Google Colaboratory, but feel free to use local Jupyter notebooks or even commandline Python.

Code I used is available here.

Note: this is the first time I used pandas, so probably some pandas experts and data scientists will cringe at my code, but hey, it gets the job done. 🙂

The first step to analyze a local file with a colab is to upload it to the runtime using the tab on the left of the colab window – see the screenshot below. I have uploaded just the MyActivity.json without any of the folder structure. If you use local scripts, this step is unnecessary, as you can specify a full local path.

Gotcha no 3: Note that uploading files doesn’t associate them with your notebook, account used to open colab or anything like that – just with current session in the current runtime, so anytime you restart your session, you will have to reupload the file(s).

Time to code! We will start by loading some Python libraries and setting a slightly prettier plot style:

import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
plt.style.use('seaborn-whitegrid')

Then we can read the JSON file as a Pandas dataframe – I think of it as a table in SQL, which is probably an oversimplification, but it worked for my simple use case. This step worked for me right away and required no data preprocessing:

df = pd.read_json('MyActivity.json')
                         title description                      time
0  Listened to Jynweythek Ylow  Aphex Twin  2019-12-31T23:33:55.865Z
1       Listened to Untitled 3  Aphex Twin  2019-12-31T23:26:23.465Z
2                Listened to 4  Aphex Twin  2019-12-31T23:22:46.829Z
3               Listened to #3  Aphex Twin  2019-12-31T23:15:02.427Z
4           Listened to Nanou2  Aphex Twin  2019-12-31T23:11:36.944Z
5  Listened to Alberto Balsalm  Aphex Twin  2019-12-31T23:06:25.999Z
6             Listened to Xtal  Aphex Twin  2019-12-31T23:01:32.538Z
7       Listened to Avril 14th  Aphex Twin  2019-12-31T22:59:27.321Z
8        Listened to Fingerbib  Aphex Twin  2019-12-31T22:55:37.856Z
9              Listened to #17  Aphex Twin  2019-12-31T22:53:32.866Z

This is great! Now a bit of clean up. We want to apply the following:

  • For easier further coding, rename the unhelpful “description” to “artist”,
  • Convert the string containing date and time to pandas datetime for easier operations and filtering,
  • Convert the timezone to the one you want to use as a reference – I used a single one (US Pacific Time), but nothing prevents you from using different ones for different entries, e.g. based on your locations, if you happen to have them extracted from other internet services,
  • Filter only the “Listened to x/y/z” activities, as I am interested in analyzing what I was listening to, not e.g. what I searched for.
df.rename(columns = {'description':'artist'}, inplace=True)
df['time'] = df['time'].apply(pd.to_datetime)
df['time'] = df['time'].dt.tz_convert('America/Los_Angeles')
listened_activities = df.loc[df['title'].str.startswith('Listened to')]

Now we have a dataframe containing information about all of our listening history. We can do quite a lot with it, but I didn’t care about specific tracks (analyzing those could be useful to someone), so instead I went on to a) filter only activities that happened in 2019, and b) group all events of listening to a single artist.

For step b), let’s start with just the annual totals. We group all activities and use the size of each group as a new column (total play count).

listened_activities_2019 = listened_activities.loc[(listened_activities['time'] >= '2019-01-01') & (listened_activities['time'] < '2020-01-01')]
artist_totals = listened_activities_2019.groupby('artist').size().reset_index(name='Play counts')
top = artist_totals.sort_values('Play counts', ascending=False).set_index('artist')[0:18]
                       Play counts
The Chemical Brothers          151
Rancid                         144
AFI                             98
blink-182                       89
Show Me The Body                69
The Libertines                  68
Ladytron                        64
Flying Lotus                    64
Boards Of Canada                62
Operation Ivy                   62
808 State                       62
Punish Yourself                 61
Lone                            60
Mest                            58
Refused                         54
Aphex Twin                      54
Dorian Electra                  53
Martyn                          52

This is exactly what I needed. It might not be that surprising (maybe a little bit – I expected to see some other artists there, but they came soon after; I am a “long tail” type of listener), but cool and useful nevertheless.

Now let’s plot it with matplotlib. I am not pasting all the code here, as there is a bit of boilerplate to make the plots prettier, so feel free to check the colab directly. Here is the resulting plot:

From this starting point, let your own creativity and imagination be your guide. Some ideas that I tried and had fun with:

  • hourly or weekly listening activity histograms,
  • monthly listening totals / histograms,
  • checking whether I tend to listen to different music in the evening versus the morning (I do! Very different genres to wake up vs unwind 🙂 ),
  • weekly time distributions.

All of those operations are very simple and mostly intuitive in Python + pandas – you can extract things like hour, day of the week, month etc. directly from the datetime column and then use simple combinations of conditional statements and/or groupings.
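For instance, here is a hedged sketch of such aggregations (the helper names are my own), assuming a dataframe with a tz-aware 'time' column like the one built above:

```python
import pandas as pd

def hourly_counts(df):
    # Play counts per hour of the (local) day, 0..23.
    return df.groupby(df['time'].dt.hour).size()

def weekday_counts(df):
    # Play counts per day of the week, Monday=0 .. Sunday=6.
    return df.groupby(df['time'].dt.dayofweek).size()
```

From there, a simple `.plot(kind='bar')` on the result gives a quick histogram.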

Some of those queries/ideas that I ran for my data are aggregated in a single plot below:


Downloading and analyzing my own data for Google Play Music was really easy and super fun. 🙂

I have to praise here all of the following:

  • The user-friendly and simple Google data / activity downloading process, which produces data in a very readable/parseable format,
  • Google Colab being an immediate, interactive environment that makes it very easy to run code on any platform and share it easily,
  • Pandas as data analysis framework – I was skeptical and initially would have preferred something SQL-like, but it was really easy to learn for such a basic use case (with help of the documentation and Stackoverflow 🙂 ),
  • As always, how powerful numpy and matplotlib are (despite matplotlib’s quite ugly-looking defaults and the tricks required for simple things like centering entries on label “ticks”) – but those are already industry standards, so I won’t add more compliments here.

I hope this blog post inspired you to download and tinker with your own activity data. Hopefully it will be not just fun, but also informative to understand what is stored – you can even judge for yourself what you think is really needed for those services to run well. The activity data I downloaded from Google Play Music looks like a reasonable bare minimum that is useful to drive the recommendations.

To conclude, I think that anyone with even the most basic coding skills can have tons of fun analyzing their own activity data – combining information to create personally meaningful stats, exploring own trends and habits, extending functionalities, or adding (though only for yourself and offline) completely new features for commercial services written by large teams.


Local linear models and guided filtering – an alternative to bilateral filter


In this blog post I am going to describe an alternative tool for the graphics and image processing programmers’ toolbox – guided filtering.

Guided filtering is a really handy tool that I learned about from my coworkers, and I was surprised that it is not more popular, especially for real time applications.

As in many – if not most – of my blog posts, I am not going to describe original research of mine, but try to educate, explain some ideas conceived and published by others, focusing on practical application, their pros and cons – and hopefully inspire others.

Given the recent advances in real time rendering and the popularity of “stochastic” techniques – both screen-space (SSAO, screen-space reflections) and world-space ones like ray tracing – there will be more and more need for efficient, robust, detail-preserving denoising of different image signal components. The most commonly used (joint) bilateral filter is just one option – there are alternatives that are as fast (if not faster!), and I am going to describe one of them.

This blog post comes with code in the form of a Google Colab and a Python / numpy implementation. I find it really useful that anyone can just run it in their browser and start modifying and playing with it. I highly recommend this over lonely repos that might not even compile! 🙂

Problem statement

I am going to use the problem of screen-space ambient occlusion as the running example. It is an approximation of some global illumination effects (visibility, shadowing of indirect illumination) using screen-space depth information. I will not go deeply into how it is computed (I mentioned a robust, fast and simple technique in an older blog post of mine, in the context of temporal techniques), but to meet performance constraints, this technique is computed:

  • Usually at lower resolution,
  • Using randomized, stochastic sampling patterns, producing noisy results.

Here is an example of raw, unfiltered SSAO output on a Crytek Sponza scene:

Because of the small sample count, the results are very noisy, and there are only a few discrete levels of signal (color banding). This image is also half resolution (a quarter of the pixels) compared to the full image.

We could just blur it to get rid of the noise with a Gaussian filter, but the effect would be… well, blurry:

The main problem with this blurriness – apart from the loss of any kind of detail – is that it doesn’t preserve the original geometric edges, and when applied to a real scene, would create halos.

The most common solution to it is joint bilateral filtering.

(Joint) bilateral filter

The bilateral filter is an image filter that varies sample weights based not only on the image-space distance in pixels, but also on the similarity between color samples.

In an equation form this could be written as:

y = Sum(w(x, xij) * xij) / Sum(w(x, xij))
w(x, xij) = wx(i,j) * wy(x, xij)

Where wx is the spatial weight, and wy is the signal similarity weight. The similarity weight can be any kind of kernel, but in practice the most commonly used one is a Gaussian, exp(-distance^2 / sigma^2). Note the sigma parameter here – it is something that needs to be provided by the user, or generally tuned.

Bilateral filters are one of the cornerstones of image processing techniques, providing basis for denoising, detail enhancement, tonemapping etc.

Joint bilateral filter is a form of bilateral filter that uses one signal to create weights for filtering of another signal.

It can be used anytime we have some robust, noise-free signal along with an unreliable, noisy one, and we expect them to be at least partially correlated. One of the earliest applications was denoising low-light images using a second picture of the same scene, taken with a flash.

Joint bilateral filtering can be written in an equation form as:

w(x, xij) = wx(i,j) * wy(z, zij)

Note that here the similarity weight ignores the filtered signal x, and uses a different signal z. I will call this signal a guide signal / guide image – even outside of the context of guided filtering.

In the case of SSAO, we can simply use the scene linear depth (distance of a rendered pixel from the camera plane).

It is most commonly used because it comes for “free” in a modern renderer, is noise-free, clearly separates different objects, doesn’t contain details (like textures) that we wouldn’t want to correlate with the filtered signal, and is piece-wise linear.

On the other hand, nothing really prevents us from using any other guide signal / information – like albedo, lighting color, geometric normals etc. Using geometric or detail normals is very often used for filtering of the screen space reflections – as reflections are surface orientation dependent and we wouldn’t want to blur different reflected objects. But in this post I will focus on the simplest case and keep using just the depth.
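To make this concrete, here is a brute-force (non-separable) joint bilateral filter sketch in numpy – the function name, parameters and sigma values are my own assumptions and would need tuning per use case:

```python
import numpy as np

def joint_bilateral_filter(src, guide, radius=4, sigma_spatial=2.0, sigma_guide=0.1):
    # Brute-force NxN joint bilateral filter: spatial Gaussian weight
    # times guide-similarity Gaussian weight, renormalized per pixel.
    h, w = src.shape
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    w_spatial = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_spatial**2))
    src_p = np.pad(src, radius, mode='edge')
    guide_p = np.pad(guide, radius, mode='edge')
    out = np.zeros_like(src)
    for y in range(h):
        for x in range(w):
            s = src_p[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            g = guide_p[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            # Similarity of the guide neighborhood to the center pixel.
            w_guide = np.exp(-((g - guide[y, x])**2) / (2.0 * sigma_guide**2))
            wgt = w_spatial * w_guide
            out[y, x] = np.sum(wgt * s) / np.sum(wgt)
    return out
```

With a depth buffer as `guide` and raw SSAO as `src`, this smooths within surfaces while samples across depth discontinuities get near-zero weight.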

Let’s have a look at simple joint bilateral filter using depth:

Clearly, this is so much better! All the sharp edges are preserved, and the efficacy of denoising seems quite similar.

Bilateral upsampling

Ok, so we got the half resolution signal quite nicely filtered. But it is half resolution… If we just bilinearly upsample it, the image will just become blurrier.

If we zoom in, we can see it in more details:

Left: 2x point upsampling, Right: 2x bilinear upsampling.

In practice, this results in ugly half-resolution edge artifacts and small dark/bright jaggy, aliased “halos” around objects.

Luckily, the joint bilateral filtering framework provides us with one solution to this problem.

Remember how the joint bilateral filter takes a data term into account when computing the weights? We can do exactly the same for the upsampling. (Edit: a second publication, also from Microsoft Research, covered this idea the same year!)

When upsampling the image, multiply the spatial interpolation weights by the data similarity term (between the low resolution guide image and the high resolution guide image), renormalize them, and add the weighted contributions.

This results in much better image sharpness and edge preservation, and other than the computational cost, there is no reason not to do it.

Left: 2x point upsampling, Middle: 2x bilinear upsampling, Right: 2x joint bilateral upsampling.

I highly recommend this presentation about how to make it practical for things like SSAO, but also other applications that don’t require any bilateral denoising/smoothing, like particle and transparency rendering.

As a side note – performing such bilateral upscaling N times – if we have N different effects applied in different places – might be costly at full resolution. But you can use the bilinear sampler and precompute UV offsets once, ahead of time to approximate bilateral upsampling. I mentioned this idea in an older blog post of mine.
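Putting the upsampling recipe above into code, here is a simple loop-based sketch of 2x joint bilateral upsampling (the name and similarity sigma are my own assumptions, not code from the referenced presentation):

```python
import numpy as np

def joint_bilateral_upsample(src_lo, guide_lo, guide_hi, sigma_guide=0.1):
    # For each high-res pixel, weight the four nearest low-res samples by
    # (bilinear weight) * (guide similarity), then renormalize.
    h, w = guide_hi.shape
    out = np.zeros((h, w), dtype=np.float64)
    for y in range(h):
        for x in range(w):
            # Position in low-res space (pixel-center aligned, 2x factor).
            fy = (y + 0.5) / 2.0 - 0.5
            fx = (x + 0.5) / 2.0 - 0.5
            y0, x0 = int(np.floor(fy)), int(np.floor(fx))
            ty, tx = fy - y0, fx - x0
            acc, wsum = 0.0, 0.0
            for dy, wy in ((0, 1.0 - ty), (1, ty)):
                for dx, wx in ((0, 1.0 - tx), (1, tx)):
                    yy = min(max(y0 + dy, 0), src_lo.shape[0] - 1)
                    xx = min(max(x0 + dx, 0), src_lo.shape[1] - 1)
                    sim = np.exp(-((guide_lo[yy, xx] - guide_hi[y, x])**2)
                                 / (2.0 * sigma_guide**2))
                    wgt = wy * wx * sim + 1e-8  # epsilon avoids div by zero
                    acc += wgt * src_lo[yy, xx]
                    wsum += wgt
            out[y, x] = acc / wsum
    return out
```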

To summarize the described workflow so far:

  1. Create a low resolution representation of some “robust” signal like depth.
  2. Compute low resolution representation of a noisy signal.
  3. Perform joint bilateral filtering passes between the noisy signal and the clean low resolution signal.
  4. Use a low and high resolution representation of the clean signal for joint bilateral upsampling.

In theory, we could do joint bilateral filtering of the low resolution signal directly at higher resolution, but in practice it is not very common – mostly because of the performance cost and option to use separable filtering directly in lower resolution.

Problems with bilateral filtering

Let me be clear – bilateral filtering is great, works robustly, and is the recommended approach for many problems.

But at the same time, like every technique, it comes with some shortcomings, and it is worth considering alternatives – so I am going to list some problems.

Lack of separability

One of the caveats of the bilateral filter is that, following its definition, it is clearly not separable – the weight of each filter sample, even off the x and y axes, depends on its similarity with the filter center.

Unlike with a Gaussian, a pass over the x dimension followed by a pass over the y dimension (to reduce the number of samples from N^2 to 2N) is not mathematically equivalent… but in practice this is what we do most of the time in real time graphics.

The results are almost indistinguishable:

As expected, there have to be some artifacts:

Left: Full NxN bilateral filter. Right: Separable N+N bilateral filter.

Notice the banding on the top of the image, as well as streaks around the lion’s head.

Still, it’s significantly faster and in practice the presence of artifacts is not considered a showstopper. 

I will just briefly mention an alternative that slightly reduces those artifacts – running the separable filter N times, i.e. iterating over the image x, y, x, y etc. But below is a diagram of a clean signal pattern where even that will not improve things.

Example pattern of clean signal that separable joint bilateral filtering will ignore completely. Neither horizontal, nor vertical pass can “see” the elements on the diagonal, while a non-separable version would read them directly.

Edit after a comment: Please note that there are many other different ways of accelerating the bilateral filter. They usually involve subsampling in space, either in one sparse pass, or in multiple progressively more dilated ones.

Tuning sensitivity / scale dependence

Bilateral filtering has a necessary tuning parameter, “sigma”. This makes it inherently scale dependent. Areas where the guide signal difference spans multiple sigmas won’t get almost any smoothing, while areas with much smaller differences will get over-smoothed. I will show this problem with an example 1D signal.

Top: clean 1D guide signal, Bottom: Noisy 1D signal to filter.

We need to pick some sigma, and because of the signal variations here, no choice is going to be perfect. Too large a sigma is going to cause over-blurring of the small signal (and rounding of the corners), while too small a sigma will under-filter some areas.

Top: Too small sigma causes under-filtering of the signal and some staircase/gradient reversal artifacts, bottom: too large sigma over-blurs the corners, and over-smooths the smaller scale details.

Staircase / piecewise constant behavior / splotches

While bilateral filtering behaves perfectly when the guide/clean signal is piece-wise constant, it can produce an undesired look when the clean signal is a gradient / linear ramp – or generally piece-wise linear. In such cases it will produce either a piece-wise-constant look (very often referred to as “cartoony”), gradient reversals, or staircase-type artifacts.

Top: clean signal, middle: noisy signal, bottom: joint bilateral filtered signal – notice some local gradient reversals / staircase artifacts.

Performance cost

Finally, the bilateral filter is not that cheap. Each sample weight requires computation of the kernel function (most often exponential), and there is a need for normalization at the end.

Doing bilateral denoising followed by bilateral upsampling saves some computation by doing most of the wide-radius filtering work at low resolution, but it still has some redundancy – in the end, we are computing bilateral weights and renormalizing twice.

Guided filter – local linear models

Some of those shortcomings are solved by the guided filter. While I, as always, recommend the original paper – its idea is brilliant – I had some trouble parsing it (IMO the follow-up fast guided filter paper is easier to understand), so I will describe it in my own words.

The main idea is simple: given some local samples, find a best-fit linear relationship between the guide and the filtered signal. “Best-fit” is vague, so we will use the linear least squares method – both because it has optimal behavior (it is the maximum likelihood estimator for normally distributed data), and because it has a simple, closed form and is computationally efficient.

This is effectively solving a linear regression problem “what is the linear equation that describes relationship in the data with the smallest squared error”.

Before delving into how to do it, let’s have a look at a few examples of why it might be a good idea. If there is no local variation of the guide signal, the linear model will revert to simple averaging – so it will work correctly in the case of piece-wise constant signals.

At the same time, one of the main and immediate motivations is that such method will perform perfectly in the case of piece-wise linear signal (as opposed to piece-wise constant of the bilateral filter). So in the case of SSAO – it should give better results anytime you have a depth gradient correlated with change in the filtered signal intensity. 

Similarly, it is in a way a machine learning technique, and it should discover linear relationships no matter their scale – it will automatically infer the scaling from the data.

Let’s see it on the two examples from the previous section – shortcomings of the bilateral filter.

Top: noisy signal, middle: joint bilateral filtered signal – notice some local gradient reversals / staircase artifacts, bottom: same signal filtered with a guided filter. Notice the lack of problematic artifacts.

Top: Bilateral filter with too small sigma, middle: bilateral filter with too large sigma, bottom: parameter-free guided filter.

“There is no such thing as a free lunch”, so there are some shortcomings, but I will cover them in a later section. Now, knowing the motivation behind the local linear regression, let’s have a look at how we can do it.

How to compute linear regression efficiently

How to solve linear least squares is one of the most common entry-level machine learning interview questions. This is such a widely described topic that it doesn’t make sense for me to rederive the normal equations ((X^T*X)^-1 * X^T * b) in matrix form here. Instead, I will just point to the univariate linear regression wikipedia article, which shows how to solve it easily with simple terms like covariance, variance, and mean.

The beauty of it is that you can do it just by accumulating raw (not even centered!) moments. This is not the most numerically robust way to do it, but it is extremely efficient. In pseudo-code, it would be something like:

X = Mean(x)

Y = Mean(y)

X2 = Mean(x*x)

XY = Mean(x*y)

A = (XY - X*Y) / (X2 - X*X + reg_constant)

B = Y - A * X

This is so simple and beautiful – we are looking at just (normalized) sums, fused multiply-adds and a single division! The sums are not only completely separable, but can also be computed very efficiently using SATs / integral images.

We can also do weighted linear least squares – just multiply the moments by the weights when accumulating them. I will use Gaussian weights in my examples, because for constant signals they revert to a pleasant-looking (and not very alias-prone) Gaussian filter.

Note the reg_constant here. In practice for perfectly flat signals you have to add some regularization to avoid divisions by zero or NaNs (or simply answering an ill-posed question “what is the linear relationship between a signal that doesn’t change at all and a different one?”). Such Tikhonov regularization can also make the model “smoother” and reverting more and more to local mean (like regular smoothing). You can observe the effect in the following animation – as regularization increases, the result looks more and more like a simply blurred signal.

If you use a separable kernel like a Gaussian for weighting the accumulated moments, the whole computation can still be done in a separable way. Storing 4 values from 2 signals might seem sub-optimal and increases memory bandwidth usage (especially since those squared moments are precision-hungry…) or register pressure, but the complete linearity of moment accumulation makes it suitable for optimizations like using local / group shared memory or cache and avoiding round trips to main memory. It can be done pretty efficiently with compute shaders / OpenCL / CUDA.
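As an illustration, here is a minimal numpy/scipy sketch of the whole scheme, using scipy.ndimage.gaussian_filter to perform the separable accumulation of Gaussian-weighted moments (the function name, sigma and regularization values are my own assumptions):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def guided_filter(guide, src, sigma=4.0, reg=1e-4):
    # Separable Gaussian blurs accumulate the weighted raw moments.
    mean_g  = gaussian_filter(guide, sigma)
    mean_s  = gaussian_filter(src, sigma)
    mean_gg = gaussian_filter(guide * guide, sigma)
    mean_gs = gaussian_filter(guide * src, sigma)
    # Local weighted least squares: per-pixel slope a and intercept b.
    var_g  = mean_gg - mean_g * mean_g
    cov_gs = mean_gs - mean_g * mean_s
    a = cov_gs / (var_g + reg)   # reg: Tikhonov regularization
    b = mean_s - a * mean_g
    # Smooth the model coefficients, then apply the model to the guide.
    a = gaussian_filter(a, sigma)
    b = gaussian_filter(b, sigma)
    return a * guide + b
```

Note this variant smooths the coefficients with the same Gaussian kernel; the original paper uses box filters, but the structure is identical.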


How does it work in practice? Here is the result of filtering of our sponza SSAO:

And by comparison again bilateral filter:

The results are pretty similar! But there are some subtle (and less subtle) differences – both solutions having their quality pros and cons. 

For better illustration, I include also a version that toggles back and forth between them. Guided filter is the slightly smoother one.

I will describe pros and cons later, but for now let’s have a look at another pretty cool advantage of local linear models.

Upsampling linear models

After we have done all the computations of finding a local linear regression model that describes the relationship between the filtered and guide signal, we can use this model at any scale / resolution.

We can simply interpolate the model coefficients, and apply to a higher (or even lower) resolution clean / guide signal. It can be used to greatly accelerate the image filtering.

This way, there is no need for any additional bilateral upsampling.

In full resolution, just fetch the interpolated model coefficients – like using hardware bilinear sampling – and apply them with a single fused multiply-add operation. Really fast and simple. The quality and the results achieved such way are excellent.
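A minimal sketch of this step (hypothetical names; scipy's zoom stands in for hardware bilinear sampling):

```python
import numpy as np
from scipy.ndimage import zoom

def apply_upsampled_model(a_lo, b_lo, guide_hi, scale=2):
    # Bilinearly upsample the low-res model coefficients, then apply
    # them to the high-res guide with a single multiply-add per pixel.
    a_hi = zoom(a_lo, scale, order=1)
    b_hi = zoom(b_lo, scale, order=1)
    return a_hi * guide_hi + b_hi
```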

Top: joint bilateral upsampling of the guided-filter output, bottom: bilinearly interpolated local linear model.

While those results are very similar, there are two quality advantages of the linear model upsampling – it can at the same time preserve some details better than joint bilateral upsampling, as well as produce less bilinear artifacts. Both of them come from the fact that it is parameter-free, and discovers the (linear) relationships automatically.

Left: joint bilateral upsampling of the results of the guided filter, right: upsampling of the local linear model coefficients. Notice that both the details are sharper, as well as have less of bilinear artifacts.

On the other hand, in this case there are two tiny, 2-pixel-wide artifacts introduced by it – look closely at the right side of the image, at the curtains and a pillar. What can cause such artifacts? If the low resolution representation lacks some of the values (e.g. a single pixel hole), a linear model fitted over a smaller range and then extrapolated to those values will most likely be wrong.

The linear model upsampling is so powerful that it can lead to many more optimizations of image processing. This post is already very long, so I am only going to reference here two papers (one and two) that cover it well and hopefully will inspire you to experiment with this idea.

Pros and cons

Pro – amazingly simple and efficient to compute

Pretty self-explanatory. In the case of two variables (input-output), local linear regression is trivial to compute, uses only the most basic operations – multiply-adds and can be used with separable convolutions / summations or even SATs.

Pro – works at multiple image scales

The second pro is that local linear regression almost immediately extends to multi-resolution processing with almost arbitrary scaling factor. I have seen it successfully used by my colleagues with resolution compression factors higher than 2x, e.g. 4x or even 8x.

Pro – better detail preservation at small detail scales

Local linear models are relatively parameter-free, and therefore can discover (linear) relationships no matter the guide signal scale. 

Here is an example of this effect:

Left: joint bilateral filter, right: guided filter. 

Notice not only the much cleaner image, but also the better preservation of AO that increases with depth around the top right of the image. The joint bilateral filter is unable to discover and “learn” such a relationship. On the other hand, details are gone around the bottom-left part of the image – I will explain why in the section below called “smudging”.

Pro – no piecewise constant artifacts

I have mentioned it before, but piecewise constant behavior, “chunkiness”, gradient reversal etc. are all quite objectionable perceptual effects of the bilateral filter. Local linear regression fixes those – resulting in a more natural, less processed looking image.

Pro or con – smoother results

Before going into the elephant in the room of over blurring and smudging (covered in the next point), even correct and well behaved local linear models can produce smoother results.

Here is an example:

Left: joint bilateral filter, right: guided filter without any regularization.

Notice how it can be both great (shadow underneath the curtain), as well as bad – overblurred curtain shape.

Why does such oversmoothing happen? Let’s look at a different part of the image.

I have marked with a red line a path along which the depth increases linearly. Along this line, our signal goes up and down in an oscillatory pattern. What happens if we try to fit a single line to something like this? The answer is simple – the oscillations along this line will be treated as noise and smoothed out; no linear relationship will be discovered, so the linear model will revert to the local mean.

Con – “smudging”

This is what I personally find the biggest problem with the guided filter / local linear models and limitation of a linear-only relationship. Notice the blurry kind of halo around here:

Left: joint bilateral filter, right: guided filter.

Why is it happening only around those areas? I think about it this way – a significant discontinuity of signal and a large difference “dominates” the small-scale linear relationships around it.

I have compared here two signals:

Left: flat signal followed by a small signal variation can be fit properly when there are no discontinuities or conflicting gradients, right: presence of signal discontinuity causes the line fitting to completely ignore the small “ramp” on the right.

Overall this is a problem, as the human visual system is very sensitive to any kind of discontinuity, and details suddenly disappearing around objects look like very visible halos.

This reminds me of the variance shadow mapping halos/leaking.

Would fitting a second order polynomial – tiny parabolas – help here (another shadow mapping analogy – like moment shadow mapping)? The answer is yes! It is beyond the scope of this blog post, but if you are interested in this extension, be sure to read the last section of this post. 🙂

Con – potential to create artifacts

I have mentioned above that using bilateral upsampling can cause some artifacts like:

Those are less common and far less objectionable than jagged edges and broken-looking bilinear interpolation, but getting rid of them might be very hard without either too strong, over-smoothing regularization, or tricks that go well beyond simple linear regression (additional data weighting, input clamping, stochastization of the input, etc.).

Con – regression of multiple variables more costly

Unlike the joint bilateral filter, where the weight is computed once for all of the channels, if you want to filter an N-channel signal using a single channel guide, the computation and storage cost goes up N times! This is because we have to find a local linear model describing the relationship between the guide signal and each of the channels separately… In the case of RGB images this might still be practical and acceptable (probably not for multi-spectral imaging though 🙂 ), but it might be better to do it only on e.g. luma, and upsample and process the chroma with some different technique(s)…

Con – memory requirements

The local storage / memory costs of regression are at least twice those of the bilateral filter per channel. Unfortunately, the moments are also “hungry” for precision (because they are squared raw moments), and you might have to use twice the bit depth of your normal filtered and guide signals.

Extending it and taking it further

I hope that after this post I have left you inspired to try – and experiment with – a slightly different tool for various different image processing operations – filtering, denoising, upsampling, mixed resolution processing. It is trivial to prototype and play with, but it goes beyond simple linear correlation of two variables. 

In fact, one can extend it and use: 

  • Any order polynomial least squares regression, including modelling of quadratic, cubic, or any higher order functions,
  • Multivariate modelling, including regression of the output from multiple guide signals.

Both of those require going back to the least squares framework and solving the normal equations, but they can still be solved via accumulation of raw moments of those signals to construct a larger covariance matrix.

Here is a quick, animated comparison in which I added depth squared for the regression.

A sequence of the joint bilateral filter first, and then the second order least squares regression. Second order polynomial regression (parabola fitting) improves the results quite drastically, but obviously at additional performance cost (7 moments accumulated instead of 4) and with some more noise preservation.

Using more input signals comes with an additional cost, but does it always guarantee increased quality? There are two answers to this question. The first one is “kind of” – if there is no linear relationship with the additional variable and no correlation, it should simply be ignored. The second one is “yes in terms of absolute error, but not necessarily when it comes to perceptual error”.

The problem is that more complex models can “overfit”. This is known in ML as the bias-variance trade-off, and more variance = more noise remaining after the filtering. We can see it in my toy example – there is clearly somewhat more noise in the left column.

An example of a paper that uses multivariate regression is this recent work from Siggraph 2019. It directly tackles the problem of rendering denoising, using multiple input signals like normals, world space positions, world space positions squared, etc. The authors use a lot of tricks to make linear models work over non-overlapping tiles (sparse linear models) – I personally found it very inspiring and highly recommend it!

I will conclude my post here, thanks for reading such a long post – and happy adventures in image filtering!

PS. Be sure to check out and play with Colab code that comes with this blog post.


Scalable Ambient Obscurance, Morgan McGuire, Michael Mara, David Luebke

Flash Photography Enhancement via Intrinsic Relighting, Elmar Eisemann, Frédo Durand

Joint Bilateral Upsampling, Johannes Kopf, Michael F. Cohen, Dani Lischinski, Matt Uyttendaele

Image-Based Proxy Accumulation for Real-Time Soft Global Illumination, Peter-Pike Sloan, Naga K. Govindaraju, Derek Nowrouzezahrai, John Snyder

Mixed Resolution Rendering in Skylanders: Superchargers, Padraic Hennessy 

A Low-Memory, Straightforward and Fast Bilateral Filter Through Subsampling in Spatial Domain, Francesco Banterle, Massimiliano Corsini, Paolo Cignoni, Roberto Scopigno

Edge-Avoiding À-Trous Wavelet Transform for fast Global Illumination Filtering, Holger Dammertz, Daniel Sewtz, Johannes Hanika, Hendrik P.A. Lensch

Guided Image Filtering, Kaiming He, Jian Sun, Xiaoou Tang

Fast Guided Filter, Kaiming He, Jian Sun

Deep Bilateral Learning for Real-Time Image Enhancement, Michaël Gharbi, Jiawen Chen, Jonathan T. Barron, Samuel W. Hasinoff, Frédo Durand

Bilateral Guided Upsampling, Jiawen Chen, Andrew Adams, Neal Wadhwa, Samuel W. Hasinoff

Variance Shadow Maps, William Donnelly, Andrew Lauritzen

Moment Shadow Mapping, Christoph Peters, Reinhard Klein 

Blockwise Multi-Order Feature Regression for Real-Time Path Tracing Reconstruction, Koskela M., Immonen K., Mäkitalo M., Foi A., Viitanen T., Jääskeläinen P., Kultala H., Takala J.


How (not) to test graphics algorithms


Siggraph 2019 is sadly over, but as always I came back super inspired and grateful for meeting many friends. 

Conferences are mostly not about seeing the presentations – but about all the interesting and inspiring discussions, and one such casual lunch-time chat led to me writing this blog post.

We chatted quite a lot about how to automatically test graphics features, and while I am happy that the games industry is starting to introduce various forms of testing, I believe that (in the games industry) many (if not most) people testing graphics features do it wrong.

This is a bold opinion, so I am going to elaborate on it, but first I will share my personal experience.

Personal backstory – frustrated by tests

My story with using tests is not glorious. 

Over my career, switching workplaces, I went from teams using zero tests at all (not even trivial build tests! at my first game gig, binary builds were initially checked in to VCS by a programmer “from time to time”), through simple pre- and post-submit build tests, to a full testing culture. But for most of my career, and at most of those places, the code itself – the functionality – was not tested.

I encountered some form of “functional” tests quite late in my career – I don’t want to mention the company / team name here, as it is not relevant, and I am sure a lot has changed since then. The important thing is – the amount of tests was orders of magnitude larger than anything I had seen before. 

Some of the tests were very reasonable – for example smoke tests on code and content build and some of the actual game levels. They were catching lots of real issues, both in the code, as well as in the checked-in data.

On the other hand, as a graphics engineer, I quickly discovered that I had to deal with numerous “visual tests” for graphics features, and they were a source of some frustrations and tensions.

The testing framework itself and infrastructure were pretty good, but the team practices and processes around testing graphics features were far from useful – at least to me and in my experience.

They were essentially golden tests for each feature, all testing for the exact output image rendered by the game engine. I will describe the basic workflow used to test a new feature in the next section, naming it Approach A.

It was quite an extreme process, but talking with colleagues from other companies, I learned that many developers use at least partially similar testing strategies!

That past experience caused the immature me to literally hate all kinds of tests (an allergic, Pavlovian reaction), but after some time working with teams more experienced in terms of engineering practices and “code health culture”, I got to love well designed tests, and I think I can dissect one by one how Approach A could have been improved.

In the next sections, I am going to compare Approach A with an Approach B – an example of how a graphics feature could be tested – and then, by comparing both approaches, analyze what distinguishes a “good” from a “bad” test process for graphics and imaging programmers.

How testing shouldn’t be done – Approach A

  1. Tech artist or a programmer creates a new “level”, placing meshes and materials (that use the new feature) with the main game editing tool.
  2. Virtual camera is placed in the world and a script is set up to take a screenshot after some time. Optionally, the script would toggle some rendering features on / off.
  3. The test is submitted in the data repository (not the code repository!) since it has some data dependencies.
  4. Screenshot (golden file) is stored in some database.
  5. During the testing itself, the screenshot is compared (hash, exact equality) with the output – any single difference = test failure.
  6. Some time later, someone changes something and suddenly, test output changes a few of the pixel values = failure. You can see both the hash difference, as well as gold and test + diff images.
  7. Any test failure after submission = submits are blocked for the whole team, until the tests are “fixed” or “blessed” (golden file updated).

Basically every single point on that list was either ineffective, or simply frustrating.

At least once a week I had to go through roughly a thousand screenshots and “bless” them under peer pressure (I had just blocked other colleagues from submitting) – overall a very stressful experience, magnified by other artifacts of the submit/integration process, like waiting in line for the integration to happen, getting kicked out of the queue on broken tests, etc.

At the same time, those tests were not catching many serious, real problems – like some important part of lighting being not normalized, NaNs appearing in BRDF at gloss values of 1, etc.

How it could be done – Approach B

Let’s say you have written an awesome new feature – a subsurface skin scattering shader, requested so much by character artists. Both they and you are happy with the prototype results on a test model, and you could click submit… But instead you decide to improve the code a bit, modularize it, and write some tests.

You think about the steps of your algorithm, and decide to test the following properties:

  • Does the skin shader work – does it actually diffuse the incoming light?
  • Does the skin shader preserve energy conservation (no matter what is the scatter profile, there should never be energy added)?
  • Does the skin shader respect specified diffusion profile?
  • Does the skin shader produce “reasonable” values – no negatives, NaNs, or infs?
  • Is the diffusion rotation invariant / isotropic?
  • Are the artist-authored material properties properly packed?
  • Does the diffusion stop at significant depth discontinuities? 
  • How much performance cost does it add?

Ok, knowing +/- what you want to test (this is all just an example), you finally decide to write some tests!

  1. You create a simple C++ (or the language of your choice) file along your code and add it to your build system with some “decoration” that it should be compiled as a part of an existing or a new test suite.
  2. In this file, you add simple macro decorated functors / classes that test behaviors one at a time.
  3. For every behavior, you create synthetic, procedural input from your code.
  4. For every behavior, you verify the output procedurally from your code. Apart from checking the actual behavior, you call some helper function e.g. ValidateOutput that checks for NaNs, inf, negative values.
  5. On test failures, you add code printing as much information as possible – expected vs actual, histogram of differences if checking multiple values, maybe additionally produced image disk dump.
  6. You write a (micro)benchmark that times the newly added pass depending on different sizes of inputs, different proportion of pixels with the feature enabled / disabled etc.

The points 3 and 4 are absolutely crucial and very different from the Approach A, and not very commonly used among colleagues I talked with.

I will make this example more concrete now – let’s say that you wanted to check for energy conservation of the diffusion process.

You would create a synthetic lighting buffer (and potentially a synthetic GBuffer if necessary) by filling it on CPU with zeros and a single pixel in the middle that would be “1” (or any reference value). You call your normal skin code pass, and then fetch the results to the CPU. On the CPU, you analyze the output programmatically – e.g. numerically integrate the pixel values. Such a test could have a few sub cases – testing unit response (no actual diffusion), perfectly flat box-filter like diffusion, and a Gaussian-like profile. Again, the most important part of this workflow is having very deterministic, extremely simple, and procedural inputs/outputs that you can reason about.

Important note: analyzing outputs numerically doesn’t mean that you can’t save the intermediate / final outputs for debugging and inspect them visually as well. On the contrary, I highly encourage having such an option! Cases with all zeros or all infinities might be easy to see in a debug printout, but often visual inspection can provide insights on less obvious problem patterns (hey, why is this checkerboard pattern there? did I skip some samples?). Furthermore, visually verifying the test inputs / outputs when writing them can confirm that they “make sense” and are representative of the expected outcome.

What is the difference between both approaches?

Let’s now dissect differences between the two approaches.

Tests (not) verifying correctness

What is the purpose of a test? There are many, but the most straightforward one is verifying correctness. If your test input is some arbitrary data, and the output just “looks kind of ok”, is this verifying correctness?

If you want to test a skin shader, it might seem like a good idea to test it on a mesh of a head, but in such a setting you cannot verify any of the system’s design or implementation assumptions – just whether it “looks right”. This might be important for the user to verify whether the feature is what they asked for, but it is useless for other engineers who will be looking at changed results of such tests in the future.

Are you sure that your Gaussian blur is not brightening the image? That its sigma is correct? Will the lighting shader work well with parameters at the end of the input range? Is your subsurface scattering shader really applying the requested diffusion profile? Is data after GBuffer packing/unpacking within the theoretical quantization precision across the range? None of those questions can be answered by eyeballing the output. 

Easiness/difficulty adding new tests

In this category, the problem with Approach A was obvious – adding a new test involved multiple steps and workflows outside the typical programmer workflow. Launching the editing tool, setting up some geometry, setting up scripts, adding new golden data… Quite a lot, and very discouraging. My personal number one rule of creating a healthy team culture is to make sure that valued and desired behaviors are “easy” to do (in a perfect world, easier than the undesired ones). Having to jump through many hoops to create a simple test doesn’t encourage a testing culture.

Adding a new test that is executed automatically should be just a few lines of code – and it is, if you have a testing process set up like in Approach B.

Tests close to / separated from the code tested

If tests “live far away” from the code being tested, like in Approach A, it is hard to correlate one with the other. If you are refactoring some feature and need to verify whether the tests that changed were actually supposed to change, it destroys the confidence that should come from using tests…

Furthermore, I believe that tests can serve as a supplemental “documentation” (just like well named functions and variables, well commented code etc) – and I often rely on them to see how a piece of code might be used, what are the assumptions etc. With the Approach B you can open the tests file and learn about the potential use-cases and assumptions that the code author has made.

If tests are completely separated and just test the final functionality, this advantage is also completely gone… I might even find and open the test scene, but not necessarily know what was set up there and how!

Testing on synthetic vs authored (and binary) data

Tests should be as simple, “atomic” and as isolated as possible (unless you want to do specifically integration tests). Relying on some arbitrary, authored and ad hoc data makes it very difficult to analyze / debug the test and the desired results – see “verifying correctness” above. 

A second problem is that now all your testing relies on your data processing pipelines. If you change your data pipelines even slightly (let’s say introduce subtle compression, quantization or anything), all your tests are going to change! This leads us into the next section…

Unit testing / testing end-to-end

Relying on e.g. data processing pipelines in all of your tests makes reasoning about safety of changes (one of the points of testing) impossible – you will see hundreds of tests changed their value, but among this noise might miss some real problem.

Notice how in the Approach B any changes in your content pipeline will not cause unexpected visual changes.

Testing end-to-end like in Approach A relies on tens of different systems… The content build system (previous point), material system, lighting system, camera system, mesh rendering, post processing, even gamma correction! Once, after switching from a regular z buffer to inverse z, I had to “bless” all the tests – not only unnecessary, but dangerous (I could have missed some legit regression). This is lots of moving pieces, and it makes it impossible to correlate simple inputs to the output value. If it changes (suddenly a test becomes broken by an “innocent” change) – good luck debugging where it comes from!

(Not) Understanding the breakages

Ok, your test got broken… Why is that? Even ignoring the above (“testing end-to-end”), are those single pixel differences caused by “quantization noise”, or by an inf/NaN? Oh, the energy conservation broke – do we now have too much, not enough, or simply wrongly distributed output?

Having numerical analysis, histograms of differences, or simply asserts in tests (“assert that the output is always below 1”) like in the Approach B would immediately answer at least some of those questions.

(Lack of) tests documentation

Tests themselves should be documented and commented if possible. I find it much easier to do it through code comments and meaningful function naming (trivial example – e.g. VerifyNoNaNs called from within a test) than through some metadata attached to the test itself and the scene.

Test updates separated from changes / code CLs

Ok, let’s say that you have refactored some system and expect decreased/increased precision of some stages. In the case of Approach A, you would submit your CL and then update the golden values. In the case of Approach B, you can put it in the same CL (again, change and test relative “locality”), and specifically reason about the changes: “ok, I have lowered the precision of quantization by 2 bits, so I expect to change my test epsilons by no more than 4x”.

Relying on GPUs and floating point operations for exact comparisons

This one is tough and I don’t have a great answer for it.

Doing graphics and working on GPUs, we want to test floating point operations, as well as catch e.g. driver regressions. 

On the other hand, floating point operations are flaky and can depend on the target platform (e.g. presence of SSE vs AVX), some runtime environment flags that change floating point behavior, or a driver version.

I personally think that hybrid tests which do all the input creation, packing, output fetching and analysis on the CPU, but execute the actual production graphics code on the GPU, are a good middle ground. But as I said – it’s quite a tough point, and every approach I have tried had its pros and cons.

If you suffer from lots of noise from driver / testing device changes, (and are sure that your tests are designed in a good way) then consider using a WARP device for DirectX, or excellent SwiftShader for OpenGL.

Test speed

This point might be too much of an implementation detail, so I will keep it short – but in the Approach B tests have almost no dependencies, are extremely minimal and execute in literally milliseconds. Fast tests encourage adding more tests, and testing often during the coding process itself.

When and how to test graphics?

Having described general ideas regarding how features can be tested, one might ask – when is it worth doing? 

First use case – that I +/- already described here – is testing features when their interface and functionality are more or less defined, and most of the code is written. I want to emphasize that you are not limited to just the simplest single pass and image inputs / outputs.

Nothing prevents you from procedurally creating a simple scene with e.g. some decals, and verifying that they get rendered correctly and that all the stencil buffer logic works (in every game engine that I worked on that used the stencil buffer, it got broken on some platform, for some feature, during an unrelated refactor / optimization).

The second use of tests is to guide you and help you when writing the code.

While I think the whole concept of TDD is classic over-complicated snake oil, it is often worth writing a test for the functionality you are about to add / are in the process of adding. For example, when writing GBuffer bit packing, having a test that verifies that 0 maps to 0, 1 maps to 1, and 0.5 maps to 0.5 can save you a lot of time. I cannot count the instances of bugs where an engine had 127/128 or 255/256 instead of 1.0 because of wrong packing logic. 🙂 
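A minimal sketch of such a packing round-trip test (a CPU reference in Python – the function names and 8-bit unorm format are my arbitrary choice), including the classic mismatched-scale bug:

```python
def pack_unorm8(x):
    """Pack a [0, 1] float into an 8-bit unorm value - note the rounding."""
    return int(round(max(0.0, min(1.0, x)) * 255.0))

def unpack_unorm8(v):
    return v / 255.0

def unpack_unorm8_buggy(v):
    # The classic mismatched-scale bug: packed 1.0 comes back as 255/256.
    return v / 256.0

# Endpoints must round-trip exactly; 0.5 within quantization precision.
assert unpack_unorm8(pack_unorm8(0.0)) == 0.0
assert unpack_unorm8(pack_unorm8(1.0)) == 1.0
assert abs(unpack_unorm8(pack_unorm8(0.5)) - 0.5) <= 0.5 / 255.0 + 1e-12
# The buggy unpack fails the endpoint case:
assert unpack_unorm8_buggy(pack_unorm8(1.0)) == 255.0 / 256.0
```

The real test would of course exercise your actual shader packing code, but the assertions stay the same.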

Similarly, you can write tests during feature development for any functionality, from high level – like material blending, through mid level (what is the z buffer precision? Count all the discrete z values in the range 100m-101m and you have an immediate metric estimating z-fighting in that range!), to low level – I cannot imagine writing fixed-point math code without some hard-check tests for under/overflows and verification of rounding.
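That z buffer metric is easy to compute – a sketch assuming a standard (non-inverse) projected depth, near/far planes of my own choosing, and a 24-bit fixed-point depth buffer:

```python
def projected_depth(z, near, far):
    # Standard (non-inverse) post-projection depth, mapped to [0, 1].
    return far * (z - near) / (z * (far - near))

def depth_buckets(z0, z1, near, far, bits=24):
    """Count distinct fixed-point depth values in [z0, z1] - a direct
    metric of z-fighting resistance in that range."""
    scale = (1 << bits) - 1
    return int(projected_depth(z1, near, far) * scale) - \
           int(projected_depth(z0, near, far) * scale)

# With these (assumed) planes, only on the order of 150-200 distinct
# 24-bit depth values cover the entire 100m-101m range.
buckets = depth_buckets(100.0, 101.0, near=0.1, far=1000.0)
```

Swapping in an inverse-z mapping and comparing the bucket counts is a one-line change – and exactly the kind of concrete number a test can assert on.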

The third use case that I highly encourage is testing for performance – (micro)benchmarking. If you set up the inputs procedurally, you can track the exact performance / timing of a given feature in isolation. Setting up the inputs/outputs procedurally allows you to control them very precisely and avoid the inherent noisiness (and data dependence) of testing on real scenes. Such a benchmark can be used in the optimization process itself (especially with shader/code hot-reloading), but more importantly to track any performance changes and regressions. You want to track, locate, (and stop) any functionality regressions – why would you not want to do the same for performance? 🙂 Tracking it over time and keeping logs for many months can also help to analyze trends, like a not-immediately-obvious regression creep (death by a thousand paper cuts). Or conversely, you might immediately see a performance improvement from a new driver / compiler update – as a technical owner of a feature / system / technology, you should be aware of all such changes, including the positive ones.
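A trivial CPU-side sketch of such a microbenchmark harness (a GPU version would wrap timestamp queries around the dispatch instead; the toy workload is just a stand-in):

```python
import time

def benchmark(fn, sizes, reps=5):
    """Run fn(size) for each input size, keeping the best of a few runs -
    the minimum is more robust to scheduling noise than the mean."""
    results = {}
    for n in sizes:
        best = float("inf")
        for _ in range(reps):
            t0 = time.perf_counter()
            fn(n)
            best = min(best, time.perf_counter() - t0)
        results[n] = best
    return results

# Toy "pass" standing in for a real filter dispatch at different sizes.
timings = benchmark(lambda n: sum(i * i for i in range(n)), [1_000, 10_000])
```

Logging `timings` per changelist gives you exactly the long-term trend data described above.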

Offtopic – sanitizers and fuzzing

When talking about testing, I couldn’t resist dedicating a tiny section to two techniques that are simply amazing when it comes to ROI – sanitizers and fuzz testing.

They are especially effective when used together (given that sanitizers can slow down code to the point where manual testing is not possible / pleasant…), and will catch lots of real problems almost immediately.

I promise that TSan will trigger almost any time you introduce some new parallel code, and that it won’t be a false positive. 🙂 

Summary / caveats

Testing is not “scary” and should never be a burden and a struggle. After all, it is a tool for you – engineer, your future self, and your colleagues.

If testing is a burden, revisit your workflows and processes and figure out which parts can be improved!

I have compared two approaches of testing specific graphics features – Approach A, which is a real workflow that I used to work with, and Approach B, which is how I personally suggest approaching it.

I don’t see many virtues of the end-to-end / screenshot based approach for testing features; however, for completeness and to be entirely fair, I do see some good use cases for golden testing.

One is to have a simple smoke test and watch out for random breakages “somewhere in the pipeline”, unanticipated and from unexpected sources. The second is that it does provide a form of integration testing, testing the interaction of multiple systems. The third one is a bit paradoxical – the inherent flakiness and sensitivity of such tests makes them good candidates for catching unexpected compiler / toolchain / driver changes.

To combine those use-cases and some real advantages of golden tests, while avoiding the problems / frustrations, I would suggest having a few (no more than 10-15!) golden “smoke tests”, with lots of features packed into one scene, testing the pipeline end-to-end. Expect that they might change pretty often, but be very thorough when investigating the differences (easier when four images change than a thousand…). Finally, use programmatic tools like simple histograms and printouts, and design the process of debugging the changes as well.

Finally – and this is the reason why I wrote this blog post – I hope that your adventures in finding the right testing strategy, and the productivity boost that comes from good tests, will be easier to get for you than they were for me. 🙂


Checkerboard rendering, rotated anti-aliasing and grid frequencies

This post is inspired by three interesting statements or questions that I heard, and which are interconnected by non-obvious theory:

The first was regarding so-called checkerboard rendering (example1 example2) – what is the advantage of using it? After all, isn’t it the same as rotating your grid by 45 degrees? Presentations on it claim better PSNR, but why?

The second one was regarding reconstruction filters. Why are all classic interpolation kernels “separable” – applied separately on x and y, sequentially – and not rotationally symmetric?

Finally, the third one is something I often hear as a question – why are MSAA patterns, like the 4x quad, rotated diagonally?

I am going to try to introduce some theory first and show how it all relates to representable frequencies and aliasing. This post is accompanied by two shadertoys: one and two.

1D Sampling recap

Single dimensional signal sampling is well understood and taught in any signal processing course. In the most basic terms, it is the process of taking a series of single measurements of some continuous process, with every measurement the same distance apart. As long as your source signal has no frequency content higher than half of the sampling frequency (the Nyquist-Shannon limit) and you use an (impractical) perfect reconstruction filter, you can reconstruct back the original signal.


A signal containing frequencies below the Nyquist limit can be reconstructed faithfully with a perfect reconstruction filter. Source: Wikipedia.

If your signal contains higher frequencies, they are going to alias – manifest themselves as lower frequencies, mixing with the existing frequency content and corrupting the signal measurements. In other words, frequencies beyond the Nyquist rate are equivalent to lower frequencies under sampling.


Two frequencies can alias into each other under an inadequate sampling rate. Source: Wikipedia.
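This equivalence is easy to verify numerically (pure Python; the particular frequencies are my arbitrary choice) – two sinusoids mirrored around the Nyquist rate produce identical samples:

```python
import math

fs = 100.0            # sampling frequency
f_low = 10.0          # below the Nyquist limit (fs / 2 = 50)
f_high = fs - f_low   # 90 - well above the Nyquist limit

n = range(32)
s_low = [math.cos(2.0 * math.pi * f_low * i / fs) for i in n]
s_high = [math.cos(2.0 * math.pi * f_high * i / fs) for i in n]

# The two sample sequences are identical (up to float precision):
# at this sampling rate, 90 is indistinguishable from 10.
max_diff = max(abs(a - b) for a, b in zip(s_low, s_high))
```

No reconstruction filter can tell the two apart after sampling – the information is already lost.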

2D Sampling

This is where things start to be less covered by literature and more interesting. 2D sampling is a process of sequentially performing two 1D sampling operations on two different axes (sampling is separable!).

In the first step, we sample a bivariate continuous signal into a discrete series (for example in the X dimension) of continuous 1D signals (in the Y direction), and then sample them again in the Y dimension to get a grid of discrete samples. Pixels are not little squares (see the famous article by Alvy Ray Smith). They are grids of samples.

However, if you now look at the consequences of the Nyquist-Shannon theorem for two consecutive sampling operations, you will see that the representable frequencies form a box:

Diagram of the 2D Nyquist region – the representable frequencies form a box.

Most readers have probably seen this type of diagram many times – but I marked something interesting and non-obvious there. If you think about a rectangular Nyquist domain, in the diagonal direction higher frequencies (sqrt(2) times higher) can be represented!

This means that lines, triangles and generally features aligned with X or Y axes (or near to them) will alias faster than the ones that are diagonal.

Zone plate and Siemens star

We can see this effect easily in any pattern designed to represent multiple frequencies.

Two commonly used ones are a zone plate and Siemens star.

Side note: I have no idea why they are not used more commonly by the real-time graphics rendering community when talking about anti-aliasing or post-processing. It is much easier to test the quality of your algorithms on a known, all-frequency pattern than to just eyeball arbitrary content.

This is a zone plate variant:


Variant of a zone plate.
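A zone plate is trivial to generate procedurally – a minimal sketch (pure Python; the frequency scale k is my arbitrary pick):

```python
import math

def zone_plate(n, k=0.7):
    """Sine of a quadratically increasing phase: the local frequency grows
    linearly with the distance from the center, sweeping all directions."""
    img = [[0.0] * n for _ in range(n)]
    for y in range(n):
        for x in range(n):
            r2 = (x - n / 2) ** 2 + (y - n / 2) ** 2
            img[y][x] = 0.5 + 0.5 * math.sin(k * math.pi * r2 / n)
    return img

img = zone_plate(256)
```

Push k high enough and the outer rings exceed the Nyquist rate – which is exactly the regime where the axis-vs-diagonal behavior described below becomes visible.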

And this is a Siemens star variant:


Variant of Siemens star (animated).

Both are designed to capture a variety of frequencies in a single image.

On both images, as the frequencies increase (a uniform / isotropic frequency change), aliasing or failed-reconstruction artifacts appear much faster along the principal X / Y axes! It is especially visible in motion; here I magnified the issue on a static image:


Cropped and nearest-neighbor magnified center of the Siemens star – notice the non-isotropic, non-radially-symmetric behavior.

The main axes exhibit much stronger aliasing and an earlier appearance of “wrong” frequencies.

The frequency spectrum of a grid being a box answers one of our questions – why most reconstruction and resampling filters are specified only along a single axis, with interpolation done independently per axis. The goal of interpolation and reconstruction is to remove the frequencies that cannot be represented before they alias (while preserving the other ones), and any frequency cutoff matters most along the axes where aliasing happens first.

Rotating the grid

An immediate question arises – what happens when you rotate your pixel grid by some angle (for simplicity of analysis, let’s say 45 degrees), keeping the same sampling density?

Diagram of the frequency spectrum of a grid rotated by 45 degrees.

Rotation alone cannot change the area of the represented spectrum – so we are guaranteed to gain some frequencies and lose some others. In this case, we get better representation of axis-aligned content at the trade-off of more aliased diagonals. This might sound like an attractive trade-off in some contexts (e.g. architecture or urban rendering), but a rotated signal cannot be displayed directly on an axis-aligned screen grid. It has to be resampled, and perfect resampling filters are impractical (large spatial support, potential for ringing of aliased content or noise), so we’d lose even more resolution and still not filter perfectly.

It’s not a great option, and assuming this 1:1 sample ratio, it’s not surprising that it’s not used in practice (at least I haven’t heard of it – let me know in the comments!).

But we can take advantage of rotating sampling patterns in different contexts!

Checkerboard rendering

Checkerboard rendering is a form of mixed resolution rendering. The goal here is to reconstruct a full resolution signal (for example full 4K, like on PlayStation 4 Pro) from an undersampled one. No decimation / removal of frequencies should take place if possible. Obviously, we cannot make up for information that was missing in the first place without other techniques like temporal supersampling (we can only hallucinate it), but at the very least we’d want to avoid aliasing.

Checkerboard rendering tries to answer a question: how can you optimally render the image given just 50% of computational power?

(I know, the concept of “optimally” is a whole topic of its own – what does it mean? But let’s assume: most detailed and least aliased.)

Uniform rescaling

One option is to render at a uniformly rescaled resolution (both dimensions scaled down by sqrt(2)).


This means that we get uniform detail loss and aliasing in each direction. Vertical and horizontal directions will be even less represented than in our original grid, while we still have potential for expressing higher spatial resolution of diagonal edges.

The biggest issue with uniform rescaling is the need for a very high quality reconstruction / resampling filter. Without it, upscaling to the native (e.g. 4K) resolution can result in even more detail loss and aliasing.

The second problem is that no frequencies get fully represented, and all content rendered at this lower resolution and resampled will look worse.

Interlaced rendering

Another option would be a form of interlaced rendering – rendering one of the dimensions at half resolution. It is a very attractive option, as the reconstruction filter can then be 1D only, which means better efficiency. We can even afford a very expensive reconstruction filter in that dimension (like a wide-support windowed sinc).


While it is cheap, it is on its own not great for rendering quality (under the assumption of no temporal anti-aliasing / extra supersampling). It means that while edges in one direction will be represented as well as with full resolution rendering, in the other direction they will be 2x more aliased, with 2x more detail loss.

On the other hand, the non-rescaled direction will not suffer from any aliasing from resampling / reconstruction.

Checkerboarded rendering

Finally, there is the option (with some special hardware sauce for adjusting sample positions in a quad, plus some others) of doing checkerboarded rendering – shifting every other row by half a pixel. This results in a rotated grid. How well does it perform?


Checkerboarded rendering allows us to represent horizontal and vertical edges as well as at full native resolution! Under perfect reconstruction there would be no detail loss there. The frequencies suffering the most are the diagonals – they will be 2x undersampled compared to native resolution (but only sqrt(2) / 2 worse than horizontal / vertical).

Checkerboarded rendering needs a high quality reconstruction filter for this to hold, but even with a naive bilinear filling of missing samples it can look ok.
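As a toy illustration of such a naive fill (my own sketch, not code from any actual implementation), each missing checkerboard pixel below is replaced by the average of its 4 rendered neighbors; on a smooth linear ramp the interior reconstructs exactly:

```python
import numpy as np

img = np.arange(25, dtype=float).reshape(5, 5)       # stand-in "ground truth" image (a ramp)
mask = (np.indices(img.shape).sum(axis=0) % 2) == 0  # checkerboard of "rendered" pixels
sparse = np.where(mask, img, 0.0)

# naive fill: each missing pixel becomes the average of its rendered 4-neighbors
pad_v = np.pad(sparse, 1)
pad_m = np.pad(mask, 1)
neigh_sum = pad_v[:-2, 1:-1] + pad_v[2:, 1:-1] + pad_v[1:-1, :-2] + pad_v[1:-1, 2:]
neigh_cnt = (pad_m[:-2, 1:-1].astype(int) + pad_m[2:, 1:-1] +
             pad_m[1:-1, :-2] + pad_m[1:-1, 2:])
recon = np.where(mask, img, neigh_sum / np.maximum(neigh_cnt, 1))

print(np.abs(recon - img)[1:-1, 1:-1].max())  # interior reconstruction error of the naive fill
```

Real content is of course not a smooth ramp, which is exactly where a better reconstruction filter earns its keep.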

Compared / together

From a Fourier domain perspective, checkerboard rendering looks very attractive. How does it perform in practice? I created a very simple shadertoy demo comparing those methods on a zone plate. Note: it uses the crudest reconstruction possible (tent/bilinear).


When using this simple bilinear filter, in my opinion checkerboard rendering looks the best (obviously after full resolution), and aliasing happens only around the main diagonal.

Anti-aliasing rotated grid patterns

Another common use of rotated grids for different frequency aliasing is either super-sampling or hardware-accelerated techniques like MSAA. The goal of such variants of anti-aliasing is to sample at higher frequencies, avoiding aliasing (avoiding high frequencies folding over as low frequencies), and before display to use a reconstruction filter that removes them prior to resolving onto the final sampling grid.

As resampling/filtering (signal decimation) will happen anyway, we are free to choose any sample positions.

Assuming 4x supersampling / MSAA, the two alternatives we will consider here are an axis-aligned box and a 45 degree rotated box, both with 4 samples.

(figure: 4x AA uniform grid sampling pattern)

If we sample 4x with a uniform grid, we uniformly get 2x more non-aliased frequencies.

If we decide to use a rotated grid 4x pattern, we get a 4x larger rotated box. This one is a bit different. We get a lot more horizontal and vertical anti-aliasing and a bit more diagonal anti-aliasing. Near-horizontal and near-vertical lines will see a massive improvement (2*sqrt(2)), while diagonals will see a medium improvement (sqrt(2)).

(figure: 4x AA rotated grid sampling pattern)

Using a rotated grid gives us better representation of horizontal/vertical signal frequencies and a smaller gain in diagonal ones.

Side note: this matters mostly when using a very cheap decimation filter like a box filter (for a long time commonly used as the MSAA resolve…) that aliases a lot of frequencies around Nyquist – on the principal directions the aliasing is lower, so it is preferred not to have any corrupted / aliased frequencies there.

In hardware (MSAA)

Interestingly, by default hardware 4x MSAA patterns are neither axis-aligned nor 45 degree rotated (for example in DirectX 11). They are not horizontally/vertically symmetric.


Standard DirectX11 MSAA patterns. Source: MSDN documentation.

Since the frequency spectrum will be unevenly rotated, frequency aliasing will also be uneven. So the same frequency rotated, let’s say, 15 degrees clockwise or counter-clockwise will get different amounts of aliasing. It is an interesting choice and I don’t know if there is a specific reason for it other than my guess of “it’s good enough and integer offsets are hardware implementation friendly”.

Edit 05.17.2018: Sebastian Aaltonen wrote in comments the following great explanation of this pattern: N-rooks looks better than 45 degree rotated when you have lots of straight X/Y lines, such as walls/staircases/buildings, especially with box filter. Aligned lines are a common case in games as the camera up vector tends to be locked. When the building edge moves slowly across the pixel in X direction, you get four different blending values with N-rook pattern, while 45 deg rotated grid gives you only 3, and the middle one has 2x weight as the samples overlap. DirectX sampling pattern is as close to 45 degree rotation as you can get, while maintaining even spacing between the samples in both X and Y axis. It is a pretty good compromise for the worst case visible aliasing problem, which is straight lines slightly off X or Y axis.
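The distinct-projections argument can be checked directly. In the sketch below, the uniform and 45-degree-rotated offsets are my assumed placements; the DX11 positions are the standard 4x pattern from the documentation (in 1/16-pixel units relative to the pixel center):

```python
# sample offsets within a pixel, in [0,1)^2
uniform_2x2 = [(0.25, 0.25), (0.75, 0.25), (0.25, 0.75), (0.75, 0.75)]
rotated_45  = [(0.5, 0.125), (0.875, 0.5), (0.5, 0.875), (0.125, 0.5)]
# standard DirectX 11 4x MSAA positions, in 1/16-pixel units relative to center
dx11_4x     = [(-2, -6), (6, -2), (-6, 2), (2, 6)]

for name, pattern in [("uniform", uniform_2x2), ("rotated 45", rotated_45), ("DX11 4x", dx11_4x)]:
    xs = len(set(p[0] for p in pattern))
    ys = len(set(p[1] for p in pattern))
    print(name, "distinct x:", xs, "distinct y:", ys)
```

The DX11 pattern is the only one with 4 distinct projections onto each axis (the N-rooks property), matching Sebastian’s explanation of 4 blending values for slowly moving axis-aligned edges.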

Here is a shader-toy comparison of those methods (with a naive and suboptimal box filter).



The “ROTATE DX” variant corresponds to the rotation from the DirectX documentation. You can see that towards the horizontal direction there are some more aliased frequencies.

I am not sure if this is very clear, but to me the rotation by 45 degree looks cleanest.


If I were to present just one single main takeaway from this post, it would be that signals sampled on a grid can represent frequencies that also correspond to a grid. This means that elements of the image (signals) corresponding to diagonal frequencies will behave differently than horizontal / vertical ones.

This perspective is very important when designing reconstruction filters (separable!), anti-aliasing patterns or looking at mixed resolution rendering.

The second takeaway is less theoretical, more about practice: it is very useful to look at some standardized, all-frequency patterns instead of eye-balling content (especially under some animation / rendering camera). They can be animated as well, but this way you can immediately see the effects of choices like anti-aliasing patterns, post-processing techniques etc. on your image. It’s not a replacement for testing and evaluating on real content, but a good starting point and one that might build better intuition and conjectures about why one technique might be better than another.

Posted in Code / Graphics | 5 Comments

Tech and scientific writing – graphics, diagram and graph creation tools

A few days ago, I asked a question on twitter:
“What is industry/academic standard for diagrams in tech writing? PowerPoint/GoogleDocs diagrams? Dedicated tool like Visio/Dia? Procedural?
I usually use first, but far from satisfied
For proc tried SVG,JS,Graphviz,even Mathematica, but results are either ugly or *lots* of work…”

I got 26 replies, thanks everyone who contributed!

To not have it lost and hoping that someone else will find it useful, writing down answers in a few categories with some comments. Please comment on this post if I got anything wrong and/or I omitted something.

Note that there are no real “conclusions” – almost everyone used something else! Probably almost anything on the list will work. But as a rule of thumb, it’s probably useful to have each of:

  • Dedicated vector graphics program (I’ll give a try to Inkscape and Photoshop vector graphics),
  • Favorite scientific environment for data exploration and plotting (Python+Matplotlib, Mathematica, Matlab),
  • Some favorite environment for authoring documents (whether LaTeX, mathcha or just Powerpoint or Google Docs),
  • maybe something interactive for web/blog (shadertoy, D3, Mathbox).

Vector graphics software

Adobe Illustrator, Subscription $19.99/month, Mac/Windows

For many years the standard / go-to for vector graphics. At the same time a quite pricey subscription model, and could be overkill for simple diagrams?

Affinity Designer, $49.99, Mac/Windows

No subscription alternative to Illustrator. Lots of people recommended it, worth having a look!

Inkscape, Opensource/Free, Linux/Mac/Windows

Totally free and open source. Subjectively UI looks much cleaner than other open source graphics software.

Omnigraffle, $49/$99 iOS, $99/$199 Mac

Recommended by many people, UI looks pretty clean and productive, sadly Mac/iOS only.

Xfig, Opensource/Free, Linux/Mac/Windows

Dormant development, quite ugly interface (IMO). But open source and free.

Dedicated diagram / flow chart software

Microsoft Visio, $299 or $18 subscription, Mac/Windows

Professional industry grade diagram software. Quite expensive…

SmartDraw, Subscription $9.95/month, Mac/Windows/Cloud

Flowchart maker; I don’t know if it’s useful for other, more complicated graphs and diagrams.

yEd, Free, Linux/Mac/Windows

Graph and flowchart only (?), free and multiplatform.

Dia, Free, Linux/Mac/Windows

Dormant open-source project. I used it a few times and think there are better, more modern tools (?).

Lucidchart, both free and paid options

I don’t know much about it – looks like it’s more about flowcharts; free, cloud/web based.

UI looks very similar to Google Docs, very low entry barrier. Again more about simple, graph-like diagrams / flow charts.

Visual Paradigm, $349, Linux/Mac/Windows

Another dedicated UML-like, visual diagram tool.

Other software

Microsoft Powerpoint $109.99, Mac/Windows

My go-to drawing solution for any presentations so far. Works great and supports animations, but is painfully limited.

Photoshop and vector graphics there, Subscription $9.99 a month, Mac/Windows

I’ve never used PS vector graphics, but it’s software I already pay for, so could be a good idea to try it.

Mathcha, free, cloud

This one is very interesting – an online editor for equations and mathematical graphics (simple plots, diagrams). Looks like a WYSIWYG LaTeX editor with a much lower barrier to entry.

Google Docs and family, Free, cloud based

I use this quite a lot at work (what a surprise!) for drawing simple diagrams and it works pretty well. Not leaving the document is a big plus, just like with Powerpoint.

Grapher, part of MacOS

Dedicated tool embedded in MacOS; surprisingly powerful for something included in every install!

Interactive web based

Shadertoy, free

This could be a surprising choice, but check out e.g. this algorithm visualization from Leonard Ritter. Code + visualization in one place with animations and optional interactivity!

Mathbox, free, Javascript library

Quite amazing math visualization library for JS and web. Amazing results, but it’s quite a heavy tool that could be overkill for small diagrams.

Flot, free, Javascript library

Interactive plotting for jQuery. Don’t know much about it (not using JS as much as I’d like for creating web pages).

D3, free, Javascript library

Beautiful data visualizations – but not really for generic graphs.

Desmos calculator, free, online

Nice semi-interactive and powerful online mathematical function plotting tool. Good not just for visualizing, but also for explaining the reasoning behind the look of some curve.

Render diagrams (or second link), free, online

A fresh tool specialized in rendering diagrams, made for graphics programmers! Has lots of built-in primitives that occur very often in such use cases. Looks like it’s at an early stage, but very promising – looking forward to its development!

Procedural languages and libraries

Asymptote, open source

Dedicated vector graphics language.

Markdeep, open source

“Markdeep is a technology for writing plain text documents that will look good in any web browser (…). It supports diagrams, calendars, equations, and other features as extensions of Markdown syntax.”

Limited diagramming capabilities, but all-in-one document creation solution.

Matplotlib, Python, open source

Go-to Python solution for visualizing data. By default not the prettiest, but very powerful and it “just works”.

LaTeX with PGF/TikZ, Tex, open source

A LaTeX extension/language to define graphs and, more generally, graphics – programmatically. I would love to learn it some day and become proficient, but even the people who recommended it warned of a steep learning curve.

UMLet, free, also has a UI

Free tool to create UML diagrams.

Matlab, $2150, Linux/Mac/Windows

The most expensive option on the list – a professional numerical mathematical/engineering package with some graphics options among numerous others.

GNU Octave, open source, all platforms

Free and open source alternative to Matlab! Probably not as powerful for most uses, but for graphs and vector graphics it should be comparable. I used it at college for computer vision classes and it definitely worked fine for most algebraic problems.

Gnuplot, open source, all platforms

Simple command line tool for plots that I used a few times at college. Very convenient for quick plots.

Mathematica, $320, Linux/Mac/Windows

Affordable mathematical symbolic and numerical analysis and data exploration software. I wrote about it in the past, used it numerous times for plots in many of my blog posts and always recommend it. I also played with its graph plotting, though found it a little bit frustrating and not customizable/pretty enough by default.

Graphviz, open source, all platforms

One of those “useful but ugly” pieces of software. Good for quickly creating graphs from code; for anything else, not great (?).

Svgwrite for Python, open source, all platforms

Python library for writing directly into SVG. Quite low level, but powerful for specific very “procedural” use cases.

GGplot2 for R, open source, all platforms

I don’t know much about R, but my understanding is that this is its matplotlib equivalent. If you use R for data exploration, probably a go-to solution.

Editing PostScript directly, open source, all platforms

A very low level and hardcore option; maybe easier when outputting PostScript directly/procedurally from code?


Posted in Code / Graphics | 3 Comments

Separable disk-like depth of field

This is a short note accompanying a shadertoy: .

It is a direct implementation of “Circularly symmetric convolution and lens blur” by Olli Niemitalo (no innovation on my side, just a toy implementation), inspired by Kleber Garcia’s Siggraph 2017 presentation “Circular Separable Convolution Depth of Field”.


For depth of field effects, Gaussian blurs are seen artistically as “boring”, while hexagonal DoF (popular a few years ago) can be subjectively unattractive (artificial, cheap-camera qualities). I wrote a bit about bokeh, DoF and a crazy implementation in The Witcher 2 in the past. Working on Far Cry 4 and other titles, I used a different approach – scatter-as-gather “stretched” Poisson bokeh to compute DoF and motion blur together at the same time.

Circular kernels can be very expensive and prone to noise, ringing etc. One can try to approximate them, like Guerrilla Games in Michal Valient’s Killzone presentation, undersample with a blur afterwards like Crytek in Tiago Sousa’s presentation, or try to use some separable approaches like Colin Barré-Brisebois.

Unfortunately, there are no circularly symmetric separable filters other than the Gaussian filter in the real domain. However, in the complex domain, one can find a whole family of functions (complex phasors multiplied by a Gaussian “bell”) whose magnitude is! This is what Olli Niemitalo’s post describes, introducing some “fitted” functions to approximate disk DoF.
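The key property can be verified in a few lines of Python: for a separable 1D kernel f(x) = exp(-a·x²)·exp(i·b·x²) (with arbitrary demo constants, not Olli’s fitted parameters), the magnitude of the 2D product f(x)·f(y) depends only on x² + y², i.e. it is circularly symmetric:

```python
import cmath

a, b = 0.5, 2.0   # Gaussian falloff and phasor frequency (arbitrary demo values)

def f(x):
    # 1D complex kernel: Gaussian bell times a complex phasor
    return cmath.exp(-a * x * x) * cmath.exp(1j * b * x * x)

# |f(x) * f(y)| = exp(-a * (x^2 + y^2)) -- same value anywhere on a circle of radius 5:
m1 = abs(f(3.0) * f(4.0))
m2 = abs(f(5.0) * f(0.0))
print(m1, m2)  # both equal exp(-12.5)
```

The phasor disappears under the magnitude, while the shape of the (post-magnitude) kernel is controlled by how the real and imaginary parts are combined – which is where the fitted components come in.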

Results – quality

As a proof of concept, I implemented it here: . There is a version with a single component and a version with two components (“harmonics”).

The version with a single component produces quite strong ringing, but a much more “pronounced” bokeh than the Gaussian one:


Circular bokeh single component approximation vs Gaussian.

The version with two components is going to be twice as expensive memory- and ALU-wise, but its artifacts are not as strong:


Two component vs single component approximation. Notice less ringing and a smaller passband – an almost perfect, circular shape.

I personally don’t think this ringing or “donut hole” is unpleasant at all; IMO it resembles some older lenses with “busy bokeh” and is something we simulated in The Witcher on purpose:

Witcher 2 artistic “busy bokeh”.


Real world lens “busy bokeh” with some ringing visible.

Results – implementation / performance

The shadertoy implementation I provided is obviously not optimal. Recomputing the weights and their sum is very wasteful; most of this can be precomputed and stored in a uniform / constant buffer. Instead, I stored most of them in the first texture / buffer (“Buffer A”).

If we exclude the weights, per component the first (“horizontal”) pass requires a “simplified” real-times-complex multiply and accumulate (2 madds, one per real and one per imaginary component).

The second (“vertical”) pass requires full complex multiplies and accumulates – however, after applying the optimization from Olli we can keep the real component only (2 madds).

So assuming that there are N taps in both the horizontal and vertical directions, every component needs roughly 4N “full rate”/simple instructions – not bad at all. For two components it’s 8N instructions. Both variations read normal amounts of memory in the first pass (one fetch per tap), but the 2nd pass requires either 2x or 4x more memory bandwidth.
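A quick back-of-the-envelope version of this accounting (N = 17 taps is just an assumed kernel width):

```python
N = 17  # assumed number of taps per 1D pass
for components in (1, 2):
    instructions = 4 * N * components  # ~2 madds per tap per pass, per component
    channels = 3 * 2 * components      # RGB x (real, imaginary) x number of components
    print(f"{components} component(s): ~{instructions} simple instructions, "
          f"{channels} intermediate float channels")
```

The channel counts (6 and 12) are what drive the storage figures discussed below.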

Storage costs are similarly larger – every RGB channel is multiplied by 2 and by the number of components. So using a single component we’d need 6 floating point channels (that can get negative – complex phasor!), and for two components 12, so probably 4 RGBA16F textures.

In general, like with many post effects, memory bandwidth is definitely a concern here; this technique requires even more of it and can get pretty expensive, but most blurring can be done in local / groupshared memory in compute shaders, and one can do some optimizations like blurring in YCoCg space and storing chroma components at lower resolution etc. So overall I definitely think it’s a practical and cool approach.

In general I really like this technique. I don’t know if I’d use it in practice (I still see lots of benefit in combining DoF and MB), but I find it really beautiful and elegant, especially the maths behind it. 🙂


“Circularly symmetric convolution and lens blur”, Olli Niemitalo

“Circular separable convolution depth of field”, Kleber Garcia

“Killzone: Shadow Fall Demo Postmortem”, Michal Valient

“Graphic gems of CryEngine 3”, Tiago Sousa

“Hexagonal Bokeh Blur Revisited”, Colin Barré-Brisebois

Posted in Code / Graphics | 2 Comments

Cull that cone! Improved cone/spotlight visibility tests for tiled and clustered lighting

In this blog post, I will present some common spotlight culling techniques and propose one small improvement that is able to correct results of cheap, tiled / clustered culling on the CPU/GPU with almost negligible ALU cost. If you know a lot about tiled light culling, feel free to skip till the last section.

The idea for this post came from the fact that many people talking about light culling focus mostly on spherical point lights. However, in many practical applications point lights are actually rare! It is not uncommon to implement even point light rendering and shadow casting not through cubemap shadowmaps, but through 1-6 clipped spot lights (getting the advantage of being able to vary their resolution, cull intersecting geometry separately and compute optimal near/far planes).

The main problem with culling spotlights is that unlike spherical / point lights, many widely used tests cause some overdraw / false positives. With physically-based light falloffs and BRDFs, lights can get really huge (the specular component can be seen very far away), and the most common tests are not suited for culling such large primitives. I will have a look at a few possible techniques in the following sections.

Note: in the sample code you will see instructions like sin or cos – obviously you don’t want to call them, but precompute them ahead of time – this is only for demonstration purposes.

You can find the Mathematica notebook that comes with this post here, and the pdf version of it here.

Introduction / assumptions

Talking about generic collision tests for spot lights, I am going to approximate them as cones (an extra cap is easy to add; in most of those tests it can be skipped though). I will talk about culling them against mini-frustums with a focus on tiled lighting – tiled deferred and clustered forward lighting – but the ideas apply in theory to any kind of culling technique. In theory this could also be applied to tiled forward+ for transparent object lighting, however it won’t benefit as much from some of the mentioned optimization techniques due to very long frusta. In those cases it is better to use clustered shading anyway! In general, clustered shading is preferable and can be made extremely efficient with some low level hardware tricks, for example as presented by Tiago Sousa and Jean Geffroy.

In most of this post, I will assume that the tile frustum is not extremely long, so it has relatively uniform dimensions. While this could be considered limiting, in practice it is possible to achieve this assumption even within just classic tiled deferred by splitting tile depth bounds based on the depth distribution.

I will also demonstrate the ideas here in 2D for simplicity, but they extend one-to-one to 3 dimensions (unless noted otherwise).

Simple approach – cone as a sphere

The first approach is quite trivial. Instead of dealing with a spotlight / cone, we can look at its bounding sphere. Obviously taking the equivalent point light bounding sphere would be extremely wasteful, so instead we want to find the exact bounding sphere of the cone. To find it, we can look at just 2 cases: the triangular cone slice is either obtuse or acute.


Cone slice bounding sphere

In the obtuse case, the bounding sphere’s center will be on the cone cap, with radius equal to the radius of the cap. In the acute case, we want to find the circumscribed circle of the slice triangle. There is a very easy formula for its diameter based on one of the triangle angles and the length of the opposite side: A / sin(a). After taking into account the half angle and half-diameter and doing some trigonometric reductions, it becomes a trivial 1/(2cos(halfAngle)).

float4 boundingSphere(in float3 origin, in float3 forward, in float size, in float angle)
{
	float4 boundingSphere;
	if(angle > PI/4.0f)
	{
		// obtuse slice: sphere centered on the cap, radius equal to the cap radius
		boundingSphere.xyz = origin + cos(angle) * size * forward;
		boundingSphere.w   = sin(angle) * size;
	}
	else
	{
		// acute slice: circumscribed circle, radius size / (2 cos(angle))
		boundingSphere.xyz = origin + size / (2.0f * cos(angle)) * forward;
		boundingSphere.w   = size / (2.0f * cos(angle));
	}

	return boundingSphere;
}
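The same formulas can be sanity-checked numerically – a small Python port (my own sketch) verifying that both the apex and the cap rim of the cone slice fall inside the computed sphere:

```python
import math

def bounding_sphere(size, angle):
    # port of the HLSL above for a cone with apex at 0, axis along +X:
    # returns (center distance along the axis, radius)
    if angle > math.pi / 4.0:
        return math.cos(angle) * size, math.sin(angle) * size
    r = size / (2.0 * math.cos(angle))
    return r, r

for angle in (0.2, 0.6, math.pi / 4.0, 1.2):
    c, r = bounding_sphere(1.0, angle)
    rim = (math.cos(angle), math.sin(angle))  # (axial, radial) rim point of the slice
    rim_dist = math.hypot(rim[0] - c, rim[1])
    assert abs(0.0 - c) <= r + 1e-6           # apex inside the sphere
    assert rim_dist <= r + 1e-6               # cap rim inside the sphere
print("ok")
```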

This can fail in many situations even with the simplest plane vs sphere tests:



However, in some situations it will work reasonably well against a single plane. Having precomputed the spheres on the CPU or just ahead of time, it is an extremely cheap test to perform.


Unfortunately, this test suffers from a “corner” problem typical to all primitive vs planes partial swept line tests:


Here is a stop frame from this animation; you can see how every plane test succeeds and the cone / sphere is not culled by any of them, so the whole bounding sphere cannot be culled:


Gareth Thomas proposed a simple trick / workaround: precise, Minkowski-sum culling of the sphere against the bounding box of the frustum.

Here is how it performs in the same case; a simple and cheap box test against the sphere gives the expected result without false negatives:


Different approach – plane cone tests

While not forgetting about the frustum plane test failures, we can try addressing those very thin or very wide cones with a better plane vs primitive collision test and see how it’d work in practice.

The main idea is simple – find the closest point on the primitive to the plane and test it against that plane. In 2D this is trivial – the point will be the closest of the 3 triangle corners:


3D case

In 3D it gets a bit more tricky, but the same assumptions remain – this time we either test the cone apex or the closest point on the cap rim. The idea for how to do it efficiently comes from Christer Ericson’s book “Real-Time Collision Detection”.

Let me visualize first how the algorithm works:


We are trying to find the vector V2 pointing towards the rim point closest to the plane surface. To do it, we first compute a vector V1 that is perpendicular to both the plane normal and the cone “forward” vector. This can be done with a simple cross product. We also get a guarantee that this vector lies on the cone cap plane (since the cone cap plane contains all vectors perpendicular to the cone axis). Then we do yet another cross product with the cone axis – to get a vector perpendicular to both the cone axis (so it will still be on the cap plane) and this vector perpendicular to the plane normal. This means it will be a vector that points in the direction most similar to the plane normal, or in the opposite direction, depending on the order of the cross product operations.

It is very important here not to forget a “safe” normalize, since depending on the sine of the angle between the plane normal and the cone orientation, the result can be a very short vector. If it is zero, we can totally ignore it, as it means that the cone cap is parallel to the plane and we need to check only the apex.

Here we can see how this works in practice for a test that detects an intersection:


Finally, we need to also check the cone origin point – since the cone cap can be quite far away from the plane, but the cone may intersect it anyway:


We can code it easily in HLSL, for example like this:

bool TestConeVsPlane(in float3 origin, in float3 forward, in float size, in float angle, in float4 testPlane)
{
	const float3 V1 = cross(testPlane.xyz, forward);
	// note: should be a "safe" normalize guarding against zero length, as described above
	const float3 V2 = normalize(cross(V1, forward));

	const float3 capRimPoint = 	origin +
								size * cos(angle) * forward +
								size * sin(angle) * V2;

	return dot(float4(capRimPoint, 1.0f), testPlane) >= 0.0f || dot(float4(origin, 1.0f), testPlane) >= 0.0f;
}
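To convince myself that the double cross product really picks out an extremal rim point, here is a small numeric check in Python (my own sketch; plane normal, axis and cone parameters are arbitrary demo values):

```python
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def normalize(v):
    l = math.sqrt(sum(c * c for c in v))
    return tuple(c / l for c in v)

n = normalize((0.3, 1.0, 0.2))   # arbitrary plane normal
f = normalize((1.0, 0.2, -0.4))  # arbitrary cone axis ("forward")
size, angle = 2.0, 0.6

v2 = normalize(cross(cross(n, f), f))
rim = tuple(size * math.cos(angle) * fi + size * math.sin(angle) * v2i
            for fi, v2i in zip(f, v2))
d = sum(ri * ni for ri, ni in zip(rim, n))

# brute force: sample the cap rim densely; the analytic point must be extremal along n
u = normalize(cross(f, (0.0, 0.0, 1.0)))
w = cross(f, u)
dots = [sum((size * math.cos(angle) * fi +
             size * math.sin(angle) * (math.cos(t) * ui + math.sin(t) * wi)) * ni
            for fi, ui, wi, ni in zip(f, u, w, n))
        for t in (2.0 * math.pi * i / 10000 for i in range(10000))]
print(abs(d - min(dots)) < 1e-4 or abs(d - max(dots)) < 1e-4)  # True
```

Which extremum (nearest or furthest along the normal) you get depends on the cross product order, as noted in the text.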

Back to a 2D example

How does it perform? In theory it could be better (we can get much closer than the bounding sphere and still get some culling), but it still suffers from the same problems as all frustum plane tests:


In practice, this is a huge problem. Just have a look at a little bit contrived scenario of a regular grid:


This looks almost totally useless!

There are a few solutions to it. I will first mention a heavy / complicated, but best solution, suggested by Matt Pettineo – rasterizing the cone in the tested space and then either simply checking against this rasterized mask, or doing a proper, rasterized computation of tile / cluster bounds in 2D. It is not so easy to implement, requires smart tricks to work around the lack of conservative rasterization on most available consumer hardware, and adds extra passes to the rendering pipeline. I would suggest doing it this way if you have the time, but I will propose a bit coarser, yet very cheap and simple solution if you don’t.

Flipping the problem

For the idea I propose (I am sure it is not new/revolutionary; however, I haven’t seen it in any published material related to tiled light culling), I got inspired by both Gareth Thomas and Inigo Quilez. Instead of doing a partial separating axis test by testing frustum planes, in the case of small frusta against large primitives we should do the opposite!

Just test the cone against some other primitive.

What other primitive? The easiest one – a sphere; its symmetry makes Minkowski-sum tests quite trivial. Just find the point on the tested primitive closest to the sphere center and perform a distance check.

Here is how it would work geometrically:


We can see a vector V that goes from the cone origin towards the sphere. This vector V is decomposed into perpendicular vectors V1 and V2. V1 is the projection of V onto the cone axis, clamped to the cone axis length, while V2 is V minus this projection – clamped to tan(coneAngle) times the V1 length.

This way we get the point on the cone closest to the sphere center, and we can simply check the distance of this point from the sphere center against the radius of the sphere. Note that this is the correct test, giving accurate results in every case – but it is quite expensive (many reciprocals) and has relatively large register pressure (operations on whole 3D vectors).

We can slightly simplify it by not actually decomposing the vector and just comparing a few distances – for example as explained by Charles Bloom. Please note that his original test is very simplified and doesn’t actually test the cone range / behind-the-cone situation, something like:


To fix it without adding too much extra cost, I’d add some simple checks for the distance of V1 along the cone axis from the cone start/end, so the final simplified test would have only one small case of false positives, when both the axis and the angle/distance checks fail (a very small area for small sphere radii):


The final, corrected version in HLSL might look like this (after multiplying by sin(a)):

bool TestConeVsSphere(in float3 origin, in float3 forward, in float size, in float angle, in float4 testSphere)
{
    const float3 V = testSphere.xyz - origin;
    const float  VlenSq = dot(V, V);
    const float  V1len  = dot(V, forward);
    const float  distanceClosestPoint = cos(angle) * sqrt(VlenSq - V1len*V1len) - V1len * sin(angle);

    const bool angleCull = distanceClosestPoint > testSphere.w;
    const bool frontCull = V1len >  testSphere.w + size;
    const bool backCull  = V1len < -testSphere.w;
    return !(angleCull || frontCull || backCull);
}

Is it worth it? Let’s have a look at the previous grid test (overlapping spheres from tiny boxes):


I’d say – absolutely yes. 🙂 With frusta that tend to be thinner/longer, the gain won’t be as great (as the bounding sphere volume starts to get much larger than the frustum volume), but it is a very cheap test (cheaper than testing 6 planes!) and still works really well.



I highly recommend trying sphere vs cone tests, even if the primary primitive you are culling against is not a sphere at all and it might seem that a sphere would be too conservative. It is also not all or nothing – you can add this relatively cheap test on top of your other, existing tests and you are not sacrificing anything (other than some CPU / GPU cycles).

In general, I would also discourage frustum plane tests in probably any case where the tested primitives might be large compared to the frusta – you can almost always do better, even with tests that might have seemed more “conservative”. If you don’t have that many lights, you can keep plane tests for some marginal extra culling efficiency in some cases (e.g. a cone parallel and close to a very thin frustum). When in doubt – visualize!

And as a general piece of advice, it is often worth looking at the problem from the opposite side / flipping it around. Obviously if you spend too much time on a problem on your own, it often takes talking to your colleagues just to realize that there might be a different solution. Changing the domain is an extremely common idea in mathematics (integration domain change, variable change, relative coordinate system change etc.), but I tend to forget about it while solving “easier” CS problems.

References

  • “Forward+: Bringing Deferred Lighting to the Next Level”, Takahiro Harada, Jay McKee, and Jason C. Yang
  • “Deferred Rendering for Current and Future Rendering Pipelines”, Andrew Lauritzen
  • “Clustered Deferred and Forward Shading”, Ola Olsson, Markus Billeter, and Ulf Assarsson
  • “The devil is in the details: idTech 666”, Tiago Sousa, Jean Geffroy
  • “Advancements in Tiled-Based Compute Rendering”, Gareth Thomas
  • Inigo Quilez on “correct” frustum culling
  • Christer Ericson, “Real-Time Collision Detection”
  • “Bindless Texturing for Deferred Rendering and Decals”, Matt Pettineo
  • “View Culling”, Charles Bloom

Posted in Code / Graphics | 9 Comments

Small float formats – R11G11B10F precision

While this post is not yet dithering related, it is in a way a part of my series about dithering. You can check index of all parts here or check the previous part.

I will talk here about the use of the not very popular / well known R11G11B10 float format (R11G11B10F) – its precision, caveats, and how to improve it.

I want to note here that this post will not touch on many float subtleties, namely NaNs, denorms and infinities. Lots of GPU shader compilers use fast math anyway (unless asked for strict IEEE compliance) and ignore them – and programmers have to be doubly careful when relying on such values.

You can find Mathematica notebook for this post here and corresponding pdf here.

Update: I updated the section about losing dynamic range in the denorm range after a correction from Tom Forsyth and Peter-Pike Sloan that GPUs are standardized to support denorms when writing to small floats.

Problem description

The most common representation of colors in rendering is not an integer / natural / rational number representation, but a floating point representation. Floating point numbers and their large range are useful for a few different reasons, the most important being:

  • Encoding HDR values and lighting,
  • Need for fractional values when operating on multiple colors – mixing them, filtering with fractional or negative weights,
  • Need for more precision in darker areas without any gamma encoding,
  • Need for a bounded relative quantization error (constant upper bound relative to signal magnitude),
  • The fact that floating point numbers are a “natural” representation for GPUs (for a long time GPUs didn’t have any integer support, or it was “emulated” using float operations… and some integer operations are still slower than floating point ones).

That said, rendering techniques very rarely store 32 bit floating point values, even for HDR color – because of both memory storage cost and performance. Memory bandwidth and caches are usually the most sacred resources, and a simplistic rule of thumb is “ALU is cheap, memory access is expensive”. Even the simplest memory access operations have latencies of hundreds of cycles (at least on AMD GCN). Furthermore, the cost increases when the texturing unit is used – filtering operations get more expensive and operate at slower rates.

Therefore, rendering programmers usually use smaller float formats for intermediate in-memory storage – the two most common being RGBA16F (four 16 bit half float channels) and R11G11B10F (channels R and G using 11 bit small floats and channel B using a 10 bit small float).

Let’s have a look at the difference between those formats and full 32bit IEEE float. If you feel comfortable with float representation, feel free to skip the next section.

Floats – recap

I am assuming here that reader knows how floating values are represented, but as for a reminder – typical floating point value is represented by some bits for:

  • sign – just the sign of the number, a single bit, and optional (more on that later),
  • exponent – some bits in a biased integer format, describing the power of two by which the rest of the number is multiplied,
  • mantissa – some bits representing the fractional part of the number before multiplying by the exponent. A leading “1.” is implied, so for example a mantissa of 01011000 corresponds to 1.01011000 in binary (base 2).

Therefore a typical final number is sign (±1) × 2^(decoded exponent) × 1.mantissa.
There are lots of “special” cases of floats that use the smallest and largest values of the exponent (denorms, infinity, NaN, zero), but for the purpose of this post we will later look at only one special case – the encoding of zero. It is achieved by setting all exponent and mantissa bits to zero. (Note: because the sign bit can still be set, there are two zeros, +0 and -0.)

Floating points are a very clever representation with lots of nice properties (for example positive floats interpreted as integers can be sorted or atomically min/maxed! Or that integer zero corresponds to just positive float zero), however come with many problems with precision that are not always the most intuitive. I will be mentioning here only some of them – the ones that are relevant to discussed problem.

Regular and small floats

So far I was trying to stay very generic and not specify any bit numbers, but to use floats in hardware (or software emulation), we need to define them.

Here is a table showing various bit depths of regular 32 bit floats as well as half floats and 11 and 10 bit floats as used by graphics hardware / standards:

Bit depth | Sign bit present? | Exponent bits | Mantissa bits
32        | Yes               | 8             | 23
16        | Yes               | 5             | 10
11        | No                | 5             | 6
10        | No                | 5             | 5

We can immediately make a few interesting observations:

  • 11 and 10 bit floats do not have a sign bit! This decision was probably driven by the fact that they already have quite poor precision for most uses, so they were designed in graphics APIs only to store color; a sign bit here would be an extra waste.
  • 16 bit “half” floats and 11 and 10 bit floats all have the same exponent size! This is an interesting choice, but it guarantees that they can represent roughly the same range of values. An exponent of 5 bits guarantees that values can go up to 65500 and 65000 (depending on their mantissas), which is pretty large even for HDR lighting (unless using non-biased, absolute exposure values, or the precision-increasing trick I will cover later). The exponent can be negative, so we can go to similarly (“one over”) small values.
  • The mantissa suffers the most. The difference is quite drastic – 23 vs. 5 bits in the worst case! We are dropping 18 bits of precision. This is very unfortunate, as it means that between numbers in a similar range (similar exponent), we are losing lots of relative precision.

Also, because of the uneven bit depths of the 11 11 10 float format, a problem arises from the different mantissa bit depth of the blue channel versus the other channels – it will produce various discolorations and hue shifts, similar to those that often appear in BC1 block compression (with its 565 endpoint bit depths), but yellow/blue instead of green/purple. I will show an example of it later in the post. Obviously, this decision makes sense – the 11 11 10 format fits nicely in a single dword, and perceptually, human vision is least sensitive to the blue channel.

So as we can see, we are dropping lots of information by converting 32 bit floats to 16 or 11/10 bits. Furthermore, the information loss is not proportional between exponent and mantissa – in every small float case, we lose much more information in the mantissa. This can lead to quantization and banding errors.

Before analyzing quantization, one thing is worth mentioning – the IEEE standard defines a few different rounding modes (e.g. to nearest, to zero, to +inf and to -inf). I don’t think they are in any way configurable on GPUs (at least in standard, cross-vendor APIs), so I will write the rest of the post ignoring this complexity and assuming that the simplest round-to-nearest is used.

Small float mantissa precision – concrete example

I hope the previous section and its bit depth numbers clearly show the problem: smaller float formats lose a lot of numerical precision because of their very small mantissa.

First, a numerical example. Let’s take 3 simple 8 bit integer values and represent them as floats in the 0-1 range – a common operation for colors.

N[252/255, 8]

N[253/255, 8]

N[254/255, 8]

Let’s try to represent them as floats. Using our knowledge of float encoding, and knowing that the mantissa always starts with an implicit one, we need to multiply them by 2, and the exponent will be 2^-1.

After multiplication we get:

BaseForm[N[2*252/255, 8], 2]

BaseForm[N[2*253/255, 8], 2]

BaseForm[N[2*254/255, 8], 2]

I highlighted the first 5 bits – why? Recall that a 10 bit float has only 5 bits of mantissa! Therefore 10 bit floats (the blue channel of R11G11B10F) cannot accurately represent even these 3 almost-maximal 8 bit color values! At the same time, you can see that the next bit actually differs – therefore those 3 numbers will produce only 2 different values in the 11 bit floats and produce wrong coloration of near-white values.
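If you want to replicate this experiment outside Mathematica, here is a small Python sketch. The `quantize_float` helper is my own (hypothetical, not any real API), assuming round-to-nearest and ignoring exponent range limits:

```python
import math

def quantize_float(x, mantissa_bits):
    # Round x to the nearest float with the given mantissa bit count
    # (round-to-nearest, ignoring exponent range limits and denorms).
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)               # x = m * 2**e, with m in [0.5, 1)
    scale = 2 ** (mantissa_bits + 1)   # +1 because frexp's m is in [0.5, 1)
    return round(m * scale) / scale * 2 ** e

for v in (252, 253, 254):
    x = v / 255
    print(v, quantize_float(x, 10), quantize_float(x, 5))
```

With 5 mantissa bits, 252/255 and 253/255 collapse to the same value (63/64 = 0.984375) while 254/255 rounds up to 1.0 – two distinct outputs for three distinct inputs; with 10 mantissa bits (a half float) all three stay distinct.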

Small float mantissa precision – visualized

Ok, so we know that small floats cannot accurately represent even simple 8 bit luminance! But how bad are they really? I created some Mathematica visualizations (see the top of the page for the link) – first for the worst case, B10F, dropping 18 bits of mantissa.


Things look ok close to zero (or even much better – not surprising given how floats are encoded!), but the error starts increasing and is almost 4x larger close to one compared to the linear 8 bit quantization error!

This comparison however is quite unfair – we don’t use 8 bit linear color; because of perceptual sensitivity to darks vs. brights (“gamma”) we use sRGB instead, so we care less about those bright areas and decide to encode more information into the darker parts. This is how the comparison of those 3 encodings looks:


Ok, things are a bit more even. 10 bit float precision is a bit better for linear values up to around 0.125, but gets worse later. The maximum error is almost 2x larger around 1 for 10 bit floats – not great… This will create visible bands on smooth gradients.

Just for fun, extra visualization, relative error (divided by original value):


As expected, the float quantization relative error is bounded and reaches its maximum within ranges corresponding to successive exponents (if we don’t count going below the minimum normalized float representation), while the 8 bit linear and sRGB relative errors increase as we approach zero. The floating point relative error also comes in “bands” corresponding to successive exponents, getting 2x larger between adjacent bands.

We will have a look at how to improve things a bit, but first – a bit more about a second problem.

Small float uneven mantissa length problem

Because R11G11B10 floats have an uneven mantissa bit length distribution, the channels will quantize differently. How bad is it? As always with floats, the absolute error depends on the range:


The larger the number – the higher the error. In the last part of the plot it looks pretty bad:


What does this different quantization mean in practice? It means that there will be discoloration / wrong saturation of the signal. Let’s have a look at a simple gradient from 0.5 to 0.6.


This is very bad (if you have a good monitor / viewing conditions). And now imagine that the art director you work with likes a contrasty, filmic look with boosted saturation:


This looks quite unusable… We will have a look at improving it – in this post by changing the signal’s dynamic range, in the next post by dithering.

Rescaling will not work

A quite common misconception is that it is enough to multiply a float by a large number before encoding and divide after decoding. This is not going to work. For example, let’s look at the quantization error when premultiplying by 16:

Zero difference at all! Why? Let’s think about what multiplying by 16 means in float representation. The mantissa is not going to change! The only thing that happens is that we add 4 to the exponent (and subtract it back on decode). So the relative error due to mantissa quantization will be exactly the same. One can try multiplying by a number between 1/2 and 2 and we will see a difference in ranges shifting, but this only shifts the error towards either whites or darks:


Error bands only slide left or right.
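This invariance is easy to verify numerically. A small Python sketch (using a hypothetical round-to-nearest `quantize_float` helper of my own, ignoring exponent range limits): scaling by a power of two before quantization and dividing it back out afterwards reproduces exactly the same result.

```python
import math
import random

def quantize_float(x, mantissa_bits):
    # Round-to-nearest with a limited mantissa, ignoring exponent range limits.
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)
    scale = 2 ** (mantissa_bits + 1)
    return round(m * scale) / scale * 2 ** e

random.seed(1)
for _ in range(1000):
    x = random.random()
    # Pre-multiplying by 16 only shifts the exponent by 4; the mantissa
    # rounding - and therefore the error - is completely unchanged.
    assert quantize_float(x * 16, 5) / 16 == quantize_float(x, 5)
```

The equality is exact because multiplication and division by 16 are lossless in binary floating point.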

Improving error by applying some gamma

Let’s have a look at a different method – one that takes advantage of the fact that (if the image is pre-exposed!) we probably don’t care about extremely small values, where most of the float precision is located (to achieve bounded relative precision).

In my previous post about dynamic range I mentioned a commonly used workaround for precision problems – stretching the dynamic range by taking some power of the signal (smaller or larger). For storing higher-precision dark areas of images in integers, we wanted a power smaller than one for encoding – for example the famous gamma 1/2.2. However, in this case we would like to do… the opposite! So a larger power – to understand why, just look at the original comparison where we introduced the sRGB variant:

With sRGB we rescaled the blue plot from constantly oscillating within fixed bounds to one that grows. Here with 10 bit floats the problem is the opposite – we have a function that asymptotically grows too quickly – and we want to undo it.

Think about it a bit – it’s quite an interesting problem. It has a lot to do with the way float precision is distributed – it is a non-linear, logarithmic distribution that handles large dynamic ranges very well; furthermore, an exponential-like signal curve will be represented almost linearly! Therefore, to get the most from our low bit depth floating point representation, we would like to increase the dynamic range as much as we can prior to encoding. We can do this by, for example, squaring the signal or taking larger powers. For the initial 3 floats that I used, distinguishing them actually requires a quite large exponent – 3 for the given values:

BaseForm[N[2*(252/255)*(252/255)*(252/255), 8], 2]

BaseForm[N[2*(253/255)*(253/255)*(253/255), 8], 2]

BaseForm[N[2*(254/255)*(254/255)*(254/255), 8], 2]

Note how they are different (though first two will round the same way).
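To quantify the benefit, here is a small Python experiment (same assumptions as my earlier sketches: a hypothetical round-to-nearest quantizer with an unlimited exponent range, so denorm behavior is idealized). It measures the worst-case round-trip error over all 8 bit values for gamma 1, 2 and 3 with a 5 bit mantissa:

```python
import math

def quantize_float(x, mantissa_bits):
    # Round-to-nearest with a limited mantissa, ignoring exponent range limits.
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)
    scale = 2 ** (mantissa_bits + 1)
    return round(m * scale) / scale * 2 ** e

def worst_error(gamma, mantissa_bits=5):
    # Max absolute error over all 8 bit values for encode -> quantize -> decode.
    return max(
        abs(quantize_float((v / 255) ** gamma, mantissa_bits) ** (1 / gamma) - v / 255)
        for v in range(1, 256)
    )

errors = [worst_error(g) for g in (1, 2, 3)]
print(errors)
```

The maximum error shrinks as the encoding power grows – roughly in proportion to the gamma, matching the exponent-division intuition.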

Let’s have a look at absolute error with applying gamma 3 (note: this graph assumes correct denorm handling, more below):


Our error now looks asymptotically smaller than the 8 bit sRGB error – this could already be quite a useful storage base. Our previously banded gradient also looks better, as does its higher contrast version (though it’s not perfect – recall that contrast partially undoes the gamma):





Before with contrast:


After with contrast:


There is no free lunch though!

First of all, there is an ALU cost. As we do this operation per 3 channels, it can get quite significant! Computing x*x*x is 2 full rate operations, but for example pow(x,1/3) is log2 + exp2 + multiply, so 2 quarter rate + 1 full rate = 9 FR instructions per color channel! The cheapest variant is just squaring, and sqrt(x) is a single quarter rate instruction = equivalent of 4 FR instructions.

Secondly, this data is now obviously not filterable / blendable… Blending in this space would create over-brightening. This can be an issue (if you need hardware blending or to resample it with bilinear taps) or not (if you can do it all manually / in a compute shader).

Thirdly, this extra precision is achieved by sacrificing dynamic range. It is roughly equivalent to dividing the absolute value of the exponent by the used gamma. So for example, with gamma 3 our maximum representable value will be around pow(65000, 1/3) ≈ only 40! Is that HDR enough for your use? If you pre-expose the scene, probably yes, but the hottest points will be clipped… The squared variant looks much better, at around 250+.
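The range trade-off is just arithmetic – taking the largest finite small-float value of about 65000 mentioned earlier:

```python
max_small_float = 65000.0  # approximate largest finite 11/10 bit float value
for gamma in (1, 2, 3):
    # After decoding with x ** (1 / gamma), the largest representable
    # pre-encoding value is the gamma-th root of the format's maximum.
    print(gamma, max_small_float ** (1 / gamma))
```

This prints roughly 65000, 255 and 40 – the squared and cubed variants' ceilings quoted above.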

Potential problem with small numbers

Note: this section got slightly rewritten after correction from Tom Forsyth and Peter Pike-Sloan.  My assumptions were pessimistic (denorm flush to zero), but apparently, GPUs in for example DirectX are obliged to handle them correctly. Thanks for noticing that!

Another problem lies in a different part of the range – the smallest representable numbers. The same exponent division applies to them! Therefore the smallest normalized representable number after applying gamma 3 will be 0.03125, which is around 8/255 – and if we don’t have denorms, or denorms are flushed to zero, this would result in a clip! Without denorm handling, the zoomed-in graph of the error would look like:


And the overall graph would look like:


You could try to fix it by pre-exposing, for example by 4:


But not only is this not perfect, you would also start losing range again at the top (the hottest representable values). Instead of an already limiting 40, you’d get only 10! This is probably not enough even for displaying the signal on an HDR TV…

Therefore, if denorms were not handled correctly, I’d rather recommend sticking to gamma 2 with a pre-exposure of 4 and accepting the slightly higher quantization errors:



Fortunately, as I was corrected – this is not the case and we can assume that denorms will be handled – so we can use those higher exponents if needed, thinking only about how much dynamic range we are sacrificing at the top end.

Before finishing this section, an interesting side note: have you ever considered how low normalized float precision is when operating on 16 bit floats? Half floats have the same exponent bit depth, so if you apply contrast operations to them, you might enter the denorm range very quickly! Which could theoretically result in clipping.

Untested idea – using YCoCg color space?

An interesting (?) idea could be to use a different color space like YCoCg or similar instead of RGB. In (signed) YCoCg, smaller chroma = smaller magnitudes of the Co/Cg components = more precision. This would help decorrelate color channels and avoid ugly chroma shifts when the color is less saturated (and when those shifts are most visible).

Unfortunately, R11G11B10 has no sign bit available – we would need to store 2 extra sign bits “somewhere” (different surface? lowest bit of mantissa / highest bit of exponent?).

Summary – to use R11G11B10F or not to use?

R11G11B10 and the small 11 and 10 bit floats have many limitations, but form an extremely compelling storage format. They halve the memory storage and bandwidth requirements compared to RGBA16F, are capable of storing a high dynamic range signal, and after some numerical tricks also provide precision acceptable in most color encoding scenarios. I use them a lot for non-critical signals (ambient buffer, many post-effect buffers), but I think they are practical also for regular color buffers if you don’t need alpha blending or filtering and can tinker with the input data a bit.

Update: I got information from Volga Aksoy and Tom Forsyth that the Oculus SDK now supports and recommends outputting to this format, so it is definitely practical. Because of the darker / perfect viewing conditions with an HMD, human perception is much more sensitive in the darks, and R11G11B10F performs better than 8 bit sRGB in this lower range.

In the next post I will show how to dither floats and get even better results with almost no perceived banding (trading it for noise).

Bonus – comparison with 10bit sRGB

As a small bonus, a simple comparison with 10 bit sRGB encoding (no hardware support, but some video-out libraries support it to allow for more precise color profile / curve conversions). The two plots show the error in the full 0-1 range and in the 0-0.1 darks range.



We can see that 10 bit sRGB is clearly superior throughout most of the range, but at very low/dark values 10 bit floats are either equivalent or even slightly better.

References
Steve Hollasch, “IEEE Standard 754 Floating Point Numbers”
Mathematica help – Convert floating point representation to any scientific notation & back
Direct3D 10 Floating point rules


Dithering part three – real world 2D quantization dithering

In the previous two parts of this blog post mini-series I described basic uses of dithering, mentioned the blue noise definition, and referenced/presented 2 techniques of generating blue noise plus one of many general purpose high-frequency low-discrepancy sampling sequences.

In this post, we will look at a more practical example – the use of (blue) noise in 2D image dithering for quantization!

You can find Mathematica notebook for this post here and its pdf version here.

Bit quantization in 2D

Finally, we are getting to a practical use case. When you encode your images in 8 bits (a typical framebuffer) – you are quantizing. When you encode your GBuffers, render targets and others – you are quantizing. Even typical 8 bit storage is enough to cause some banding on very smooth, slow gradients – like skies or lens flares.

We will cover here a more extreme example though – extreme 3 bit quantization of a linear gradient.



We call those quantization artifacts – here 8 visible bands – the “banding” effect.

As we learned in previous parts, we can try to fix it by applying some noise. At first, let’s try regular, white noise:


Doesn’t look too good. There are 2 problems:

  • “Clumping” of areas, identical to the one we have seen before; we will address it in this post.
  • Still-visible “bands” of unchanged values around the centers of the bands (where the banding effect was not contributing too much error).


Error visualized.

Those bands are quite distracting. We could try to fix them by dithering even more (beyond the error):


This solves that one problem! However, the image is too noisy now.

There is a much better solution that was described very well by Mikkel Gjoel.

Using triangular noise distribution fixes those bands without over-noising the image:


Use of triangular noise distribution.

Since this is a well covered topic and it complicates the analysis a bit (different distributions), I will not be using this fix for most of this post. Those bands will stay, but we will still compare some distributions.
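To make the quantization + dithering setup concrete, here is a minimal Python sketch (my own toy version, not the notebook code – the quantizer and noise ranges are my assumptions). It shows that dithering makes the quantizer unbiased on average, while plain rounding stays stuck on a band:

```python
import random

def quantize(x, bits, dither=0.0):
    # Uniform quantizer to 2**bits levels over [0, 1], with optional
    # dither noise (in quantization-step units) added before rounding.
    levels = (1 << bits) - 1
    q = round(x * levels + dither) / levels
    return min(1.0, max(0.0, q))

random.seed(0)
x = 0.3
plain = quantize(x, 3)  # always the same banded value, 2/7 ~ 0.286
white = [quantize(x, 3, random.random() - 0.5) for _ in range(100000)]
# Triangular noise = sum of two uniforms, the remap mentioned above.
tri = [quantize(x, 3, random.random() + random.random() - 1.0)
       for _ in range(100000)]
print(plain, sum(white) / len(white), sum(tri) / len(tri))
```

Both noise variants average back to ~0.3; the triangular variant additionally makes the error's second moment independent of the signal, which is what removes the visible unchanged bands.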

Fixed dithering patterns

In the previous part we looked at the golden ratio sequence. It is well defined and simple in 1D, however it doesn’t work / isn’t uniquely defined in 2D.

One of the oldest, well known and widely used 2D dithering patterns is the so-called Bayer matrix, or ordered Bayer. It is defined recursively, with a simple pattern for level zero:

1 2

3 0

With next levels defined as:

4*I(n-1) + 1  —– 4*I(n-1) + 2

4*I(n-1) + 3  —– 4*I(n-1) + 0

It can be replicated with a simple Mathematica snippet:

Bayer[x_, y_, level_] :=
 Mod[Mod[BitShiftRight[x, level], 2] + 1 + 2*Mod[BitShiftRight[y, level], 2],
   4] + If[level == 0, 0, 4*Bayer[x, y, level - 1]]


8×8 ordered Bayer matrix
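For reference, a Python transcription of the snippet above (a sketch assuming the same 2×2 base pattern):

```python
def bayer(x, y, level):
    # Recursive ordered-Bayer index with base pattern [[1, 2], [3, 0]]:
    # the coarse 2x2 pattern at this level plus 4x the finer level below.
    v = ((x >> level & 1) + 1 + 2 * (y >> level & 1)) % 4
    return v if level == 0 else v + 4 * bayer(x, y, level - 1)

# A 4x4 matrix (level 1) contains each index 0..15 exactly once.
matrix = [[bayer(x, y, 1) for x in range(4)] for y in range(4)]
```

Each recursion level doubles the pattern size while keeping every index distinct.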

What is interesting (and quite limiting) about Bayer is that due to its recursive nature, signal difference is maximized only in the small 2×2 neighborhood; larger Bayer matrices add more intermediate steps / values, but don’t contribute much visible pattern difference. Therefore most game engines that I have seen use at most a 4×4 Bayer pattern with 16 distinct values.

If you plot a periodogram (frequency spectrum) of it, you will clearly see only 2 single, very high frequency dots!


2D periodogram – low frequencies are in the middle and high frequencies to the sides.

Obviously the signal has some other frequencies, but at much lower intensities… Plotting it in log scale reveals them:


So on one hand, the Bayer matrix has lots of high frequency content – it would seem perfect for dithering. However, the presence of strong single frequency bands tends to alias heavily and produce an ugly, patterned look.

This is our quantized function:


If you have been playing with computers long enough to remember 8 bit or 16 bit color modes and palettized images, this will look very familiar – lots of algorithms used this matrix. It is very cheap to apply (a single look-up from an array, or even a few bit-magic ALU instructions) and has optimal high frequency content. At the same time, it produces these very visible, unpleasant patterns. They are much worse for sampling and in the temporal domain (next 2 parts of this series), but for now let’s have a look at a better sequence.

Interleaved gradient noise

The next sequence, which I think works extremely well for many dithering-like tasks, is the “interleaved gradient noise” by Jorge Jimenez.

Formula is extremely simple!

InterleavedGradientNoise[x_, y_] :=
FractionalPart[52.9829189*FractionalPart[0.06711056*x + 0.00583715*y]]
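A Python transcription of the same formula, for experimentation outside Mathematica (the constants are Jimenez’s published ones):

```python
import math

def interleaved_gradient_noise(x, y):
    # Two nested fractional parts of a dot product with "magic" constants;
    # returns a value in [0, 1).
    frac = lambda v: v - math.floor(v)
    return frac(52.9829189 * frac(0.06711056 * x + 0.00583715 * y))

# Example: evaluate over the pixel coordinates of an 8x8 tile.
values = [interleaved_gradient_noise(x, y) for y in range(8) for x in range(8)]
```

In a shader this is just a `frac`/`dot` pair, which is why it is so cheap per pixel.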

But the results look great – lots of high frequency content and pleasant, interleaved smooth gradients (be sure to check Jorge’s original presentation and his decomposition of “gradients”):


What is even more impressive is that such a pleasant visual pattern was invented as a result of optimizing some “common web knowledge” hacky noise hashing functions!

Unsurprisingly, this pattern has periodogram containing frequencies that correspond to those interleaved gradients + their frequency aliasing (result of frac – similar to frequency aliasing of a saw wave):


And the 3D plot (spikes corresponding to those frequency):


Just like with Bayer, those frequencies will be prone to aliasing and “resonating” with frequencies in the source image, but the almost-zero low frequencies give a nice, clean, smooth look:


Some “hatching” patterns are visible, but they are much more gradient-like (like the name of the function) and therefore less distracting.

Blue noise

Finally, we get to using a pre-generated blue noise pattern. To recall from the previous part, blue noise is loosely defined as a noise function with a small low-frequency component and uniform coverage of different frequencies. I will use here a pattern that I again generated using my naive implementation of “Blue-noise Dithered Sampling” by Iliyan Georgiev and Marcos Fajardo.

So I generated a simple 64×64 wrapping blue noise-like sequence (a couple hours on an old MacBook):


It has following periodogram / frequency content:


And in 3D (who doesn’t love visualizations?! 😀 ):


Compared to white noise, it has a big “hole” in the middle, corresponding to low frequencies.


White noise vs blue noise in 2D

At the same time, it doesn’t show the linear frequency increase of the audio / academic definition of blue noise. I am not sure if it’s because of my implementation’s optimizations (only a 7×7 neighborhood is analyzed + not enough iterations) or the original paper, but it doesn’t seem to impact the results for our use case in a negative way.

Without further ado, results of dithering using 2D blue noise:

It is a 64×64 pattern, but it is optimized for wrapping around – the border pixels’ error metric takes into account pixels on the other side of the pattern. In this gradient, it is repeated 2×2.

And this is how it compared to regular white noise:


White noise vs blue noise

Because of its high-frequency-only content, you can’t see the problematic “clumping” effect.

It also means that if we oversample (like with all those fancy 1080p -> 1440p -> 2160p displays), blur it, or apply temporal techniques (one of the next parts), the result will stay more similar to the original pattern! Here they are filtered with a 2-wide Gaussian:


Left: Gaussian-filtered white noise dithering. Right: Gaussian-filtered blue noise dithering.

And while I said we will not be looking at triangle noise distribution in this post for simplicity, I couldn’t resist the temptation of comparing them:


White noise vs blue noise with triangular distribution remapping applied.

I hope this at least hints at the observation that with a good, well distributed, large enough fixed blue noise pattern you can get results – maybe not at the quality level of error diffusion dithering, but in that direction – orders of magnitude better than standard white noise.

All four compared

Just some visual comparison of all four techniques:


White noise, blue noise, Bayer, interleaved gradient noise


And with the triangular remapping:




My personal recommendations and conclusions here would be:

  • Whenever possible, avoid ordered Bayer! Many game engines and codebases still use it, but it produces very visible, unpleasant patterns. I still see it in some currently shipped games!
  • If you cannot spare any memory look-ups but have some ALU to spare, use the excellent interleaved gradient noise by Jorge Jimenez. It produces much more pleasant patterns and is extremely cheap on the GPU instruction set! However, the patterns are still noticeable and it can alias.
  • Blue noise is a really great noise distribution for many dithering tasks; if you have time to generate it and memory to store it + bandwidth to fetch it, it is the way to go.
  • White noise is useful as a comparison / ground truth. With pixel index hashing it’s easy to generate, so it’s useful to keep around.


In this part of the series, I looked at the topic of quantization of 2D images for the purpose of storing them at limited bit depth. I analyzed the look and effects of white noise, ordered Bayer pattern matrices, interleaved gradient noise, and blue noise.

In the next part of the series (coming soon), we will look at dithering in a more complicated (but also very common) scenario – uniform sampling. It is slightly different because the requirements often differ. For example, for rotations, values of 0 and 2pi will “wrap” and be identical – therefore we should adjust our noise distribution generation error metric accordingly. Also, for most sampling topics we will need to consider more than one value of noise.

Blog post mini-series index.

Edit 10/31/2016: Fixed triangular noise remapping to work in -1 – 1 range instead of -0.5-1.5. Special thanks to Romain Guy and Mikkel Gjoel for pointing out the error.

References
“Banding in games”, Mikkel Gjoel
C. A. Bouman, “Digital Image Processing” – January 12, 2015
B. E. Bayer, “An optimum method for two-level rendition of continuous-tone pictures”, IEEE Int. Conf. Commun., Vol. 1, pp. 26-11–26-15 (1973)
“Next generation post-processing in Call of Duty: Advanced Warfare”, Jorge Jimenez, Siggraph 2014 “Advances in Real Time Rendering in games” course
“Blue-noise Dithered Sampling”, Iliyan Georgiev and Marcos Fajardo
“Quick And Easy GPU Random Numbers In D3D11”, Nathan Reed


Dithering part two – golden ratio sequence, blue noise and highpass-and-remap

In the previous part of the mini-series I covered the dithering definition and how dithering changes the error characteristics of simple 1D quantization and functions.

In this part I will try to look at what blue noise is, but first I want to have a look at a number sequence that I used in the previous post and find very useful.

You can find a Mathematica notebook for golden sequence here and its pdf version here.

For the second part of the post you can find the notebook here and its pdf version here.

Golden ratio sequence

In the previous post I used “some” quasi-random function / sequence and mentioned that it’s not perfect, but very useful. It is the sequence made of the fractional parts of successive multiples of the golden ratio.


I found idea for using it in a paper “Golden Ratio Sequences For Low-Discrepancy Sampling” by Colas Schretter and Leif Kobbelt.

This is an incredibly fascinating sequence, as it distributes successive values very well and quite far apart:


The differences between successive elements, modulo 1, are:

{0.381966, 0.618034, 0.381966, 0.381966, 0.618034, 0.381966, 0.618034, 0.381966}

So it oscillates between the golden number modulo 1 and 2 minus the golden number. Both numbers are distant enough from zero and one to produce a well-distributed sequence where successive samples add lots of information.

Edit: Mikkel Gjøl observed that the modulo/toroidal minimum distance of 0.618034 is 0.381966, so the sequence becomes:

{0.381966, 0.381966, 0.381966, 0.381966, 0.381966, 0.381966, 0.381966, 0.381966}

I find it a remarkably beautiful property!
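Both properties are easy to check numerically – a quick Python sketch:

```python
phi = (1 + 5 ** 0.5) / 2  # golden ratio ~ 1.618034

# Fractional parts of successive multiples of the golden ratio.
seq = [(i * phi) % 1.0 for i in range(1, 10)]

# Plain absolute differences oscillate between ~0.618034 and ~0.381966...
diffs = [abs(b - a) for a, b in zip(seq, seq[1:])]

# ...but the toroidal (wrap-around) distance is always ~0.381966.
toroidal = [min(d, 1.0 - d) for d in diffs]
print(diffs)
print(toroidal)
```

Every new sample lands a constant toroidal distance from its predecessor, which is exactly the property observed above.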

Even for a small number of “samples” in the sequence, they cover the whole 0-1 range very well:


Numbers plotted as colors also look “pleasant”:


If we look at its periodogram:


We also find some fascinating properties. First of all, energy seems to increase with frequency. There are visible “spikes” at some frequencies, and what is even more interesting is that every next spike happens at a frequency that is the golden ratio times higher and has the golden ratio times more energy! I don’t have any explanation for it… so if you are better at maths than me, please contribute with a comment!

This frequency characteristic is extremely useful, but it doesn’t satisfy all of our dithering needs. Why? Imagine that the source signal we are dithering contains the same frequencies. Then we would see extra aliasing at those frequencies. Any structure in noise used for dithering can become visible and can produce undesired aliasing.

I still find this sequence extremely valuable and use it heavily, e.g. in temporal techniques (as well as hemispherical sampling). There are other very useful low-discrepancy sequences heavily used in rendering – I will not cover them here; instead I’ll reference the physically based rendering “bible” – “Physically Based Rendering, Third Edition: From Theory to Implementation” by Matt Pharr, Wenzel Jakob and Greg Humphreys, and chapter 7, which the authors were kind enough to provide for free!

For now let’s look at blue noise and theoretically “perfect” dithering sequences.

Blue noise

What is blue noise? Wikipedia defines it as:

Blue noise is also called azure noise. Blue noise’s power density increases 3 dB per octave with increasing frequency (density proportional to f ) over a finite frequency range. In computer graphics, the term “blue noise” is sometimes used more loosely as any noise with minimal low frequency components and no concentrated spikes in energy.

We will use this more liberal definition here (without a strict requirement on the frequency density increase).

We immediately see that the previous golden ratio sequence is not blue noise, as it has lots of visible spikes in its spectrum. Perfect blue noise has no spikes and therefore is not prone to aliasing / amplifying those frequencies.

There are many algorithms for generating blue noise; unfortunately, many of them are heavily patented. We will have a look at two relatively simple techniques that can be used to approximate blue noise.

Generating blue noise – highpass and remap

The first technique we will have a look at comes from Timothy Lottes and his AMD GPUOpen blog post “Fine art of film grain”.

The technique is simple, but brilliant – in step one, take noise with an undesired frequency spectrum and just reshape it by applying a high-pass filter.

Unfortunately, an arbitrary high-pass filter will produce a signal with a very uneven histogram and a completely different value range than the original noise distribution:


After an arbitrary high-pass operation on random noise originally in the 0–1 range.

This is where part two of the algorithm comes in: remapping the histogram to force it back into the 0–1 range! The algorithm is simple – sort all elements by value and then remap each value to its position in the sorted list.
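A minimal Python sketch of both steps (my own illustration – the original post may use a different high-pass filter; here it is just "value minus a small wrapping box blur", and the remap is a normalized rank):

```python
import random

def highpass_and_remap(values, radius=2):
    n = len(values)
    # Step 1: crude high-pass - subtract a wrapping local average (box blur).
    highpassed = []
    for i in range(n):
        neighborhood = [values[(i + o) % n] for o in range(-radius, radius + 1)]
        highpassed.append(values[i] - sum(neighborhood) / len(neighborhood))
    # Step 2: histogram remap - sort elements by value, then replace each
    # value with its normalized position (rank), back in the 0-1 range.
    order = sorted(range(n), key=lambda i: highpassed[i])
    remapped = [0.0] * n
    for rank, i in enumerate(order):
        remapped[i] = rank / (n - 1)
    return remapped

random.seed(7)
white = [random.random() for _ in range(256)]
blue_ish = highpass_and_remap(white)
# The remap guarantees a perfectly flat histogram over the full 0-1 range.
```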

The effect is much better:


Unfortunately, the histogram remapping operation also changes the frequency spectrum. This is inevitable, as histogram remapping changes the relative values of elements non-linearly. Values in the middle of the histogram (corresponding to areas that originally had lots of low-frequency component) will be changed much more than values in areas with high-frequency content. This way, part of the high-pass filtering effect is lost:


Comparison of the frequency spectrum before (red) and after (black) the remap, renormalized manually. Note how some low-frequency component reappeared.

Still, the effect looks pretty good compared to no high-pass filtering at all:


Top – regular random noise. Bottom – with high pass and histogram remapping.

Its frequency spectrum also looks promising:


However, there is still this trailing low-frequency component. It doesn't contain a lot of energy, but it can still introduce some visible low-frequency error in the dithered image…

What we can do is simply re-apply the technique!

This is what we get:


The frequency spectrum definitely looks better, and the whole algorithm is very cheap, so we can apply it as many times as we need.

Unfortunately, no matter how many times we reapply it, it's impossible to "fix" all possible problematic spots.

I think about it this way – if some area of the image contains only very low frequencies, then after applying the high-pass filter, it will contain a few adjacent values that are very close to zero. After histogram remapping, they will get remapped to similar, adjacent values again.


A small part of a sequence with a local minimum that even 10 repetitions of the algorithm cannot get out of. Notice a few areas of almost uniform gray.

It's possible that using a different high-pass filter, adding some noise between iterations, or detecting those problematic areas and "fixing" them would help – but that's beyond the scope of this post and the original technique.

What is worth noting is that the original algorithm gives a sequence that is not perfect, but often "good enough" – it leaves quite bad local spots, but optimizes the frequency spectrum globally. Let's check it in action.


Let’s have a look at our initial, simple 1D dithering for binary quantization:


Rows 1, 3, 5 – original sine function. Row 2 – dithering with regular noise. Row 4 – dithering with golden ratio sequence. Row 6 – dithering with “highpass and remap” blue-noise-like sequence.

We can see that both the golden ratio sequence and our highpass-and-remap are better than regular noise. However, it seems that the golden ratio sequence performs better here due to less "clumping". You can see some frequency "beating" corresponding to its peak frequencies, though:


Black – white noise. Red – golden ratio sequence. Green – highpass and remap noise sequence.

So this is not a perfect technique, but it is a) very fast, b) tweakable, and c) way better than any kind of white noise.

Better? Slower blue noise

Ok, what could we do if we wanted a solution that doesn't contain those local "clumps"? We can have a look at the Siggraph 2016 paper "Blue-noise Dithered Sampling" by Iliyan Georgiev and Marcos Fajardo from Solid Angle.

The algorithm is built around the idea of using the probabilistic technique of simulated annealing to globally minimize a desired error metric (in this case, a function of the distance between adjacent elements).

I implemented a simple (not exactly simulated annealing; more like a random walk) and pretty slow version supporting 1, 2 and 3D arrays with wrapping:
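That code isn't reproduced here, but the 1D idea can be sketched in Python. Note this is my simplified stand-in – a greedy random walk with a made-up energy term penalizing similar values in nearby, wrapped positions – not the paper's exact energy function or annealing schedule:

```python
import math
import random

def energy(seq):
    """Penalty: nearby (wrapped) elements with similar values cost a lot."""
    n, e = len(seq), 0.0
    for i in range(n):
        for offset in (1, 2, 3):  # only close neighbors matter much
            j = (i + offset) % n
            e += math.exp(-offset * offset - abs(seq[i] - seq[j]))
    return e

def optimize(seq, steps=2000, rng=random.Random(1)):
    """Greedy random walk: swap two random elements, keep improving swaps."""
    seq = list(seq)
    best = energy(seq)
    for _ in range(steps):
        a, b = rng.randrange(len(seq)), rng.randrange(len(seq))
        seq[a], seq[b] = seq[b], seq[a]
        e = energy(seq)
        if e < best:
            best = e
        else:
            seq[a], seq[b] = seq[b], seq[a]  # revert the swap
    return seq

start = [i / 63.0 for i in range(64)]  # ramp of evenly spaced values
random.Random(0).shuffle(start)
optimized = optimize(start)
```

The result is a permutation of the same evenly spaced values (so the histogram stays flat), with the ordering optimized so that close neighbors have dissimilar values.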

As usual with probabilistic global optimization techniques, it can take pretty damn long! I was playing a bit with my naive implementation for 3D arrays, and on a 3-year-old MacBook, after running for a whole night, it converged to an at best average-quality sequence. However, this post is not about the algorithm itself (which is great and quite simple to implement), but about dithering and noise.

For the purpose of this post, I generated a 2000-element 1D sequence using my implementation.

This is a plot of first 64 elements:


Looks pretty good! No clumping, pretty good distribution.

The frequency spectrum also looks very good, like the desired blue noise (almost linear energy increase with frequency)!


If we compare it with the frequency spectrum of "highpass and remap", they are not that different: slightly fewer very low frequencies and much more of the desired very high frequencies:


Highpass and remap (black) vs Solid Angle technique (red).

We can see it compared with all other techniques when applied to 1D signal dithering:


Every odd row is “ground truth”. Even rows: white noise, golden ratio sequence, highpass and remap and finally generated sequence blue noise.

It seems to me to be perceptually the best and most uniform (on par with the golden ratio sequence).

We can have a look at the frequency spectrum of the error of each:


Black – white noise. Red – golden ratio sequence. Green – highpass and remap. Yellow – generated sequence.

If we blur the resulting image, it starts to look quite close to the original simple sine signal:


If I were to rate them under these constraints / this scenario, the order from best to worst would probably be:

  • Golden ratio sequence,
  • Blue noise generated by Solid Angle technique,
  • Blue noise generated by highpass and remap,
  • White noise.

But while it may seem that the golden ratio sequence is "best", we also got lucky here, as our error didn't alias / "resonate" with frequencies present in this sequence – so it wouldn't necessarily be the best choice in every scenario.


In this part of the blog post mini-series I covered the definition of blue noise, presented two techniques of generating it, and one of many general-purpose, high-frequency low-discrepancy sampling sequences. This was all still in the 1D domain, so in the next post we will have a look at how those principles can be applied to dithering during quantization of a 2D signal – like an image.

Blog post mini-series index.

References

  • "Golden Ratio Sequences For Low-Discrepancy Sampling", Colas Schretter and Leif Kobbelt
  • "Physically Based Rendering, Third Edition: From Theory to Implementation", Matt Pharr, Wenzel Jakob and Greg Humphreys
  • "Fine art of film grain", Timothy Lottes
  • "Blue-noise Dithered Sampling", Iliyan Georgiev and Marcos Fajardo

Posted in Code / Graphics | 10 Comments

Dithering part one – simple quantization


The first part of this mini-series will focus on the more theoretical side of dithering – some history, and applying it to 1D signals and quantization. I will do some frequency analysis of quantization errors and show how dithering helps. It is mostly theoretical, so if you are interested in more practical applications, be sure to check the index and other parts.

You can find Mathematica notebook to reproduce results here and the pdf version here.

What is dithering?

Dithering can be defined as the intentional / deliberate addition of noise to a signal to prevent large-scale / low-resolution errors that come from quantization or undersampling.

If you have ever worked with either:

  • Audio signals,
  • 90s palettized image file formats.

You have surely encountered dithering options that, by adding some noise and small-resolution artifacts, "magically" improved the quality of audio files or saved images.

However, I found quite an amazing fact on Wikipedia about when dithering was first defined and used:

[O]ne of the earliest [applications] of dither came in World War II. Airplane bombers used mechanical computers to perform navigation and bomb trajectory calculations. Curiously, these computers (boxes filled with hundreds of gears and cogs) performed more accurately when flying on board the aircraft, and less well on ground. Engineers realized that the vibration from the aircraft reduced the error from sticky moving parts. Instead of moving in short jerks, they moved more continuously. Small vibrating motors were built into the computers, and their vibration was called dither from the Middle English verb “didderen,” meaning “to tremble.” Today, when you tap a mechanical meter to increase its accuracy, you are applying dither, and modern dictionaries define dither as a highly nervous, confused, or agitated state. In minute quantities, dither successfully makes a digitization system a little more analog in the good sense of the word.

— Ken Pohlmann, Principles of Digital Audio
This is an inspiring and interesting historical fact, and as I understand it, it works by avoiding bias in computations and resonances through randomly breaking up mechanical vibration feedback loops.
But history aside, let’s look at the dithering process for 1D signals first, like audio.

Dithering quantization of a constant signal

We will start by analyzing the most boring possible signal – a constant signal. If you know a bit about audio and audio-related DSP, you might ask – but you promised looking at audio, and audio by definition cannot have a constant term! (Furthermore, both audio software and hardware deliberately remove the so-called DC offset.)
That’s true and we will have a look at more complicated functions in a second, but first things first.
Imagine that we are doing a 1 bit quantization of a normalized floating point signal. This means we will be dealing with final binary values, 0 or 1.
If our signal is 0.3, simple rounding without any dithering will be the most boring function ever – just zero!
The error is also constant, 0.3, and therefore the average error is also 0.3. This means that we introduced quite a big bias to our signal and completely lost the original signal information.
We can try to dither this signal and have a look at results.
Dithering in this case (with the rounding function used for quantization) means just adding plain random white noise – a random value per element from the range (-0.5, 0.5), producing a uniform noise spectrum – to the signal prior to quantization.

quantizedDitheredSignal =
  Round[constantSignalValue + RandomReal[] - 0.5] & /@ Range[sampleCount];

It's difficult to see anything here, just that the result of quantization is now some random ones and zeros… with (as hopefully expected) more zeros. It's not a terribly interesting signal on its own, but what is quite interesting is the plot of the error and the average error.
Ok, we can see that, as expected, the error is also alternating… but what is quite scary is that the error sometimes got bigger (0.7 absolute value)! So our maximum error is worse – pretty unfortunate… However, the average error is:


Much, much smaller than the original error of 0.3. With a sufficiently large number of samples, this error would go towards zero (in the limit). So the error of the constant term got much smaller – but let's have a look at the frequency plot of all the errors.


Red plot/spike = frequency spectrum of error when not using dithering (constant, no frequencies). Black – with white noise dithering.

Things are getting more interesting! This shows the first major takeaway of this post – dithering distributes quantization error / bias among many frequencies.
We will have a look in the next section at how this helps us.
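The numbers are easy to reproduce outside Mathematica; a Python sketch of the same 1-bit quantization experiment (the 0.3 constant and the (-0.5, 0.5) noise range follow the text; the sample count is my arbitrary choice):

```python
import random
import statistics

random.seed(42)
SIGNAL = 0.3
SAMPLES = 100_000

# Without dithering: round(0.3) == 0 everywhere, a constant -0.3 error (bias).
undithered_error = statistics.mean(round(SIGNAL) - SIGNAL for _ in range(SAMPLES))

# With white-noise dithering: add uniform noise from (-0.5, 0.5) before rounding.
# Individual errors get larger (up to 0.7), but the average error almost vanishes.
dithered_error = statistics.mean(
    round(SIGNAL + random.random() - 0.5) - SIGNAL for _ in range(SAMPLES))

print(undithered_error)  # -0.3
print(dithered_error)    # close to 0
```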

Frequency sensitivity and low-pass filtering

So far we observed that dithering a quantized constant signal:
  • Increased maximal error.
  • Almost zeroed average, mean error.
  • Added constant white noise (full spectral coverage) to the error frequency spectrum, reducing the low-frequency error.
By itself it doesn't help us too much… However, we are not looking at quantization of arbitrary mathematical functions / signals. We are looking at signals that will be perceived by humans. Human perception is obviously limited; some examples:
  • Our vision has a limit of acuity. Lots of people are short-sighted and see blurred image of faraway objects without correction glasses.
  • We perceive medium scales of detail much better than very high or very low frequencies (small details or very smooth gradients may not be noticed).
  • Our hearing works in a specific frequency range (20 Hz – 20 kHz, but it gets worse with age) and we are most sensitive to the middle ranges – 2 kHz – 5 kHz.

Therefore, any error in frequencies closer to the upper range of perceived frequencies will be much less visible.

Furthermore, our media devices are getting better and better and provide lots of oversampling. In TVs and monitors we have "retina"-style and 4K displays (where it's impossible to see single pixels); in audio we use at least 44 kHz sampling in file formats even for cheap speakers that often can't reproduce more than 5–10 kHz.

This means that we can approximate the perceptual look of a signal by low-pass filtering it. Here I did low-pass filtering (padding with zeros on the left -> "ramp up"):


Red – desired non-quantized signal. Green – quantized and dithered signal. Blue – low pass filter of that signal.

The signal starts to look much closer to the original, unquantized function!
Unfortunately, we also started to see some low frequencies that are very visible and were not present in the original signal. We will look at fixing this by using blue noise in part 3 of the series, but for now, this is how it could look with some quasi-noise function that has much less low-frequency content:
This is possible because our quasi-random sequence has the following frequency spectrum:
But enough looking at a simplistic, constant function. Let's have a look at a sine wave (and if you know the Fourier theorem – a building block of any periodic signal!).

Quantizing a sine wave

If we quantize a sine wave with 1 bit quantization, we get a simple… square wave.
The square wave is quite interesting, as it comprises the base frequency as well as its odd harmonics.
This property is used heavily in analog subtractive synthesizers to get hollow/brassy-sounding instruments. Subtractive synthesis starts with a complex, harmonically rich sound and filters it by removing some frequencies (with filter parameters varying over time) to shape the sound in the desired way.
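We can check this harmonic structure numerically. A Python sketch using a direct Fourier sum (so no FFT library is needed; the sample counts are my arbitrary choice):

```python
import cmath
import math

N = 1024  # total samples
FUND = 8  # 8 full periods in the window -> fundamental at bin 8

# 1-bit "quantization" of a sine via the sign function gives a square wave.
# The half-sample offset avoids evaluating sin exactly at zero.
square = [math.copysign(1.0, math.sin(2 * math.pi * FUND * (n + 0.5) / N))
          for n in range(N)]

def magnitude(signal, k):
    """|k-th DFT coefficient|, computed by a direct sum."""
    n = len(signal)
    return abs(sum(s * cmath.exp(-2j * cmath.pi * k * i / n)
                   for i, s in enumerate(signal)))

h1 = magnitude(square, FUND)      # fundamental
h2 = magnitude(square, 2 * FUND)  # even harmonic: essentially zero
h3 = magnitude(square, 3 * FUND)  # odd harmonic: ~1/3 of the fundamental
print(h2 / h1, h3 / h1)
```

The even harmonic comes out at numerical noise level, and the third harmonic carries roughly a third of the fundamental's magnitude – the classic 1/n falloff of a square wave's odd harmonics.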
Square wave frequency spectrum:
But in this post we are more interested in quantization errors! Let's plot the error as well as the frequency spectrum of the error:
In this case we are in a much better situation – the average error is close to zero! Unfortunately, we still have lots of undesired low frequencies, very close to our base frequency (odd multiples with decreasing magnitudes). This is known as aliasing or dithering noise – frequencies that were not present in the original signal appear, and they have pretty large magnitudes.
Even low-pass filtering cannot help this signal much, as the error has so many low frequencies:


Low-pass filtered quantized sine


Low-pass filtered quantized sine error

Let's have a look at how this changes with dithering. At first sight, things don't improve a lot:
However, if we display it as an image, it starts to look better:
And notice how, again, the quantization error gets distributed among different frequencies:
This looks very promising! Especially considering that we can try to filter it now:
That's a slightly crooked sine, but it looks much closer to the original than the non-dithered one, with the exception of a phase shift introduced by the asymmetrical filter (I am not going to describe or cover it here; it is fixable simply by applying symmetrical filters):


Red – original sine. Green – low pass filtered undithered signal. Blue – low pass filtered dithered signal.

Plotting both error functions confirms numerically that error is much smaller:


Red – error of the low-pass filtered non-dithered signal. Blue – error of the low-pass filtered dithered signal.

Finally, let’s just quickly look at a signal with better dithering function containing primarily high frequencies:


Upper image – white noise function. Lower image – a function containing more higher frequencies.


Low-pass filtered version dithered with a better function – almost perfect results if we don’t count filter phase shift!

And finally – all 3 comparisons of error spectra:


Red – undithered quantized error spectrum. Black – white-noise-dithered quantized error spectrum. Blue – error spectrum when dithering with the higher-frequency noise.


This is the end of part one. The main takeaways are:
  • Dithering distributes quantization error / bias among many different frequencies that depend on the dithering function, instead of having them concentrated in the lower frequencies.
  • Human perception of any signal (sound, vision) works best in very specific frequency ranges. Signals are often oversampled relative to the end of the perception spectrum, where perception is almost marginal. For example, common audio sampling rates allow reproducing signals that most adults will not be able to hear at all. Combined with the previous point, this makes it very attractive to use dithering to shift the error into that frequency range.
  • Different noise functions produce different error spectra, which can be exploited if we know which error spectrum is more desirable.

In the next part we will have a look at various dithering functions – the one I used here (golden ratio sequence) and blue noise.

Blog post mini-series index.

Posted in Code / Graphics | 2 Comments

Dithering in games – mini series

This is an opening post of a mini blog post series about various uses of dithering for quantization and sampling in video games. It is something most of us use intuitively in everyday work, so I wanted to write down some of those concepts, and explain and analyze them in Mathematica.

This post is just a table of contents.

Part one – dithering – definition and simple 1D quantization.

Part two – dithering – golden ratio sequence, white and blue noise for 1D quantization.

Part three – dithering in quantization of 2D images, Bayer matrix, interleaved gradient noise and blue noise.

Update April 2020: I didn’t expect to come back to blue noise a few years later, but I wrote a small additional blog post on alternative method for generating blue noise patterns for dithering by “optimizing” frequency spectrum. You can treat it as part three-point-five. 🙂

Update April 2021: I described and posted a tiny, very fast and very readable Python Jax/numpy implementation of the void and cluster algorithm. You can treat it as part three-point-seventy-five. 🙂

You can find some supplemental material for posts here:

Posted in Code / Graphics | 7 Comments

Short names are short


This blog post is a counter-argument and response to a post by Bob Nystrom that got very popular a few months ago and was often re-shared. I disagree with it so much that I thought it's worth sharing my opinion on this topic.
My response here is very subjective and contains opinions about quite a polarizing topic – if you want to read a pure facts / knowledge-sharing post, you may as well stop reading now. This post is supposed to be short, fun and a bit provocative.
Also, I don't think that a single piece of advice like "use shorter names" or "use longer names" without any context has any real value, so at the end, instead of just ranting, I will try to summarize my recommendations.
Having said that and if you want to read some rant, let’s go!

What I agree with

Some personal story here – I used to write terrible, hacked code that I am now ashamed of. Fortunately, working at Ubisoft with colleagues with much higher standards, and some discussions with them, helped me understand the value of clean code. Working in one of the codebases that they had cleaned up, I was amazed at how pleasant it can be to work with clear, good-quality and well-architected code, and it made me realize my wrongdoings!
Today I believe that readability and simplicity are the most important features of good code (assuming that it's correct, performs well enough, etc.) – to the point that I would always keep a clear reference implementation until it's really necessary to do magic, unclear optimizations. You write code once, but you and your colleagues read it thousands of times, over years.
I definitely agree with Bob Nystrom that names should be:
However, this is, I guess, where the points we could agree upon end. I don't think that short names for variables or functions serve readability at all!

Understandability without context

Imagine that you are debugging a crash or weird functionality in a piece of code you are not familiar with. You start unwinding a callstack and see a variable named "elements" or, looking at the author's example, "strawberries".
What the hell is that? What are those elements? Is it a local temp array, or a member of the debugged class? Wouldn't tempCollectedStrawberiesToGarnish or m_garnishedStrawberies be more readable?
If you look at a crash dump on some crash-aggregation server before downloading it and opening it in the IDE, it gets even more difficult! You will have completely no idea what the given code does.
And this is not an uncommon scenario – we work in teams, and our teammates will debug our code many times. In a way, we are writing our code for them…

Confusion and ambiguity

Second thing – I don't believe that short names can be precise. Having worked in the past with some component-based game engines and seen "m_enabled" being checked / set with some extra logic around it, I just wanted to face-palm. Fortunately, people were not crazy enough to skip the member prefix and just operate on some "enabled" variable in a long function – that would lead to even more extreme confusion!
What does it mean that a component is enabled / disabled? I guess that the animation component is not updating the skeleton / hierarchy (or is it?) and the mesh component is probably not rendered, but how can I be sure? Wouldn't m_isSkeletonUpdated, m_pauseTick or m_isVisible be more readable?
Side note: this point could also be an argument against general class polymorphism / inheritance and reusing the same fields for even slightly different functionalities.

Less context switching

With slightly longer and more verbose names, it is easier to keep all information on one screen and within single mental “context”. The shorter the name, the larger and less obvious context you need to understand its purpose, role, lifetime and type.
If you need to constantly remind yourself of the class name or variable type – or, even worse, check some name/type in a parent class (again, not a problem if you don't (ab)use OOP) – you are less effective and more distracted. In long, complex code this context switching can be detrimental to code understanding and make you less focused.


Search-ability

This is probably my biggest problem with short names and my biggest NO. I use grep-like tools all the time and find them better and more robust than any IDE-specific symbol search. Don't get me wrong – VisualAssistX, for example, is an amazing extension and I use it all the time. It's way faster than IntelliSense, though it can still choke on very large code solutions – but this is not the main issue.
The main issue is that every codebase I have worked in (and I guess any other serious, large codebase) mixes many different languages. I work daily with C/C++, HLSL/PSSL, Python, JSON, some form of makefiles and custom data definition languages. To look for some data that can live in any of those places (sometimes in many!), I use good old "search in files". I can recommend a plugin called Entrian Source Search (a colleague from Ubisoft, Benjamin Rouveyrol, recommended it to me and it completely transformed the way I work!) as it perfectly solves this problem. I can easily look for "*BRDF*" or "SpecularF0" and be sure that I'll find all HLSL, C++ and data definition references.
Going back to the main topic – this is where short, ambiguous names completely fail! If you find 1000 references to a given name, the search could be considered just useless.
Just some examples.
Let's look for a "brilliantly" named variable or function, enable – why would anyone need more context?
Note that this shows only whole-word matches! Hmm, not good… How about the "brilliant" short name update?
Good luck with checking if anyone uses your class's "update" function!

Help with refactoring

Related to the previous topic – before starting a refactor, I always rely on code search. It's really useful to locate where a given variable / function / class is being used, why, and whether it can be removed.
Large codebases are often split into many sub-solutions to be lighter on memory (if you have worked in a huge one in Visual Studio, you know the pain!) and more useful for the daily, common use scenario. This is where IDE symbol search fails completely and can be an obstacle to any refactoring.
Again – in my personal opinion and experience, a straight grep-like search works much better than any flaky symbol search, and works across many code solutions and languages. Good, uncommon, clear and unique names really help it. I can immediately see when a function or variable is not used anywhere, who uses it, and which parts of the pipeline need information about it. So basically – all the necessary steps in planning a good refactor!

Risks with refactoring

This point is a mix of the previous one and the section about confusion / ambiguity. With common names, it is a) very easy to misunderstand code, and b) tempting to overload the original term, since it's short and seems kind-of still accurate. This often leads to even more meaningless and confusing terms.
If you have a longer, more verbose name, you will think twice before changing its type or application – and hopefully rename it and/or reconsider/remove all prior uses.

Self-documenting code

I believe in the use of comments to document "non-obvious" parts of an algorithm or some assumptions (much better than offline documents that get sent in emails and lost, or that sit on some always-outdated wiki pages), but I hate code constantly interleaved with short comments – for me, because of context switching, it increases cognitive load. Code should be self-documenting to some extent, and I think it's possible – as long as you don't try to strip all context information from variable names.

When is using short names OK?

Ok, I have hopefully listed enough arguments against using short, context-less names – but I sometimes use them myself!
As I said at the beginning – it all depends, and a "single advice fits all" attitude is usually a piece of crap.
My guidelines would be – use rather long, unique and memorable names, giving enough information about the context, unless:
  1. It’s something like iterator in a short loop. I use C-style names like i, j, k quite often and don’t see it as a major problem provided that the loop is really short and simple.
  2. In general, it’s a local variable in a short function with clear use. So think about it this way – if someone else “lands” there accidentally with a debugger, would they require understanding of the system to figure out its purpose?
  3. If they are class or function names, only if they are guaranteed to be local in the scope of a file and not accessible from the outside. If you change such function from static to global, make sure you change the name as well!
  4. It is a part of convention that all programmers in the codebase agreed upon. For example that your physics simulation classes will start with a “P”, not “Physics”.
  5. You use POD types like structs and never write functions inside them – it’s fine to have their names short as you know they relate to the struct type.
  6. Similar to 5 – you use (not abuse!) namespaces and/or static class functions to provide this extra scope and information and always access it with a prefix (no “using” so search-ability is not impacted by it).


Rant mode over! 🙂 I hope that this provocative post showed some disadvantages of over-simplified pieces of advice like "long names are long" and some benefits of slightly increased verbosity.
Posted in Code / Graphics | 1 Comment

Image dynamic range


This post is the second part of my mini-series about dynamic range in games. In this part, I would like to talk a bit about dynamic range, contrast/gamma and viewing conditions.

You can find the other post in the series here – it's about the challenges of properly exposing a scene with too large a luminance dynamic range – I recommend checking it out! 🙂

This post is accompanied by Mathematica Notebook so you can explore those topics yourself.

.nb Mathematica version

PDF version of the notebook

Dynamic range

So, what is dynamic range? Most simply, it is the ratio between the highest and the lowest value a medium can reproduce / represent. It is usually measured literally as a "contrast ratio" (a proportion, for example 1500:1), in decibels (a difference of base-10 logarithms) or in "stops" (or simply… bits, as it is a difference of base-2 logarithms). In analog media, it is represented by the "signal-to-noise ratio", so the lowest values are the ones that cannot be distinguished from noise. In a way, this is analogous to digital media, where below the lowest representable value there is only quantization noise.

This measure is not very useful on its own, without more information about the representation itself. Dynamic range has different implications on analog film (lots of precision in the bright and mid parts of the image, while dark parts quickly show grain), on digital sensors (total clipping of whites and lots of information compressed in the shadows – but mixed with unpleasant digital noise) and in digital files (no noise other than quantization artifacts / banding).

I will focus in this post purely on dynamic range of the scene displayed on the screen. Therefore, I will use EV stops = exposure value stops, base2 logarithm (bits!).

Display dynamic range

It would be easy to assume that if we output images in 8 bits, the dynamic range of the displayed image would be 8 EV stops. This is obviously not true, as the information stored there is always (in any modern pipeline and OS) treated with an opto-electrical transfer function (OETF) defined for the display medium, prior to 8-bit quantization/encoding. The typically used OETF is some gamma operator, for example gamma 2.2 or sRGB (not the same thing, but more about it later).

Before going there, let's find a way of analyzing and displaying dynamic range – first for the simplest case: no OETF, just plain 8-bit storage of linear values.

Since 0 is "no signal", I will use 1/256 as the lowest representable signal (anything lower gets lost in quantization noise) and 1 as the highest representable signal. If you can think of a better way – let me know in the comments. To help me with this process, I created a Mathematica notebook.

Here is output showing numerical analysis of dynamic range:


Dynamic range of linear encoding in 8bits. Red line = dynamic range. Blue ticks = output values. Green ticks = stops of exposure. Dots = mappings of input EV stops to output linear values.

The representation of EV stops in linear space is obviously exponential in nature and you can immediately see a problem with such naive encoding – many lower stops of exposure get “squished” in the darks, while the last 2 stops cover 3/4 of the range! Such linear encoding is extremely wasteful for a signal that is logarithmic in nature, and would result in quite large, unpleasant banding after quantization. Such “linear” values are unfortunately not perceptually linear. This is where gamma comes in, but before we proceed to analyze how it affects the dynamic range, let’s have a look at some operations in EV / logarithmic space.

Exposure in EV space

The first operation in EV space is trivial: adjusting the exposure. Quite obviously, addition in logarithmic space is multiplication in linear space.

exp2(x+y) == exp2(x)*exp2(y)

Exposure operation does not modify dynamic range! It just shifts it in the EV scale. Let’s have a look:


Blue lines scale linearly, while green lines just shift – as expected.
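The shift-not-stretch behavior is easy to verify numerically – a quick sketch (the 8-bit linear endpoints are the ones used throughout this post):

```python
import math

def apply_exposure_ev(ev_values, exposure_bias_ev):
    # In EV (log2) space, exposure is a plain addition / shift.
    return [ev + exposure_bias_ev for ev in ev_values]

# Dynamic range endpoints of the 8-bit linear encoding, in EV.
lo_ev, hi_ev = math.log2(1 / 256), math.log2(1.0)   # -8.0, 0.0

shifted = apply_exposure_ev([lo_ev, hi_ev], +2.0)   # brighten by 2 stops
# The range just moved; its width (the dynamic range) is unchanged:
print(hi_ev - lo_ev)              # 8.0
print(shifted[1] - shifted[0])    # 8.0

# Sanity check of the identity exp2(x + y) == exp2(x) * exp2(y):
x, y = -3.5, 2.0
assert math.isclose(2 ** (x + y), (2 ** x) * (2 ** y))
```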


Underexposed image – dynamic range covering too high values, shifted too far right.

Gamma in EV space

This is where things start to get interesting, as most people don’t have intuition about logarithmic spaces – at least I didn’t. Gamma is usually defined as a simple power function in linear space. What is interesting, though, is what happens when you try to convert it to EV space:

gamma(x,y) = pow(x, y)

log2(gamma(exp2(x),y)) == log2(exp2(x*y))

log2(exp2(x*y)) == x*y

The gamma operation becomes a simple multiplication! This property is actually used by GPUs to express a power operation through a series of exp2, madd and log2 instructions. However, if we stay in EV space, we only need the multiply part of this operation. Multiplication, as a linear transform, obviously preserves the linearity of the space, just stretching it. Therefore the gamma operation is essentially a dynamic range compression / expansion operation!


You can see what it does to the original 8-bit dynamic range of 8 stops – it multiplies it by the inverse of the gamma exponent.
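The derivation above is trivial to check in code – gamma applied in linear space is an exact multiply in EV space, which is also how an encoding gamma squeezes a wide input range into the encoded one:

```python
import math

def gamma_linear(x, g):
    return x ** g

def gamma_ev(x_ev, g):
    # In EV (log2) space, applying gamma g is just multiplying the EV value by g.
    return x_ev * g

# log2(pow(exp2(x), g)) == x * g, for a few sample EV values:
for x_ev in [-8.0, -4.0, -1.0, -0.25]:
    lin = 2 ** x_ev
    assert math.isclose(math.log2(gamma_linear(lin, 2.2)), gamma_ev(x_ev, 2.2))

# An encoding gamma of 1/2.2 compresses 17.6 input stops into the 8-stop
# range of an 8-bit encoding:
print(gamma_ev(-17.6, 1 / 2.2))   # ≈ -8.0
```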


Gamma is a very useful operator, but when it increases dynamic range, it makes all values brighter than before; when decreasing dynamic range, it makes all of them darker. This comes from anchoring at zero in EV space (as 0 multiplied by any value stays zero), which translates to the “maximum” value of one in linear space. This is usually not a desired property – when we talk about contrast, we want to make the image more or less contrasty without adjusting its overall brightness. We can compensate for it though! Just pick another, different anchor value – for example middle grey.

Contrast is therefore quite similar to gamma – but instead of 1, we usually want to keep some other point fixed when the dynamic range gets squished – for example linear middle grey, 0.18.

contrast(x, y, midPoint) = pow(x,y) * midPoint / pow(midPoint, y)


Increasing contrast reduces the represented dynamic range while “anchoring” some grey point so its value stays unchanged.
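The contrast formula from above, directly in Python – the anchor maps to itself, while everything else stretches away from it:

```python
import math

def contrast(x, amount, mid_point=0.18):
    # Gamma with the anchor moved from 1.0 to mid_point:
    # the anchor maps to itself, everything else stretches around it.
    return (x ** amount) * mid_point / (mid_point ** amount)

assert math.isclose(contrast(0.18, 2.0), 0.18)  # anchor unchanged
# White (1.0) now maps above 1 – fewer input stops fit in [0, 1],
# i.e. the represented dynamic range is reduced:
print(contrast(1.0, 2.0))
print(contrast(0.02, 2.0))   # values below mid grey get pushed down
```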

How it all applies to local exposure

Hopefully, with this knowledge it’s clear why localized exposure works better at preserving local contrast, saturation and “punchiness” than either the contrast or gamma operator – it moves parts of the histogram (the ones that were too dark / bright), but:

  1. Does so only in the parts of the image / histogram that the artist wanted, not affecting others, e.g. existing midtones.
  2. Only shifts values (though the shift amount can vary!), instead of squishing / rescaling everything like gamma would do.

This contrast preservation is highly desired and preserves other properties as well – like saturation and “punchiness”.


Comparison of gamma vs localized adaptive exposure – notice saturation and contrast difference.


Before talking about output functions, it’s important to distinguish between the function used by display devices to transfer from electrical signals (digital or analog) to light output, called the EOTF – Electro-Optical Transfer Function – and its inverse, the OETF – Opto-Electrical Transfer Function.

The EOTF has come a long way – it originates from CRT displays and their power-law response to voltage, which was seen as perceptually uniform in some viewing conditions.


CRT response to voltage

Early rendering pipelines completely ignored gamma (happily relying on its perceptual-linearity property) and output values directly, to be raised to a gamma power by the monitor. When we started to get more “serious” about things like working in linear spaces, we began using the inverse of this function, called the OETF, and this is what we use to encode a signal for display in e.g. sRGB or Rec709 (and many other curves, like PQ).

While we don’t use CRTs anymore, such power encoding has proven very useful also on modern displays, as it allows us to encode more signal – and this is why we still use it. There is even a standard called BT.1886 (it specifies only the EOTF) that is designed to emulate the gamma power look of old CRTs!

Monitor OETF gamma 2.2 encoding

Equipped with that knowledge, we can get back to the monitor 2.2 gamma. As the EOTF is a gamma of 2.2, the OETF will be a gamma function of 1/2.2 ≈ 0.45, and hopefully it’s now obvious that by encoding from linear space to gamma space, we get a dynamic range of 2.2 * 8 == 17.6 stops.
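The 17.6 figure follows directly from the smallest non-zero code value – a quick sketch of the calculation:

```python
import math

def encoded_dynamic_range_stops(bits, eotf_gamma):
    """EV stops covered by an n-bit encoding displayed through a pure-gamma EOTF."""
    lowest_code = 1 / 2 ** bits                  # smallest non-zero encoded value
    lowest_linear = lowest_code ** eotf_gamma    # what the display turns it into
    return -math.log2(lowest_linear)             # distance to 1.0, in stops

print(encoded_dynamic_range_stops(8, 1.0))   # 8.0  – plain linear encoding
print(encoded_dynamic_range_stops(8, 2.2))   # ≈ 17.6 – gamma 2.2
print(encoded_dynamic_range_stops(8, 2.4))   # ≈ 19.2 – Rec709-style gamma 2.4
```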


Now someone might ask – ok, hold on, I thought that monitors use the sRGB function, which is a combination of a small linear part and an exponent of 2.4!

Why didn’t I use precise sRGB? Because this function was designed to deliver an average gamma of 2.2, with the linear part added in the smallest range to avoid numerical precision issues. So in this post, instead of sRGB, I will be using gamma 2.2 to make things simpler.

What is worth noting here is that even just gamma 2.2 applied to a linear value encoded in 8 bits has a huge dynamic range – the potential to encode over 17 stops of exposure / linear values. Should be enough? Especially since the distribution also gets better – note how much more uniform the coverage of the y axis (blue “ticks”) is in the upper part. It still is not perfect, as it packs many more EV stops into shadows and puts most of the encoding range into upper values (instead of midtones), but we will fix that in a second.

Does this mean that an OETF of gamma 2.2 allows TVs and monitors to display those 17.6 stops of exposure, and we don’t need any HDR displays or localized tonemapping? Unfortunately not. Just think about the following question – how much information is really there between 1/256 and 2/256? How much do those 2.2 EV stops mean to the observer? Quite possibly they won’t even be noticed! This is because displays have their own dynamic range (often expressed as contrast ratio), which depends on viewing conditions. The presentation by Timothy Lottes explains it very well.

But before I proceed with further analysis of how tonemapping curves change it and what gamma really does, I performed a small experiment with viewing conditions (you be the judge whether the results are interesting and worth anything).

Viewing conditions

I wanted to do a small experiment – compare iPhone photos of the original image from my post about localized tonemapping (the one with no localized tonemapping/exposure/contrast) on my laptop, starting in perfect viewing conditions (an isolated room with curtains):


The poor dynamic range of the phone doesn’t really capture it, but it looked pretty good – with all the details preserved and lots of information in shadows. An interesting, HDR scene: shadows that look really dark but are full of detail, and bright, punchy highlights. In such perfect viewing conditions, I really didn’t need any adjustments – the image and display looked “HDR” by themselves!

Second photo is in average/typical viewing conditions:


The whole image was still clear and readable, though what may not be visible here is that the darks lost detail; the image looks more contrasty (the Bartleson-Breneman effect) and Kratos’ back was not readable anymore.

Finally, same image taken outside (bright Californian sun around noon, but laptop maxed out on brightness!):


Yeah, almost invisible. This is with brightness adjustments (close to perception of very bright outdoor scene):


Not only is the image barely visible with all details lost, it is also overpowered by reflections of the surroundings. Almost no gamma setting would be able to help such dynamic range and brightness.

I am not sure whether this artificial experiment shows it clearly enough, but depending on the viewing conditions an image can look great and clear, or be barely visible at all.

My point is to emphasize what Timothy Lottes showed in his presentation – viewing conditions define the perceived dynamic range.

I tried to model it numerically – imagine that due to a bright room and sunlight, you cannot distinguish anything below an encoded value of 0.25. This is what the dynamic range of such a scene would look like:


Not good at all, only 4.4 EV stops! And this is one of reasons why we might need some localized tonemapping.

Viewing conditions of EOTF / OETF

Viewing conditions and perception are also the reason for the various gamma EOTF curves and their inverses. The Rec709 EOTF transfer curve is different from the sRGB transfer curve (an average gamma of 2.4 vs 2.2) because of that – the darker viewing conditions of HDTV require different contrast and reproduction.

Due to the mentioned Bartleson-Breneman effect (we perceive more contrast, less dynamic range as the surroundings get brighter), a living room at night requires a different EOTF than the one for viewing web content in a standard office space (sRGB). The Rec709 EOTF gamma will mean more contrast produced by the output device.

Therefore, using the inverse of that function, the OETF of Rec709, you can store 2.4 * 8 == 19.2 stops of exposure – and the TV is supposed to display them, provided the conditions are good enough. This is obviously not always the case – if you have ever tried to play console games in a sunlit living room, you know what I mean.


Gammas of 2.2 and 2.4 aren’t the only ones used – before the standardization of sRGB and Rec709, different software and hardware manufacturers were using different values! You can find historical references to Apple Mac gammas of 1.8 or 2.0, or crazy extreme Gameboy gammas of 3.0 or 4.0 (I don’t really understand this choice for Gameboys, which are handheld and supposed to work in various conditions – if you know, let me and the readers know in the comments!).

“Adjust gamma until logo is barely visible”

Varying conditions are the main reason for what you see in most games (especially on consoles!):


Sometimes it has no name and is just “a slider”, sometimes it’s called gamma, sometimes contrast / brightness (incorrectly!), but it’s essentially a way of correcting for imperfect viewing conditions. This is not the same as the display gamma! It is an extra gamma used to reduce (or increase) the contrast for the viewing conditions.

It is about packing more EV stops into the upper and mid range of the histogram (scaling and shifting it right) – more than the display expects before it turns the signal into linear values. So it’s a contrast reduction / dynamic range compression operation, since in brighter viewing conditions the viewer will perceive more contrast anyway.

It often is “good enough”, but in my opinion what games usually do poorly is:

  1. The user often sets it only once, but plays the game under different conditions! Set once, then the same game is played during the day / at night / at night with the lights on.
  2. Its purpose is not communicated well. I don’t expect a full lecture on the topic, but at least an explanation that it depends on the brightness of the room where the content is going to be viewed.

Let’s take our original “poor viewing conditions” graph and first apply a 0.7 gamma on the linear signal before applying the output transform (note: this additional gamma has nothing to do with the display EOTF! It is our “rendering intent”, knowledge about viewing conditions that is not part of the standard), so:

(x^0.7)^(1/2.2) == x^(0.7/2.2)


A little bit better, ~1.42 more stops. 🙂 However, in good viewing conditions (like a player launching your game at night), this will make the image look less contrasty and much worse…
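A crude sketch of this “perceptual floor” model – the exact gain depends on how the floor interacts with quantization (my simplified version recovers the 4.4-stop baseline but yields a somewhat larger gain than the notebook’s ~1.42 stops):

```python
import math

def visible_stops(floor_encoded, encoding_exponent):
    """Scene EV stops that land above a given encoded-value floor.

    encoding_exponent is the total linear -> encoded exponent,
    e.g. 1/2.2 for plain gamma 2.2 encoding, or 0.7/2.2 with the
    extra viewing-conditions gamma applied first.
    """
    # Scene-linear value that encodes exactly to the floor:
    floor_linear = floor_encoded ** (1 / encoding_exponent)
    return -math.log2(floor_linear)

print(visible_stops(0.25, 1 / 2.2))    # ≈ 4.4 stops – the "poor conditions" case
print(visible_stops(0.25, 0.7 / 2.2))  # more stops survive the bright room
```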

Tonemapping operators

From my analysis so far it doesn’t seem immediately obvious why we need some tonemapping curves / operators – after all we get relatively large dynamic range just with straight linear encoding in proper viewing conditions, right?

There are two problems:

  1. The first one is the non-uniform distribution of data, biased towards the last visible EV stops (values close to white) occupying most of the encoded range. We would probably want to store most EV stops around middle values, and have the upper and lower ranges squish many EV stops into smaller precision. So: a non-uniform dynamic range scaling, with most of the precision bit range used for midtones.
  2. The second is the fixed cutoff value of 1.0 defining the white point. It is highly impractical with HDR rendering, where even though after exposure we can have most values in the midrange, we can also have lots of (sometimes extreme!) dynamic range in bright areas. This is true especially with physically based rendering and very glossy speculars, or tiny emissive objects with extreme emitted radiance.

Methods presented so far don’t offer any solution for those very bright objects – we don’t want to reduce contrast or shift the exposure and we don’t want them to completely lose their saturation; we still want to perceive some details there.


Correctly exposed image, but with linear tonemapping curve shows the problem – completely clipped brights!

This is where tonemapping curves come into play and show their usefulness. I will show some examples using the good, old Reinhard curve – mostly for its simplicity. By no means do I recommend this curve; there are better alternatives!

There is a curve that probably every engine has implemented as a reference – Hable Uncharted tonemapping curve.

Another alternative is adapting either full ACES workflow – or cheaper (but less accurate; ACES RRT and ODT are not just curves and have some e.g. saturation preserving components!) approximating it like Kris Narkowicz proposed.

Finally, option that I personally find very convenient is nicely designed generic filmic tonemapping operator from Timothy Lottes.

I won’t focus on them here – instead, let’s try good, old Reinhard in its simplest form, defined simply as:

tonemap(x) = x/(1+x)

It’s a curve that never reaches the white point, so usually a rescaling correction factor is used (a division by tonemap(whitePoint)).
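In code, the curve plus the white-point rescale looks like this – with a white point of 128, values up to log2(128) = 7 stops above the old clip point keep some distinct representation:

```python
def reinhard(x):
    return x / (1 + x)

def reinhard_scaled(x, white_point=128.0):
    # Rescale so that white_point maps exactly to 1.0 instead of
    # only asymptotically approaching it.
    return reinhard(x) / reinhard(white_point)

assert reinhard_scaled(128.0) == 1.0   # white point hits full white
print(reinhard_scaled(1.0))            # old clip point now lands around ~0.5
# Everything up to 128x over the old white keeps a distinct value:
print(reinhard_scaled(16.0), reinhard_scaled(64.0))
```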

Let’s have a look at Reinhard dynamic range and its distribution:


Effect of Reinhard tonemapping with white point defined as 128

This looks great – not only did we gain over 7 stops of exposure, tightly packed into the brights without affecting midtones and darks too much, but the distribution of EV stops in the final encoding also looks almost perfect!

It also allows us to distinguish (to some extent) details in objects that are way brighter than the original value of EV 0 (after exposure) – like emissives, glowing particles, fire, bright specular reflections. Anything not “hotter” than 128 will get some distinction and representation.


Comparison of linear and filmic operators. The filmic tonemapping operator here is not Reinhard, so it doesn’t wash out shadows, but the effect on highlights is the same.

As I mentioned, I don’t recommend this simplest Reinhard for anything other than firefly fighting in post effects or general variance reduction. There are better solutions (already mentioned), and you will want to do many more things in your tonemapping – add some perceptual correction for contrast/saturation in the darks (the “toe” of filmic curves), some minor channel crosstalk to prevent total washout of brights, etc.


I hope that this post helped a bit with understanding dynamic range, exposure, the EV logarithmic space, gamma functions and tonemapping. I have shown some graphs and numbers to help not only “visualize” the impact of those operations, but also see how they apply numerically.

Perfect understanding of those concepts is in my opinion absolutely crucial for modern rendering pipelines and any future experiments with HDR displays.

If you want to play with this “dynamic range simulator” yourself, check the referenced Mathematica notebook!

.nb Mathematica version

PDF version of the notebook

Edit: I would like to thank Jorge Jimenez for a follow up discussion that allowed me to make some aspects of my post regarding EOTF / OETF hopefully slightly clearer and more understandable.

References:

SIGGRAPH 2010 Course: “Color Enhancement and Rendering in Film and Game Production”
“Advanced Techniques and Optimization of HDR Color Pipelines”, Timothy Lottes
John Hable, “Filmic Tonemapping Operators”
Academy Color Encoding Standard (ACES)
ACES filmic tonemapping curve
Brian Karis, “Tone-mapping”



Localized tonemapping – is global exposure and global tonemapping operator enough for video games?

In this blog post, I wanted to address something that I have been thinking about for many years – since I started working on rendering in video games, with HDR workflows, and gaining experience with various photographic techniques. As for the question in the title of the article, I can immediately give you the answer – “it depends on the content” – and you could stop reading. 🙂

However, if you are interested in what localized tonemapping is and why it’s a useful tool, I’ll try to demonstrate some related concepts using personal photographs and photographic experiments, as well as screenshots from Sony Santa Monica’s upcoming new God of War and the demo we showed this year at E3.

Note: all screenshots have some extra effects like bloom or film grain turned off – not to confuse and to simplify reasoning about them. They are also from an early milestone, so are not representative of final game quality.

Note 2: This is a mini-series. A second blog post accompanies this one. It covers some notes about dynamic range, gamma operations, tonemapping operators, numerical analysis and notes about viewing conditions.

Global exposure and tonemapping

Before even talking about tonemapping and the need for localized exposure, I will start with the simple, global exposure setting.

Let’s start with a difficult scene that has some autoexposure applied:


For the whole demo we used reference physical radiance values for the natural light sources (sky, sun), and because the scene takes place in a cave, you can already see that it has a pretty big dynamic range.

I won’t describe all the details of the autoexposure system used (pretty standard, though I might describe it in a future blog post or presentation), but due to its center-weighted nature it underexposed lots of details in the shade at the bottom of the screen – while the outdoors are perfectly exposed. In most viewing conditions (more about that in the second post) the shadows completely lost all detail, you don’t understand the scene, and the main hero character is indistinguishable…

Let’s have a look at the histogram for the reference:


An interesting observation here is that even though a big area of the image is clearly underexposed, there is still some clipping in the whites!

Let’s quickly fix it with an artist-set exposure bias:


And have a look at the histogram:


Hmm, details are preserved slightly better in the shadows (though they still look very dark – as intended) and the histogram is more uniform, but the whole outdoor area is completely blown out.

I have a confession to make now – for the purpose of demonstration I cheated here a bit and used a linear tonemapping curve. 🙂 This is definitely not something you would want to do, and as for why, there are excellent presentations on the topic. I want to point to two specific ones:

The first is a presentation from the 2010 Siggraph course organized by Naty Hoffman – “From Scene to Screen” by Josh Pines. It describes how filmic curves were constructed, the reasoning behind their shape and, in general, why we need them.

The second is from GDC 2016 – “Advanced Techniques and Optimization of HDR Color Pipelines” by Timothy Lottes – and covers various topics related to displaying high dynamic range images.

So, this is the scene with correct, filmic tonemapping curve and some adjustable cross-talk:


And the histogram:


Much better! A quite uniform histogram and no blown-out whites (in game, the additive bloom would make whites white).

I made a gif of the difference before/after:

And the animation of all three histograms:


Much better! A few extra stops of dynamic range, lots of detail and saturation preserved in bright areas. However, the histogram still contains many completely black pixels, there is a large spike in the lower 10th percentile, and depending on the viewing conditions the scene might still be too dark…

Gamma / contrast settings

Why not just reduce the dynamic range of the image? Raising linear pixel values to some power and rescaling them is equivalent to scaling and shifting in logarithmic EV space, and can give a result that looks like this:



Definitely all details are visible, but does the scene look good? In my opinion, no; everything is milky, washed out, boring and lacks “punch”. Furthermore, both environments and characters started to look chalky. Due to the perceptual character of per-channel gamma adjustments, not only contrast but also saturation is lost. Even the histogram shows the problem – almost no whites nor blacks in the scene, everything packed into the grays!

Check this gif comparison:


Does this problem happen in real life / photography?

Yes, obviously it does! Here is a photo from Catalina Island – on the left you can see how the camera took an underexposed picture with extremely dark, zero-detail shadows – and on the right how I corrected it to feel more perceptually correct and similar to how I saw the scene.


So can we do anything about it? Yes, we can! And I have teased with the photograph above what is the solution.

But first – a question that would probably immediately pop up in a discussion with lighting artists.

Why not just change the lighting like they do in movies?

The answer often is – yes, if you can do it right, just adjust the lighting in the scene to make the dynamic range less problematic. We recently interviewed many lighting artists coming from animation and VFX, and as a solution for large scene dynamic range they usually mention just adding more custom lights and eyeballing it until it looks “good”.

Obviously, by using the word right in my answer, I kind of dodged the question. Let me first explain what it means in real-life photography and cinematography, with some random YouTube tutorials I have found on the topic. It means:


Diffusers: pieces of material that partially transmit light, but completely diffuse the directionality and source position of the lighting. The equivalent of placing a large area light in a rendered scene and masking out the original light.


Reflectors / bounce cards: simply a surface that bounces light, boosting the lighting on shadowed surfaces and ones not facing the light direction.

In a game scene you could either place white material that would produce more GI / bounced lighting, or simply in a cinematic place a soft (area) fill light.

Additional, artificial light sources (fill flash, set lights etc.)

This is pretty obvious / straightforward. Movie sets and games with cinematic lighting have tons of them.

So as you can see – all those techniques apply to games! With constrained camera, camera cuts, cinematics etc. you can put in there fill lights and area lights and most games (with enough budget) do that all the time.

This is also what is possible in VFX and animation – manually adding more lights, fixing and balancing stuff in composition and just throwing more people at the problem…

Note: I think this is a good, acceptable solution and in most cases also the easiest! This is where my answer “it depends” to the question in the title of this article comes from. If you can always add many artificial, off-screen light sources safely, your game content allows for it, and you have great lighting artists – then you don’t need to worry.

Is it always possible in games, and does it always get us good results?

No. If you have a game with no camera cuts, or you simply need perfect lighting with a 100% free camera, you cannot put an invisible light source in the scene and hope it will look good.

On the other hand, one could argue that there is often a place for grounded light sources (a torch, fire, bulbs… you name it) – and sure; however, just as often they make no sense in the context and level design of the scene, or you might have no performance budget for them.

Hacking Global Illumination – just don’t!

Some artists would desperately suggest “let’s boost the GI!”, “let’s boost the bounced lighting!”, “let’s boost just the ambient/sky lighting intensity!” – and sometimes you could get away with that on the previous generation of consoles and in non-PBR workflows, but in a PBR pipeline it’s almost impossible not to destroy your scene this way. Why? The large family of problems it creates concerns the balance of specular and diffuse lighting.

If you hack just the diffuse lighting component (by boosting the diffuse GI), your objects will look chalky or plastic, your metals will be too dark, and some objects will glow in the dark. If you also boost indirect speculars, then under some conditions objects will suddenly look oily and/or metallic; your mirror-like glossy surfaces will look weird and lose their sense of grounding.

Finally, this is not GI-specific – it applies to any hacks in a PBR workflow. When you start hacking sun/sky/GI intensities, you lose the ability to quickly reason about material responses and the lighting itself and to debug them, as you can’t trust what you see and many factors can be the source of a problem.

How photography deals with the problem of too large dynamic range when operating with natural light?

This is a very interesting question and my main source of inspiration for the solution to this problem. Especially in the film / analog era, photographers had to know a lot about dynamic range, contrast and various tonemapping curves. Technique and process were highly interleaved with the artistic side of photography.

One of the (grand)fathers of photography, Ansel Adams, created the so-called Zone System.

I won’t describe it in detail here, but it is very similar to many principles that we are used to – mapping middle grey, finding the scene dynamic range, mapping it to the medium’s dynamic range, etc.

A fascinating part of it is the chemical / process side:

Picking the correct film stock (different films have different sensitivity and tonemapping curve shapes), the correct developer chemical, diluting it (my favourite developer, Rodinal, can deliver totally different contrast ratios and film acuity/sharpness depending on the dilution), adjusting the development time or even the frequency of agitating the film during development (yes! One rotation of the development tank per minute can produce different results than one rotation every 30 seconds!).


Photo captured on Fuji Neopan developed in diluted Agfa Rodinal – absolutely beautiful tonality and low contrast, high acuity film.

Manual localized exposure adjustment

This is all interesting, but still in the global tonemapping / per-image domain. What photographers had to do to adjust exposure and contrast locally was a tedious process called dodging and burning.

It meant literally filtering or boosting light during print development. As film negatives had very large dynamic range, it made it possible to not only adjust exposure/brightness, but recover lots of details in otherwise overblown / too dark areas.

An easy alternative that works just great for landscape photography is using graduated filters:

Or even more easily, by using a polarizer (it darkens and saturates the sky and can cancel out specular light / reflections on e.g. water).

Fortunately, in the digital era we can do it much more easily with localized adjustment brushes! It’s not a very interesting process, but it’s extremely simple in software like Adobe Lightroom. Some (stupid) example of manually boosting the exposure in shadows:


As a localized adjustment brush with exposure is only an exposure addition / linear-space multiplication (more about it in the second post in the series!), it doesn’t affect contrast in the modified neighborhood.

It is worth noting that such an adjustment would probably be impossible (or would lead to extreme banding / noise) with plain LDR bmp/jpeg images. Fortunately, Adobe Lightroom and Adobe Camera Raw (just like many other dedicated RAW processing tools) operate on RAW files, which are able to capture 12-16 stops of dynamic range with proper detail! Think of them as HDR files (like EXR), just stored in a compressed format and containing data specific to the input device transform.

This is not the topic of this post, but I think it’s worth mentioning that on God of War we implemented a similar capability for lighting artists – in the form of 3D shapes that we called “exposure lights”. Funnily, they are not lights at all – just spherical, localized exposure boosters / dimmers. We used the dimming capability in, for example, the first scene of our demo – the Kratos reveal – to make him completely invisible in the darkness (there was too much GI 🙂 ), and we use the brightness boosting capability in many scenes.

Automatic localized adjustments – shadows / highlights

Manual localized exposure adjustments are great, but still – manual. What if we could do it automatically, but without reducing whole image contrast – so:

a) automatically

b) when necessary

c) preserving local contrast?

Seems like the Holy Grail of exposure settings, but let’s have a look at tools already at photographers’ / artists’ disposal.

Enter… Shadows / Highlights! This is an image manipulation option available in Adobe Photoshop and Lightroom / Camera Raw. Let’s have a look at some image with normal exposure, but lots of bright and dark areas:


We can separately boost the shadows:


(Notice how bright the trees got – with a slight “glow” / “HDR look”; more about it later.)

Now highlights:


Notice more details and saturation in the sky.

And finally, both applied:



What is really interesting is that it is not a global operator and doesn’t just reshape the exposure curve. It’s actually a contrast-preserving, very high quality localized tonemapping operator. Halo artifacts are barely visible (just some minor “glow”)!

Here is an extreme example that hopefully shows those artifacts well (if you cannot see them due to small size – open images in a separate tab):


Interestingly, while the ACR/Lightroom HDR algorithm seems to work great until pushed to the extreme, the same Shadows/Highlights looks quite ugly in Photoshop at extreme settings:


Aaaargh, my eyes! 🙂 Notice halos and weird, washed out saturation.

Is the reason only less information to work with (bilateral weighting in HDR can easily distinguish between -10EV and -8EV, while 1/255 vs 2/255 provides almost no context/information), or a different algorithm? I don’t know.

The actual algorithms used are way beyond the scope of this post – and still a topic I am investigating (trying to minimize artifacts and maximize image quality – no halos – within a runtime performance budget), but I was playing with two main categories of algorithms:

  • Localized exposure (brightness) adjustments, taking just some neighborhood into account and using bilateral weighting to avoid halos. I would like to thank here our colleagues at Guerilla Games for inspiring us with an example of how to apply it in runtime.
  • Localized histogram stretching / contrast adjustment – methods producing those high structure visibility, oversaturated, “radioactive” pictures.

There are obviously numerous techniques and many publications available – sadly not many of them fit in a video game performance budget.

In “God of War”

Enough talking about photography and Adobe products, time to get back to God of War!

I implemented a basic shadows/highlights algorithm with artist-tweakable controls, trying to match the behavior of Lightroom. The first screenshot shows a comparison of the “shadows” manipulation against a regular screenshot, properly tonemapped with a filmic tonemapping curve.



I set it to a value that is relatively subtle, but still visible (artists would go from more subtle settings to more pronounced ones in gameplay-sensitive areas). Now the same with the highlights option:



One thing you might notice here are haloing artifacts – they result both from the relatively strong setting and from some optimizations and limitations of the algorithm (working at lower/partial resolution).

Finally, with both applied:



As I mentioned, it is shown here in a slightly exaggerated manner, with visible artifacts. However, it’s still much better than a regular low-contrast “gamma” setting:



The histogram shows the difference – while a gamma/contrast operator tends to “compact” the dynamic range and pack it all into midtones/grays, shadows/highlights operations preserve local contrast, saturation, and some information about the darkest and brightest areas of the image.

Why does localized exposure preserve contrast and saturation? The main difference is that gamma in logarithmic space becomes a scale, squashing the whole histogram, while exposure (a scale in linear space) becomes just a shift in log space (more about it in part 2), moving under-/over-exposed parts into the visible range with their histogram shape intact.
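This log-space view is easy to verify numerically; a minimal numpy check (the sample luminance values are arbitrary):

```python
import numpy as np

# Linear-space luminance samples spanning several stops.
lum = np.array([0.01, 0.1, 0.5, 1.0, 4.0])
log_lum = np.log2(lum)

# Gamma: lum**g in log space is g * log2(lum) - a SCALE that squashes the
# whole histogram toward grey, crushing local contrast and saturation.
gamma = 0.5
assert np.allclose(np.log2(lum ** gamma), gamma * log_lum)

# Exposure: lum * 2**ev in log space is log2(lum) + ev - a pure SHIFT that
# moves the histogram into the visible range with its shape intact.
ev = 2.0
assert np.allclose(np.log2(lum * 2.0 ** ev), log_lum + ev)
```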


You can check the final image (a bit more subtle settings) here:


To sum up – I don’t think the problems of exposure and dynamic range in real-time rendering are solved. Sometimes scenes rendered using realistic reference values have far too large a dynamic range – just like photographs.

We can fix it with complicated adjustments of the lighting (like they do on movie sets), some localized exposure adjustments (in 3D “exposure lights”) or using simple “procedural” image space controls of shadows/highlights.

Possible solutions depend heavily on the scenario. For example – if you can cut the camera, you have many more options than when it is 100% free and unconstrained, with zero cuts. It also depends on how much budget you have – both in terms of milliseconds to spend on extra lights and in terms of lighting artists’ time.

Sometimes a single slider can make a scene look much better, and while localized exposure / localized tonemapping can have its own problems, I recommend adding it to your artists’ toolset to make their lives easier!

If you are interested in a bit more about dynamic range, tonemapping and gamma operations, check out the second post in this mini-series.

References

SIGGRAPH 2010 Course: “Color Enhancement and Rendering in Film and Game Production”

“Advanced Techniques and Optimization of HDR Color Pipelines”, Timothy Lottes




Technical debt… or technical weight?


The whole idea for this post came from a very inspiring conversation with my friends and ex-colleagues from Ubisoft over a dinner a few months ago.

We started to talk about a sophisticated algorithm – one that is very well written, built on a brilliant idea, with lots of hard work by talented and experienced people to make it robust and perform well on many target hardware platforms. On the other hand, the algorithm and its implementation are so complex that it takes almost the full time of several people to maintain it. One of my colleagues called it “technical debt”, which I disagreed with; we started to discuss the differences, and I came up with a name – “technical weight”.

A quick definition of technical weight would be: a property that makes your solutions (very) costly and “heavy” not when implementing them, but in the long run – even in a clean and properly designed environment (unlike technical debt).

This is not a post about technical debt

Let me be clear – this is not yet another post about technical debt. I think enough people have covered it and it’s a very well understood problem. I like the analogy to real debt and taking out a loan (or even a mortgage) – for some short-term benefit one can avoid the hard work of “doing it properly” (the analogy of paying with cash) and instead take this “credit”. It can be many things – “hacking”, introducing unnecessary globals and state, ignoring proper data flow and data structures, injecting weird dependencies (like core data logic depending on visual representation), writing unextendable code, or sometimes just writing ugly, unreadable and aesthetically unpleasant code.

There are posts about what technical debt is, how to avoid it, how to fix it, and even the cultural changes necessary within a company to deal with it (like convincing product owners that something with no visible value can be an investment that will pay off).

What most posts don’t cover is that recently a huge amount of technical debt in many codebases comes from shifting to naïve implementations of agile methodologies like Scrum, working sprint to sprint. It’s very hard to do proper architectural work in such an environment and short timeframe, and POs usually don’t care about it (it’s not a feature visible to customers / upper management). There are some ways of counteracting it, like clean-up weeks (a great idea, given that your team actually cares about working with clean, good code – and fortunately many teams do).

But this is not a post about it. 🙂

Enter the “technical weight”

So what is technical weight then? I wouldn’t describe it as a single item – a technology / decision / process / outcome; I think of it as a property of every single technical decision you make – from huge architectural decisions, through the design of medium-sized systems, to the way you write every single line of code. Technical weight is a property that makes your code, systems and decisions in general more “complex” – difficult to debug, difficult to understand, difficult to change, difficult to hand over to another developer.

The technical weight of code carrying technical debt will usually be very large – this goes without saying. But beautiful, perfectly written and optimized, data-oriented code can also have large technical weight. It is also inevitable to have some systems like that – so what is the problem here?

I also wanted to talk about my motivation for writing this post. Enough people cover things like code aesthetics, “smart” (hmm) code tricks to make your code shorter, fighting technical debt, or even potentially counter-productive ideas like design patterns and technical-debt-inducing methodologies like naïve Scrum – but I haven’t seen many posts about technical weight, nor about the psychology of picking technical solutions. So while lots of this will seem “captain obvious”, I think it’s worth writing down and codifying at least some of it.

Before proceeding, I need to emphasize (more on it later): every decision and solution has some technical weight. Some problems are just difficult, and even the most lightweight solution can still be a huge struggle. Some very heavy solutions are the right way to go for a given project and team, and the only way to progress.

Also, I do not want to criticize any of the examples I will give – I think they are great, interesting solutions, just not always suitable.

Analogy one – tax / operating costs

The first analogy is similar to the “technical debt” investment one. Imagine you want a car – and you decide to buy, with cash, a 1966 Ford Mustang (or, to not necessarily imply “outdated” technology, a new Corvette / Ferrari). The dream of many people, very expensive, but if you have enough money, what can go wrong…? Well, the initial cost of the item is just the beginning. They use lots of gas. They are expensive to maintain (forget an old car if you don’t have a special workshop or a trusted car mechanic). They break often and require replacement parts that are hard to come by. Insurance costs will be insane, and in many countries their registration cost / tax is much higher (“luxury goods”). Finally, you don’t want to take your perfect Ferrari on a dirt road.

So even if you could afford it, didn’t take a loan, and bought something in technically perfect condition, the initial costs are just the beginning – you will end up spending huge amounts of money or time and still won’t be able to do many tasks.

Analogy two – literal weight of carried baggage

The second analogy compares a project / product / technology development to packing for a trip. Depending on the kind of trip (anything longer than a casual walk / hike), you need to pack. A minimum change of clothes, some water / food, a backpack, maybe a tent and a sleeping bag… But someone can decide, “let’s take a portable grill!”; “let’s take specialized hiking gear!”; “let’s take a laptop, portable speakers and a guitar!”. All of those ideas can be decent and provide some value for certain kinds of trips, but then you need to carry them around for the duration of the whole trip. And for sure, if your trip doesn’t involve a minivan, you don’t want to take all of them. 🙂 If you are just walking, the weight on your back will be too heavy for a long trip – it is inconvenient, makes you more exhausted, stops you more often, and in some cases you might not be able to finish your trip because of it. You can always throw things away at some point, but that is a pure waste of money / the initial investment.

Back to tech

So we have a solution that is well architected, designed and written – no hacks involved. But it is very complex and “heavy” – why *could* this be bad?

  1. Required manpower to maintain it

Almost no system is ever “finished”. After you write and commit it, there will probably be much iteration. Obviously it depends on the field of IT and the domain (I can imagine some areas requiring very careful and slow changes – medical equipment and space rockets? Others rely on one-off products that you never go back to once done – some webpage frontends?), but in most cases, when working on a longer-term project, you (or someone else taking it over) will revisit such code again and again. You need someone able to maintain it – and the heavier the solution, the more manpower you need. Think of it as a property tax – you will “pay” (in time spent) some percentage of project time every month, on average. It can be anything from a marginal 0.5% to something like 50% – it scales almost directly with the quality of the code/solution (but as I explained – this is not a post about bad code), and also with its complexity.

  2. Difficulty of getting new developers into the system

Very heavy, smart and sophisticated solutions can take lots of time for new people to learn before they can actively work on them. Sometimes it can even be impossible within the project time frame – imagine an algorithm written by a world expert in a given domain, or a team of experts that decides to leave your project one day (it happens, and it’s better to be prepared).

  3. Bugfixing costs

Every system has some bugs. I don’t remember the exact estimate, but I recall research conducted on many code bases that found the average number of bugs per 1000 LOC to be quite constant from language to language and from developer to developer (at least statistically). So more complicated systems mean more bugs and more time bugfixing – and if they have lots of moving parts, more time spent debugging every single bug. If your code is “smart”, the “smartness” required during debugging is even higher – good luck with that when you are time-pressured, stressed and tired later on (as lots of bugfixing happens at the end of projects).

  4. Complicated refactoring

Requirements change, especially in agile-like projects such as game development. If your project is very “heavy”, any changes will be much more difficult. I noticed that this is usually when technical weight creeps into technical debt – under time pressure, you add “just one, innocent hack” (which, at the end of the project, after N such worse and worse hacks, becomes huge and unmaintainable tech debt). Or you spend months on a refactor that nobody really asked for and that adds zero value to your project. So technical weight plus a shortage of time or resources can evolve into technical debt.

  5. Complications when adding new, unrelated code

Similar to the previous point, but unlike changing requirements, this one is almost inevitable. There are always systems surrounding your system – various interacting features and functionalities. Now anyone wanting to interact with it has to understand a lot of its complexity. This is a problem no matter how well interfaced and encapsulated it is; in general, it is something I would require from every good programmer – if you call a method or use a class, you really should understand how it works, what it will do, what the performance impact is, and all the consequences.

  6. Psychological aspect

With technically heavy solutions, one major aspect ignored by most developers is psychology and cognitive biases. (Side note: we often tend to ignore psychology, as we want to believe that we as programmers, engineers, scientists – educated “computer people” – are rational; what a fallacy! 🙂) All kinds of biases can affect you, but I will list just a few that I see very often in programmers regarding “technical weight”: “I made a decision, so I see only its advantages and no disadvantages”. “We already invested so much time/money into it! We cannot back out now, it will pay off soon!”. “We have to keep going and adding functionality, otherwise we will regress”. “It’s more important to avoid bad performance / instability / whatever than to focus on the advantages of other solutions”.

To put it all together – if we invested lots of thought, work and effort into something and want to believe it’s good, we will ignore all the problems, pretend they don’t exist, decline to admit them (often blaming others and random circumstances), and tend to see only the benefits. The more you have invested and the heavier the solution – the more you will try to stick with it, making other decisions or changes very difficult, even if they would be the best option for your project.


I’ll start with the example that started the whole discussion.

We were talking about so-called “GPU pipelines”. For anyone not specializing in real-time graphics, this is a whole family of techniques driven by a great vision – that to produce work for the GPU (rendering / graphics), you don’t need to produce work on the CPU: no need to cull visibility, issue draw calls, or put pressure on drivers – all (or almost all) of it could potentially be generated on the GPU itself. This way you can get great performance (finer-granularity culling / work avoidance; also, GPUs are much better at massive amounts of simple/similar tasks), have your CPU available for other tasks like AI, and even allow for efficient solutions to whole families of problems like virtual texturing or shadow mapping. What started the discussion was that we all admired the quality of work done by our colleagues working on such problems, and how much sense it made for their projects (for example, projects with dynamic destruction, where almost nothing can be precomputed; or user-generated content; or massive crowds), but we began to discuss whether we would want to use it ourselves. The answer was everyone agreeing: “it depends”. 🙂 Why wouldn’t we “always” use something that is clearly better?

The reason was simple – the amount of work by extremely smart people involved, and the complexity of not only the initial implementation, but also of maintaining and extending it – especially when working on multiple platforms. Maybe your game doesn’t have many draw calls? Maybe lots of visibility can be precomputed? Maybe you are GPU bound, but not on vertex shading / rasterization? There can be many reasons.

Another example could be relying heavily on complex infrastructure to automate tasks, like building your data / code and testing it. If you have the manpower to maintain it and make it 99.999% robust, it is the way to go. On the other hand, if the infrastructure is unreliable, flaky and changes often – the technical weight totally outweighs the benefits. So yes, maybe you don’t need to do some tasks manually, but now you need to constantly debug both the automation and the automated process itself.

Yet another example – something that will probably resonate with most programmers (I have no idea why it’s so much fun to write “toy” compilers and languages; is it because it’s true “meta”-programming? 🙂 ) – domain-specific languages, especially for scripting! It’s so enjoyable to write a language: you can make it fit your needs 100%, you have full ownership of it, no integration, etc. But on the other hand, you have just raised the entry barrier for anyone new to the system, you need to maintain and debug it, and if it is your first language, it will probably contain some bad decisions and be difficult to extend (especially if it needs to be backwards compatible). Good luck if every programmer on your team eventually adds or writes a new language into your stack!

Conversely, the opposite can also be technically heavy – relying on middleware, off-the-shelf frameworks, solutions and open source. The costs of integrating, debugging, merging, sending e-mails to tech support… Realizing (too late!) that it lacks some essential or new functionality. Using a complex middleware/engine can definitely add technical weight to your project. It often is the right solution – but the weight has to be taken into account (if you think “we will integrate it and all problems with X are gone”, then you have clearly never worked with middleware).

Reasons for heavy technical weight

  1. Difficult problem

The first reason is the most obvious and simplest – maybe your problem is inherently difficult; you cannot work around it, it is the nature of your work. But then there is probably nothing to do about it, and you are aware of it. Your solution will provide a unique selling point for your product, you are aware of the consequences – all good. 🙂

  2. Thinking that you have a problem / your problem is difficult

On the other hand, sometimes you may think your problem is difficult and want to solve it in a “heavy” way – but it is not, or you shouldn’t be solving it at all. Heavy solutions for this category of problems often come from “inheriting” a problem or a system from someone. For example – continuing work on a very legacy system; trying to untangle, with gradual changes, technical debt caused by someone else N years ago (spaghetti C code or lava-cake OOP code); trying to solve a non-tech (cultural? people?) problem with tech solutions – the category that scares me most and is proof of our technocratic arrogance (yes, almost every engineer, me included, falls into this fallacy). 🙂 There are numerous problems that only seem very difficult – but it’s pretty hard to see it. My advice here – just ask your colleagues for a second opinion (without biasing them with your proposed solution); if both you and they have some extra time, don’t even phrase the problem yourself – let them discover it partially themselves and comment on it.

  3. Not enough scoping

Often an unscoped user story will describe a very complex system that needs to do EVERYTHING: tons of features, functionalities, all possible options, working with all existing systems. You as a programmer will also want it to have great performance, readable code, etc. If you don’t apply some scoping, split the implementation into stages, and allow users to start giving you feedback on early iterations (“hey, you know, I actually don’t use those options”), you are practically guaranteed to end up with too heavy a solution.

  3a. Future coding

To explain the term – see the excellent blog post by Sebastian Sylvan that inspired me and helped me grow as a programmer.

This is a subcategory of 3, but even worse – the over-engineering doesn’t even come from the user! It is the programmer trying to be overly abstract and predicting user problems ahead of time. Anticipating problems is not a bad thing on its own, but the solution should be, as always, KISS – write simple systems with few moving pieces and few strings to the outer world, systems that you can replace.

  4. Not willing to compromise

This one is really difficult, as it’s not a bad thing per se in many situations… Sometimes you need to sacrifice some aspects of the final result. Is a 5% performance increase worth a much more complicated system (10x more code)? Is automatic boilerplate code generation worth spending months on some codegen system? It always depends on so many factors, and you cannot predict the future… And for sure, if you are open-minded, you will agree that some past decisions you made were bad – or even regret them.

  5. Not enough experience in seeing the technical weight and evaluating consequences

This is mostly a problem of junior programmers – I remember being inexperienced and jumping with enthusiasm at every single new feature and request, writing complicated things and wanting them to be the best in every possible aspect. Shipping a few products – especially when dealing with the consequences means lots of effort and problems – usually teaches more humility. 🙂

  6. Programming dogmas / belief

A horrible, horrible reason to add technical weight. Someone says that it has to be “true OOP”, “idiomatic C++”, “this design pattern”, or obey some other almost religious argument. I have heard people saying, “oh, you should code it this way, not that way, because this is how you do things in Java”. Or recently, that multiple inheritance is better than composition, or that polymorphism is in general better than if statements (wtf?). My advice? Unless you feel you have enough energy to inspire a major cultural change – with months of advice, examples, discussions and no guarantee of success – you don’t want to work with such zealous people.

  7. Fun!

OK, this is a weird point, but if you enjoy programming and problem solving, I am sure you will understand.

We like solving complicated problems in complicated ways! Copy-pasting 5 lines of code is not as “fun” as writing a complex code generator. Using off-the-shelf solutions is not as enjoyable as writing your own domain-specific language. In general – the satisfaction and dopamine “reward” from solving a complex problem and owning a complex, sophisticated system is much higher than from a simple solution to a reduced/scoped problem. We want to do cool things, solve ambitious problems and provide interesting solutions – and it’s OK to work on them – just admit it; don’t try to lie to yourself (or even worse, your manager!) that it is “necessary”. Evaluate whether you have some time for this fun coding and what kind of consequences it will have in 1/3/6/12/24 months.

  8. Being “clever” and optimizing for code writing, not reading

This sits somewhere between points 6 and 7, but fortunately it often happens on a very small scale (single lines/functions/classes); unfortunately, on the other hand, it can grow and later impact your whole codebase… Some programmers value the perceived “smartness” of their solutions, want to solve a mental puzzle, impress themselves or other programmers, or optimize the time spent writing the code. They hide complexity using macros, magic iterators, constructors and templates. All of this adds lots of technical weight – but can slip through code design and reviews. Good luck reading and debugging such code later, though!

  9. Career progress – individual

Thanks to Kris Narkowicz for pointing out a very common reason that I forgot – people writing sophisticated, over-ambitious systems and solutions for the sake of career progress.
I think it can be split into two subcategories.
The first subcategory is individual growth. It’s a pretty interesting one, as it sits somewhere between 7 and 8. We want to do interesting things and be challenged every day, working on more and more ambitious things to develop our careers and skills. Many programmers are ambitious and don’t treat their work as just a “day job”. They would like to develop something they could give conference talks about, be proud of having in their CV/portfolio, or even contribute to the whole computer science field. Easy solutions and simple tasks don’t get you this, and they don’t leave a “legacy”. You can do it in your spare time, contribute to open source, etc. – but you have limited time in your life, and understandably, some people would prefer to do it at work.
Again – this is OK on a limited scale. If you never do it, you won’t feel challenged, your career will stagnate and you can get burned out (and eventually look for a more interesting / ambitious job). Just make sure it is not unnecessary or a constant pattern, and that it doesn’t eat up the majority of your time (and especially doesn’t cause lots of work for others…).

  10. Career progress – in the organization structure

Similar to the previous one – but this time it is not driven by the individual and their goals and ambitions, but by weird organization structures and bad management. Pathological organizations promote only people who seem to do very complex things. If you do your job right – pick simple solutions, predict problems and make sure they never happen – in many places you won’t be appreciated as much as someone who writes a super complex system, puts lots of overtime into it, causes huge problems, and in the end “saves the day” at the last minute. As I said – it is pathological and not sustainable. It definitely should be recognized and fixed in the organization itself.
I have even heard of major tech companies that make it a clear and explicit rule for bonuses and promotion that you need to be the author and owner of whole systems or products. I understand that it is an easy and measurable metric, but in the long run the problems and pathological behaviors outweigh the benefits; it can be detrimental to any teamwork and good working culture.

How to deal with technical weight?

You will probably be disappointed by the length of this paragraph, but there are no universal solutions. Still – being aware of technical weight will help you make better decisions. When thinking about any problem, evaluate:

– How much technical weight can you carry? Do you already have some heavy systems and spend a lot of time supporting them?

– For how long do you need to carry this weight? Is your product / tech a sprint, or a marathon / “trip around the globe”?

– If your team gets reduced or main contributors leave, are you going to be able to continue carrying it?

– Is the heavier solution providing you some unique value? Are your users going to be able to iterate faster? Is it some unique feature that customers will truly appreciate? Are you going to be able to use, for example, cutting-edge performance to make your product unique?

– What are disadvantages of lighter solutions? Are they really unacceptable? Can you prove it, or is it just intuition driven by biases?

– Are you proposing this solution because it is a more interesting and fun problem to work on? Are you trying to be “smart” here and wanting to do some impressive work?

– Are you psychologically biased towards this solution? Have you already invested a lot in it? Is there an ego aspect? Are you affected by loss aversion? Have you really considered the alternatives with an open mind, and do others suggest the same solutions without you guiding them?


To close this post, an observation I had while looking at how different it is to work with teams of different sizes – adding technical weight can be most difficult for medium-sized teams.

Usually small, experienced teams will add some technically complex and heavy solutions to bring unique value to their unique, hand-crafted product. There are some specific teams (often coming from the demoscene) and games with very original, beautiful tech that would be difficult for anyone bigger to maintain and use (and good luck convincing lots of people in a bigger company of such risky ideas!). If you like video games and rendering, you probably know which studios and games I am talking about – Media Molecule and Q-Games, with their amazing voxel- and SDF-based tech. There are more examples, but those two come immediately to mind.

On the other hand, technically heavy solutions also suit giants. EA (specifically their engine division, Frostbite, and the amazing work they do and publish) or Ubisoft (which this year at GDC broke the record for valuable, inspiring and just great technical presentations) can invest lots of money and manpower in R&D and maintenance of such technology, and because it is shared between many products, it will pay off. So even if the solutions are “heavy”, they can manage to develop them.

Medium-sized teams have neither of those advantages – usually they need many different features (targeting a wider audience to cover costs, they cannot stick to a single unique selling point), but don’t have enough manpower to spend on overly complex problems and endless R&D. Therefore they have to choose appropriate solutions carefully, calculating the ROI of every single decision. There is again a psychological aspect – with a team of, let’s say, 6-10 developers, you might think you can do a lot more – and even if you do your planning perfectly and reasonably, a single person leaving or tech requirements changing can totally tip the scales.

Nothing really special here – but working with technical weight is like working with psychological biases: everyone has them, but just being aware of them lets you make better, less biased decisions. I also recommend reading about cognitive biases from time to time – and analyzing your own decisions while looking for them.

Special thanks

Special thanks go to my friends who inspired the whole discussion a few months ago – alphabetically – Mickael Gilabert, John Huelin and David Robillard – for lots of insight into the problem. I miss such inspiring conversations with you guys!
Extra special thanks to Kris Narkowicz for pointing out an important missing reason for technical weight.


White balance and physically based rendering pipelines. Part 2 – practical problems.

White balance and lighting conditions

After this long introduction (be sure to check part one if you haven’t!), we can finally get to the problem that started the whole idea for this post (as in my opinion it is an unsolved problem – or rather, many small problems).

The problem is – what kind of white balance should you use when capturing and using IBLs? What color and white balance should you use while accumulating lights? When should the white balance correction happen? But first things first.

Image based lighting and spherical panoramas. The typical workflow for them is fixing the exposure and some white balance (to avoid color shifts between shots), taking the N required bracketed shots, and later merging them in “proper” software into panoramas. The result is saved in some format (DDS and EXR seem to be the most common?) that is usually only a container format with no color space information, and then, after pre-filtering, used in the game engine as a light source. Finally, magic happens: lighting, BRDFs, some tonemapping, color grading, output to sRGB with a known target monitor color balance and viewing conditions… But before that “magic”, did you notice how vaguely I described white balance and color management? Well, unfortunately this is how most pipelines treat this topic.
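To make the correction itself concrete – a minimal von Kries-style white balance sketch (the function name and the raw-RGB simplification are mine for illustration; real pipelines adapt in an LMS/cone space such as Bradford rather than scaling RGB channels directly):

```python
import numpy as np

def von_kries_wb(rgb, src_white, dst_white):
    """Per-channel scaling that maps the source illuminant's white point
    onto the destination one; everything else is scaled along with it."""
    scale = np.asarray(dst_white, dtype=float) / np.asarray(src_white, dtype=float)
    return np.asarray(rgb, dtype=float) * scale

# A neutral grey card lit by a warm illuminant reads as orange-ish RGB;
# balancing for that illuminant maps it back to neutral.
grey_under_warm = [1.0, 0.8, 0.6]
balanced = von_kries_wb(grey_under_warm, src_white=[1.0, 0.8, 0.6],
                        dst_white=[1.0, 1.0, 1.0])
```

Which white point gets baked into each captured panorama, and at which point in the pipeline this per-channel scaling happens, is exactly the ambiguity described above.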

Why it can be problematic?

OK, let’s go back to setting the white balance during a single time of day, with the same sun position – just a photo captured in shadow and one in sunlight.

Photo 1, camera auto WB (5150K) – cool look, blueish shadows.

Photo 1, Lightroom/ACR Daylight WB (5500K) – warm, sunny look, but still slightly blue shadows.

Photo 1, Lightroom/ACR Cloudy WB (6500K) – shadows have no tint, but the photo looks probably too warm / orange.

Photo 2, same time and location, behind the building, camera auto WB (5050K) – strong blue color cast.

Photo 2, same time and location, behind the building, daylight WB (5500K) – blue color cast.

Photo 2, same time and location, behind the building, cloudy WB (6500K) – color cast almost gone (slight hint of magenta).

Imagine now that you use these images as an input to your engine’s IBL solution. Obviously, you are going to get different results… To emphasize the difference, I did a small collage of 2 opposite potential solutions.

Same WB for both scenes.

Different, dynamic/changing WB.

In this example the difference can be quite subtle (but obviously shadowed parts either get a blueish tint or not). Sometimes (especially with lower sun elevations – long shadows, lower light intensities, more scattering because light travels through a larger layer of the atmosphere) it can get extreme – so extreme that it is impossible to balance the WB even within a single photo!


White balance set for the sunlit areas, 4300K. Everything in shadows is extremely blue.

White balance set for the shadowed areas, 8900K. People would appear orange in sunlit areas.

“Neutral”/medium white balance, 5350K. People probably wouldn’t look right either in shadows (too strong blue tint) or in sunlight (too strong orange/yellow tint). This is however how I think I perceived the scene at that time.

What is really interesting is that depending on which point you use for your IBL capture (whether your WB and grey card are set in the shadows or in the sunlit part), you will get vastly different lighting and scene colors.

So, depending on the white balance setting, I got completely different colors in lights and shadows. It affects the mood of the whole image, so it should depend on the artistic needs for a given photograph/scene, but this level of control cannot be applied on the final, sRGB image (too small a color gamut and information loss). So when should this artistic control happen? During the capture? During lighting? During color grading?

Digression – baking lighting, diffuse bounces and scattering taking full light spectrum into account

A quite interesting side note and observation to which I also don’t have a clear answer – if you go with one of the extremes when taking your panorama for IBL, you can even get a different GI response after the light baking! Just imagine tungsten lights in a blue room, or pure, clear sky early afternoon lighting in an orange room – depending on whether you perform the WB correction or not, you can get almost no multi-bounced lighting versus quite an intense one.
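To get an intuition for why RGB-only baking can miss this, here is a tiny sketch (with completely made-up narrow-band spectra – the wavelengths and Gaussian widths are illustrative assumptions, not measured data): a warm, long-wavelength light bouncing off a short-wavelength “blue wall” albedo leaves almost no energy, because the spectra barely overlap.

```python
import numpy as np

wl = np.arange(400, 701, 10)  # sample wavelengths in nm (visible range)

# Hypothetical narrow-band, tungsten-like light: energy concentrated at long wavelengths
light = np.exp(-0.5 * ((wl - 650.0) / 40.0) ** 2)

# Hypothetical blue wall albedo: reflects mostly short wavelengths
albedo = np.exp(-0.5 * ((wl - 460.0) / 30.0) ** 2)

# Spectral bounce: per-wavelength product, as a spectral renderer would compute it
bounced = light * albedo
energy_ratio = bounced.sum() / light.sum()  # fraction of light energy that bounces

print(energy_ratio)  # tiny - almost no multi-bounced lighting survives
```

Whether an RGB renderer reproduces this depends entirely on how those spectra get collapsed into three channels – which is exactly the approximation a spectral baker avoids.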

The only 100% robust solution is to use a spectral renderer. Not many GI bakers support spectral rendering (actually, are there any?). Some renderers are starting to use it – there was an interesting presentation this year at Siggraph about its use at Weta, “Physically Based Material Modeling at Weta Digital” by Luca Fascione (slides not yet there…).

In a similar manner, Oskar Elek and Petr Kmoch pointed out the importance of spectral computations when simulating scattering and the interaction of atmosphere and water.

I don’t think we need to go that far – at least for this new console generation and until we eliminate much more severe simplifications in the rendering. Still – it’s definitely something to be aware of.

White balance problem and real-time rendering / games

Coming back to our main problem – since all those images come from photographs and natural, physical light sources, this is an actual, potentially serious problem – if you are using or considering the use of real-world data acquisition for the IBLs and/or scientific physical sky models for sky and ambient lighting.

Just to briefly summarize the problem and what we know so far:

  • Different times of day and lighting conditions result in completely different temperatures of light, even just for sun and sky lighting.
  • Panoramas captured with different white balance camera settings (which could depend on the first shot) will result in completely different color temperatures.
  • Images or light sources with different color temperatures don’t work well together (mixing/blending).
  • If you stay with one white balance value for every scene, uncorrected lighting will look extreme and ugly (the tungsten lights example, or extremely blue skies).
  • Sometimes there are many light sources in the scene with different color temperatures (simplest example – sky/sun) and you cannot find a white balance that will work in every situation; you can end up with a strong tint on objects when only one of the light sources dominates the lighting (the other one in the shadows), no matter what your settings are.
  • Different white balance can achieve a different mood (cool / warm; day / afternoon / evening; happy / sad) of the final presented scene – but it is not (only) part of color grading; usually it is set for “raw” data, way before setting the final mood: when computing the sky model or capturing skies / panoramas.

Such a summary and analysis of the problem suggests some possible solutions.

Before suggesting them, I’m going to write about how those problems were “hidden” in the previous console generation and why we didn’t notice them.

Why didn’t we notice those problems in previous generation games and non-PBR workflows? How did we solve them?

In the previous console generation, light color and intensity and ambient color were abstract, purely artistic concepts – often LDR and “gamma“. This applies especially to the ambient color. “Ambient” was defined in many alternative ways (ambient hemispheres – 2 colors, ambient cubes – 3-6 colors), but it had nothing to do with sky and sun rendering.

So lighters would light environments and characters, not taking into consideration any physical properties or color temperature of light sources (usually working mentally 100% in sRGB), to achieve the specific look and color balance that they or the art director envisioned. Even with HDR workflows, ambient and light intensities had nothing to do with real ones; they were rather set to be convenient and easy to control with exposure. In Assassin’s Creed 4: Black Flag we had no exposure control / variable exposure at all! Very talented lighting artists were able to create a believable world working in 8-bit sRGB as both the final and intermediate color space!

Then concept, environment and effect artists would paint and model the sky and clouds, throw in some sun disk flare or post-effect god rays, and voilà. Reflections were handled only by a special system of planar (or cube) reflections, but indirect speculars were virtually non-existent. Metals had some special, custom, artist-authored (and again sRGB/gamma) cubemaps that had nothing to do with the sky and often the environment itself.

This doesn’t mean that there was no problem. Wrong handling of reflections and indirect specular lighting was one of the many reasons for the transition to PBR workflows (and this is why so many engine demos show buzz-worded “next-gen PBR rendering” with metal/wet/shiny areas 😉 ). Without properly taking environment and image based lighting into account, surfaces looked flat, plastic, and unnatural. When we integrated IBL and “everything has specular and Fresnel” workflows, suddenly artists realized that different skies and reflections can look wrong and result in weird rim lighting with intensity and color not matching the environment. Things would get completely out of control…

Also, modern normalized and energy-conserving specular distribution functions, physically correct atmospheric and fog effects and energy-conserving GI started to emphasize the importance of high dynamic range lighting (you don’t have a big dynamic range in your lights? Well, say goodbye to volumetric fog light shafts). As intuitively understanding differences in lighting intensity of many EVs within a single scene is IMO virtually impossible, to get consistent behavior we started to look at physically correct sky and sun models. This on the other hand – both when using analytical models and IBL/photogrammetry/real captured panoramas – showed us problems with the white balance and the varied color temperature of lighting.

Intuitively set and “hacked” proper, film-like color balance in The Witcher 2. Lots of work and tweaking, but at that time the result was amazing and critically acclaimed – all thanks to good artists with a good understanding of photographic concepts and hours and hours of tweaking and iteration…

Before I proceed with describing more proper solutions, I wanted to emphasize: its importance now doesn’t change the fact that we were already paying attention to color balance even in previous, physically incorrect games. It was partially achieved through lighting, partially through color grading, but artists requested various “tweaks” and “hacks” for it.

On The Witcher 2 I had the privilege to work with amazing artists (in all departments), and many of them were photographers who understood photography, lighting and all such processes very well. We experimented with various elements of the photographic workflow, like a simulation of a polarizing filter to get more saturated skies (without affecting the tonemapping too much). Sometimes we would prototype total hacks, like special color grading (as you can imagine, usually blue) applied only in shadows (you can imagine why it was quite a terrible idea 😀 but this shows how the intuitive need for a specific look can be attempted in “hacked” ways).

With one of the artists we even tried to simulate dynamic white balance (in 2011/2012?) – by changing it in an artist-authored way depending on the scene’s average luminance (not average chrominance or anything related) – to simulate a whole-screen color cast and get a warmer WB in shadows/darker areas, and on the other hand get nice color variation and blueish shadows when the scene had mostly sunlit areas.
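From memory, that hack could be sketched roughly like this (a hypothetical reconstruction – the function name, temperature endpoints and curve shape are my assumptions, not the actual shipped code): map the scene’s average luminance through an artist-authored curve to a target white balance temperature, warmer for dark scenes and cooler for bright ones.

```python
import math

def dynamic_wb_temperature(avg_luminance, dark_temp=4500.0, bright_temp=6500.0,
                           mid_gray=0.18, range_ev=4.0):
    # Map average scene luminance (linear) to a blend factor in [0, 1]:
    # a mid-gray scene sits at 0.5, and +/- range_ev EV stops cover the full range.
    ev_offset = math.log2(max(avg_luminance, 1e-6) / mid_gray)
    t = min(1.0, max(0.0, 0.5 + 0.5 * ev_offset / range_ev))
    # Dark (shadowed) scenes get a warmer WB, bright (sunlit) ones a cooler one
    return dark_temp + t * (bright_temp - dark_temp)
```

Note that this is driven purely by luminance, exactly as described above – no chrominance analysis at all – which is both its charm and its obvious limitation.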

Now lots of it sounds funny, but on the other hand I see how artists’ requests were driven by actual, difficult physical problems. With a better understanding of PBR, real-world lighting interactions and BRDFs we can finally look at more proper/systemic solutions!

Dealing with the white balance – proposed solutions

In this paragraph I will first present my general opinions, and later group some “alternatives” I don’t have a strong opinion about.

One thing that I would assume is a must (if you don’t agree – please comment!) is sticking with a single color balance value for at least a scene (game fragment / time of day?). This should be done by a lighter / artist with a strong photographic / movie background and a good understanding of white balance. This way you get proper analytical light interactions and realistic resulting colors (warm sun color and cooler sky color balance out perfectly).

One could argue that the “old-school” way of mixing various light sources with different color balances – with artists just making them quite “neutral” and not tinted, and relying on color grading – is “better”/”easier”. But then you will never get natural-looking, blueish shadows and a perfect HDR balance of light/shadow. You also lose one of the biggest advantages of PBR – the possibility to reason about the outcome, to verify every element in the equation, find errors and make your lights / materials / scenes easily interchangeable (consistency in the whole game!).

Ok, so one light white balance value. How should you choose this color balance? I see two options:

  1. Sticking with a single value for the whole game – like the common final sRGB 6500K for D65 – and therefore achieving all the proper color warmth / coolness via final color balance/grading/correction only. It is definitely possible, but without color balance applied (pre color correction), some scenes will look extremely weird (orange/blue, or even green if you have fluorescent lights). You also need to do your color correction in a gamut wider than regular sRGB (which is assumed to be in the final white balance) – just like for proper photo WB you need “RAW files“. I see many more benefits of color grading in wider color spaces and higher dynamic range (maybe I will write more about it in the future), but it’s not something that many titles seem to be doing now.
  2. Picking this value per scene to “neutralize” white balance either in shadows or sunlit parts (or maybe a 50/50 mix of both?). It gives a much easier starting point, nice-looking intermediate values, understandable and good-looking HDR sky cubemaps and artist-authorable skydomes – but you need to be aware of it all the time! Also, blending between various lighting conditions / zones becomes more tricky and you cannot as easily swap elements in the equation – “let’s take the sky capture from the noon and the sun radiance from the morning” won’t work. On the other hand that is probably not a very good idea anyway. 🙂 Still, it can be more difficult for games with a dynamic / varying time of day.

Finally, should the color corrected, display white balance be fixed or should it auto-adapt? This is the most controversial topic. Lots of programmers and artists just hate automatic exposure / eye adaptation… And for a good reason.

It is very difficult (impossible) to do properly, and almost nothing really works just like the artist / art director would like it to: too much contrast / not enough contrast; too dark / too bright; too slow / too fast. It’s impossible to solve completely – no matter how accurate, you can imagine manual settings being artistically better and better serving the scene.

And yet, here we talk about adding an additional dimension to the whole auto-camera-settings problem!

I see 2 “basic” solutions:

  1. Accepting no white balance eye adaptation and a different color cast in shadows and lights. For crucial cinematics that happen in specified spots, manually overriding it and embedding it in color grading/color correction fixed for this cinematic – either fading it in smoothly, or accepting the difference between camera cuts.
  2. Adding auto white balance. I still think that my original “old-school” idea of calculating it depending on the average reflected luminance plus knowledge of the scene light sources can work pretty well… After all, we are lighting the scene and have all the information – way more than is available in cameras! If not, then taking the diffuse lighting (we definitely don’t want to take albedo into account! On the other hand, albedo is partially baked in with the GI / indirect lighting…) and calculating a clamped/limited white balance.

But I see a third one, and it can actually work for auto-exposure as well:

  3. Relying on some baked or dynamically calculated information about shadows and sky visibility, averaged between some sparse points around the camera. We tend to perceive white balance and exposure “spatially” (averaging values when looking around), not only depending on the current “frame” (effective sharp FOV vs. lateral vision), and I see no reason why we shouldn’t try it in real-time rendering.
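The clamped/limited white balance mentioned above could be sketched as follows (a minimal sketch under my own assumptions – the function name, the mean-normalized gains and the clamp range are illustrative, not a production recipe):

```python
import numpy as np

def auto_wb_gains(avg_diffuse_light, max_correction=1.5):
    """Per-channel gains that (partially) neutralize the scene's diffuse lighting color.

    avg_diffuse_light: average linear RGB of the diffuse lighting (albedo excluded).
    max_correction: limits how aggressive the correction gets, keeping some natural tint.
    """
    light = np.asarray(avg_diffuse_light, dtype=np.float64)
    gains = light.mean() / light  # full neutralization would apply these gains
    # Clamp so a strong sunset or tungsten tint is only partially removed
    return np.clip(gains, 1.0 / max_correction, max_correction)
```

Applying the clamped gains to the frame (before grading) gives a partial correction that never fully kills a strongly tinted lighting setup – which is the whole point of the “limited” variant.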

For me this is quite a fascinating and open topic and I’m still not sure what kind of approach I would advocate – it would probably depend on the title, setting, lighting conditions, the existence (or not) of a day cycle, etc.

I hope I presented enough ideas and arguments to inspire some discussion – if you have any feedback on this topic, please comment!

(The next section is just a small rant, so feel free to skip it.)

Is research of Physically Based Rendering workflows inventing new / non-existent / abstract problems?

This sub-chapter is again a digression, and could probably appear in any article about anything PBR-workflow-related.

And yet this is a question I keep hearing very often, and probably any programmer or technical artist will hear it over and over. If you think a bit about it, it’s as if with deeper and deeper understanding of physically-based rendering and more and more advanced workflows we “invent” some new problems that didn’t exist in the past…

Yes, everyone who has been in the industry for a while has shipped games with gamma-space lighting, without properly calibrated monitors (my pet peeve – suggestions of “calibrating” for an “average” TV…), with no care for sRGB/REC-709, color spaces, energy conservation, physical light units, EVs or this unfortunate white balance…

In The Witcher 2 we didn’t care about any BRDFs; most reflective surfaces and their indirect speculars were achieved through artist-assigned, per-shader tweaked cubemaps – so characters had different ones, environment assets had different ones, water had different ones… And still the game looked truly amazing at that time.

Does that mean we shouldn’t care for understanding of all those topics? No, definitely not. We are simply further on the curve of diminishing returns – we must invest much more, gain way more understanding and knowledge to progress further. Things are getting more complicated and complex, and it can be overwhelming.

But then you can just look at recent titles like Assassin’s Creed: Unity or The Order: 1886, the upcoming Frostbite engine games like Star Wars: Battlefront, demos from Unreal Engine 4, or photogrammetrically scanned titles like The Vanishing of Ethan Carter, made by a team of 7 people! You can clearly see that this extra know-how and knowledge pays off.

Good luck getting such fidelity of results at such scale with the old bag of hacks!

Posted in Code / Graphics, Travel / Photography | 7 Comments

White balance and physically based rendering pipelines. Part 1 – introduction.

This is part one of the whole article. Part two is here.

In these two posts (it started as one, but I had to split it to make it more… digestible) I’m going to talk a bit about white balance. The first part will describe what it is, how human white perception works, and how both vintage and modern photography used to deal – and currently deals – with it to provide perceptually “correct” tones. If you don’t feel very confident about white balance management in lighting, photography and output color spaces – I hope it will provide some interesting new facts.

The second part will focus more on practical applications. I will analyze which steps of color and white balance management are missing in many game pipelines, what we ignore, and what we used to “hack” in the previous generation of renderers and how. I will present some loose ideas on how we can include this knowledge and improve our handling of light temperature.

White color

Remember that gold/white and black/blue dress and the weirdly vivid discussion about it on the internet?

Source: Tumblr/Swiked

There are already many posts trying to describe what is going on there (lighting conditions, light exposure) and I won’t go deeper into it, but I wanted to use it as an example – a color that is clearly RGB blue in a picture (when checked with a color picker) can still be perceived as white (and to be honest, I was one of those gold/white people 😉 ).

Why is it so? The main reason is that in nature there is no single “white” color. As you know, light can have different wavelengths, and every wavelength (within the range of human perception of ~390-800nm) has some color corresponding to the response of the eye.

Light wavelengths and visible colors. Source: Wikipedia

No white color there… So what defines white? Is it when all eye photoreceptor cones are excited the same way? When camera Bayer-filtered cells generate the same electrical output? When all layers of film are exposed?

Fujifilm Provia color layer sensitivity curve, source: Fujifilm

Unfortunately no – it is way more complex, and there is no single definition of white (though the last answer – film sensitivity – is closest, as film is a very “defined” medium and with the slide / positive process it can be viewed directly).

Usually, white is defined by the specific spectra of light sources of a certain color temperature. Color temperature and its use in photography and color science is in my opinion a quite fascinating concept, as it comes from physics and black body radiation.

The color temperature of a natural (but also artificial) light source is a value assigned to a color perceptually similar to the color emitted by a perfect black body of a given temperature. Black bodies of different temperatures emit different wavelength spectra, and perceptually those spectra will appear as different colors.

Source: Wikipedia

While some cooler black bodies seem clearly red, the range of yellowish-blueish ones (4000-8000K?) can be used to define a white color.
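You can get an intuition for this directly from Planck’s law, which gives a black body’s spectral radiance at a wavelength and temperature. A small sketch (crude by design – sampling just two representative wavelengths instead of integrating against the CIE color matching functions, which a real conversion would do):

```python
import math

H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def planck_radiance(wavelength_m, temp_k):
    # Planck's law: spectral radiance of a black body at a given wavelength/temperature
    x = H * C / (wavelength_m * KB * temp_k)
    return (2.0 * H * C ** 2) / (wavelength_m ** 5 * (math.exp(x) - 1.0))

def red_blue_ratio(temp_k):
    # Crude proxy for perceived warmth: radiance at a "red" vs a "blue" wavelength
    return planck_radiance(600e-9, temp_k) / planck_radiance(450e-9, temp_k)

# Cool (low-K) black bodies emit far more red than blue; hot ones the opposite
print(red_blue_ratio(3000.0), red_blue_ratio(6500.0), red_blue_ratio(10000.0))
```

The ratio falls monotonically with temperature – which is exactly the warm-to-cool sweep visible on the black body color chart above.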

Look in your monitor settings and you will find some color temperature setting that most probably (and hopefully – if you work with colors) will show 6500K, a common value for “daylight”.

Ok, so is this 6500K a white color? No…

I will describe it in a second, but first – where does this whole 6500K come from? Quick google-fu will verify that it is not the actual physical temperature of the sun (which is ~5800K). After passing through the atmosphere, the sun’s perceived color temperature gets even lower (Rayleigh scattering), resulting in actually warmer (in artistic terms) colors.

Note: the naming of warm/cold white balance is very confusing, as artistically “warm” colors correspond to “colder” black bodies and color balances! But using “warmer” temperature of a light source (and perceptually colder color) as the white point will “warm up” colors in the scene! This sounds crazy and confusing, but please keep reading – I hope that after both posts it will be easier.

On the other hand, we get atmospheric in-scattering and perceptually blue skies, and sky lighting also contributes to the overall light color and intensity. This 6500K is the average daylight light temperature during a cloudy, overcast day (after multi-scattering in the sky and clouds) – neither the temperature of the sun nor the sky on its own. In my camera settings this color temperature is also referred to as “cloudy” white balance. The naming of white balance settings is not standardized (to my knowledge) and may vary from device to device.

Finally, the spectrum of a ~6500K black body defines the so-called Illuminant D65, a CIE-standardized definition of daylight corresponding to roughly mid-European midday light temperature. What is most important is that this D65 is the standard white color when outputting information in sRGB or REC-709, our standard formats when generating output images for display on the internet or on an HD TV. And this is why you really should make sure that the lights in your studio have the same daylight D65 color temperature (see the next paragraph).

Perception of white

Ok, so I mentioned that 6500K can be white – and that it depends. What defines it in reality, outside of artificial monitor color spaces?

Eyes are not our primary vision perception device – they are just a lens and sensor combination. Our main source of vision and perception is obviously the brain. The brain interprets white differently depending on the lighting conditions.

I intuitively think of it as the brain trying to interpret white in terms of the actually reflected color, a white albedo. Have you ever been in one of those old towns (one of the things I miss so much about Europe!) lit by gas lamps or old tungsten lamps? Like the one in this perfect masterpiece by Van Gogh:

“Cafe Terrace at Night”, Vincent van Gogh

The painting is quite bright, but you can clearly identify a night scene by the yellow lamps and the blue sky. Still, you can see some white(ish) brush strokes in the painting (the tables).

The colors are pretty saturated – it is clearly (post)impressionist and stylized – but it still looks believable. This is how the artist saw and imagined that scene, and this is how we accept and recognize it.

What is really fascinating is what happens if you try to take a photo of such a scene with the same camera settings as during the day. Would it look like this?

I found a preeeetty old photo (still in a RAW file) from beautiful, Spanish Malaga in my personal collection and tried that experiment (note: zero color grading).

Let’s set the color temperature to D65! It’s the standard, right?


Hmm, pretty orange and ugly… Especially people’s skin tones look unnatural and uncanny (unless you enjoy the over-the-top color grading of blockbuster Hollywood movies).

Let’s correct it – set the white balance for tungsten (while I didn’t have a handy spectrograph, I expect those lights to have tungsten – wolfram – filaments 😉 ).


More natural looking and more similar to the master’s painting (except for saturation, composition, artistic value and all that stuff, you know 😉 ).

A similar example (also not very artistically appealing, but even more extreme, as there are no billboard lights to neutralize the tungsten – this time from beautiful Valencia), the D65 version:


…and after the correction:


I hope those 2 examples showed how human perception can differ from a photograph under orange lights. But in my collection I also found an example of the opposite effect – the camera exposing a scene for daylight in the evening, when all lighting comes from the sky, resulting in an extreme blueish color cast. (Btw. this is how it looked straight from the camera, showing what a poor job it did at auto white balancing – this is from an old Nikon D90.) Lovely Edinburgh, this time not D65, but ~5300K (no idea why the camera would set it this way…):


The same photo with corrected white balance (50 000K – insanely high!) looks like this:


The snow and skin colors look much better and more natural. (On the other hand, the photo lost the impression of evening darkness and its accidental, color-graded cooler winter atmosphere; note that this is a trick used in cinematography – filming during the day using white, bright lights and color grading towards blue to simulate night and evening scenes.)

So what is this photo white balance? What does that mean? If you have never worked with RAW conversion software or dug through camera menus, you could be surprised, but even mobile phone cameras adjust the white balance (on the iPhone you can check out Camera+, which allows you to play with various white balance settings).

A small side note – having many light sources of different color temperatures can result in weird, ugly and un-correctable pictures. A common white flash that completely doesn’t match the scene lighting and makes people look unattractive is one example – and this is why Apple implemented dual LED flashes in their new iPhone built-in cameras, a feature I was really enthusiastic about.

White balance and eye adaptation

White balance defines what color temperature (and tint – for artificial light sources or for scenes dominated by bounced light color, e.g. a green bounce from foliage) is expected to be white. It is also the color that your brain expects to be white in the given light conditions.

Think of it as the color equivalent of exposure adaptation – eyes and brain slowly adapt to the lighting conditions. It doesn’t matter if they are 25 EV stops apart (so, hmm, ~33 million times brighter?) – after a while, and if you have good eyesight, you will adapt to those new lighting conditions.

In the same manner, your brain already knows light types, knows materials, knows what it used to consider white and what you expect to be white – and slowly adjusts to this expected white color.

I have an interesting example – swimming goggles. Some cheap, tinted goggles I bought a while ago (they work very well if you swim mainly outdoors).


What is really interesting is that at first, right after putting them on, the tint seems extreme. However, after 20-30 minutes of swimming, eyesight adapts completely and I no longer notice any tint at all. And after taking them off, everything looks orange / sepia-like and “warm”. 🙂 I tried to reproduce this experiment using my camera, RAW files and some Lightroom / Photoshop.

Day WB and a photo straight through the goggles.

Almost-corrected WB (geek side note – notice how the edges have a different color cast due to more color absorption from the larger optical depth).

I wasn’t able to perfectly correct it, as I ran out of the WB scale in Adobe Lightroom (!). It would be interesting to see whether those colors are outside of the camera sensor gamut even in the RAW format, or whether it is only a limitation of the software and its UX designers clamping the slider to a “reasonable” / usable range.

As an experiment, I also tried correcting the WB of a resolved JPEG file (so not RAW sensor data). The results look worse – lots of precision loss and banding. I kind of expected it, but this is worth emphasizing – if you ever do strong white balance corrections, never do them in sRGB / 8-bit space, but with as high a dynamic range and as wide a gamut as possible.

Result of trying to correct the WB in Adobe Camera Raw using a JPEG file.
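The banding is easy to demonstrate numerically (a minimal sketch with a synthetic gradient – the ×3 channel gain stands in for a strong per-channel white balance correction; the numbers are illustrative):

```python
import numpy as np

ramp = np.linspace(0.0, 1.0, 1024)   # smooth gradient in linear light
gain = 3.0                           # a strong per-channel white balance gain

# High-precision path: apply the correction in float, quantize once for display
hp = np.round(np.clip(ramp * gain, 0.0, 1.0) * 255.0)

# 8-bit path: the image was already quantized (like a JPEG) before the correction
lp_src = np.round(ramp * 255.0) / 255.0
lp = np.round(np.clip(lp_src * gain, 0.0, 1.0) * 255.0)

# Fewer distinct output levels in the 8-bit path = visible banding
print(len(np.unique(hp)), len(np.unique(lp)))
```

With a gain of 3, the pre-quantized path can only ever land on every third output level, so roughly two thirds of the displayable values are simply gone – that is the banding in the screenshot above.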

Finally, I wanted to check how a properly exposed shot would behave after “taking off the goggles” and simulating the eye adaptation – so using the same WB as the one used to correct the swimming goggles.

Day WB, same scene, no goggles.

Same scene, same WB as used for correcting the goggles’ tint.

Success! It looks almost exactly the same as how I perceive the scene right after taking off the goggles after a longer swim. So this crude experiment shows that human white perception is at least similar to the camera’s white balance correction.

…and this is why I mentioned that your studio lighting conditions should be uniform and match the target daylight temperature / D65 color space. Otherwise, if your eyes adapt to a surrounding “warmer” or “colder” color, you will perceive colors on the screen “wrong” and will end up making your images or video game output too warm/cold and wrongly balanced!

White balance and professional photography

I mentioned that most cameras – from your handy mobile phone, through point-and-shoot cameras, up to professional full-frame ones – have an option for automatically setting the white balance with literally zero user intervention. This is not setting the output color space white point (sRGB or Adobe RGB, which is more friendly when outputting images not only for monitor display, but also for print), but neutralizing the color temperature of the incoming light.

I have to admit I’m not 100% sure how it works (Sony/Canon/Nikon trade secrets?) – it definitely seems more sophisticated than calculating the average captured color temperature. My guess would be that they have some reference database or an approximate fitted algorithm based on “common” scenarios. Maybe per-scene, maybe depending only on the histogram. But no matter how smart such an algorithm is, all of them fail from time to time and you can end up with a photo with wrong color balance.

This is not a problem when working with RAW files – you can correct them later, and interestingly, Adobe software seems to have much better auto WB than any camera I have used so far. But this is definitely not enough for professional goals. You cannot get consistent and coherent results when relying on some heuristic algorithm with not enough data. Professional photography has developed guidelines and a process for doing it robustly.

Getting correct white and color balance is a quite well-established process, but it consists of many steps and components, and failure at any point can result in wrong colors. I’m not going to cover here the very important topics of having properly calibrated monitors, proper lighting in the studio/room, or working in consistent, proper color spaces with color management. I will focus only on the usual ways of acquiring source data with properly calibrated colors.

The main difficulty during the acquisition part comes from the fact that a photograph usually captures reflected (and/or scattered) light. So we are getting the result of a convolution of the complex lighting environment, the BRDF, and the material properties. Therefore it is difficult to establish the reference “white” color, when green-filtered sensor pixels can receive green light because of either green albedo or green bounced lighting. An easy way to solve this equation is to introduce into the scene reference objects with known properties – by measuring their response to the lighting environment, we can figure out the properties of the light, its color and the perceptual white.

Sounds complex, but in its simplest variant it is just setting the “white balance” with a color picker on objects that you know are gray or white (as a last resort I end up looking for eye whites or teeth in the photograph 🙂 – even with tired, reddish-tinted eyes it gives a good starting point).
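In code, that “picker on a known gray” correction boils down to per-channel gains – a von Kries-style sketch with made-up values, not any vendor’s actual algorithm:

```python
# Scale each channel so the picked reference patch becomes neutral gray.
def white_balance(pixel, reference_gray):
    target = sum(reference_gray) / 3.0          # preserve overall luminance
    gains = [target / c for c in reference_gray]
    return [p * g for p, g in zip(pixel, gains)]

# Hypothetical reference patch measured under warm light (tinted orange):
ref = (0.9, 0.8, 0.6)
print(white_balance(ref, ref))  # the reference itself maps to neutral gray
```

Applying the same gains to every pixel of the image neutralizes the cast; everything hinges on the picked patch really being achromatic.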

A more professional option is using “professional” reference materials, like gray/white cards, gray diffuse balls or perfect chrome balls.

Semi-pro black/white/gray cards.


An even better option is to use a range of calibrated, known materials. There are commercial products like the X-Rite ColorChecker that contain a printed range of known colors; they allow you to analyze not only a single color temperature/tint (which is obviously an over-simplification of the complex light spectrum!), but also more complex, spectral properties. There are programs that allow you to create special camera color profiles for given lighting conditions.

Color checker in action.


I mentioned that this is not an easy thing to do, because it relies on very good discipline and remembering many steps.

In theory, any time lighting conditions change (whether you change the angle from which you approach your subject, or the cloud cover or sun moves), you need to re-capture and re-calibrate the colors. This can be a long, tedious and error-prone process… But by following such simple guidelines, you are definitely able to capture properly calibrated, natural-looking objects and albedo colors, natural skin tones and portraits, etc.

White balance before the digital sensor era – film color response

What did color management look like in the times of color film photography?

I’ll be honest – I have never worked with it professionally, only for fun. But what was fun for me didn’t seem like much fun for people who wanted to achieve perfect, consistent white balance in their prints.

Every film has a different color response and a specific color cast. There is greenish Velvia, purplish Portra, brownish-warm Provia, blueish Ektar, reddish Ektachrome…

Contrasty and slightly brownish Provia 400X. My home city of Warsaw and Wola train station graffitis.

Fujifilm Portra 400 – violet all the way! Warsaw botanical garden.

On the other hand, the film response is fixed and has quite low dynamic range (actually a pretty ok dynamic range for negative film intended for prints, but very small for positive/slide film). You cannot change the white balance…

By the way, what is their white balance? Which color temperature will produce white albedo on a print? Most films used a fixed white balance of around ~5000K (direct, warm sunlight – note that it’s not the “digital era standard” of ~6500K). There were some specific films for tungsten light (like Ektachrome 160T, balanced for 3200K) and AFAIK some fluorescent-balanced ones, but that’s it!

This is the reason why, if you ever shot film more “professionally”, you probably literally had a bag full of colored filters. Warming daylight filters for cloudy days and shooting in shade, purple filters for correcting green fluorescent lights, and blue filters for tungsten lights. Lots of it was trial and error with an unknown outcome (until you developed the film and prints!), and using such filters also meant some luminance loss – which, with film’s much lower light sensitivities (anything above ISO 100–200 was starting to get grainy), was definitely undesired…

Ok, this wraps up part one! More (real-time) rendering and practical info to come in part two!

Bonus: gold and white, color corrected dress:


Posted in Code / Graphics, Travel / Photography | 3 Comments

Fixing screen-space deferred decals

Screen-space deferred decals are a very popular technique. There have been so many presentations and blog posts about them that I will just list a couple (the first page of Google search results, to be honest…) in no particular order:

Therefore I think it wouldn’t be an exaggeration to call them an “industry standard”.

The beauty of screen-space decals used together with deferred rendering is that they allow deferring yet another part of the rendering pipeline – in this case layered materials and, in general, modifications to the rendered surface, both static and dynamic. Just as you can defer the actual lighting from generating the material properties (in deferred lighting / shading), you can do the same with composite objects and textures.

You don’t need to think about special UV mapping, unwrapping, shader or mesh permutations, difficult and expensive layered material shaders, even more difficult pipelines for artists (how do you paint two partially overlapping objects at once? How do you texture something uniquely per asset instance?) or techniques as complex and hard to maintain as virtual texturing with a unique space parametrization.

Instead you just render a bunch of quads / convex objects and texture them in world space – extremely easy to implement (a matter of hours, at most days, in complex multi-platform engines), very easy to maintain (usually the only maintenance is making sure you don’t break MRT separate blending modes and normal en/decoding in the G-Buffer) and easy for artists to work with. I love those aspects of screen-space decals and how easily they work with the G-Buffer (no extra lighting cost). I have often seen deferred decals listed as one of the important advantages of deferred shading techniques and a con of forward shading!

However, I wouldn’t be writing this post if not for a serious problem with deferred screen-space decals that I believe every presentation failed to mention!

Later post edit: Humus actually described this problem in another blog post (not the original volume decals one). I will comment on it in one of the later sections.

(Btw, a digression – if you are a programmer, researcher, artist, or basically any author of a talk or a post – really, please talk about your failures, problems and edge cases! This is where 90% of engineering time is spent, and mentioning it doesn’t make any technique any less impressive…)

Dirty screen-space decal problem

Unfortunately, in all those “simple” implementations presented in blog posts, presentations and articles, there is a problem with screen-space decals that, in my opinion, makes them unshippable without some “fix” or hack in the PS4/Xbox One generation of AAA games with realistic, complex lighting and materials and sharp, anisotropic filtering. Funnily enough, I found only one (!) screenshot in all those posts with a camera angle that exposes this problem… Edge artifacts. This is a screenshot from the Saints Row: The Third presentation.

Problem with screen-space decals – edges. Source: Lighting and Simplifying Saints Row: The Third

I hope the problem is clearly visible in this screenshot – some pixels near the geometric edges perpendicular to the camera do not receive the decal properly, and the background is clearly visible through it. I must add that in motion this kind of artifact looks even worse. 😩 Having seen it in some other engine, I at first suspected many other “obvious” reasons for edge artifacts – half-texel offsets, wrong depth sampling method, wrong UV coordinates… But the reason for this artifact is quite simple – screen-space UV derivatives and the Texture2D.Sample/tex2DSample instruction!

Edit: there are other interesting problems with screen-space / deferred decals. I highly recommend reading Sébastien Lagarde and Charles de Rousiers’ presentation about moving Frostbite to PBR in general (in my opinion the best and most comprehensive PBR-related presentation so far!), but especially section 3.3 about problems with decals, materials and lighting.

Guilty derivatives

The guilty derivatives – a source of never-ending graphics programmer frustration, but also a solution to an otherwise unsolvable problem. On the one hand a necessary feature for texture antialiasing and texturing performance, on the other hand a workaround with many problems of its own. They cause quad overshading and the inability to handle massive amounts of very small triangles (well, to be fair, there are some other reasons too, like vertex assembly etc.); they are calculated automatically only in pixel shaders (in every other shader stage you need to specify the LOD/derivatives manually to use texturing); their calculation is imprecise and possibly low quality; they can cause many types of edge artifacts; and they are incompatible with jittered rasterization patterns (like flip-quad).

In this specific case, let’s have a look at how the GPU would calculate the derivatives, first by looking at how per-quad derivatives are generated in general.

Rasterized pixels – note: different colors belong to different quads.

In a typical regular rendering scenario (no screen-space techniques) of this example small cylinder object, there would be no problem. The quad containing pixels A and B would get proper derivatives for texturing; the different quad containing pixels C and D would cause some overshading, but would still have proper texture UV derivatives – no problem here either (except for the GPU power lost on those overshaded pixels).

So how do screen-space techniques break this? The problem lies in the way the UV texture coordinates are calculated and reprojected from screen space (the core of the technique). And contrary to the triangle rasterization example, for a decal rendered behind this object the problem is not with pixel D, but actually with pixel C!

Effect of projecting the reconstructed position into the decal bounding box

We can see in this diagram how the UVs for point C (reprojected from pixel C) lie completely outside the decal’s bounding box (dashed-line box), while point D has proper UVs inside it.

While we can simply reject those pixels (texkill, branching out with zero alpha, etc. – it doesn’t really matter how), unfortunately they still contribute to the derivative and mip level calculations.

In this case, the selected mip level would be extremely blurry – the calculated partial derivative sees a difference of 1.5 in UV space! As the further mip levels usually contain mip-mapped alpha as well, we end up with almost transparent alpha from the alpha texture, or bright/blurred albedo, and many kinds of edge artifacts depending on the decal type and blending mode…
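To put numbers on it, here is a sketch of the (simplified, isotropic) mip selection the hardware performs from per-quad UV deltas – the 256×256 decal texture size is illustrative:

```python
import math

def mip_level(duv_dx, duv_dy, tex_size):
    # Simplified GPU LOD: log2 of the larger screen-space texel footprint.
    rho = max(math.hypot(duv_dx[0] * tex_size, duv_dx[1] * tex_size),
              math.hypot(duv_dy[0] * tex_size, duv_dy[1] * tex_size))
    return max(0.0, math.log2(rho))

# Well-behaved interior quad: ~1 texel per pixel -> mip 0 (sharp).
print(mip_level((1 / 256, 0), (0, 1 / 256), 256))  # 0.0

# Edge quad where a rejected neighbor causes a 1.5 jump in UV space:
print(mip_level((1.5, 0), (0, 1 / 256), 256))      # ~8.6 -> blurriest mip
```

A single bad neighbor in the quad is enough to push the whole lookup to the last, completely averaged-out mip level.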

Other screen-space techniques suffering

Screen-space/deferred decals are not the only technique suffering from this kind of problem. Any technique that relies on screen-space information reprojected to world space and used as the UV source for texturing will have such problems and artifacts.

Edit: The problem of mip-mapping, derivatives and how screen-space deferred lighting with projection textures can suffer from it was described very well by Aras Pranckevičius.

Other (most common) examples include projection textures for spot lights and cubemaps for environment specular/diffuse lighting. To be honest, every single game engine I have worked with had some workarounds for this kind of problem (sometimes added unconsciously 🙂 – more about it in one of the next sections).

Not working solution – clamping the UVs

The first, quite natural attempt to fix it is to clamp the UVs – also for the discarded pixels – so that the derivatives used for mip-mapping are smaller in such problematic cases. Unfortunately, it doesn’t solve the issue; it can make it less problematic or even completely fix it when the valid pixel is close to the clamped, invalid one, but it won’t work in many other cases… One example would be an edge between some rejected pixels close to a U or V of 0 and some valid pixels close to a U or V of 1; in this case we still get the full mip chain dropped due to a huge partial derivative change within this quad.
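A quick sketch of that failing case (again assuming an illustrative 256×256 decal texture): even after clamping, a quad straddling the edge can pair a rejected pixel clamped to UV 0 with a valid pixel near UV 1, so the derivative still spans the whole texture:

```python
import math

tex_size = 256                   # hypothetical decal texture size
duv = abs(1.0 - 0.0)             # per-quad UV delta that survives clamping
mip = math.log2(duv * tex_size)  # log2(256) = 8 -> still the blurriest mip
print(mip)
```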

Still, if you can’t do anything else, it makes sense to throw in a free (on most modern hardware) saturate instruction (or instruction modifier) for those rare cases when it helps…

Brutal solution – dropping mip-maps

There is a quite natural “solution” that I have seen in many engines and that is acceptable for most other screen-space techniques – not using mip-maps at all. Replace your Sample with SampleLevel and the derivative and mip level problem is solved, right? 😉

This works “ok” for shadow maps – the aliasing is partially addressed by the commonly used cascaded shadow maps, where further distances get lower-resolution shadow maps (plus we filter over several texels anyway)…

It is “acceptable” for projection textures, usually because they are rendered only close to the camera, due to a) the high lighting cost and b) per-scene and per-camera-shot tweaking of lights.

It actually often works well for environment maps – as lots of engines have Toksvig or other normal-variance-to-roughness remapping, and the mip level for the cubemap look-up is derived manually from the roughness or gloss. 🙂

However, mip-mapping is applied to textures for a reason – removing aliasing and information of frequency higher than the rasterizer can reproduce. For shiny, normal-mapped deferred decals like blood splats, the effect of having no mip maps can be quite extreme, and the noise and aliasing unacceptable. Therefore I wouldn’t use this as a solution in a AAA game, especially if deferred, screen-space decals are used widely as a tool by the environment art department.

A middle ground here could be dropping just the further mip maps (for example, keeping mips 0–3). This way one could get rid of the extreme edge artifacts (sampling the completely invalid last mip levels) and still get some basic antialiasing.

Possible solution – multi-pass rendering

This is again a partial solution that would fix the problem in some cases, but not most. The idea is to inject decal rendering in between object rendering, with per-type object sorting. So, for example, “background”/”big”/”static” objects could be rendered first, decals projected on top of them, and then the other object layers.

This solution has many disadvantages – the first one is complication of the rendering pipeline and many unnecessary device state changes. The second one – the performance cost: potential overshading and overdraw, wasting bandwidth and ALU on pixels that will be overwritten anyway…

Finally, the original problem can still be visible and unsolved! Imagine projecting decals onto terrain with high curvature – a hill with a valley in the background can still produce completely wrong derivatives and mip level selection.

Possible solution – going back to world space

This category of solutions is a bit of a cheat, as it departs from the original screen-space decals technique and goes back to world space. Here, artists would prepare a simplified version of the mesh (in the extreme case, a quad!), map UVs onto it and use those source UVs instead of the reprojected ones. Such UVs mip-map correctly and don’t suffer from the edge artifacts.

Other aspects and advantages of the deferred decals technique remain the same here – including the possibility of software Z-tests and rejection based on object ID (or stencil).


On the other hand, this solution is suitable only for environment art. It doesn’t work at all for special effects like bullet holes or blood splats – unless you calculate the source geometry and its UVs on the CPU, like in “old-school” decal techniques…

It can also suffer from a wrong, weird parallax offset due to the UVs not actually touching the target surface – but in general, camera settings in games never allow for extreme close-ups, so it wouldn’t be noticeable.

Still, I mention this solution because it is very easy on the programming side, can be a good tool on the art side, and it actually works. It was used quite heavily in The Witcher 2 in the last level, Loc Muinne – as an easier alternative to messy second UV sets and costly two-layered materials.

I’m not sure if the specific assets in the following screenshot used it, but such partially hand-made decals were used on many similar “sharp-ended” assets, like the rock “teeth” to the left and right of the door frame in this level.


It is much easier to place them and LOD them out quickly with distance (AFAIK they were present only together with LOD 0 of a mesh) than to create a multi-layered material system or virtual texturing. So even if you need other, truly screen-space decals – give artists the possibility of authoring manual decal objects blended into the G-Buffer. I’m sure they will come up with great and innovative uses for them!

Possible solution – Forward+ decals

The second type of “cheat” solution – fetch the decal info from some pre-culled list and apply it during background geometry rendering. Per-tile pre-culling schemes like Forward+ or clustered lighting can make it quite efficient. It is hard for me to estimate the cost of decals rendered this way – it probably depends on how expensive your geometry pixel shaders are, how many different decals you have, whether they are bound on memory or ALU, whether they can hide some latency, etc. One beauty of this solution is how easy it becomes to use anisotropic filtering; how easy it is to blend normals (blending happens before any encoding!); no need to introduce any blend states or to decide what won’t be overwritten due to storage in the alpha channel; furthermore, it should work amazingly well with MSAA.

Biggest disadvantages – complexity, the need to modify your material shaders (and all of their permutations, which probably already eat too much RAM and game build time), increased register pressure, difficult debugging and potentially the biggest runtime cost. Finally, it would work properly only with texture arrays/atlases, which adds a quite restrictive size limitation…

Possible solution – pre-calculating mip map / manual mip selection

Finally, the most “future research” and “ideas” category – if you have played with any of these, have experience, or simply would like to share your opinion about them, please let me know in the comments! 🙂

So, if we a) want mip-mapping and b) our screen-space derivatives are wrong, then why not compute the mip level, or even the partial derivatives (for anisotropic texture filtering), manually? We can do it in many possible ways.

One technique could utilize in-quad communication (available on GCN explicitly, or via tricks with multiple calls to ddx_fine / ddy_fine and masking operations on any DX11-level hardware) and compute the derivatives manually only when we know that the pixels are “valid” and/or come from the same source asset (via testing distances, material ID, normals, a decal mask, or maybe something else). With zero valid neighbors, we could fall back to mip level zero. In general, I think this solution could work in many cases, but I have some doubts about its temporal stability under camera movement and geometric aliasing. It could also be expensive – it all depends on the actual implementation and the heuristics used.

Another possibility is calculating the derivatives analytically during reconstruction, given the target surface normal and the distance from the camera. Unfortunately, a limitation here is how to read the source mesh normals without the normal mapping applied. If your G-Buffer layout has them lying around somewhere (an interesting example was in the Infamous: Second Son GDC 2014 presentation), then great – they can be used easily. 🙂 If not, then IMO the normal-mapped information is useless. One could try to reconstruct normal information from the depth buffer, but this either doesn’t work the way we would like – when using simple derivatives (because we end up with exactly the same problem we are trying to solve!) – or is expensive when analyzing a bigger neighborhood. If you do have the original surface normals in the G-Buffer, though, it is quite convenient, and you can safely read from that surface even on PC – as decals are not supposed to write to it anyway.
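As a rough sketch of the analytic idea (all parameters hypothetical, and deliberately ignoring surface orientation – a real implementation would fold the normal, or the decal projection direction, into the estimate):

```python
import math

def analytic_duv_per_pixel(depth, fov_y, screen_height, uv_per_meter):
    # World-space size of one pixel at this view depth, from the projection:
    world_per_pixel = 2.0 * depth * math.tan(fov_y * 0.5) / screen_height
    # Convert to UV units using the decal's UV density:
    return world_per_pixel * uv_per_meter

# A 1 m wide decal (1 UV unit per meter), 5 m away, 60 deg vertical FOV:
duv = analytic_duv_per_pixel(5.0, math.radians(60.0), 1080, 1.0)
print(duv)  # ~0.0053 UV per pixel; the LOD would then be log2(duv * tex_size)
```

Because this estimate never looks at neighboring pixels, it is immune to the bad-quad problem by construction.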

Post edit: In an older post, Humus described a technique that is a hybrid of the ones mentioned in the two previous paragraphs – calculating UV derivatives based on depth differences and rejection. It seems to work fine and is probably the best “easy” solution, though I would still be concerned about the temporal stability of the technique (with higher geometric complexity than in the demo), given that the approximations are calculated in screen space. All the “information popping in and out” problems that exist in techniques like SSAO and SSR could be relevant here as well.

Post edit 2: Richard Mitton suggested on Twitter a solution that seems both smart and extremely simple – using the target decal normal instead of the surface normal and precomputing the derivatives in the VS. I personally would still scale them by per-pixel depth, but this solution should really work in most cases (unless there is a huge mismatch in surface curvature – but then the decal would be distorted anyway…).

The final possibility I would consider is pre-computing and storing the mip level, or even the derivative information, in the G-Buffer. During the material pass, the most useful information is easily available (one could even use CalculateLevelOfDetail with some texture of known UV mapping density, and later simply rescale it to the target decal density – assuming the decal projection tangent space is at least somewhat similar to the target tangent space), and depending on the desired quality, it could probably be stored in just a few bits. An “expensive” option would be to calculate and store the derivatives for potential decal anisotropic filtering, or different densities for target triplanar mapping – but I honestly have no idea if that is necessary; it probably depends on what you intend to use the decals for.
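The rescaling works because the mip level is logarithmic in texel density, so converting a LOD stored for the material’s UV density to the decal’s density is just an additive bias – a sketch with made-up densities:

```python
import math

def rescale_mip(stored_mip, material_texels_per_meter, decal_texels_per_meter):
    # A density ratio becomes an additive LOD bias in log2 space.
    bias = math.log2(decal_texels_per_meter / material_texels_per_meter)
    return max(0.0, stored_mip + bias)

# Material LOD 2.0 stored at 512 texels/m; the decal has only 128 texels/m,
# so it is two mips coarser to begin with and can sample a sharper level:
print(rescale_mip(2.0, 512.0, 128.0))  # 0.0
```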

This is the most promising and possibly cheap approach (many GDC and Siggraph game presentations have shown that next-gen consoles seem quite tolerant of even very fat G-Buffers 🙂 ), but it makes screen-space decals less easy to integrate and use, and probably requires more maintenance, editing your material shaders, etc.

This idea could be extended much further and generalized towards deferring other aspects of material shading – I have discussed it many times with my industry colleagues, and a similar approach was described by Nathan Reed in his post about “deferred texturing”. I definitely recommend it – a very interesting and inspiring article! Is it practical? It seems to me like it could be; the first game developers who do it right could convince others and maybe push the industry into exploring an interesting and promising area. 🙂

Special thanks

I would like to thank Michal Iwanicki, Krzysztof Narkowicz and Florian Strauss for inspiring discussions about these problems and their potential solutions, which led to me writing this post (as it seems that this is NOT a solved problem, and many developers try to work around it in various ways).

Posted in Code / Graphics | 5 Comments