In this blog post, I wanted to address something I have been thinking about for many years – ever since I started working on rendering in video games, on HDR workflows, and with various photographic techniques. As for the question in the title, I can immediately give you the answer – "it depends on the content" – and you could stop reading here. 🙂
However, if you are interested in what localized tonemapping is and why it's a useful tool, I'll try to demonstrate some related concepts using personal photographs and photographic experiments, as well as screenshots from Sony Santa Monica's upcoming God of War and the demo we showed this year at E3.
Note: all screenshots have some extra effects like bloom or film grain turned off – to avoid confusion and to simplify reasoning about them. They are also from an early milestone, so they are not representative of final game quality.
Note 2: This is a mini-series. A second blog post will accompany this one, and I will publish it around mid-week. It will cover some notes about dynamic range, numerical analysis, and viewing conditions.
Global exposure and tonemapping
Before even talking about tonemapping and the need for localized exposure, I will start with a simple, global exposure setting.
Let’s start with a difficult scene that has some autoexposure applied:
For the whole demo, we used reference physical radiance values for natural light sources (sky, sun) in the lighting, and because the scene takes place in a cave, you can already see that it has a pretty large dynamic range.
I won't describe all the details of the autoexposure system used here (it's pretty standard, though I might describe it in a future blog post or presentation), but due to its center-weighted nature it slightly underexposed lots of detail in the shade at the bottom of the screen – while the outdoors is perfectly exposed. In most viewing conditions (more about that in the second post) the shadows completely lost all detail, the scene becomes unreadable, and the main hero character is indistinguishable…
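A center-weighted metering scheme of this kind can be sketched roughly as below. This is only an illustration of the general idea, not the actual system in the game; the weight falloff, the target gray, and the function names are my assumptions.

```python
import numpy as np

def center_weighted_exposure(luminance, strength=2.0):
    """Suggest a linear exposure multiplier from a center-weighted log-average.

    luminance: 2D array of linear scene luminance.
    strength: falloff of the center weight (assumed parameter, for illustration).
    """
    h, w = luminance.shape
    ys = np.linspace(-1.0, 1.0, h)[:, None]
    xs = np.linspace(-1.0, 1.0, w)[None, :]
    # Gaussian-like weight peaking at the screen center.
    weight = np.exp(-strength * (xs**2 + ys**2))
    # Log-average luminance prevents a few very bright pixels from
    # dominating the metering.
    log_avg = np.exp(np.sum(weight * np.log(luminance + 1e-6)) / np.sum(weight))
    target_gray = 0.18  # map the metered average to middle gray
    return target_gray / log_avg
```

Because the weight peaks at the center, a dark image border (like the shaded cave floor here) contributes little to the metered average – which is exactly why it ends up underexposed.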
Let’s have a look at the histogram for the reference:
An interesting observation here is that even though a big area of the image is clearly underexposed, there is still some clipping in the whites!
Let's quickly fix it with an artist-set exposure bias:
And have a look at the histogram:
Hmm, details are preserved slightly better in the shadows (though they still look very dark – as intended) and the histogram is more uniform, but the whole outdoor area is completely blown out.
I have a confession to make now – for the purpose of demonstration, I cheated here a bit and used a linear tonemapping curve. 🙂 This is definitely not something you would want to do, and there are excellent presentations explaining why. I want to point to two specific ones:
The first one is from a 2010 SIGGRAPH course organized by Naty Hoffman – "From Scene to Screen" by Josh Pines. It describes how filmic curves were constructed, what the reasoning behind their shape was, and in general why we need them.
The second one is from GDC 2016 – "Advanced Techniques and Optimization of HDR Color Pipelines" by Timothy Lottes – and covers various topics related to displaying high dynamic range images.
So, this is the scene with a proper filmic tonemapping curve and some adjustable cross-talk:
And the histogram:
Much better! A quite uniform histogram and no blown whites (in game, the additive bloom would make the whites white).
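The exact curve we use is not the point here, but as an illustration, one widely used filmic curve in games is John Hable's "Uncharted 2" operator, which gives the gentle highlight roll-off and toe seen above (the `white_point` value is the commonly published default, not our setting):

```python
import numpy as np

def hable(x):
    # John Hable's "Uncharted 2" filmic curve, with his published constants.
    A, B, C, D, E, F = 0.15, 0.50, 0.10, 0.20, 0.02, 0.30
    return ((x * (A * x + C * B) + D * E) / (x * (A * x + B) + D * F)) - E / F

def tonemap(color, white_point=11.2):
    # Normalize so that `white_point` in linear units maps to 1.0 on screen.
    return hable(color) / hable(white_point)

# Applied per channel, e.g. tonemap(np.array([0.0, 0.18, 11.2])) – per-channel
# application is what produces the hue/saturation "cross-talk" near clipping;
# blending with a luminance-only application makes that cross-talk adjustable.
```

Note how the shoulder compresses values approaching the white point instead of clipping them, which is why the outdoor area keeps its detail and saturation.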
I made a gif of the difference before/after:
And the animation of all three histograms:
Much better! A few extra stops of dynamic range, and lots of detail and saturation preserved in the bright areas. However, the histogram still contains many completely black pixels, there is a large spike in the lower 10th percentile, and depending on the viewing conditions, the scene might still be too dark…
Gamma / contrast settings
Why not just reduce the dynamic range of the image? Raising linear pixel values to some power and rescaling is the equivalent of scaling and shifting in logarithmic EV space, and can give a result that looks like this:
Definitely all the details are visible, but does the scene look good? In my opinion, no; everything is milky, washed out, boring, and lacks "punch". Furthermore, both the environments and the characters started to look chalky. Due to the perceptual character of per-channel gamma adjustments, not only contrast but also saturation is lost. Even the histogram shows the problem – almost no whites or blacks in the scene, everything packed into grays!
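The equivalence between gamma in linear space and scale/shift in EV space is easy to verify numerically – EV is just a base-2 logarithm of linear intensity:

```python
import numpy as np

x = np.array([0.01, 0.18, 1.0, 4.0])  # linear intensities

# Gamma (power) in linear space...
gamma = 0.7
y = x ** gamma
# ...is a scale in EV (log2) space: the whole histogram gets squeezed.
assert np.allclose(np.log2(y), gamma * np.log2(x))

# Exposure (multiplication) in linear space...
exposure = 2.0 ** 1.5  # +1.5 EV bias
z = x * exposure
# ...is a pure shift in EV space: the histogram shape is unchanged.
assert np.allclose(np.log2(z), np.log2(x) + 1.5)
```

The squeeze is exactly the "everything packed into grays" effect – every EV difference between pixels shrinks by the factor `gamma`.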
Check this gif comparison:
Does this problem happen in real life / photography?
Yes, obviously it does! Here is a photo from Catalina Island – on the left you can see how the camera took an underexposed picture with extremely dark, zero-detail shadows – and on the right how I corrected it to feel more perceptually correct and closer to how I saw the scene.
So can we do anything about it? Yes, we can! And with the photograph above, I have already teased the solution.
But first – some questions that would probably immediately pop up in a discussion with lighting artists.
Why not just change the lighting like they do in movies?
The answer often is – yes, if you can do it right, just adjust the lighting in the scene to make the dynamic range less problematic. We recently interviewed many lighters coming from animation and VFX, and as a solution for large scene dynamic range they usually mention just adding more custom lights and eyeballing it until it looks "good".
Obviously, by using the word right in my answer, I kind of dodged the question. Let me first explain what it means in real-life photography and cinematography, based on some random YouTube tutorials I found on the topic. It means:
Diffusers – pieces of material that partially transmit light, but completely diffuse the directionality and source position of the lighting. The equivalent of placing a large area light in a rendered scene and masking out the original light.
Reflectors / bounce cards – simply a surface that bounces light, boosting lighting on shadowed surfaces and ones not facing the light direction.
In a game scene you could either place a white material that would produce more GI / bounced lighting, or – in a cinematic – simply place a soft (area) fill light.
Additional, artificial light sources (fill flash, set lights etc.)
This is pretty obvious / straightforward. Movie sets and games with cinematic lighting have tons of them.
So as you can see – all of those techniques apply to games! With a constrained camera, camera cuts, cinematics, etc., you can place fill lights and area lights, and most games (with enough budget) do that all the time.
This is also what is possible in VFX and animation – manually adding more lights, fixing and balancing things in compositing, and just throwing more people at the problem…
Note: I think this is a good, acceptable solution, and in most cases also the easiest! This is where my answer "it depends" to the question in the title of this article comes from. If you can always safely add many artificial, off-screen light sources, your game content allows for it, and you have great lighters – then you don't need to worry.
Is it always possible in games, and can it get us good results?
No. If you have a game with no camera cuts, or simply need perfect lighting with a 100% free camera, you cannot just put an invisible light source in the scene and hope it will look good.
On the other hand, one could argue that there is often a place for grounded light sources (torches, fire, bulbs… you name it), and sure; however, just as often they make no sense in the context and level design of the scene, or you might have no performance budget for them.
Hacking Global Illumination – just don’t!
Some artists will desperately suggest "let's boost the GI!", "let's boost the bounced lighting!", "let's boost just the ambient/sky lighting intensity!" – and sometimes you could get away with that on the previous generation of consoles and non-PBR workflows, but in a PBR pipeline it's almost impossible not to destroy your scene this way. Why? The large family of problems it creates comes from the balance of specular and diffuse lighting.
If you hack just the diffuse lighting component (by boosting the diffuse GI), your objects will look chalky or plastic, your metals will be too dark, and some objects will glow in the dark. If you also boost indirect speculars, suddenly under some conditions objects will look oily and/or metallic; your mirror-like glossy surfaces will look weird and lose their sense of grounding.
Finally, this is not GI-specific, but applies to any hacks in a PBR workflow – when you start hacking sun/sky/GI intensities, you lose the ability to quickly reason about material responses and the lighting itself, and to debug them – as you can't trust what you see, and many factors can be the source of a problem.
How does photography deal with the problem of too large a dynamic range when operating with natural light?
This is a very interesting question and my main source of inspiration for the solution to this problem. Especially in the film / analog era, photographers had to know a lot about dynamic range, contrast, and various tonemapping curves. Technique and process were highly interleaved with the artistic side of photography.
One of the (grand)fathers of photography, Ansel Adams, created the so-called Zone System.
I won't describe it in detail here, but it is very similar to many principles that we are used to – mapping middle gray, finding the scene dynamic range, mapping it to the medium's dynamic range, etc.
The fascinating part of it is the chemical / process side:
Picking the correct film stock (different films have different sensitivity and tonemapping curve shape), the correct developer chemical, diluting it (my favourite developer, Rodinal, can deliver totally different contrast ratios and film acuity/sharpness depending on the dilution), adjusting the development time, or even the frequency of agitating the developed film (yes! one rotation of the development tank per minute can produce different results than one rotation every 30 seconds!).
Manual localized exposure adjustment
This is all interesting, but it is still in the global tonemapping, per-image domain. What photographers had to do to adjust exposure and contrast locally was a tedious process called dodging and burning.
It meant literally blocking or boosting light during print development. As film negatives had a very large dynamic range, this made it possible not only to adjust exposure/brightness, but also to recover lots of detail in otherwise blown-out / too dark areas.
An easy alternative that works great for landscape photography is using graduated filters:
Or, even more easily, using a polarizer (it darkens and saturates the sky and can cancel out specular light / reflections on e.g. water).
Fortunately, in the digital era we can do it much more easily with localized adjustment brushes! It is not a very interesting process, but it's extremely simple in software like Adobe Lightroom. Some (silly) example of manually boosting exposure in the shadows:
As a localized adjustment brush with exposure is only an exposure addition / linear-space multiplication (more about it in the second post in the series!), it doesn't affect contrast in the modified neighborhood.
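This is easy to demonstrate with a few numbers: multiplying a neighborhood by a constant leaves the EV differences between neighboring pixels (the local contrast) unchanged, while a gamma-style "shadow lift" compresses them:

```python
import numpy as np

patch = np.array([0.02, 0.04, 0.08])  # shadow detail, neighbors 1 EV apart

boosted = patch * 4.0   # +2 EV exposure brush (linear multiplication)
lifted  = patch ** 0.5  # gamma-style shadow lift, for comparison

# Local contrast = EV differences between neighboring pixels.
contrast = lambda p: np.diff(np.log2(p))

# boosted keeps the 1 EV steps intact; lifted halves them to 0.5 EV.
```

So the brush brightens the region without making it milky – which is exactly what the global gamma adjustment earlier failed to do.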
It is worth noting here that such an adjustment would probably be impossible (or lead to extreme banding / noise) with plain LDR bmp/jpeg images. Fortunately, Adobe Lightroom and Adobe Camera Raw (just like many other dedicated RAW processing applications) operate on RAW files that are able to capture 12-16 stops of dynamic range with proper detail! Think of them as HDR files (like EXR), just stored in a compressed format and containing data specific to the input device transform.
This is not the topic of this post, but I think it's worth mentioning that on God of War we implemented a similar capability for lighting artists – in the form of 3D shapes that we called "exposure lights". Funnily enough, they are not lights at all – just spherical, localized exposure boosters / dimmers. We used the dimming capability in, for example, the first scene of our demo – the Kratos reveal – to make him completely invisible in the darkness (there was too much GI 🙂), and we use the brightness-boosting capability in many scenes.
Automatic localized adjustments – shadows / highlights
Manual localized exposure adjustments are great, but still – manual. What if we could do it without reducing whole-image contrast, so:
a) automatically,
b) only when necessary,
c) while preserving local contrast?
It seems like the Holy Grail of exposure settings, but let's have a look at the tools already at photographers'/artists' disposal.
Enter… Shadows / Highlights! This is an image-manipulation option available in Adobe Photoshop and Lightroom / Camera Raw. Let's have a look at an image with normal exposure, but lots of bright and dark areas:
We can boost the shadows separately:
(Notice how bright the trees got – with a slight "glow" / "HDR look"; more about it later.)
And, when recovering the highlights, notice more detail and saturation in the sky.
And finally, both applied:
What is really interesting is that this is not a global operator and it doesn't just reshape the exposure curve. It's actually a contrast-preserving, very high quality localized tonemapping operator. Halo artifacts are barely visible (just some minor "glow")!
Here is an extreme example that hopefully shows those artifacts well (if you cannot see them due to the small size – open the images in a separate tab):
Interestingly, while the ACR/Lightroom HDR algorithm seems to work great until pushed to the extreme, the same Shadows/Highlights looks quite ugly in Photoshop at extreme settings:
Aaaargh, my eyes! 🙂 Notice the halos and the weird, washed-out saturation.
Is the reason just having less information to work with (bilateral weighting in HDR can easily distinguish between -10EV and -8EV, while 1/255 vs 2/255 provides almost no context/information), or a different algorithm? I don't know.
The actual algorithms used are way beyond the scope of this post – and still a topic I am investigating (trying to minimize artifacts for runtime performance and maximize image quality – no halos) – but I have been playing with two main categories of algorithms:
- Localized exposure (brightness) adjustments, taking just some neighborhood into account and using bilateral weighting to avoid halos. I would like to thank our colleagues at Guerrilla Games for inspiring us with an example of how to apply it at runtime.
- Localized histogram stretching / contrast adjustment – methods producing those high-structure-visibility, oversaturated, "radioactive" pictures.
There are obviously numerous techniques and many publications available – sadly, not many of them fit into a video game performance budget.
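To make the first category concrete, here is a rough sketch of a bilateral-weighted shadow lift – emphatically not the algorithm shipped in the game or used by Adobe, just an illustration of the idea under assumed parameters. The bilateral range weight keeps bright pixels from bleeding into dark neighborhoods, which is what causes halos:

```python
import numpy as np

def local_log_luminance(log_lum, radius=16, sigma_range=1.0):
    """Edge-aware (bilateral-style) blur of log-luminance.

    A naive O(N * radius^2) loop for clarity; a real-time version would run
    on a downsampled image with separable or grid-based approximations.
    """
    h, w = log_lum.shape
    out = np.empty_like(log_lum)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            window = log_lum[y0:y1, x0:x1]
            # Range weight: pixels far (in EV) from the center contribute little.
            wgt = np.exp(-((window - log_lum[y, x]) ** 2) / (2 * sigma_range**2))
            out[y, x] = np.sum(wgt * window) / np.sum(wgt)
    return out

def shadows_lift(lum, amount=0.5):
    """Boost exposure only where the *local* average luminance is dark."""
    log_lum = np.log2(np.maximum(lum, 1e-6))
    base = local_log_luminance(log_lum)
    # Darker neighborhoods get a larger EV boost; bright ones get none.
    # The -1 EV threshold is an assumed tuning value, for illustration.
    boost_ev = amount * np.maximum(0.0, -1.0 - base)
    return lum * (2.0 ** boost_ev)
```

Because the boost is a per-neighborhood exposure multiplication driven by the smoothed base, detail on top of it (the difference between a pixel and its local average) passes through unchanged – which is the local-contrast preservation discussed above.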
In “God of War”
Enough talking about photography and Adobe products – time to get back to God of War!
I implemented a basic shadows/highlights algorithm with artist-tweakable controls, trying to match the behavior of Lightroom. The first screenshot shows a comparison of the "shadows" manipulation against a regular, properly tonemapped screenshot with a filmic tonemapping curve.
I set it to a value that is relatively subtle, but still visible (artists would vary it from more subtle settings to more pronounced ones in gameplay-sensitive areas). Now the same with the highlights option:
One thing that you might notice here is haloing artifacts – they result from both the relatively strong setting and some optimizations and limitations of the algorithm (working at lower / partial resolution).
Finally, with both applied:
As I mentioned – here it is shown in a slightly exaggerated manner, showing the artifacts. However, it's much better than a regular "gamma" low-contrast setting:
The histogram shows the difference – while the gamma / contrast operator tends to "compact" the dynamic range and pack it all into midtones / grays, shadows/highlights operations preserve local contrast, saturation, and some information about the darkest and brightest areas of the image.
Why does localized exposure preserve contrast and saturation? The main difference is that gamma becomes a scale in logarithmic space, scaling the whole histogram, while exposure/scale becomes just a linear shift (more about it in part 2), moving the under-/over-exposed parts into the visible range with their histogram shape intact.
You can check the final image (with a bit more subtle settings) here:
To sum up – I don't think that the problems of exposure and dynamic range in real-time rendering are solved. Sometimes scenes rendered using realistic reference values have way too large a dynamic range – just like photographs.
We can fix it with complicated adjustments of the lighting (like they do on movie sets), with localized exposure adjustments (3D "exposure lights"), or with simple "procedural" image-space shadows/highlights controls.
The possible solutions depend heavily on the scenario. For example – if you can cut the camera, you have many more options than when it is 100% free and unconstrained, with zero cuts. It also depends on how much budget you have – both in terms of milliseconds to spend on extra lights and in terms of lighting artists' time.
Sometimes a single slider can make a scene look much better, and while localized exposure / localized tonemapping has its own problems, I recommend adding it to your artists' toolset to make their lives easier!
SIGGRAPH 2010 Course: "Color Enhancement and Rendering in Film and Game Production" – http://renderwonk.com/publications/s2010-color-course/
"Advanced Techniques and Optimization of HDR Color Pipelines", Timothy Lottes – http://gpuopen.com/gdc16-wrapup-presentations/