Major C#/.NET graphics framework update + volumetric fog code!

As I have already promised too many times, here comes a major CSharpRenderer framework update!

As always, all code is available on GitHub.

Note that the goal is still the same – not to write the most beautiful or fastest code, but to provide a prototyping playground / framework for hacking and having fun, with iteration times approaching zero. :) It will still undergo some major changes.

Apart from the volumetric fog code serving as an example for my Siggraph talk (which is not in perfect shape code-quality-wise – it is meant to be a quickly written demo of the technique; note also that this is not the code used for the shipped game, it is just a demo – the original code had some NDA’d, console-specific optimizations), the other major changes cover:

“Global” shader defines visible from code

You can define a constant as a “global” one in a shader and immediately have it reflected on the C# side after changing / reloading it. This way I removed some data / code duplication and potential for mistakes.

Example:


// shader side
#define GI_VOLUME_RESOLUTION_X 64.0 // GlobalDefine

// C# side
m_VolumeSizeX = (int)ShaderManager.GetUIntShaderDefine("GI_VOLUME_RESOLUTION_X");
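For context, a minimal sketch of how such reflection can be implemented on the C# side – scanning the shader source for the // GlobalDefine marker. The regex and names here are my illustration, not necessarily the framework’s actual internals:

using System.Collections.Generic;
using System.Globalization;
using System.Text.RegularExpressions;

static class ShaderDefineParser
{
    // Matches e.g.: #define GI_VOLUME_RESOLUTION_X 64.0 // GlobalDefine
    static readonly Regex s_GlobalDefineRegex = new Regex(
        @"#define\s+(\w+)\s+([0-9.]+)\s*//\s*GlobalDefine");

    public static Dictionary<string, float> ParseGlobalDefines(string shaderSource)
    {
        var defines = new Dictionary<string, float>();
        foreach (Match m in s_GlobalDefineRegex.Matches(shaderSource))
        {
            defines[m.Groups[1].Value] =
                float.Parse(m.Groups[2].Value, CultureInfo.InvariantCulture);
        }
        return defines; // re-run on every shader hot-reload to stay in sync
    }
}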

Derivative maps

Based on an old but excellent post by Rory Driscoll. I didn’t see much sense in computing tangent frames during mesh preprocessing for the needs of such a simple framework. I used the “hack” of treating normal maps as a derivative map approximation – it doesn’t really matter in such a demo case.

“Improved” Perlin noise textures + generation

Just some code based on the state-of-the-art article from GPU Pro by Simon Green. Used in volumetric fog for an animated, procedural effect.

Very basic implementation of BRDFs

GGX specular, based on a very good post by John Hable about optimizing it.

Note that the lighting code is a bit messy right now; a major clean-up of it is my next task.

 

Minor changes include:

  • UI code clean-up and dynamic UI reloading/recreating after constant buffer / shader reload.
  • Major constants renaming clean-up.
  • Actually fixing structured buffers.
  • Some simple basic geometric algorithms I found useful.
  • Adding shaders to the project (I actually had added them – no idea why they didn’t make it into the first commit…).
  • Some more easy-to-use operations on context (blend state, depth state etc.).
  • Simple integers supported in constant buffer reflection.
  • Another type of temporal AA – accumulation-based; it trails a bit. I will later try to apply some ideas from Epic’s excellent UE4 temporal AA talk.
  • Time-delta based camera movement (well, yeah…).
  • Fixed-FPS clamp – my GPU was getting hot and loud. :)
  • More use of Lua constant buffer scripting – it is very handy and serves its purpose very well.
  • Simple basis for “particle” rendering based on vertex shaders and GPU buffer objects.
  • Some stupid animated point light.
  • A simple environment BRDF approximation by Dimitar Lazarov from Black Ops 2.

Future work

Within the next few weeks I should update it with:

  • Rewriting post-effects, tone-mapping etc.
  • Adding GPU debugging
  • Improving temporal techniques
  • Adding naive screen-space reflections and an env cube-map
  • Adding proper area light support (should work super-cool with volumetric fog!)
  • Adding local lights shadows

Siggraph 2014 talk slides are up!

As promised during my talk, I have just added the Siggraph 2014 Advances in Real-Time Rendering course slides – check them out on my Publications page. Some extra future ideas I didn’t manage to cover in time are in the bonus slides section, so be sure to look at those as well.

They should also be online soon on the “Advances in Real-Time Rendering” web page. When they land there, check out the whole page, as there was lots of amazingly good and practical content in the course this year! Thanks again to Natalya Tatarchuk for organizing the whole event.

…Also I promised that I will release some source code soon, so stay tuned! :)


Voigtlander Nokton Classic 40mm f1.4 M on Sony A7 Review

As promised, here is my delayed review of the Voigtlander Nokton Classic 40mm f/1.4 M, used on the Sony Alpha A7. First I’m going to explain some “mysterious” aspects of this lens (there are lots of questions about them on the internet!).

Why 40mm?

So, first of all – why such a weird focal length as 40mm, when there are tons of great M-mount 35mm and 50mm lenses? :)
I’ve always had problems with “standard” and “wide-standard” focal lengths. Honestly, 50mm feels too narrow. It’s great for neutral upper-body or full-body portraits and shooting in open outdoor environments, but definitely limiting in interiors and for situational portraits.
In theory it was supposed to be a “neutral” focal length, similar to human perception of perspective, but it is a bit narrower. So why are there so many 50mm lenses, and why are they considered standard? Historical reasons and optics – they are extremely easy to produce and to correct for all kinds of optical problems (distortion, aberration, coma etc.), and they require fewer optical elements than other kinds of lenses to achieve great results.
On the other hand, 35mm usually catches too much of the environment and photos get a bit too “busy”, while it’s still not a true wide-angle lens for amazing city or landscape shots.
40mm feels just right as a standard lens. Lots of people recommend against 40mm on rangefinders, as Leica and similar don’t have framelines for 40mm. But on a digital full-frame mirrorless with a great-performing EVF? No problem!
Still, this is just personal preference. You must decide on your own whether you agree, or maybe prefer something different. :) My advice on picking focal lengths is always the same – spend a week taking many photos in different scenarios using a cheap zoom kit lens. Later check the EXIF data and see which focal lengths you used for the photos you enjoy the most.
Great focal length for daily “neutral” shooting.

What does it mean that this lens is “classic”?

There is a lot of BS on the internet about “classic” lens design. Some people imply that it means the lens is “soft in highlights”. Obviously this makes no sense, as sharpness is not a function of brightness – a lens is either soft or sharp. It could be transmittance problems wrongly interpreted – but what’s the truth?
Classic design usually means lens designs that relate to the historical designs of the early 20th century. Lenses were designed this way before the introduction of complex coatings and numerous low-dispersion / aspherical elements. Therefore they have a relatively low number of elements – without modern multi-coating, according to the Fresnel equations, at every contact point between glass and air some light transmission was lost and light got partially reflected. The lack of proper lens coating resulted not only in poor transmission (less light reaching the film / camera sensor) and lower contrast, but also in flares and various other artifacts coming from light bouncing around inside the camera. Therefore the number of optical elements and optical groups was kept rather low. With a low number of optical elements it is impossible to fix all lens problems – like coma, aberration, dispersion or even softness.
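To put rough numbers on it: at normal incidence an uncoated glass–air surface (n ≈ 1.5) reflects R = ((n − 1) / (n + 1))² = (0.5 / 2.5)² ≈ 4% of the light, so a design with, say, 10 glass–air surfaces transmits only about 0.96^10 ≈ 66% – which is why keeping the element count low mattered so much before coatings.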
“Classic” lenses were also used on rangefinders that had a quite long minimum focusing distance (usually 1m). All these disadvantages had a good side effect – lenses designed this way were much smaller.
And while the Voigtlander Nokton Classic is based on a “classic” lens design, it has modern optical element coatings and a slightly higher number of optical elements, and it keeps a very small size and weight while fixing some of those issues.
Optical elements – notice how close pieces of glass are together (avoiding glass/air contact)

What’s the deal with Single / Multi Coating?

I mentioned the effect of lens coating in the previous section. For some reason, Voigtlander decided to release both a truly “classic” version with single, simple coating and a multi-coated version. Some websites try to explain it: a) single coating is cheaper, b) some contrast and transmission loss is not that bad when shooting on B&W film, c) flaring can be a desired effect. I understand this reasoning, but if you shoot anything in color, stick to the multi-coated version – no need to lose any light!
Even with Multi-coating, flaring of light sources at night can be a bit strong. Notice quite strong falloff and small coma in corners.

Lens handling

Love how classic and modern styles work great together on this camera / lens combination

The lens handles amazingly well on the Sony A7. With the EVF and monitor it’s really easy to focus even at f/1.4 (although it takes a couple of days of practice). The aperture ring and focus ring work super smoothly. The size is amazing (so small!) even with the adapter – an advantage of the M-mount: its lenses were designed for a small distance to the film plane. Some people mention problems on the Sony A7/A7R/A7S with purple coloring in the corners with wider-angle Voigtlander lenses, due to the grazing angle between the light and the sensor – fortunately that’s not the case with the Nokton 40mm f/1.4.
The only disadvantage is that sometimes, with my eye at the EVF, I “lose” the focus tab and cannot locate it. Maybe it just takes some time to get used to.
In general, it is a very enjoyable and “classic” experience, and it’s fun just to walk around with the camera with the Nokton 40mm on.

Image quality

I’m not a pixel-peeper and won’t analyze every micro-aspect on image crops or measurements – just conclusions from everyday shooting. The copy I have (remember that every lens copy can differ!) is very sharp – it has quite decent sharpness even at f/1.4 (although it is extremely easy to lose focus with only a slight movement…). Performance at night is just amazing – a great lens for wide-open f/1.4 night photos – you don’t have to pump up the ISO or fight with long shutter speeds – just enjoy photography. :)
Pin-sharp at f/1.4 with nice, a bit busy bokeh

Higher apertures = corner to corner sharpness

Bokeh is a bit busy, gets “swirly” and squashed, and can sometimes be distracting – but I like it this way; it depends on personal preference. At f/1.4 and 40mm it can almost melt the background away. Some people complain about purple fringing (spectrochromatism) in the bokeh – something I wrote about in my post about bokeh scatter DoF. I didn’t notice it on almost any of my pictures, and on the one where I did, I removed it with a single click in Lightroom – definitely not that bad.
Bokeh

At larger apertures bokeh gets quite “swirly”. Still lots of interesting 3D “pop”.

There is definitely some light fall-off at f/1.4 and f/2.0, but I never mind that kind of artifact. Distortion is negligible in regular shooting – even for architecture.
General contrast and micro-contrast are nice, and there is this “3D” look to many photos. I really don’t understand the complaints – I don’t see a big difference compared to “modern”-design lenses. But I have never used the latest Summicron/Summilux, so maybe I haven’t seen everything. ;)
Color rendition is very neutral – no visible problematic color casts.
Performance is a bit worse in the corners – still quite sharp, but with some visible coma (squashing of the image in the plane perpendicular to the radius).
Some fall-off and coma in corners. Still a pretty amazing night photo – Nokton is a truly deserved name.

Unfortunately, even with multi-coating, there is some flaring at night from very bright light sources. Fortunately, I didn’t notice any of the ghosting that often comes with it.

Disadvantages

So far I have one big problem with this lens – the close focus distance of 0.7m. It rules out many tricks with perspective on close-ups and any kind of even semi-macro photography (photos of food at a restaurant). While at f/1.4 you could have amazingly shallow DoF and wide bokeh, that’s not the case here, as you cannot focus any closer… It can even be problematic for half-portraits. A big limitation and a pity – otherwise the lens would be perfect for me – but on the other hand such a focus range contributes to the smaller lens size. As always – you cannot have only advantages (quality, size & weight, aperture and, in this case, close-focus distance). Some Leica M lenses have a minimum focus distance of 1m – I can’t imagine shooting with those…

Recommendations

Do I recommend this lens? Oh yes! It’s definitely a great buy for any classic-photography lover. You can use it on your film rangefinder (especially if you own a Voigtlander Bessa) and on most digital mirrorless cameras. Great image quality, super pleasant handling, acceptable price – if you like 40mm and fast primes, it’s your only option. :)

Poisson disk/square sampling generator for rendering

I have just submitted a small new script to GitHub – a Poisson-like distribution sampling generator suited for various typical rendering scenarios.

Unlike other small generators available, it supports multiple sampling patterns – disk, disk with a central tap, square, and repeating grid.

It outputs ready-to-use (copy & paste) patterns for both HLSL and C++ code, and it plots the patterns on very simple graphs.

The generated sequence has the property of maximizing the distance of every next point from the previous points in the sequence. Therefore you can use partial sequences (for example only half of the samples, or a few samples selected by branching) and still have proper sampling function variance. It can be useful for various importance sampling and temporal refinement scenarios – or for your DoF (branching on CoC).
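The underlying idea is essentially best-candidate sampling. A minimal C# sketch of that principle (the actual generator is the Python script described below, and it additionally averages and re-optimizes over multiple iterations – this shows just the core greedy step):

using System;
using System.Collections.Generic;
using System.Numerics;

static class BestCandidate
{
    // Each new point is the candidate with the largest minimum (squared)
    // distance to all previously accepted points - so any prefix of the
    // sequence is itself a reasonably well-distributed sampling pattern.
    public static List<Vector2> Generate(int numPoints, int candidatesPerPoint, Random rng)
    {
        var points = new List<Vector2>();
        for (int i = 0; i < numPoints; ++i)
        {
            Vector2 best = Vector2.Zero;
            float bestScore = float.MinValue;
            for (int c = 0; c < candidatesPerPoint; ++c)
            {
                var candidate = new Vector2((float)rng.NextDouble(), (float)rng.NextDouble());
                float minDistSq = float.MaxValue;
                foreach (var p in points)
                    minDistSq = Math.Min(minDistSq, Vector2.DistanceSquared(candidate, p));
                if (minDistSq > bestScore)
                {
                    bestScore = minDistSq;
                    best = candidate;
                }
            }
            points.Add(best);
        }
        return points;
    }
}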

Edit: I have also added an option to optimize sequences for cache locality. It is a very rough approximation, but should work for very large sequences over large sampling areas.

Usage

Just edit the options and execute the script: “python poisson.py”. :)

Options

Options are edited in code (I use it from Sublime Text and always launch it as a script, so sorry – no command-line parsing) and are self-describing.

# user-defined options
disk = False                # look for a Poisson-like distribution on a disk (center at 0, radius 1) instead of a square (0-1 on x and y)
squareRepeatPattern = True  # look for a "repeating" pattern, i.e. also maximize distances against pattern repetitions
num_points = 25             # number of points we are looking for
num_iterations = 16         # number of iterations in which we take the average minimum squared distance between points and try to maximize it
first_point_zero = disk     # should the first point be zero (useful if we already have such a sample) or random
iterations_per_point = 64   # iterations per point, trying to find a new point with a larger distance
sorting_buckets = 0         # if > 0, the sequence will be optimized for tiled cache locality in n x n tiles (x followed by y)

Requirements

This simple script requires a scientific Python environment such as Anaconda or WinPython. Tested with Anaconda.

Have fun sampling! :)


Sony A7 review

Introduction

This is a new post on one of my favourite “off-topic” subjects – photography. Quite recently (under 2 weeks ago) I bought a Sony A7 and wanted to share my first impressions and write a mini review.

Why did I buy a new piece of photo hardware? Well, my main digital camera for the last 3–4 years was the Fuji FinePix X100. I also owned some Nikon 35mm/FF DSLRs, but since my D700 (which I had bought used and cheaply, with an already high shutter count) broke beyond repair and I bought a D600, I have hardly used my Nikon gear. The D600 is a terrible camera with broken AF, wrong metering (exposes +/- 1EV at random – lots of post-processing at home) and tons of other problems, and honestly – I wouldn’t recommend it to anyone and I don’t use it anymore.

With the Fuji X100 I have a love & hate relationship. It has lots of advantages: great image quality for such a tiny size and an APS-C sensor. It is very small and looks like a toy camera (a serious advantage if you want to travel into not-really-safe areas or simply don’t want to attract too much attention and just enjoy taking photos). A bright f/2.0 lens and an interesting focal length (a good photographer friend of mine once told me that there are no interesting photos taken at focal lengths over 50mm, and while it was supposed to be a joke, I hope you get the point). Finally, a nice small built-in flash and an excellent fill-flash mode working great with the leaf shutter and short sync times – it literally saved thousands of portraits in bright sunlight and other holiday photos. On the other hand, it is slow, has lots of quirks in usage (why do I need to switch to macro mode to take a regular situational portrait?!), slow and inaccurate AF (you need to try a couple of times to take a photo, especially in low light…), it’s not pin-sharp, and the fixed 35mm-equivalent focal length can be quite limiting – too wide for standard shooting, too narrow for wide-angle shots.

For at least a year I had been looking around for alternatives / some additional gear and couldn’t find anything interesting enough. I looked into the Fuji X100s – but a slightly better AF and sensor alone wouldn’t justify such a big expense, plus software still has problems with X-Trans sensor pixel color reconstruction. I read a lot about the Fuji X-series mirrorless system, but going into a new system and buying all the new lenses is a big commitment – especially on APS-C. Finally, a quite recent option is the Sony RX1. It seemed very interesting, but Angelo Pesce described it quite well – it’s a toy (NO OVF/EVF???).

The Sony A7/A7R and the recent A7S looked like interesting alternatives and something that could compete with the famous Leica, so I looked into them, and after a couple weeks of research I decided to buy the cheapest and most basic one – the A7 with the kit lens. What do I need a kit lens for? Well, to take photos. I knew its IQ wouldn’t be perfect, but it’s cheap, not very heavy, and it’s convenient to have one just in case – especially until you complete your target lens set. After a few days of extensive use (a weekend trip to NYC, yay!) I feel like writing a mini review, so here we go!

Hero of this report – no, not me & sunburn! Sony A7 :) Tiny and works great.

I tested it with the kit lens (Sony FE 28-70mm f/3.5-5.6 OSS), the Nikkor 50mm f/1.4D and the Voigtlander Nokton 40mm f/1.4.


What I like about it

Size and look

This one is pretty obvious. A full-frame 35mm camera smaller than many mirrorless APS-C or famous Leica cameras! Very light, so I just throw it in a bag or backpack. My neck doesn’t hurt even after a whole day of shooting. Discreet when doing street photography. A nice style that is kind of a blend between modern and retro cameras – especially with M-mount lenses on: classic look and compact size. Really hard to beat in this area. :)

Love how classic and modern styles work great together on this camera

Image quality

Its full-frame sensor has amazing dynamic range at low ISOs. 24MP resolution – way too much for anyone except pros taking shots for billboards, but useful for cropping or for reducing high-ISO noise when downsizing. Very nice built-in color profiles and aesthetic color reproduction – I like them much better than the Adobe Lightroom ones. I hope I don’t sound like an audiophile, but you really should be able to see the effect of full frame and large pixel size on the IQ – just as there is a “medium-format look” even with mediocre scans, I believe there is a “full-frame look” better than APS-C or Micro 4/3.

Subtle HDR from a single photo? No problem with Sony A7 dynamic range.

IQ and amount of detail is amazing – even with MF, shot with the Voigtlander Nokton 40mm f/1.4

EVF and back display

Surprisingly pleasant to use – high resolution, good dynamic range, and fast. I was used to the Fuji X100’s laggy EVF (still useful at night or for precise composition) and on the Sony A7 I feel a huge difference. It switches between the EVF and the back display quite quickly, and the eye sensor works nicely. The back display can be tilted – I have already used that a couple of times (photos near the ground or above my head), a nice feature to have.

Manual focusing and compatibility with other lenses

This single advantage is really fantastic – I would buy this camera just for it. Plugging in Voigtlander or Nikon lenses was super easy; the camera automatically switched into manual focus mode and operated very well. Focusing with magnification and focus assist is super easy and really pleasant. It feels like all those old manual cameras – the same pleasure of slowly composing, focusing, taking your time and enjoying photography – but much more precise. With the EVF and DoF preview always on, you constantly think about DoF and its effect on composition, what will be sharp etc. To be honest, I have never taken such sharp photos in my life – almost none deleted afterwards. So you spend more time on taking the photo (which may not be acceptable to your friends, or to strangers asked to take a photo of you), but much less on post-processing and selection – again, kind of back to the roots of photography.

Photo of my wife, shot using the Nikkor 50mm f/1.4D with MF – no AF ever gave me such precise results…

I like the composition and focus in this photo – shot using manual focus on Nikkor 50mm 1.4D

Quality of kit lens and image stabilization

I won’t write a detailed review of the kit lens – but it’s acceptably sharp, has nice micro-contrast and color reproduction, you can correct distortion and vignetting easily in Lightroom, and it’s easy to take great low-light photos with relatively long exposure times thanks to the very good image stabilization. AF is usually accurate. While I don’t intend to use this lens a lot – I have much more fun with primes – I will keep it in my bag for sure, and it proves itself useful. The only downside is size (zoom FF lenses cannot be tiny…) – because it is surprisingly light!

Hand held photo taken using lens kit at night – no camera shake!

Speed and handling

Again, I probably feel so good about the Sony A7’s speed and handling because I’m moving from the Fuji X100 – but the ergonomics are great, it is fast to use and reacts quickly. The only disadvantage is how long it takes for the default photo preview and the EVF to show the image feed again – 2s is the minimum time selectable in the menu, way too long for me. There are tons of buttons, configured very wisely by default – changing ISO or exposure compensation without taking your eye off the camera is easy.

Various additional modes

A pro photographer probably doesn’t need a panorama mode, or a night mode that automatically combines many frames to decrease noise / camera shake / blur, but I’m not a pro photographer and I like those features – especially panoramas. Super easy to take, decent quality and no need to spend hours post-processing or relying on stitching apps!

In-camera panorama image

What I don’t like

Current native lenses available

The current native FE (“full-frame E-mount”) lens line-up is a joke. Apart from the kit lens there are only 2 primes (why is the 35mm only f/2.8 when it’s so big?) and 2 zoom lenses – all definitely overpriced and too large. :( There are some Samyang/Rokinon manual-focus lenses available (I played a bit with the 14mm f/2.8 on Nikon and it was cheap and good quality – but way too large). There are rumors of many first- and third-party (Zeiss, Sigma, maybe Voigtlander) lenses to be announced at Photokina, so we will see. For now one has to rely on adapters and manual focusing.

Lack of built-in or small external flash

A big problem for me. I very often use flash as fill light, and here it’s not possible. :( The smallest Sony flash, the HVL-F20AM, is currently not available (and not that small anyway).

Not too bad photo – but would have been much better with some fill light from a flash… (ok, I know – would be difficult to sync without ND filters / leaf shutter :) )

What could be better but is not so bad

Accessories

The system is very young, so I expect things to improve – but currently the availability of first- and third-party accessories (flashes, cases, screen protectors etc.) is way worse than for, say, the Fuji X-series system. I hope this changes in the coming months.

Not the best low light behavior

Well, maybe I’m picky and expected too much, as I take tons of night photos, and a couple years ago that was one of the reasons I wanted a full-frame camera. :) But for a 2014 camera, the A7’s high-ISO degradation of detail (even in RAW files! they are not a “true” raw sensor feed…), color and dynamic range is a bit too strong. The A7S is much better in this area. Also, the AF behavior is not perfect in low light…

Photo taken at night with Nikkor 50mm and f/1.4 – not too bad, but some grain visible and detail loss

Not best lens adapters

The adapters I have for Nikon and M-mount are OK. Their build quality seems acceptable and I haven’t seen any problems yet. But they are expensive – 50-200 dollars for a piece of metal/plastic? It would also be nice to have some information in EXIF – for example an option to manually specify the focal length, or aperture detection? Also, Nikon/Sony A-mount/Canon adapters are too big (they cannot be smaller due to the design of the lenses – the flange focal distance must match the DSLRs’) – what’s the point of having a small camera with big, unbalanced lenses?

Even with mediocre adapters can’t complain about MF lens handling and IQ

Kit zoom and tiny Nikkor 50mm 1.4D with adapter are too big… M-mount adapter and Voigtlander lens are much smaller and more useful.

Photo preview mode

I don’t really like where the magnification button is placed, nor that by default it magnifies a lot (to the 100% crop level). I didn’t see any setting to change it – I would expect progressive magnification and better button placement, like on Nikon cameras.

Wifi pairing with mobile

I don’t think I will use it a lot – but sometimes it could be cool for remote control. When I tried to set it up, it took me 5 minutes or so to figure it out – definitely not something to do when you just want to take a single nice photo with your camera placed on a bench at night.

 

What’s next?

In the next couple of days (hopefully before Siggraph, as afterwards I’ll have a lot more to write about!) I promise I will add, in separate posts:

  • More sample photos from my NYC trip
  • Voigtlander Nokton 40mm f/1.4 mini review – I’m really excited about this lens and it definitely deserves a separate review!

So stay tuned!


Hair rendering trick(s)

I didn’t really plan to write this post, as I’m quite busy preparing for Siggraph and enjoying the awesome Montreal summer, but after 3 similar discussions with developer friends I realized that the simple hair rendering trick I used during the prototyping stage at CD Projekt Red for The Witcher 3 and Cyberpunk 2077 (I have no idea if the guys kept it, though) is worth sharing, as it’s not really obvious. It’s not about hair simulation or content authoring – I’m not really competent to talk about those subjects and they are really well covered by AMD TressFX and NVIDIA HairWorks (plus I know that lots of game rendering engineers work on the topic as well), so check those out if you need awesome-looking hair in your game. The trick I’m going to cover improves the quality of the typical alpha-tested meshes used in deferred engines. Sorry, but no images in this post!

Hair rendering problems

There are usually two problems associated with hair rendering that lots of games and game engines (especially deferred renderers) struggle with:

  1. Material shading
  2. Aliasing and lack of transparency

The first problem is quite obvious – hair shading and material. Using standard Lambertian diffuse and Blinn/Blinn-Phong/microfacet specular models you can’t get the proper look of hair; you need a hair-specific, strongly anisotropic model. Some engines try to hack hair properties into the G-Buffer and use branching / material IDs to handle it, but as John Hable recently wrote in his great post about the need for forward shading – it’s difficult to get hair right by fitting those properties into a G-Buffer.

I’m also quite focused on performance, love low-level work and analyzing assembly, and it just hurts me to see the branches, tons of additional instructions (sometimes up to hundreds…) and registers used to branch between various materials in a typical deferred shading shader. I agree that the performance impact may not be significant compared to the bandwidth usage of fat G-Buffers and complex lighting models, but it is still a cost you pay for the whole screen, even though hair pixels don’t occupy much of the screen area.

One of the tricks we used on The Witcher 2 was faking hair specular using only the dominant light direction plus per-character cube-maps, applied as the “emissive” part of mesh lighting. It worked OK only because really great artists authored those shaders and cube-maps, but I wouldn’t call it an acceptable solution for any truly next-gen game.

Therefore hair really needs forward shading – but how can we do it efficiently, avoid paying the usual overdraw cost, and combine it with deferred shading?

The aliasing problem

A nightmare for anyone using alpha-tested quads or meshes with hair strands. Lots of games look just terrible because of hair aliasing (and the same applies to foliage like grass). Epic proposed fixing it with MSAA, but this definitely increases the rendering cost and doesn’t solve all the issues. I tried alpha-to-coverage as well, but the result was simply ugly.

Far Cry 3 and some other games used a screen-space blur on hair strands along the hair tangent, and it can improve quality a lot, but usually the end parts of hair strands either still alias or bleed some background onto the hair (or the other way around) in a non-realistic manner.

The obvious solution here is again to use forward shading and transparency, but then we face another family of problems: overdraw, composition with transparents, and transparency sorting. Again, AMD TressFX solved it completely by using order-independent transparency algorithms on just the hair, but the cost and the implementation effort can be too much for many games.

Proposed solution

The solution I tried and played with is quite similar to what Crytek described in their GDC 2014 presentation. I guess we prototyped it independently in a similar time frame (mid-2012?). The Crytek presentation didn’t dig too much into details, so I don’t know how much it overlaps, but the core idea is the same. Another good reference is this old presentation by Scheuermann from ATI at GDC 2004! Their technique was different and based only on a forward shading pipeline, not aimed at combining with deferred shading – but the main principle of multi-pass hair rendering, treating the transparent and opaque parts separately, is quite similar. Worth noting is that with DX11 and modern GPU-based forward lighting techniques, it has become much easier to do. :)

The proposed solution is a hybrid of deferred and forward rendering techniques that solves some problems of both. It is aimed at engines that still rely on alpha-tested strips for hair rendering and have smooth alpha transitions in the textures, where most hair strands are solid – not transparent and definitely not sub-pixel (if they are, forget about this technique and hope you have the perf for MSAA and even supersampling…). You also need some form of forward shading in your engine, but I believe that’s the only way to go for next gen… Forward+/clustered shading is a must for material variety and properly lit transparency – even in mainly-deferred rendering engines. I really believe in the advantages of combining deferred and forward shading for different rendering scenarios within a single rendering pipeline.

Let me first describe the proposed steps:

  1. Render your hair into the G-Buffer with full specular occlusion / zero specularity. Do alpha testing in your shaders with a reference value Aref close to 1.0 (artist-tweakable).
  2. Do your deferred lighting passes.
  3. Render a forward pass of hair specular with no alpha blending and z-testing set to “equal”. Do the alpha testing exactly as in step 1.
  4. Render a forward pass of hair specular and albedo for the transparent part of the hair, with alpha blending (alpha rescaled from the 0–Aref range to 0–1), an inverted alpha test (alpha < Aref) and a regular depth test.

This algorithm assumes that you use a regular Lambertian diffuse model for hair. You can easily swap it – feel free to modify steps 1 and 3: first draw black albedo into the G-Buffer, then add the different diffuse model in step 3.
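A rough sketch of the pass sequence from the engine side, in C#-style pseudocode. The context/state API below is hypothetical – it just illustrates which states (blend, depth test, alpha reference) each step needs:

const float alphaRef = 0.9f; // Aref, artist-tweakable, close to 1.0

// 1. Opaque hair core into the G-Buffer: alpha test (clip(alpha - alphaRef)),
//    zero specularity written, regular depth writes.
context.SetBlendState(BlendState.Opaque);
context.SetDepthState(DepthTest.LessEqual, depthWrite: true);
context.DrawMesh(hairMesh, hairGBufferShader);

// 2. Regular deferred lighting passes run here - hair gets diffuse only.

// 3. Forward specular on exactly the same pixels: same alpha test as step 1,
//    z-test EQUAL, additive blend, no depth writes.
context.SetBlendState(BlendState.Additive);
context.SetDepthState(DepthTest.Equal, depthWrite: false);
context.DrawMesh(hairMesh, hairForwardSpecularShader);

// 4. Transparent fringe: inverted alpha test (clip(alphaRef - alpha)),
//    alpha rescaled from [0, alphaRef] to [0, 1], alpha blending,
//    regular depth test, no depth writes.
context.SetBlendState(BlendState.AlphaBlend);
context.SetDepthState(DepthTest.LessEqual, depthWrite: false);
context.DrawMesh(hairMesh, hairForwardFringeShader);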

Advantages and disadvantages

There are lots of advantages to this trick/algorithm. Even with non-obvious hair mesh topologies I didn’t see any problems with alpha sorting – because the alpha-blended areas are small and usually on top of solid geometry. And because most of the rendered hair geometry writes depth values, it works OK with particles and other transparents. You avoid hacking your lighting shaders, branching, and hardcore VGPR counts. You get smooth, aliasing-free results and a proper, arbitrary shading model (no need to pack material properties). It also avoids any excessive forward-shading overdraw (z-testing set to equal, and later regular depth testing against an almost complete scene). While there are multiple passes, not all of them need to read all the textures (for example, there is no need to re-read albedo after step 1, the G-Buffer pass can use a different normal map, and there is no need to read the specular/gloss mask). The performance numbers I had were really good – hair usually covers a very small part of the screen except in cutscenes – and the proposed solution meant zero overhead / additional cost on regular mesh rendering or lighting.

Obviously, there are some disadvantages. First of all, there are 3 geometry passes for hair (you could get it down to 2 by combining steps 3 and 4, at the cost of some of the advantages). That can be too much, especially with spline/tessellation-based, very complex hair – but this is simply not an algorithm for such cases; they really do need more complex solutions… Again, see TressFX. There can be problems with the lack of alpha-blend sorting and, later, with combining with particles – but that depends a lot on the mesh topology and how much of it is alpha-blended. Finally, this many passes complicate the renderer pipeline, and debugging can be problematic as well.

 

Bonus hack for skin subsurface scattering

As a bonus, a description of how we hacked skin shading in The Witcher 2 in a very similar manner.

We couldn’t really separate our speculars from diffuse into 2 buffers (we already had way too many local lights and a big lighting cost; increasing BW in those passes wouldn’t have helped for sure). We didn’t have ANY forward shading in RedEngine at the time either! For skin shading I really wanted to do SSS without blurring either the albedo textures or the speculars. Therefore I came up with the following “hacked” pipeline:

  1. Render the skin texture with white albedo and zero specularity into the G-Buffer.
  2. During lighting passes, always write specular not modulated by specular color and material properties into the alpha channel (separate blending) of the lighting buffer.
  3. After all lights we had the diffuse response in RGB and the specular response in A – only for skin.
  4. Do a typical bilateral separable screen-space blur (Jimenez) on skin stencil-masked pixels. For masking skin I remember trying both 1 bit from the G-Buffer and a “hacked” test for zero specularity / white albedo in the G-Buffer – both worked well; I don’t remember which version we shipped though.
  5. Render the skin meshes again – multiplying the RGB of the blurred lighting pixels by albedo and adding specularity times the specular intensity (see the sketch right after this list).
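In shader terms, the recombination in step 5 boils down to something like the line below (a sketch; skinSpecColorMultiplier is my name for the global per-environment value mentioned in the next paragraph):

// diffuse SSS in RGB, blurred white specular response in A
finalColor.rgb = blurredLighting.rgb * albedo.rgb
               + blurredLighting.a * specularIntensity * skinSpecColorMultiplier;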

The main disadvantage of this technique is losing all specular color from lighting (especially visible in dungeons), but AFAIK there was a global, per-environment, artist-specified specular color multiplier for skin. A hack, but it worked. A second, smaller disadvantage was the higher cost of the SSS blur passes (more surfaces to read to mask the skin).

In more modern engines and on current hardware I honestly wouldn’t bother – use separate lighting buffers for the diffuse and specular responses instead – but I hope it can inspire someone to creatively hack their lighting passes. :)

References

[1] http://www.filmicworlds.com/2014/05/31/materials-that-need-forward-shading/

[2] http://udn.epicgames.com/Three/rsrc/Three/DirectX11Rendering/MartinM_GDC11_DX11_presentation.pdf

[3] http://www.crytek.com/download/2014_03_25_CRYENGINE_GDC_Schultz.pdf

[4] http://developer.amd.com/tools-and-sdks/graphics-development/graphics-development-sdks/amd-radeon-sdk/

[5] https://developer.nvidia.com/hairworks 

[6] “Forward+: Bringing Deferred Lighting to the Next Level”, Takahiro Harada, Jay McKee, and Jason C. Yang, https://diglib.eg.org/EG/DL/conf/EG2012/short/005-008.pdf.abstract.pdf

[7] “Clustered deferred and forward shading”, Ola Olsson, Markus Billeter, and Ulf Assarsson http://www.cse.chalmers.se/~uffe/clustered_shading_preprint.pdf

[8] “Screen-Space Perceptual Rendering of Human Skin”, Jorge Jimenez, Veronica Sundstedt, Diego Gutierrez

[9] “Hair Rendering and Shading”, Thorsten Scheuermann, GDC 2004


C#/.NET graphics framework on GitHub + updates

As promised, I have posted my C#/.NET graphics framework (more about it and the motivation behind it here) on GitHub: https://github.com/bartwronski/CSharpRenderer

This is my first GitHub submit ever and my first experience with Git, so it’s possible I didn’t do something properly – thanks for your understanding!

The list of changes since the initial release is quite big: tons of clean-up + some crash fixes in previously untested conditions, plus some features:

Easy render target management

I added helper functions to manage the lifetime of render targets and allow render target re-use. Using render target “descriptors” and the RenderTargetManager, you request a texture with all RT and shader resource views, and it is returned from a pool of available surfaces – or lazily allocated when no surface fitting the given descriptor is available. This saves some GPU memory and makes sure the code is 100% safe when changing configurations – no NULL pointers when enabling previously disabled code paths, adding new ones etc.
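A hypothetical usage sketch – the descriptor fields and method names here are illustrative, check the repository for the actual API:

// Request a pooled render target by descriptor; a matching surface is
// reused, or a new one is lazily allocated if none fits.
var descriptor = new RenderTargetDescriptor
{
    Width = 1280,
    Height = 720,
    Format = Format.R16G16B16A16_Float,
};
RenderTargetSet halfResLighting = RenderTargetManager.RequestRenderTarget(descriptor);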

I also added a very simple “temporal” surface manager – for every surface created with it, it stores N different physical textures for the requested N frames. All temporal surface pointers are updated automatically at the beginning of a new frame. This way you don’t need to hold state or ping-pong in your rendering pass code, and the code becomes much easier to follow, e.g.:

RenderTargetSet motionVectorsSurface = TemporalSurfaceManager.GetRenderTargetCurrent("MotionVectors");
RenderTargetSet motionVectorsSurfacePrevious = TemporalSurfaceManager.GetRenderTargetHistory("MotionVectors");
m_ResolveMotionVectorsPass.ExecutePass(context, motionVectorsSurface, currentFrameMainBuffer);

Cubemap rendering, texture arrays, multiple render target views

Nothing super interesting, but it allows much easier experimentation with algorithms like GI (see the following point). In my backlog there is a task to add support for geometry shaders and instancing for amplification of data for cubemaps (with proper culling etc.), which should speed it up by an order of magnitude, but it wasn’t my highest priority.

Improved lighting – GI baker, SSAO

I added 2 elements: temporally supersampled SSAO, and simple pre-baked global illumination + a fully GPU-based naive GI baker. When adding those passes I was able to really stress the framework and check whether it works as intended – and I can confirm that adding new passes was extremely quick, with iteration times close to zero – the whole GI baker took me just one evening to write.


GI is stored in very low-resolution, currently uncompressed volume textures – three 1MB R16 RGBA surfaces storing incoming flux as 2nd-order SH (not preconvolved with the cosine lobe – so not irradiance). There are some artifacts due to the low resolution of the volume (64 x 32 x 64 – which at 8 bytes per RGBA16 texel is exactly 1MB per surface), but at a cost of 3MB for such a scene I guess it’s good enough. :)

It is calculated by doing a cubemap capture at every voxel of the 3D grid, computing the incoming radiance for every texel and projecting it onto SH. I made sure it is energy conserving (or I hope so! ;) but it seems to converge properly), so N-bounce GI is achieved by simply feeding the previous N-1 bounce results into the GI baker and re-baking. I simplified it even a bit more (and improved baking times – it converges close to the asymptotic value faster), as the baker uses partial results, but with N -> infinity it should converge to the same value and be unbiased.
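For reference, a minimal sketch of the per-texel projection onto 2nd-order SH under those assumptions (standard real SH basis constants for bands 0 and 1; dir is the normalized texel direction and solidAngle its solid angle on the cubemap – this is my illustration, not the baker’s actual code):

using System.Numerics;

static class SHProjection
{
    // Accumulates one cubemap texel's radiance into 4 RGB SH coefficients.
    public static void AccumulateSH(Vector3 dir, Vector3 radiance, float solidAngle,
                                    Vector3[] shRGB /* 4 entries */)
    {
        const float y00 = 0.282095f; // Y_0,0 = 1 / (2 * sqrt(pi))
        const float y1  = 0.488603f; // Y_1,m scale = sqrt(3) / (2 * sqrt(pi))
        shRGB[0] += radiance * (y00 * solidAngle);
        shRGB[1] += radiance * (y1 * dir.Y * solidAngle);
        shRGB[2] += radiance * (y1 * dir.Z * solidAngle);
        shRGB[3] += radiance * (y1 * dir.X * solidAngle);
    }
}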

It contains “sky” ambient lighting pre-baked as well, but I will probably split those terms and store them separately, quite possibly at different storage resolutions. This way I could simply “normalize” the flux and make it independent of sun / sky color and intensity (which could then be applied at runtime). There are tons of other simple improvements (compressing textures, storing luma/chroma separately in different-order SH, optimizing the baker etc.) and I plan to add them gradually, but for now the image quality is very good (as for something without normal maps and speculars yet ;) ).

Improved image quality – tone-mapping, temporal AA, FXAA

Again, nothing super interesting – rather extremely simple and usually unoptimal code, just to help debug other algorithms (and make their presentation easier). Adding such features was a matter of minutes, and I can confirm that the framework so far succeeds at its design goal.

Constant buffer constants scripting

A feature that I’m not 100% happy with.

For me, when working on almost anything in games – from graphics and shader programming through materials/effects to gameplay scripting – the biggest problem is finding the proper boundary between data and code. Where should the split be? Should code drive data, or the other way around? Of the engines I have worked with (RedEngine, Anvil/Scimitar, Dunia, plus some very small experience just familiarizing myself with CryEngine, Unreal Engine 3 and Unity3D), every one put it in a different place.

Coming back to shaders: a usually tedious task is putting some stuff in engine-side code and some in the actual shaders, while both parts must match 100%. It not only makes it more difficult to modify things and add new properties, but also makes the code harder to read and follow when trying to understand an algorithm, as it is split between multiple files – not necessarily by functionality, but for example for performance (e.g. precalculating stuff on the CPU and passing it in constants).

Therefore my final goal would be to have one meta shader language, using meta decorators to specify the frequency of every code part – for example, one part executed per frame, another per viewport, per mesh, per vertex, per pixel etc. I want to go in this direction, but I didn’t want to get into writing parsers and lexers yet, so temporarily I used Lua (extremely fast to integrate and quite decently performing).

Example would be one of my constant buffer definitions:

cbuffer PostEffects : register(b3)
{
 /// Bokeh
 float cocScale; // Scripted
 float cocBias; // Scripted
 float focusPlane; // Param, Default: 2.0, Range:0.0-10.0, Linear
 float dofCoCScale; // Param, Default: 0.0, Range:0.0-32.0, Linear
 float debugBokeh; // Param, Default: 0.0, Range:0.0-1.0, Linear
 /* BEGINSCRIPT
 focusPlaneShifted = focusPlane + zNear
 cameraCoCScale = dofCoCScale * screenSize_y / 720.0 -- depends on focal length & aperture, rescale it to screen res
 cocBias = cameraCoCScale * (1.0 - focusPlaneShifted / zNear)
 cocScale = cameraCoCScale * focusPlaneShifted * (zFar - zNear) / (zFar * zNear)
 ENDSCRIPT */
};

We can see that 2 constant buffer properties are scripted – there is zero code on the C# side that calculates them; instead, a Lua script is executed every frame when we “compile” the constant buffer for use by the GPU.
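A minimal sketch of how such per-“compile” evaluation can look from C#, assuming an NLua-style binding (the actual framework may wire this differently, and a real implementation would reuse the Lua state instead of recreating it every frame):

using NLua;

using (var lua = new Lua())
{
    // Push the current Param / engine values into the Lua state...
    lua["focusPlane"] = focusPlane;
    lua["dofCoCScale"] = dofCoCScale;
    lua["zNear"] = zNear;
    lua["zFar"] = zFar;
    lua["screenSize_y"] = (double)viewportHeight;

    // ...run the BEGINSCRIPT/ENDSCRIPT body...
    lua.DoString(constantBufferScript);

    // ...and read the scripted constants back before uploading the buffer.
    cocBias = Convert.ToSingle(lua["cocBias"]);
    cocScale = Convert.ToSingle(lua["cocScale"]);
}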

UI grouping by constant buffer

A simple change to improve the readability of the UI. Right now the UI code is the most temporary, messy part and I will change it completely for sure, but for the time being I focused on usability.


Further hot-swap improvements

Right now everything in shader files and related to shaders is hot-swappable – constant buffer definitions, includes, constant scripts. I can’t imagine working without it anymore; it definitely helps to iterate faster.

Known issues / requirements

I have tested only the x64 version; the 32-bit one may not be configured properly and for sure is lacking proper DLL versions.

One known issue (checked on a different machine with Windows 7 / x64 / VS2010) is a runtime exception complaining about a missing “lua52.dll” – it is probably caused by the lack of the Visual Studio 2012+ runtime.

Future plans

While I update stuff every week/day in my local repo, I don’t plan to do any public commits (except for something either cosmetic or a serious bug/crash fix) until probably late August. I will be busy preparing for my Siggraph 2014 talk, and I plan to release the source code for the talk using this framework as well.
