Python as scientific toolbox – 8 months later

I started this blog with a simple post about my attempts to find a free Mathematica replacement for general scientific computing, with a focus on graphics. At that time I recommended scientific Python and the WinPython environment.
Many months have passed; I have used lots of numerical Python at home and a bit of Mathematica at work, and I would like to share my experiences – both good and bad – as well as some simple tips to increase your productivity. This is not meant to be a detailed description, guide or tutorial – so if you are new to Python as a scientific toolset, I recommend checking out the great Scientific Python 101 by Angelo Pesce before reading my post.
My post is definitely not exhaustive and is very personal – if you have different experiences or I got something wrong, please comment! :)

Use Anaconda distribution

In my original post I recommended WinPython. Unfortunately, I don't use it anymore, and at the moment I can definitely vote for Anaconda. One quite obvious reason is that I started using a MacBook Pro and Mac OS X – WinPython doesn't work there. I'm not a fan of having different working environments and different software on different machines, so I had to find something that works on both Windows and Mac OS X.

Secondly, I've had some problems with WinPython. It works great as a portable distribution (it's very handy to have it on a USB key), but once you want to make it an essential part of your computational environment, problems with its registration in the system start to appear. Some packages didn't want to install, others had problems updating, and there were version conflicts. I even managed to break the distro with desperate attempts to make one of the packages work.

Anaconda is great. It is super easy to install, has tons of packages, comes with an automatic updater and "just works". Its registration with the system is also good. Not all interesting packages are available through its packaging system, but so far I have found no conflicts with Python's pip, so you can work with both.

At the moment, my recommendation would be: if you have administrative rights on a computer, use Anaconda. If you don't (you are working on someone else's computer), or want to go portable, keep WinPython on your USB key – it might come in handy.

Python 2 / 3 issue is not solved at all

This one is a bit sad and ridiculous – a perfect example of what goes wrong in all kinds of open-source communities. When someone asks me whether they should get Python 2.7+ or 3.4+, I simply don't have an easy answer – I don't know. Some packages don't work with Python 3, others no longer work with Python 2. I don't feel there is any strong push for Python 3, for "compatibility / legacy reasons"… A very weird situation, and one that definitely blocks development of the language.

At the moment I use Python 2, but I try to use imports from __future__ and write everything to be compatible with Python 3, so I won't have problems if and when I switch. Still, I find the lack of push in the community quite sad – it really limits the development and improvement of the language.
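
As a minimal sketch of what I mean, these standard __future__ imports make Python 2.7 code behave much more like Python 3:

# Python 2.7 script written so that it also behaves like Python 3
from __future__ import division         # 1 / 2 == 0.5 instead of 0
from __future__ import print_function   # print() is a function, not a statement
from __future__ import absolute_import  # no implicit relative imports

print(1 / 2)  # prints 0.5 under both Python 2.7 and Python 3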

Use IPython notebooks

My personal mistake was that for too long I didn't use IPython and its amazing notebook feature. Check out this presentation – I'm sure it will convince you. :)

I was still doing the old-school code-execute-reload loop that was hindering my productivity. With Sublime Text and Python registered in the OS it is not that bad, but still, with IPython you get way better results. Notebooks provide interactivity that is maybe not as good as Mathematica's, but comparable – and much better than the regular software development loop. You can easily re-run code, change parameters, debug, read help, profile your code, and add nice text, TeX or image annotations. IPython notebooks are easy to share, store and come back to later.

IPython as a shell is also quite good by itself – even as an environment to run your scripts from (with handy profiling magics, help and debugging).
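
A few of the built-in IPython magics I use most often (shown with placeholder names), listed here just as a reminder:

%run my_script.py       # run a script inside the interactive session
%timeit np.dot(a, b)    # quick micro-benchmark of a single expression
%prun my_function()     # profile a call with the Python profiler
%debug                  # drop into the debugger after an exception
my_function?            # show the docstring / signature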

NumPy is great and very efficient…

NumPy is almost all you need for basic numerical work. SciPy's linear algebra and related packages (distance arrays, least-squares fitting and other regression methods) provide almost everything else. :) For things like Monte Carlo, numerical integration, pre-computing functions and many others, I found it sufficient and very fast. The slicing and indexing options may not be obvious at the beginning, but once you get some practice they are very expressive. Big volume operations can boil down to a single expression, with implicit loops over many elements executed internally in efficient C. If you have ever worked with Matlab / Octave you will feel very comfortable with it – to me it is definitely more readable than Mathematica's weird syntax. Interfacing with file operations and many libraries is also trivial – Python becomes expressive and efficient glue code.
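
As a small illustration of those implicit loops, here is a toy Monte Carlo estimate of pi written as whole-array expressions (just a sketch, not taken from any of my actual scripts):

import numpy as np

n = 1000000
points = np.random.rand(n, 2)                             # a million random 2D points in [0, 1)
inside = np.sum(np.sum(points * points, axis=1) <= 1.0)   # count points inside the unit quarter-disk
print(4.0 * inside / n)                                   # ~3.14, no explicit Python loop anywhere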

…but you need to understand it and hack around silent performance killers

On the other hand, using NumPy efficiently requires quite a deep understanding of how it works internally. This is obviously true of any programming language, environment or algorithm – but unfortunately, in the case of numerical Python it can be very counter-intuitive. I won't cover examples here (you can easily find numerous tutorials on NumPy optimization), but writing efficient code often means writing code that is neither very readable nor self-documenting. Sometimes there are absurd situations, like specialized functions performing worse than generic ones, or the need to write incomprehensible hacks (the funniest one was a suggestion to use complex numbers as the most efficient way to compute simple Euclidean distances)… Hopefully, after a couple of numerically heavy scripts you will understand when NumPy makes internal copies (and it makes them often!), that any Python-level iteration over elements will kill your performance, and that you need to rely on implicit loops, slicing, etc.
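
A small example of the kind of rewrite I mean – pairwise Euclidean distances, first with explicit Python loops and then with broadcasting. The broadcast version is orders of magnitude faster, but it also silently allocates a large temporary array, which is exactly the sort of hidden cost you need to be aware of:

import numpy as np

points = np.random.rand(2000, 2)

def pairwise_loop(p):
    # slow: explicit Python iteration over elements
    n = p.shape[0]
    d = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            d[i, j] = np.sqrt(np.sum((p[i] - p[j]) ** 2))
    return d

def pairwise_broadcast(p):
    # fast: all loops implicit in C, but creates a hidden (n, n, 2) temporary
    diff = p[:, np.newaxis, :] - p[np.newaxis, :, :]
    return np.sqrt(np.sum(diff * diff, axis=-1))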

There is no easy way to use multiple cores

Unfortunately, multithreading, multitasking and parallelism are simply terrible in Python. The language wasn't designed to be multitasked / multithreaded, and the Global Interpreter Lock, being part of the language design, makes it a problem that is almost impossible to solve. Even though most NumPy code releases the GIL, there is quite a big overhead from releasing it and waking other threads – you won't notice big speed-ups unless you have really huge volumes of work done in pure, single NumPy instructions. Every single line of Python glue code becomes a blocking, single-threaded path – and according to Amdahl's law, that makes any massive parallelism impossible. You can try to work around it using multiprocessing, but in that case it is definitely more difficult to pass and share data between processes. I haven't researched it exhaustively – but in any case, no simple, annotation-based solution (like in OpenMP / Intel TBB) exists.
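
For completeness, the multiprocessing route looks roughly like this (a minimal sketch with made-up work; note that arguments and results get pickled and copied between processes, which is exactly the data-sharing overhead I'm complaining about):

import numpy as np
from multiprocessing import Pool

def heavy_work(seed):
    # each call runs in its own process, so there is no GIL contention here
    rng = np.random.RandomState(seed)
    return float(np.sum(np.sqrt(rng.rand(1000000))))

if __name__ == '__main__':
    pool = Pool(processes=4)
    results = pool.map(heavy_work, range(8))  # inputs / outputs are pickled between processes
    pool.close()
    pool.join()
    print(sum(results))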

SymPy cannot serve as replacement for Mathematica

I have played with SymPy just a few times – it definitely is not a replacement for symbolic operations in Mathematica. It works fine for symbol substitution, trivial simplification or very simple integrals (like regular Phong normalization), but for anything more complex (normalizing a Blinn-Phong lobe… yeah) it doesn't work – after a couple of minutes (!) of calculation it produces no answer. Its syntax is also definitely not as friendly for interactive work as Mathematica's. So for symbolic work it's not a replacement at all and isn't very useful. One potential benefit is that it embeds nicely and produces nice-looking results in IPython notebooks – which can be good for sharing.
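
For reference, the kind of thing SymPy does handle instantly – a Phong-style normalization integral over the hemisphere, shown here with a concrete exponent (a small sketch, not taken from my actual notebooks):

import sympy as sp

theta, phi = sp.symbols('theta phi', positive=True)
n = 16  # concrete Phong exponent

# integrate cos(theta)**n over the hemisphere (solid-angle measure)
lobe = sp.cos(theta) ** n * sp.sin(theta)
result = sp.integrate(lobe, (theta, 0, sp.pi / 2), (phi, 0, 2 * sp.pi))
print(sp.simplify(result))  # 2*pi/17, i.e. 2*pi/(n + 1)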

No really good interactive 3D plotting

There is matplotlib. It works. It has tons of good features…

…But its interactive version is not embeddable in IPython notebooks, and 3D plotting is very slow and quite ugly. In 2D there is the beautiful Bokeh, which generates interactive HTML files, but there is nothing like that for 3D. Nothing on Mathematica's level.
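
For reference, basic (non-interactive) 3D plotting in matplotlib looks like this – it works, it is just slow and not very pretty for larger grids:

import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # registers the '3d' projection

x, y = np.meshgrid(np.linspace(-3, 3, 64), np.linspace(-3, 3, 64))
z = np.sin(np.sqrt(x * x + y * y))

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.plot_surface(x, y, z, rstride=1, cstride=1, cmap='coolwarm')
plt.show()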

I have played a bit with Vispy – if they manage to create a WebGL backend for IPython notebooks as good as the one they promise, I'm totally for it (even if I have to code the visualizations myself). Until then it is "only" an early-stage project for quickly mapping numerical Python data onto simple OpenGL code – but a very cool and simple one, so it's fun to play with anyway. :)

There are packages for (almost) everything!

Finally, while some Python issues are real and I feel they won't be solved in the near future (multithreading), the situation is very dynamic and changes a lot. Python is becoming a standard for scientific computing, and new libraries and packages appear every day. There are excellent existing ones and it's hard to find a topic that isn't covered yet. Image processing? Machine learning? Linear algebra? You name it. Just import the proper package and address the problem you are trying to solve, instead of wasting your time coding everything from scratch or integrating obscure C++ libraries.
Therefore I really believe it is worth investing your time in learning it and adapting it to your workflow. I wish it became the standard for many CS courses at universities, instead of commercial Matlab, poorly interfaced Octave, or professors asking students to write whole solutions in C++ from scratch. At least in Poland, those courses definitely need more focus on problems, solutions and algorithms, not on coding and learning languages…


New debugging options in CSharpRenderer framework

Hi, a minor update to my C#/.NET graphics rendering framework / playground just got submitted to GitHub. I implemented the following new features:

Surface debugging snapshots

One of the commenters asked me how to easily display the SSAO buffer for debugging – I had no easy answer (except for hacking shaders). I realized that debugging very often requires displaying various buffers that change / get overwritten over time – we cannot simply rely on grabbing such a surface at the end of the frame for display…

(Screenshot: surface debugging snapshots in CSharpRenderer)

Therefore I added an option to create various surface "snapshots" in code that get copied to a debug buffer when needed. They are copied at a given time only if the user requested such a copy. You can display RGB, A, or fractional values (useful for depth / world-position display). In the future there could be some options for linearization / sRGB, range stretching, clamping etc., but for now I didn't need them. :)

Its use is trivial – in code, after rendering information that you might want to debug (like SSAO and its blurs), write:


// Adding SSAO debug
SurfaceDebugManager.RegisterDebug(context, "SSAOMain", ssaoCurrent);

// Do SSAO H blurring (...)

// Adding SSAO after H blur debug
SurfaceDebugManager.RegisterDebug(context, "SSAOBlurH", tempBlurBuffer);

// Adding SSAO after V blur debug
SurfaceDebugManager.RegisterDebug(context, "SSAOBlurV", tempBlurBuffer);

No additional shader code is needed, and the debug display is handled at the "program" level. Passes get registered automatically and the UI is refreshed.

GPU debugging using UAVs

Very often when writing complex shader code (especially compute shaders) we would like to have some "printf" / debugging options. There are many tools for shader debugging (I personally recommend the excellent and open-source RenderDoc), but launching external tools often adds time overhead, and it may not be possible to debug everything in the case of complex compute shaders.

We would like to have "printf"-like functionality from the GPU. While no API provides it, we can use append-buffer UAVs, simply hack it in shaders, and later either display the values on screen or lock / read them back on the CPU. This is not a new idea.

I implemented a very rough and basic version of this functionality in the playground and made it work for both pixel shaders and compute shaders.

(Screenshot: UAV-based GPU debugging output in CSharpRenderer)

It doesn't require any regular code changes – just a shader file change plus switching the option to ON in the UI.

In a pixel shader one would write something like:

float3 lighting = finalLight.diffuse * albedo + finalLight.specular;

if (DEBUG_FILTER_VPOS(i.position, 100, 100))
{
 DebugInfo(i.position.xyz, lighting);
}

In a compute shader you would write, accordingly:

float4 finalOutValue = float4(lighting * scattering, scattering + absorbtion);

if (DEBUG_FILTER_TID(dispatchThreadID, 10, 10, 10))
{
 DebugInfo(dispatchThreadID, finalOutValue);
}

If you don't want to specify / filter values manually in the shader, you don't have to – you can override the position filter in the UI and use the DEBUG_FILTER_CHECK_FORCE_VPOS and DEBUG_FILTER_CHECK_FORCE_TID macros instead.

If you want to debug pixels, you can even set those filter values in the UI automatically from the last viewport click position (hacky – it returns coords only at full resolution, but it can be useful when trying to track down negative values / NaN sources).

Minor / other

  • Improved temporal AA a bit, based on luma clamping only – loosely inspired by Brian Karis's excellent UE4 Siggraph AA talk. :)
  • Added small dithering / noise to the final image – I quite like this subtle effect.
  • Extracted a global / program constant buffer – very minor, but could help with reorganizing code in the future.
  • Option to freeze the time – useful for debugging / comparison screenshots.

 


Updated Poisson-like generator with GUI and more

(Image: example Poisson-like sample pattern from the generator)

Just a super short note:

I updated my simple rendering-oriented Poisson-like pattern generator with:

  • Very simple GUI made in PyQt to make experimenting easier.
  • Option to generate a rotating disk (minimizing distance between rotated points) for things like Poisson bokeh / shadow-map PCF.
  • Better visualizations with guidelines.
  • …And optimized the algorithm a bit.

I'm definitely finished working on it (unless I find some bugs to fix) – it was done mostly to learn how to create Python GUIs quickly, and for fun. :)

I have also started writing a longer blog note about my experiences with Python as a scientific environment and a free / open-source alternative to Mathematica – what worked well, what didn't, and some tips & tricks to help you avoid my mistakes. :)

Stay tuned!


Major C#/.NET graphics framework update + volumetric fog code!

As I have already promised too many times, here comes a major CSharpRenderer framework update!

As always, all code available on GitHub.

Note that the goal is still the same – not to write the most beautiful or fastest code, but to provide a prototype playground / framework for hacking and having fun, with iteration times approaching zero. :) It will still undergo some major changes.

Apart from the volumetric fog code accompanying my Siggraph talk (which is not in perfect shape code-quality-wise – it is meant to be a quickly written demo of the technique; note also that this is not the code that was used for the shipped game, it is just a demo; the original code had some NDA'd and console-specific optimizations), the other major changes cover:

“Global” shader defines visible from code

You can define a constant as a "global" one in a shader and immediately have it reflected on the C# side after changing / reloading. This way I removed some data / code duplication and potential for mistakes.

Example:


// shader side

#define GI_VOLUME_RESOLUTION_X 64.0 // GlobalDefine

// C# side

m_VolumeSizeX = (int)ShaderManager.GetUIntShaderDefine("GI_VOLUME_RESOLUTION_X");

Derivative maps

Based on an old but excellent post by Rory Driscoll. I didn't see much sense in computing tangent frames during mesh preprocessing for the needs of such a simple framework. I used the "hack" of treating normal maps as a derivative-map approximation – it doesn't really matter in a demo case like this.

“Improved” Perlin noise textures + generation

Just some code based on the state-of-the-art article from GPU Pro by Simon Green. Used in the volumetric fog for an animated, procedural effect.

Very basic implementation of BRDFs

GGX specular, based on a very good post by John Hable about optimizing it.

Note that the lighting code is a bit messy right now; a major clean-up of it is my next task.

 

Minor changes added are:

  • UI code clean-up and dynamic UI reloading/recreating after constant buffer / shader reload.
  • Major constants renaming clean-up.
  • Actually fixing structured buffers.
  • Some simple basic geometric algorithms I found useful.
  • Adding shaders to the project (I actually had them added – no idea why they didn't make it into the first submit…).
  • Some more easy-to-use operations on context (blend state, depth state etc.).
  • Simple integers supported in constant buffer reflection.
  • Another type of temporal AA – accumulation-based, trails a bit – I will later try to apply some ideas from the excellent Epic UE4 AA talk.
  • Time-delta based camera movement (well, yeah…).
  • Fixed-FPS clamp – my GPU was getting hot and loud. :)
  • More use of Lua constant-buffer scripting – it is very handy and serves its purpose very well.
  • Simple basis for “particle” rendering based on vertex shaders and GPU buffer objects.
  • Some stupid animated point light.
  • Simple environment BRDF approximation by Dimitar Lazarov from Black Ops 2.

Future work

Within the next few weeks I should update it with:

  • Rewriting post-effects, tone-mapping etc.
  • Adding GPU debugging
  • Improving temporal techniques
  • Adding naive screen-space reflections and an env cube-map
  • Adding proper area light support (should work super-cool with volumetric fog!)
  • Adding local light shadows

Siggraph 2014 talk slides are up!

As promised during my talk, I have just added my Siggraph 2014 Advances in Real-Time Rendering slides – check them out on my Publications page. Some extra future ideas I didn't manage to cover in time are in the bonus slides section, so be sure to check it out.

They should also be online soon on the "Advances in Real-Time Rendering" web page. When they land there, check the whole page out, as there was lots of amazingly good and practical content in the course this year! Thanks again to Natalya Tatarchuk for organizing the whole event.

…Also, I promised I would release some source code soon, so stay tuned! :)


Voigtlander Nokton Classic 40mm f1.4 M on Sony A7 Review

As promised, here is my delayed review of the Voigtlander Nokton Classic 40mm f/1.4 M used on the Sony Alpha A7. First I'm going to explain some "mysterious" aspects of this lens (there are lots of questions about them on the internet!).

Why 40mm?

So, first of all – why such a weird focal length as 40mm, when there are tons of great M-mount 35mm and 50mm lenses? :)
I've always had problems with "standard" and "wide-standard" focal lengths. Honestly, 50mm feels too narrow. It's great for neutral upper-body or full-body portraits and shooting in open, outdoor environments, but definitely limiting in interiors and for situational portraits.
In theory, 50mm was supposed to be a "neutral" focal length, similar to human perception of perspective, but it is a bit narrower. So why are there so many 50mm lenses, and why are they considered standard? Historical reasons and optics – they are extremely easy to produce and to correct for all kinds of optical problems (distortion, aberration, coma etc.), and they require fewer optical elements than other kinds of lenses to achieve great results.
On the other hand, 35mm usually catches too much of the environment and photos get a bit too "busy", while it's still not a true wide-angle lens for amazing city or landscape shots.
40mm feels just right as a standard lens. Lots of people recommend against 40mm on rangefinders, as Leicas and similar cameras don't have any frame lines for 40mm. But on a digital full-frame mirrorless with a great-performing EVF? No problem!
Still, this is just personal preference. You must decide on your own whether you agree, or maybe prefer something different. :) My advice on picking focal lengths is always the same – spend a week taking many photos in different scenarios using a cheap zoom kit lens. Later, check the EXIF data and see what focal lengths you used for the photos you enjoy most.

Great focal length for daily “neutral” shooting.

What does it mean that this lens is “classic”?

There is a lot of BS on the internet about "classic" lens design. Some people imply it means the lens is "soft in the highlights". Obviously this makes no sense, as sharpness is not a function of brightness – a lens is either soft or sharp. It could be transmittance problems being misinterpreted, but what's the truth?
Classic design usually means lens designs related to historical designs from the early 20th century. Lenses were designed this way before the introduction of complex coatings and numerous low-dispersion / aspherical elements. Therefore, they have a relatively low number of elements – without modern multi-coating, following the Fresnel equations, at every contact point between glass and air there was transmission loss and light got partially reflected. The lack of proper lens coating resulted not only in poor transmission (less light getting to the film / camera sensor) and lower contrast, but also in flares and various other artifacts coming from light bouncing around inside the camera. Therefore the number of optical elements and optical groups was kept rather low. With a low number of optical elements it is impossible to fix all lens problems – like coma, aberration, dispersion or even sharpness.
"Classic" lenses were also used with rangefinders that had quite a long minimum focusing distance (usually 1m). All these disadvantages had a good side effect – lenses designed this way were much smaller.
And while the Voigtlander Nokton Classic is based on a "classic" lens design, it has modern optical-element coatings and a slightly higher number of optical elements, and it keeps a very small size and weight while fixing some of those issues.

Optical elements – notice how close pieces of glass are together (avoiding glass/air contact)

What’s the deal with Single / Multi Coating?

I mentioned the effect of lens coating in the previous section. For an unknown reason, Voigtlander decided to release both a truly "classic" version with a single, simple coating and a multi-coated version. Some websites try to explain it by saying that a) single coating is cheaper, b) some contrast and transmission loss is not that bad when shooting on B&W film, c) flaring can be a desired effect. I understand this reasoning, but if you shoot anything in color, stick to the multi-coated version – no need to lose any light!

Even with Multi-coating, flaring of light sources at night can be a bit strong. Notice quite strong falloff and small coma in corners.

Lens handling

Love how classic and modern styles work great together on this camera / lens combination

The lens handles amazingly well on the Sony A7. With the EVF and monitor it's really easy to focus even at f/1.4 (although it takes a couple of days of practice). The aperture ring and focus ring work super smoothly. The size is amazing (so small!) even with the adapter – an advantage of the M-mount, since M-mount lenses were designed for a small distance to the film. Some people mention problems on the Sony A7/A7R/A7S with purple coloring in the corners with wider-angle Voigtlander lenses, due to the grazing angle between the light and the sensor – fortunately that's not the case with the Nokton 40mm f/1.4.
The only disadvantage is that sometimes, with my eye at the EVF, I "lose" the focus tab and cannot locate it. Maybe it just takes some time to get used to?
In general, it is a very enjoyable and "classic" experience, and it's fun just to walk around with the camera with the Nokton 40mm on.

Image quality

I'm not a pixel-peeper and won't analyze every micro-aspect on crops or take measurements – just conclusions from everyday shooting. The copy I have (remember that every lens copy can differ!) is very sharp – it has quite decent sharpness even at f/1.4 (although with only a slight movement it is extremely easy to lose focus…). Performance is just amazing at night – it's a great lens for wide-open f/1.4 night photos – you don't have to pump the ISO or fight with long shutter speeds – just enjoy photography. :)

Pin-sharp at f/1.4 with nice, a bit busy bokeh

Higher apertures = corner to corner sharpness

Bokeh is a bit busy, gets "swirly" and squashed, and can sometimes be distracting – but I like it this way; it depends on personal preference. At f/1.4 and 40mm it can almost melt the backgrounds away. Some people complain about purple fringing (spectrochromatism) in the bokeh – something I wrote about in my post about bokeh scatter DoF. I didn't notice it in almost any of my pictures; in one I removed it with one click in Lightroom – definitely not that bad.

Bokeh

Even with mediocre adapters, I can't complain about MF lens handling.

At larger apertures bokeh gets quite “swirly”. Still lots of interesting 3D “pop”.

There is definitely some light fall-off at f/1.4 and f/2.0, but I never mind that kind of artifact. Distortion is negligible in regular shooting – even for architecture.
General contrast and micro-contrast are nice, and many photos have this "3D" look. I really don't understand the complaints, nor do I see a big difference compared to "modern"-design lenses – but I have never used the latest Summicron/Summilux, so maybe I haven't seen everything. ;)
Color rendition is very neutral – no visible problematic coloring.
Performance is a bit worse in the corners – still quite sharp, but with some visible coma (squashing of the image in the plane perpendicular to the radius).

Some fall-off and coma in the corners. Still a pretty amazing night photo – Nokton is a truly deserved name.

Unfortunately, even with multi-coating, there is some flaring at night from very bright light sources. Fortunately, I didn't notice any of the ghosting that often comes with it.

Disadvantages

So far I have one big problem with this lens – the minimum focus distance of 0.7m. It rules out many tricks with perspective on close-ups and any kind of even semi-macro photography (photos of food at a restaurant). While at f/1.4 you could have amazingly shallow DoF and wide bokeh, that's not the case here, as you cannot focus any closer… It can even be problematic for half-body portraits. It's a big limitation and a pity – otherwise the lens would be perfect for me – but on the other hand such a focus range contributes to the smaller lens size. As always, you cannot have only advantages (quality, size & weight, aperture, and in this case close-focus distance). Some Leica M lenses have a minimum focus distance of 1m – I can't imagine shooting with those…

Recommendations

Do I recommend this lens? Oh yes! It's definitely a great buy for any classic-photography lover. You can use it on your film rangefinder (especially if you own a Voigtlander Bessa) and on most digital mirrorless cameras. Great image quality, super pleasant handling, acceptable price – if you like 40mm and fast primes, it's your only option. :)

Poisson disk/square sampling generator for rendering

I have just submitted a small new script to GitHub – a Poisson-like distribution sampling generator suited for various typical rendering scenarios.

Unlike other small generators available, it supports many sampling patterns – disk, disk with a central tap, square, repeating grid.

It outputs ready-to-use (copy & paste) patterns for both HLSL and C++ code, and plots the pattern on very simple graphs.

The generated sequence has the property of maximizing the distance of every next point from the previous points in the sequence. Therefore you can use partial sequences (for example only half of the samples, or just a few samples selected by branching) and still get proper sampling-function variance. It could be useful for various importance-sampling and temporal-refinement scenarios – or for your DoF (branching on CoC).
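
For the curious, the core idea can be sketched in a few lines of NumPy – a simplified best-candidate loop that illustrates the "maximize distance to previous points" property. This is just an illustration, not the actual code of the script:

import numpy as np

def generate_sequence(num_points, candidates_per_point=64):
    points = [np.random.rand(2)]  # start from a random point in the unit square
    for _ in range(num_points - 1):
        cand = np.random.rand(candidates_per_point, 2)
        prev = np.array(points)
        # distance from every candidate to its nearest already-chosen point
        d = np.sqrt(((cand[:, None, :] - prev[None, :, :]) ** 2).sum(axis=-1)).min(axis=1)
        points.append(cand[np.argmax(d)])  # keep the candidate farthest from all previous points
    return np.array(points)

print(generate_sequence(25))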

Edit: I also added an option to optimize sequences for cache locality. It is a very rough approximation, but should work for very large sequences over large sampling areas.

Usage

Just edit the options and execute the script: "python poisson.py". :)

Options

Options are edited in code (I use it from Sublime Text and always launch it as a script, so sorry – no command-line parsing) and are self-describing.

# user defined options
disk = False # this parameter defines if we look for Poisson-like distribution on a disk (center at 0, radius 1) or in a square (0-1 on x and y)
squareRepeatPattern = True # this parameter defines if we look for "repeating" pattern so if we should maximize distances also with pattern repetitions
num_points = 25 # number of points we are looking for
num_iterations = 16 # number of iterations in which we take average minimum squared distances between points and try to maximize them
first_point_zero = disk # should be first point zero (useful if we already have such sample) or random
iterations_per_point = 64 # iterations per point trying to look for a new point with larger distance
sorting_buckets = 0         # if this option is > 0, then sequence will be optimized for tiled cache locality in n x n tiles (x followed by y) 

Requirements

This simple script requires a scientific Python environment like Anaconda or WinPython. Tested with Anaconda.

Have fun sampling! :)
