Update: I gave an update to this talk following the completion of my project on the 19th May. The revised slides are available here
So I’m about to embark on my final year research project in order to complete my BSc Computing degree at the University of Leeds. I have decided to study the pre-computation of globally illuminated isosurfaces and the benefits of using 4D flattened light transport to decouple this illumination from isosurface generation1.
After agonising over a few papers on this subject, I’m rather wishing I’d chosen to study something a little easier to comprehend! On the bright side, at least the exam period has ended and I’m (relatively) free to dedicate every waking hour to the task at hand.
This post however is not an introspective into my current personal freak out, but an attempt to collate my initial thoughts on this fascinating subject and hopefully present an interesting introduction aimed at your average curious geek. Let’s jump in …
First off, what is an isosurface?
An isosurface is simply a 3D contour. Contours are not limited to 2D drawings; 3D data contains contours in the form of isosurfaces, which can reveal much about the underlying phenomena. However, unlike the 2D case, drawing multiple contours is not usually helpful, as one contour can obscure others. Instead, users are given interactive tools and are expected to explore the data by selecting various contour values (isovalues). Another way of thinking about an isosurface is as the level set of a continuous function whose domain is 3D space. Isosurfaces are mainly used for scientific datasets, such as imagery generated from CT scans, where surfaces of constant density are extracted. By interactively changing the isovalue, we can reveal the internal structures of a scalar dataset, such as bones or the internal mechanics of an engine.
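As a concrete (if toy) illustration of the level-set view, consider the scalar field f(x, y, z) = x² + y² + z²: its level set at isovalue 1 is the unit sphere. A minimal Python sketch:

```python
import math

# A toy scalar field: squared distance from the origin. Its level set at
# isovalue v is a sphere of radius sqrt(v) -- that sphere is the isosurface.
def f(x, y, z):
    return x * x + y * y + z * z

isovalue = 1.0

# Any point p with f(p) == isovalue lies on the isosurface;
# here, a point on the unit sphere.
p = (math.sqrt(1 / 3), math.sqrt(1 / 3), math.sqrt(1 / 3))
print(abs(f(*p) - isovalue) < 1e-9)  # True
```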
To generate my isosurfaces, I will be using the Marching Cubes algorithm. This method marches through the dataset, taking each set of 8 adjacent vertices (together forming a voxel, or cell) and determining which edges are crossed by the surface - interpolating along each crossed edge to find exactly where the contour intersects it. With these interpolated points we can draw a set of triangles for each cell such that, when combined, they form a complete surface.
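The edge interpolation step can be sketched in a few lines of Python (a simplified illustration of one piece of the algorithm, not a full Marching Cubes implementation):

```python
def interpolate_edge(p0, p1, v0, v1, isovalue):
    """Find where the isosurface crosses the cell edge from p0 to p1.

    v0 and v1 are the scalar values at the edge endpoints; assuming the
    field varies linearly along the edge, the surface crosses where the
    interpolated value equals the isovalue.
    """
    t = (isovalue - v0) / (v1 - v0)
    return tuple(a + t * (b - a) for a, b in zip(p0, p1))

# Edge from (0,0,0) to (1,0,0) with values 0 and 1: an isovalue of 0.25
# crosses a quarter of the way along the edge.
print(interpolate_edge((0, 0, 0), (1, 0, 0), 0.0, 1.0, 0.25))  # (0.25, 0.0, 0.0)
```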
Figuring out which edges are crossed requires a lookup table of all possible triangle combinations. In total there are 256 (2⁸) possible combinations, although this reduces to 15 when we count only topologically unique cases. Of the 15 cases, 6 are ambiguous, which can lead to holes in the generated surface. This problem can be solved using a variety of methods2, including using a lookup table where each case has been chosen using a consistent method.
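The lookup is typically driven by an 8-bit index, one bit per cell corner, which is where the 256 cases come from. A minimal sketch (the bit ordering here is illustrative; real implementations follow the ordering of their particular table):

```python
def cube_index(corner_values, isovalue):
    # Build an 8-bit index: one bit per cell corner, set when that corner
    # lies below the isovalue (i.e. "inside" the surface). The resulting
    # value 0..255 selects an entry in the triangle lookup table.
    index = 0
    for i, value in enumerate(corner_values):
        if value < isovalue:
            index |= 1 << i
    return index

print(cube_index([0.2] * 8, 0.5))  # 255: all corners inside, no triangles
print(cube_index([0.9] * 8, 0.5))  # 0: all corners outside, no triangles
```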
The need for global illumination
Global illumination methods such as ray-tracing and photon mapping have traditionally been confined to use in feature films (think Pixar) and artistic renderings. The reason for this is the computational cost of generating such high quality illumination - a typical frame in a Pixar movie takes an average of 6 hours to render3 (per workstation). With increases in available computing power and the emergence of techniques to precompute high quality illumination, real-time globally illuminated rendering using consumer hardware has come within the realms of possibility.
This presents a wonderful opportunity for scientific and medical researchers, among others, who currently use local illumination methods (such as Phong shading) to visualise their datasets. Whilst the more advanced local illumination methods are relatively good for simple models, they are easily surpassed by global illumination when complex models are lit. Depth perception is vastly improved over local illumination, and realism is improved by accurately modelling lighting characteristics such as diffuse inter-reflection, caustics and penumbrae.
The most realistic lighting solution possible is formulated using the rendering equation - an integral equation that we usually compute recursively. Each global illumination method solves the equation differently, but all involve evaluating expensive ray-casting functions, since each outgoing scattered ray needs to be tested against every object in the scene. The scattering of the rays is computed using the bidirectional reflectance distribution function (BRDF), which determines the material characteristics of each surface.
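For reference, the rendering equation in its common hemispherical form, with $L_o$ the outgoing radiance, $L_e$ the emitted radiance and $f_r$ the BRDF:

```latex
L_o(x, \omega_o) = L_e(x, \omega_o)
  + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\,
    (\omega_i \cdot n)\, \mathrm{d}\omega_i
```

The recursion arises because the incoming radiance $L_i$ at a point is itself the outgoing radiance $L_o$ of whatever surface the incoming ray came from.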
Real time?! You must be kidding…
So with all this prohibitively expensive computation needed to light an isosurface, wouldn't it be great if we could somehow precompute all that work and then apply it to our dynamically generated surfaces when we want to visualise and interact with them? This is a topic many researchers have been working on in recent years - the primary paper I am studying describes a new method for generating a 3D illumination grid which we can apply in real time to an isosurface to simulate global illumination.
The trick to decoupling illumination from surface generation is to light the scalar dataset in four dimensions, generating the illumination for all possible 3D isosurfaces within. This is analogous to lighting a 2D function in three dimensions (lighting its graph) to generate a 2D texture of illumination values, which can then be texture mapped onto the original function, thereby lighting the infinite series of isolines (contours) that make up the 2D function. Lighting all isolines simultaneously is not exactly the same as lighting each one individually, so to compensate we flatten all ray scattering to operate within a plane. This stops stray reflections from the illumination of one isoline affecting the illumination of another. This flattening, which lets us effectively light all contours in one operation, is the key to the technique. Now that we've explained it for two dimensions…
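To make the flattening concrete in the 2D analogy, here is a hypothetical sketch (an illustration of the idea, not the paper's exact formulation): each isoline lies in a plane of constant height, so a scattered ray can be flattened by discarding its component along the height axis and renormalising.

```python
import math

def flatten_direction(direction):
    # Hypothetical flattening: drop the component of a scattered ray
    # along the height axis (z), so the ray stays within the plane of
    # a single isoline, then renormalise to a unit direction.
    dx, dy, dz = direction
    norm = math.hypot(dx, dy)
    if norm == 0.0:
        return None  # the ray pointed straight along the height axis
    return (dx / norm, dy / norm, 0.0)

d = flatten_direction((1.0, 1.0, 1.0))
print(d)  # roughly (0.7071, 0.7071, 0.0)
```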
It is more difficult to visualise lighting a surface in 4D, but the principles are the same as in the analogy above. To make this possible we use flattened light transport, which means we actually perform the light transport calculations one dimension lower. For lighting in 4D, this allows us to use the same equations as we would to light a regular object in 3D. This handy fact means we can adapt any of the usual global illumination methods, such as ray-tracing, to work for our purposes.
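The "one dimension lower" idea can be sketched the same way in 4D (again a hypothetical illustration, not the paper's implementation): dropping a ray direction's component along the scalar-value axis leaves an ordinary 3D direction that standard 3D ray-casting and BRDF machinery can handle.

```python
import math

def flatten_4d(direction):
    # Hypothetical flattening in the 4D domain (x, y, z, scalar value):
    # discard the component along the value axis and renormalise,
    # leaving a unit 3D direction for standard 3D lighting calculations.
    x, y, z, w = direction
    norm = math.sqrt(x * x + y * y + z * z)
    if norm == 0.0:
        return None  # direction was purely along the value axis
    return (x / norm, y / norm, z / norm)

print(flatten_4d((0.0, 0.0, 2.0, 5.0)))  # (0.0, 0.0, 1.0)
```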
Once these concepts make sense, you realise that in theory the technique is actually quite simple: instead of lots of scary maths, it just involves normal scary maths. How hard it is in practice, however, I am about to find out!
I hope to continue sharing my progress and any useful code snippets here as I learn more about the subject. If you have any questions about global illumination or flattened light transport then by all means ask - if I can't answer, it means I don't fully understand what I am talking about, and part of the reason for writing this is to continuously evaluate my understanding.
The project will be based on the following paper: David C. Banks and Kevin M. Beason, "Decoupling Illumination from Isosurface Generation Using 4D Light Transport". ↩