Motivation

The other day, I wanted to render some refractive glass spheres in a Cinder OpenGL project. While searching for ways to render them, I went down the rabbit hole of ray tracing…

My goal is to learn the concepts of ray tracing and eventually build something that runs at interactive frame rates on special-purpose hardware. Since my work mostly involves writing graphics software for one-of-a-kind installations, I can aim for rendering quality rather than optimize for general-purpose computers. In the meantime, since ray tracing an entire scene at 4K resolution is not yet performant on current hardware, I want to explore hybrid rendering that combines ray tracing with rasterization (in OpenGL), letting the ray tracer solve only the specific problems that are hard for OpenGL programs (e.g. area shadows, ambient occlusion, refraction). With that in mind, I narrowed my options down to GPU-based ray tracing, using CUDA/OpenCL, OpenGL compute shaders, or OpenGL fragment shaders.

GPU-based solutions seemed perfect for my use case: in hybrid rendering, issuing GL commands is a CPU bottleneck, so keeping the ray tracer on the GPU saves precious CPU cycles. Compared to compute shaders, ray tracing with CUDA intuitively has less overhead, scales almost linearly, doesn't disturb GL state, and is relatively easy to profile.

Nvidia OptiX is a ray tracing SDK that has been around for a couple of years and has been used in successful commercial renderers such as Iray. It provides mechanisms for host-device communication, as well as fundamental infrastructure for ray tracing such as the Ray type, buffers, texture samplers, and spatial-subdivision acceleration structures. It doesn't favor any specific ray tracing / path tracing technique, making it suitable for everything from simple interactive ray tracing to physically based unbiased path tracing.

Getting started

The zeroth step of learning ray tracing with OptiX is configuring the environment, which is not easy. The samples shipped with OptiX use CMake, but since I wanted to be able to integrate it into any existing Xcode / VS project, I did some homework and figured out how to configure it on the Mac.

The trick is that you have to use Xcode 7.3 (as of October 2016) and CUDA 8.0. For plain CUDA (without OptiX) you can compile the C++ files with clang/LLVM and the CUDA files with nvcc, then link everything with ld. With OptiX you'll need a custom build script that compiles the CUDA files into .ptx intermediate code, which the host program loads at runtime.
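As a rough sketch, the build script boils down to one nvcc invocation per .cu file, and the host program then loads the resulting PTX through the OptiX C++ wrapper API. The file and program names below are placeholders:

```cuda
// Build step (a custom Xcode build phase, not the regular compiler):
//   nvcc -ptx -arch=sm_30 -I/path/to/optix/include raygen.cu -o raygen.ptx
//
// Runtime side, using the OptiX C++ wrapper:
#include <optixu/optixpp_namespace.h>

optix::Context createContext()
{
    optix::Context context = optix::Context::create();
    context->setRayTypeCount(1);
    context->setEntryPointCount(1);

    // "pinhole_camera" must match an RT_PROGRAM name inside raygen.cu.
    optix::Program raygen =
        context->createProgramFromPTXFile("raygen.ptx", "pinhole_camera");
    context->setRayGenerationProgram(0, raygen);
    return context;
}
```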

Ray tracing basics

The next big issue is that I really have to distinguish between the different types of ray tracing and be rigorous with terminology. Tutorials and papers use these terms as if everyone already understood them, but it is hard to get started while you are still confused by them. Here is my attempt to clarify some of them, as a memo:

Ray casting

Mostly refers to two things (which usually overlap):

1) a practice of rendering a scene by casting rays from the camera without bouncing them. It is a way of drawing geometry without issuing multiple GL draw commands, which is why a lot of people on Shadertoy use it.

2) a technique for rendering “implicit” geometry that is defined by something other than a mesh, such as a mathematical function, a signed distance field, or voxel samples.

Ray tracing

When used in a broad sense, it is the superset of all rendering techniques that involve casting rays. When used in contrast to path tracing, however, it usually means simple Whitted-style ray tracing, which bounces rays for reflection and refraction but not for diffuse interreflection and other global illumination effects (which require far more bounces and samples to get a clean image).

I like to think of ray tracing (in the narrow sense) as direct-lighting ray tracing: something very similar to Phong shading in GL, but with the ability to bounce a finite number of secondary rays. I used to have the impression that ray tracing always has to be physically based and very slow, but that is clearly wrong.

Path tracing

Usually means a more accurate version of ray tracing in which rays propagate and terminate in a statistical manner. Since it models the actual physics of light transport, it is the most accurate rendering technique commonly used in commercial image renderers. However, it is really slow, as it is nearly brute force.
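To illustrate the statistical termination, here is a pseudocode sketch of Russian roulette, the standard trick for ending paths without biasing the estimate. Hit, intersectScene, sampleBounce, and rand01 are hypothetical helpers, not part of any real API:

```cuda
// Russian roulette: instead of cutting the path at a fixed depth,
// terminate it with a probability based on the surface reflectance.
float3 radiance(Ray ray, int depth)
{
    Hit hit = intersectScene(ray);
    if (!hit.found) return make_float3(0.0f);   // ray escaped the scene

    float3 f = hit.albedo;
    float p = fmaxf(f.x, fmaxf(f.y, f.z));      // survival probability

    if (depth > 5) {
        if (rand01() >= p) return hit.emission; // kill the path here
        f = f / p;                              // compensate survivors to stay unbiased
    }
    return hit.emission + f * radiance(sampleBounce(hit), depth + 1);
}
```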

Photon mapping

The types of ray tracing above all cast rays from the camera, which is called backward ray tracing. Photon mapping, conversely, fires rays from the light sources. Bidirectional ray tracing combines the two approaches.

Here are some of my favorite resources on distinguishing those concepts:

http://home.lagoa.com/2014/04/ray-tracing-vs-path-tracing-in-plain-english/

http://raytracey.blogspot.com/2015/10/gpu-path-tracing-tutorial-1-drawing.html

Important techniques

The basic components of ray tracing are the ray generation program, the intersection program, and the radiance program. The ray generation program is what replaces the fixed-function rasterizer in OpenGL. Although we have to write it ourselves instead of getting it built into the framework, it lets us implement different kinds of cameras in a physically accurate way. It involves a bit of math, but for my current purpose it is quite simple and barely needs to change.
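For reference, a stripped-down pinhole-camera ray generation program in the style of the OptiX SDK samples might look like the following. The camera basis variables eye, U, V, W follow the SDK's naming convention, and the payload struct is a minimal stand-in:

```cuda
#include <optix_world.h>
using namespace optix;

struct PerRayData { float3 result; };                // minimal payload

rtDeclareVariable(float3, eye, , );                  // camera position
rtDeclareVariable(float3, U, , );                    // right axis (scaled)
rtDeclareVariable(float3, V, , );                    // up axis (scaled)
rtDeclareVariable(float3, W, , );                    // forward axis
rtDeclareVariable(rtObject, top_object, , );         // scene root
rtDeclareVariable(uint2, launch_index, rtLaunchIndex, );
rtDeclareVariable(uint2, launch_dim, rtLaunchDim, );
rtBuffer<float4, 2> output_buffer;

RT_PROGRAM void pinhole_camera()
{
    // Map the pixel to [-1, 1]^2 and shoot a ray through it.
    float2 d = make_float2(launch_index) / make_float2(launch_dim) * 2.0f - 1.0f;
    float3 dir = normalize(d.x * U + d.y * V + W);
    Ray ray = make_Ray(eye, dir, 0 /*ray type*/, 1e-4f /*tmin*/, RT_DEFAULT_MAX);

    PerRayData prd;
    prd.result = make_float3(0.0f);
    rtTrace(top_object, ray, prd);
    output_buffer[launch_index] = make_float4(prd.result, 1.0f);
}
```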

The intersection programs and radiance programs are what I need to write for each kind of rendering. They are very much like vertex and fragment shaders in OpenGL, but they are more flexible and can be used quite creatively.

Intersection program

This is where you tell the framework whether a ray intersects a piece of geometry, and report the intersection point, surface normal, and other optional attributes. The intersection with a box or a sphere can be determined procedurally, while a mesh renderer only needs a ray-triangle intersection test.
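A minimal procedural sphere intersection, loosely following the SDK's sphere example, could look like this. The bounds program alongside it is what the acceleration structure builder queries for each primitive:

```cuda
#include <optix_world.h>
using namespace optix;

rtDeclareVariable(float4, sphere, , );               // xyz = center, w = radius
rtDeclareVariable(optix::Ray, ray, rtCurrentRay, );
rtDeclareVariable(float3, shading_normal, attribute shading_normal, );

RT_PROGRAM void intersect_sphere(int /*primIdx*/)
{
    float3 O = ray.origin - make_float3(sphere);
    float b = dot(O, ray.direction);                 // direction is unit length
    float c = dot(O, O) - sphere.w * sphere.w;
    float disc = b * b - c;
    if (disc > 0.0f) {
        float root = sqrtf(disc);
        float t = -b - root;                         // near intersection
        if (t < 0.0f) t = -b + root;                 // origin inside the sphere
        // rtPotentialIntersection checks t against the ray's [tmin, tmax].
        if (rtPotentialIntersection(t)) {
            shading_normal = (O + t * ray.direction) / sphere.w;
            rtReportIntersection(0);                 // material index 0
        }
    }
}

RT_PROGRAM void bounds_sphere(int /*primIdx*/, float result[6])
{
    Aabb* aabb = (Aabb*)result;
    float3 c = make_float3(sphere);
    aabb->set(c - sphere.w, c + sphere.w);
}
```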

Radiance program

It is invoked whenever there is an intersection, and it is where you calculate the color of the intersection point. It can be as simple as applying the Lambert shading model or as complex as firing a bunch of bounced rays in a certain distribution to gather global illumination and reflections.
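Here is roughly what the simple Lambert case looks like as a closest-hit program. Kd, light_pos, and light_color are hypothetical host-set variables, and PerRayData is the same minimal payload as in the ray generation sketch above:

```cuda
#include <optix_world.h>
using namespace optix;

struct PerRayData { float3 result; };                // same payload as raygen

rtDeclareVariable(optix::Ray, ray, rtCurrentRay, );
rtDeclareVariable(float, t_hit, rtIntersectionDistance, );
rtDeclareVariable(PerRayData, prd, rtPayload, );
rtDeclareVariable(float3, shading_normal, attribute shading_normal, );
rtDeclareVariable(float3, Kd, , );                   // diffuse albedo
rtDeclareVariable(float3, light_pos, , );            // point light (placeholder)
rtDeclareVariable(float3, light_color, , );

RT_PROGRAM void closest_hit_radiance()
{
    float3 p = ray.origin + t_hit * ray.direction;   // world-space hit point
    float3 n = normalize(rtTransformNormal(RT_OBJECT_TO_WORLD, shading_normal));
    float3 L = normalize(light_pos - p);
    prd.result = Kd * light_color * fmaxf(dot(n, L), 0.0f);  // Lambert term
}
```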

Distributed ray tracing

It is an improvement over naive ray tracing that achieves certain effects. Instead of firing rays in fixed directions, it fires multiple rays stochastically and integrates them with filters. It is very powerful: integrating over the light's area gives area lighting, integrating over the camera lens gives depth of field, integrating over time gives motion blur, and so forth.
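As a sketch of the area-lighting case: distribute shadow rays over the light's surface and average their visibility, which produces a soft penumbra instead of a hard shadow edge. rnd and castShadowRay are hypothetical placeholders for the renderer's RNG and any-hit visibility test:

```cuda
// Soft-shadow sketch: average visibility over stochastic samples on a
// rectangular (parallelogram) light.
__device__ float areaLightVisibility(float3 p, float3 corner,
                                     float3 edge1, float3 edge2)
{
    const int N = 16;                                // shadow rays per shading point
    float visible = 0.0f;
    for (int i = 0; i < N; ++i) {
        // Pick a random point on the light's surface.
        float3 s = corner + rnd() * edge1 + rnd() * edge2;
        if (castShadowRay(p, s))                     // true if unoccluded
            visible += 1.0f;
    }
    return visible / N;                              // 0 = fully shadowed, 1 = fully lit
}
```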

Progress

2016/10/4

Figured out Xcode project settings for compiling CUDA source files within a Cinder project.

Built and studied smallpt path tracer (Github).

2016/10/11

Integrated the Nvidia OptiX ray tracing SDK into Cinder OpenGL projects and wrote an integration guide for OS X.

Ported the official OptiX tutorial to Cinder, as a boilerplate for the hybrid rendering framework I'm going to build: Github

Screenshots

Implemented progressive supersampling anti-aliasing.

Aliased render / Anti-aliased render

2016/10/14

Implemented area lights. I wanted to recreate this reference render:

Reference (Image courtesy of LWJGL)

and here’s my result:

Area light test scene

Explored a progressive frame-blending method using a running average. It tries to solve the problem of constantly discarding old frames, which invalidates progressive image refinement in interactive scenarios similar to a VR HMD. It reduces noise and keeps the frame rate as high as possible, at the cost of introducing motion blur. I'd like to explore this technique further using an adaptive decay factor based on camera motion velocity.
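The blending itself is just an exponential running average per pixel. A minimal standalone CUDA kernel version, with illustrative names and a fixed alpha, would be:

```cuda
#include <cuda_runtime.h>

// Exponential running average: accum = lerp(accum, frame, alpha).
// A small alpha keeps more history (less noise, more motion blur);
// an adaptive alpha driven by camera velocity is the planned refinement.
__device__ float lerpf(float a, float b, float t) { return a + t * (b - a); }

__global__ void blendFrames(float4* accum, const float4* frame,
                            int numPixels, float alpha)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= numPixels) return;
    float4 a = accum[i], f = frame[i];
    accum[i] = make_float4(lerpf(a.x, f.x, alpha),
                           lerpf(a.y, f.y, alpha),
                           lerpf(a.z, f.z, alpha),
                           1.0f);
}
```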

2016/10/18

Implemented triangular mesh rendering with interpolated normals.

Wrote my first BRDF, for glossy reflection.

Water drops with refractive material / Glossy reflection renders

Model courtesy of Raphael Rau (CC-BY-SA)

2017/6/29

Implemented some more material types.

Glass / Gem / Car in glass / Teapot with shadow / Car in diffuse / Car with metal