Level-of-Detail Independent Voxel-Based Surface Approximations

A brief overview of my thesis work on surface approximations using voxels.

The most common way to represent geometry is with triangle meshes, as triangles are a memory-efficient building block. Even so, realistic rendering scenarios require millions of triangles to be rendered, which can lead to substantial rendering times.

One of the most promising solutions to this problem is to use fewer triangles to represent the same model when it is viewed from afar. These simplified representations are called levels of detail (LODs). Below you see an example of the Stanford bunny at multiple LODs.

LODs still have some problems: they are difficult to control, and popping artifacts appear when transitioning between two levels. Besides this, they also require additional memory to store.

We use an alternative approach: storing volumetric data in voxels, also known as 3D pixels or simply cubes. At every corner of each voxel we store a value from which we can determine the distance to the closest surface. Such a representation is called a scalar field.
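To make the idea concrete, here is a minimal sketch of sampling such a scalar field at the eight corners of one voxel. The sphere distance function and all names are illustrative choices of mine, not the field used in the thesis; the point is that a sign change between corner values tells you the surface passes through the voxel.

```cpp
#include <array>
#include <cmath>

// Illustrative scalar field: signed distance from a point to a sphere of
// radius r centered at the origin (negative inside, positive outside).
double sphere_sdf(double x, double y, double z, double r) {
    return std::sqrt(x * x + y * y + z * z) - r;
}

// Sample the field at the 8 corners of an axis-aligned voxel with
// minimum corner (x0, y0, z0) and edge length s.
std::array<double, 8> sample_voxel_corners(double x0, double y0, double z0,
                                           double s, double r) {
    std::array<double, 8> d;
    for (int i = 0; i < 8; ++i) {
        d[i] = sphere_sdf(x0 + s * (i & 1),
                          y0 + s * ((i >> 1) & 1),
                          z0 + s * ((i >> 2) & 1), r);
    }
    return d;
}

// The surface crosses the voxel if the corner distances change sign.
bool voxel_crosses_surface(const std::array<double, 8>& d) {
    bool any_neg = false, any_pos = false;
    for (double v : d) {
        if (v < 0) any_neg = true; else any_pos = true;
    }
    return any_neg && any_pos;
}
```

Voxels whose corner values all have the same sign lie entirely inside or outside the model, which is exactly what a sparse structure can exploit.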

Naively storing millions of voxels would consume gigabytes of data even for simple models. There are many ways to store voxels more sparsely; a common one is the octree. Our structure is built on a Sparse Voxel Octree (SVO): instead of allocating all eight children of every octree node, we store pointers only to the children that exist, reducing memory requirements even further.
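A hypothetical node layout for such a structure might look like the sketch below (field names and layout are my own assumptions, not the thesis's actual data structure). Empty octants simply hold a null pointer, so memory is only spent where the surface actually is.

```cpp
#include <array>
#include <cstddef>
#include <memory>

// Sketch of a sparse voxel octree node. A dense octree would allocate
// all eight children of every node; here a child is only allocated
// where the surface passes through that octant.
struct SvoNode {
    // Scalar-field samples at the node's eight corners.
    std::array<float, 8> corner_distances{};
    // Null pointer == empty octant / not refined further.
    std::array<std::unique_ptr<SvoNode>, 8> children;

    bool is_leaf() const {
        for (const auto& c : children)
            if (c) return false;
        return true;
    }

    // Count allocated nodes in the subtree (a rough memory proxy).
    std::size_t node_count() const {
        std::size_t n = 1;
        for (const auto& c : children)
            if (c) n += c->node_count();
        return n;
    }
};
```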

Our main focus was a method to measure the surface-approximation error of a single voxel. We achieved this by converting the mathematical solution into a pre-computed matrix, which lets us compute the error efficiently on the fly.
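The thesis's actual matrix derivation is not reproduced here, but the general pattern is worth sketching: an integral error measure is reduced to a quadratic form e(v) = vᵀQv, as known from quadric error metrics. Once Q is pre-computed for a voxel, evaluating a candidate approximation costs only a few multiply-adds instead of an integral.

```cpp
#include <array>

using Vec4 = std::array<double, 4>;
using Mat4 = std::array<std::array<double, 4>, 4>;

// Evaluate the quadratic error form e(v) = v^T Q v. Q is pre-computed
// once per voxel; each evaluation afterwards is O(1).
double quadratic_error(const Mat4& Q, const Vec4& v) {
    double e = 0.0;
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            e += v[i] * Q[i][j] * v[j];
    return e;
}
```

Again, this is the shape of the idea rather than the thesis's specific matrix.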

The final result is a voxel structure that stores the surface in differently sized voxels, effectively combining multiple levels of detail in a single model. A user-defined tolerance controls the simplification, balancing approximation quality against memory consumption.
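The refinement logic this implies can be sketched as a simple recursion (the error function below is a stand-in I made up for whatever per-voxel error measure is plugged in): keep a voxel at its current size if its error is within the tolerance, otherwise split it into eight half-size voxels.

```cpp
#include <cstddef>
#include <functional>

// Tolerance-driven refinement sketch: returns the number of voxels
// used to cover a region of the given size. error_of_size stands in
// for a per-voxel approximation-error measure.
std::size_t refine(double size, int depth, int max_depth, double tolerance,
                   const std::function<double(double)>& error_of_size) {
    if (depth == max_depth || error_of_size(size) <= tolerance)
        return 1;  // this voxel is accurate enough at its current size
    std::size_t count = 0;
    for (int child = 0; child < 8; ++child)  // split into 8 octants
        count += refine(size * 0.5, depth + 1, max_depth, tolerance,
                        error_of_size);
    return count;
}
```

A loose tolerance keeps large voxels (less memory); a tight tolerance drives the recursion deeper where the surface needs it.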

The image below shows our final result. These six models combined in plain SVO format would consume 12 GB of data. With our adaptive simplification method we can store them at similar quality in only 2 GB.

If you would like to know more, I encourage you to read my previous posts on path tracing and my master's thesis updates, where I explain how I made the images above.

Here is a small video showing how the scene above is rendered in the voxel path tracer:

You can download the full paper here.
