My interest in this demo, apart from its photorealistic quality, stems from two things: the GPU ray tracing and the voxel-based rendering. Never before have I seen a raytraced scene (CPU or GPU) of this scope and quality in realtime. Urbach has stated in the videos that his raytracing algorithm is not 100% accurate, but I think it looks absolutely amazing nevertheless.
At Siggraph 2008, there will be a panel discussion on realtime ray tracing, where Jules Urbach will be the special guest. Hopefully, there will be more info on the ray tracing part then.
On to the voxels...
In March of this year, John Carmack stated in an interview that he was investigating a new rendering technique for his next-generation engine (id Tech 6), which involves raycasting into a sparse voxel octree. This has spurred renewed interest in voxel rendering, and parallels with the new Ruby demo were quickly drawn.
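To make the idea a bit more concrete, here is a minimal CPU-side sketch of what raycasting into a sparse voxel octree can look like. This is purely my own illustration, not id's implementation (which runs on the GPU and uses a far more compact node format); the node layout, field names and the brute-force traversal are all assumptions made for readability.

```cpp
// A minimal sketch of a sparse voxel octree and a recursive primary-ray cast.
// Illustration only: node layout and traversal are assumptions, not id's format.
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <memory>

struct Vec3 { float x, y, z; };

struct SVONode {
    uint8_t childMask = 0;                  // bit i set -> child i exists
    bool isLeaf = false;                    // leaves carry the actual voxel data
    uint32_t voxelColor = 0;                // payload (could be color + normal)
    std::unique_ptr<SVONode> children[8];   // sparse: empty children stay null
};

// Slab test: does the ray (origin o, inverse direction inv) hit the box [lo,hi]?
static bool rayBox(const Vec3& o, const Vec3& inv,
                   const Vec3& lo, const Vec3& hi, float& tEntry) {
    float t0 = (lo.x - o.x) * inv.x, t1 = (hi.x - o.x) * inv.x;
    float tmin = std::min(t0, t1), tmax = std::max(t0, t1);
    t0 = (lo.y - o.y) * inv.y; t1 = (hi.y - o.y) * inv.y;
    tmin = std::max(tmin, std::min(t0, t1)); tmax = std::min(tmax, std::max(t0, t1));
    t0 = (lo.z - o.z) * inv.z; t1 = (hi.z - o.z) * inv.z;
    tmin = std::max(tmin, std::min(t0, t1)); tmax = std::min(tmax, std::max(t0, t1));
    if (tmax < std::max(tmin, 0.0f)) return false;
    tEntry = std::max(tmin, 0.0f);
    return true;
}

// Recursive primary-ray cast: descend only into children that exist and that
// the ray actually intersects, and keep the nearest filled leaf found.
static bool castRay(const SVONode* node, const Vec3& o, const Vec3& inv,
                    Vec3 lo, Vec3 hi, float& tHit, uint32_t& color) {
    float tEntry;
    if (!node || !rayBox(o, inv, lo, hi, tEntry)) return false;
    if (node->isLeaf) { tHit = tEntry; color = node->voxelColor; return true; }

    Vec3 mid = { (lo.x + hi.x) * 0.5f, (lo.y + hi.y) * 0.5f, (lo.z + hi.z) * 0.5f };
    bool hit = false;
    float best = 1e30f;
    for (int i = 0; i < 8; ++i) {
        if (!(node->childMask & (1u << i))) continue;     // skip empty space
        Vec3 clo = { (i & 1) ? mid.x : lo.x, (i & 2) ? mid.y : lo.y, (i & 4) ? mid.z : lo.z };
        Vec3 chi = { (i & 1) ? hi.x : mid.x, (i & 2) ? hi.y : mid.y, (i & 4) ? hi.z : mid.z };
        float t; uint32_t c;
        if (castRay(node->children[i].get(), o, inv, clo, chi, t, c) && t < best) {
            best = t; color = c; hit = true;
        }
    }
    if (hit) tHit = best;
    return hit;
}

int main() {
    // A toy tree: only octant 0 of the root is occupied, everything else is empty.
    SVONode root;
    root.childMask = 1u << 0;
    root.children[0] = std::make_unique<SVONode>();
    root.children[0]->isLeaf = true;
    root.children[0]->voxelColor = 0xFF8000;

    Vec3 origin = { -1.0f, 0.25f, 0.25f };
    Vec3 dir    = {  1.0f, 0.0f,  0.0f  };
    Vec3 inv    = { 1.0f / dir.x, 1.0f / dir.y, 1.0f / dir.z };  // infinities are fine for the slab test
    float t = 0.0f; uint32_t c = 0;
    if (castRay(&root, origin, inv, { 0, 0, 0 }, { 1, 1, 1 }, t, c))
        std::printf("hit voxel 0x%06X at t = %.2f\n", c, t);
    return 0;
}
```

The property worth noticing is that empty space costs nothing: children that don't exist are never allocated and never visited by the ray.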
Today's GPUs are already blazingly fast when it comes to polygon rendering and don't break a sweat on the multimillion-triangle scenes of Crysis. So there must be a good reason why some developers are spending time and energy on voxel rendering. John Carmack explains it like this in the interview:
It's interesting that if you look at representing this data in this particular sparse voxel octree format it winds up even being a more efficient way to store the 2D data as well as the 3D geometry data, because you don't have packing and bordering issues. So we have incredibly high numbers; billions of triangles of data that you store in a very efficient manner. Now what is different about this versus a conventional ray tracing architecture is that it is a specialized data structure that you can ray trace into quite efficiently and that data structure brings you some significant benefits that you wouldn't get from a triangular structure. It would be 50 or 100 times more data if you stored it out in a triangular mesh, which you couldn't actually do in practice.

Jon Olick, programmer at id Software, provided some interesting details about the sparse voxel octree raycasting in this ompf thread. He will also give a talk on the subject at Siggraph.
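As a rough illustration of the kind of compact, GPU-friendly layout such a structure allows (an assumption on my part, loosely in the spirit of what is discussed in the thread, not id's actual format): the whole tree can live in one flat pool, all children of a node stored contiguously, so a node shrinks to a 32-bit child index plus two bit masks.

```cpp
// Hypothetical packed SVO node layout for GPU upload; not an actual id format.
#include <bit>        // std::popcount (C++20)
#include <cstdint>
#include <vector>

struct PackedSVONode {
    uint32_t firstChild;   // index of this node's first allocated child in the pool
    uint8_t  validMask;    // bit i set -> octant i contains geometry
    uint8_t  leafMask;     // bit i set -> octant i is a leaf voxel
    uint16_t padding;      // keep the node at a fixed 8 bytes
};
static_assert(sizeof(PackedSVONode) == 8, "node should stay 8 bytes for GPU upload");

// The tree is just one buffer, which maps directly to a texture or structured
// buffer that a ray casting shader can walk.
using SVOPool = std::vector<PackedSVONode>;

// Pool index of octant i: skip over the earlier siblings that actually exist.
inline uint32_t childIndex(const PackedSVONode& n, int i) {
    return n.firstChild + std::popcount(uint32_t(n.validMask & ((1u << i) - 1u)));
}
```

Because empty octants never appear in the pool at all, the storage cost follows the occupied surface voxels rather than the volume the scene spans, which is the kind of compactness Carmack's "50 or 100 times" remark points at.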
In the ompf thread, there are also a number of interesting links to research papers about voxel octree raycasting:
A single-pass GPU ray casting framework for interactive out-of-core rendering of massive volumetric datasets, Enrico Gobbetti, Fabio Marton, and José Antonio Iglesias Guitián, 2008
http://www.crs4.it/vic/cgi-bin/bib-page.cgi?id=
Interactive Gigavoxels, Cyril Crassin, Fabrice Neyret, Sylvain Lefebvre, 2008
http://artis.imag.fr/Publications/2008/CNL08/
Ray tracing into voxels compressed into an octree
http://www.sci.utah.edu/~wald/Publications/2007///MROct/download//mroct.pdf
The octree texture, Sylvain Lefebvre
http://lefebvre.sylvain.free.fr/octreetex/
The difference between id Tech 6 and Otoy lies in how the voxels are rendered: id's sparse voxel octree tech does voxel ray casting (primary rays only), while Otoy does voxel raytracing, which allows for raytraced reflections and possibly even raytraced shadows and photon mapping.
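To spell out that distinction, here is a rough sketch (again my own illustration; the castRay stub, names and blending are all assumptions) of primary-ray-only shading versus tracing additional shadow and reflection rays from the hit point:

```cpp
// Sketch of the two shading styles. castRay is a placeholder stub (a flat
// "floor" of voxels at y = 0); in practice it would be an SVO traversal like
// the one sketched earlier. All names here are illustrative.
#include <cstdint>
#include <cstdio>

struct Vec3 { float x, y, z; };
struct Hit  { float t; Vec3 position, normal; uint32_t color; };

// Placeholder traversal: reports a hit on a horizontal voxel floor at y = 0.
// (A real implementation would also offset secondary rays by a small epsilon.)
bool castRay(const Vec3& o, const Vec3& d, Hit& hit) {
    if (d.y >= 0.0f || o.y <= 0.0f) return false;
    hit.t = -o.y / d.y;
    hit.position = { o.x + hit.t * d.x, 0.0f, o.z + hit.t * d.z };
    hit.normal = { 0.0f, 1.0f, 0.0f };
    hit.color = 0x808080;
    return true;
}

// Voxel ray casting (id Tech 6 style): primary rays only, then ordinary shading.
uint32_t shadePrimaryOnly(const Vec3& origin, const Vec3& dir) {
    Hit h;
    return castRay(origin, dir, h) ? h.color : 0x000000;   // miss -> background
}

Vec3 reflect(const Vec3& d, const Vec3& n) {
    float k = 2.0f * (d.x * n.x + d.y * n.y + d.z * n.z);
    return { d.x - k * n.x, d.y - k * n.y, d.z - k * n.z };
}

// Voxel raytracing (Otoy style, as I understand it): fire extra rays into the
// same voxel structure from the hit point for shadows and reflections.
uint32_t shadeTraced(const Vec3& origin, const Vec3& dir, const Vec3& lightDir, int depth) {
    Hit h;
    if (!castRay(origin, dir, h)) return 0x000000;
    uint32_t color = h.color;

    Hit shadowHit;                                          // shadow ray: can the hit point see the light?
    if (castRay(h.position, lightDir, shadowHit))
        color = (color >> 1) & 0x7F7F7F;                    // crude per-channel halving

    if (depth > 0) {                                        // reflection ray: trace the scene again
        uint32_t refl = shadeTraced(h.position, reflect(dir, h.normal), lightDir, depth - 1);
        color = ((color >> 1) & 0x7F7F7F) + ((refl >> 1) & 0x7F7F7F);  // 50/50 blend
    }
    return color;
}

int main() {
    Vec3 eye   = { 0.0f, 1.0f, 0.0f };
    Vec3 dir   = { 0.0f, -0.7071f, 0.7071f };   // looking down at the floor
    Vec3 light = { 0.0f, 1.0f, 0.0f };          // light straight overhead
    std::printf("ray cast : 0x%06X\n", shadePrimaryOnly(eye, dir));
    std::printf("ray trace: 0x%06X\n", shadeTraced(eye, dir, light, 1));
    return 0;
}
```

The traversal itself is the same in both cases; the ray tracing variant simply re-enters the voxel structure from the hit point, which is what makes reflections, shadows and potentially photon mapping fall out of the same data.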