Monday, October 6, 2008

Dynamic voxels

In the past couple of weeks, I've learned that there are several methods to efficiently store voxels for GPU raycasting (a minimal octree sketch follows this list):

- Octrees

- Geometry images (Hoppe 2002, Carr et al., 2006)

- Spatial hashes

- Hybrid acceleration structures such as an octree with bricks (Crassin et al., 2008)

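As a rough illustration of the first and last options, here is a minimal C++ sketch of a sparse voxel octree node with optional leaf bricks. It assumes a flat node pool, an 8-bit child mask, and contiguous child storage; all names and sizes are hypothetical, not taken from any of the papers above.

```cpp
#include <cstdint>
#include <vector>

// One node of a sparse voxel octree (SVO). A set bit in childMask means
// that octant exists; existing children are stored contiguously in the
// node pool, so a single index per node suffices (hypothetical layout).
struct OctreeNode {
    uint32_t firstChild; // index of the first child in the node pool
    uint8_t  childMask;  // bit i set -> child i exists
    uint8_t  isLeaf;     // non-zero for leaf nodes
    uint16_t brickIndex; // hybrid scheme: index of a small dense voxel
                         // brick (e.g. 8x8x8) hanging off this leaf
};

// The whole tree is just a pool of nodes plus densely packed brick data.
struct SparseVoxelOctree {
    std::vector<OctreeNode> nodes;  // nodes[0] is the root
    std::vector<float>      bricks; // leaf brick payload (e.g. densities)
};
```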

Jules Urbach said in an article on TechCrunch (http://www.techcrunch.com/2008/08/20/the-truth-behind-liveplaces-photo-realistic-3d-world-and-otoys-rendering-engine/):


We store voxel data in several ways, including geometry maps (see our Siggraph or Iceland presentations, where we show this method applied to the Lightstage 5 structured light data, courtesy Andrew Jones ICT/Graphics lab)


Lightstage 5 is being used to capture performances of real actors as polygon-based animations, which are then converted to voxels and stored in geometry maps (or geometry images; see A Brief Overview of Geometry Maps). So there is at least one way to render characters and dynamic objects through voxel raycasting, without the need for hybrid techniques. The paper by Carr et al. (Fast GPU Ray Tracing of Dynamic Meshes using Geometry Images) shows that "interactive" raycasting of dynamic objects is possible "at no extra cost"; however, they use geometry images to store triangles rather than voxels. With this method, it is feasible to raycast extremely detailed characters in realtime.
Spatial hash maps could possibly be used for dynamic objects as well.
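To make the geometry image idea concrete, here is a minimal sketch: an N×N grid of texels where each texel stores a 3D surface position, so a deforming mesh can be re-uploaded every frame as a plain 2D texture with implicit, regular connectivity. The types and function names are illustrative, not from Carr et al.

```cpp
#include <array>
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

// A geometry image: an N x N grid where each texel holds a 3D surface
// position. Animating the mesh just means rewriting this grid per frame.
struct GeometryImage {
    std::size_t       size;      // grid resolution (size x size texels)
    std::vector<Vec3> positions; // row-major, size * size entries

    const Vec3& at(std::size_t u, std::size_t v) const {
        return positions[v * size + u];
    }
};

// Recover the implicit triangles: every 2x2 block of texels forms a quad,
// split into two triangles. This regular connectivity is what makes the
// acceleration structure cheap to rebuild for dynamic meshes.
std::vector<std::array<Vec3, 3>> triangles(const GeometryImage& gi) {
    std::vector<std::array<Vec3, 3>> tris;
    for (std::size_t v = 0; v + 1 < gi.size; ++v)
        for (std::size_t u = 0; u + 1 < gi.size; ++u) {
            tris.push_back({{gi.at(u, v), gi.at(u + 1, v), gi.at(u, v + 1)}});
            tris.push_back({{gi.at(u + 1, v), gi.at(u + 1, v + 1), gi.at(u, v + 1)}});
        }
    return tris;
}
```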


In the TechCrunch article, Jules Urbach gives some more info on the rendering methods behind OTOY and the Ruby voxel demo (a hypothetical sketch of the layered voxel format follows this list):

- The datasets from the BCN and Ruby city scenes contain up to 64 data layers per voxel, including diffuse albedo, Fresnel reflectance values, irradiance data, UV coordinates (up to 8 sets), normals, and, for static scenes, look-up vectors for 1-20 bounces of light from up to 252 evenly distributed viewpoints (it is important to note that this data is always 100% optional, as the raycaster can do this procedurally when the voxels are close and reflection precision is more important than speed; however, with cached reflectance data, you might see the scene rendering at 100s-1000s of fps when the scene isn’t changing).

- A note on raytracing vs. rasterization: amplifying the tree trunk in Fincher’s Bug Snuff demo to 28 million polys using the GPU tessellator turned out to be faster than rendering a 28 million voxel point cloud for this object. So there is a threshold, at about 100 million polys, where voxels become faster than rasterization. At least in our engine, on R7xx GPUs, using full precision raycasting at 1280×720. Below that point, traditional rasterization using the GPU tessellator seems to be faster for a single viewport.

- The engine can convert a 1 million poly mesh into voxel data in about 1/200th of a second on R770 (60 fps on R600 and 8800 GTX). This is useful for baking dense static scenes that are procedurally generated once, or infrequently, on the GPU. That is why some of the OTOY demos require the GPU tessellator to look right.

- Hard shadows in OTOY were done using rasterization until we got R770 in May. Now hard shadows, like reflections, can be calculated using raycasting, although shadow masks are still very useful, and raycasting with voxel data can still give you aliasing.

- We can use the raycaster with procedurally generated data (Perlin-generated terrain or clouds, spline-based objects, etc.). At Jon Peddie’s Siggraph event, we showed a deformation applied in real time to the Ruby street scene. It was resolution independent, like a Flash vector object, so you could get infinitely close to it with no stair-stepping effects, and likewise, the shadow casting would work the same way.

- The voxel data is grouped into the rough equivalent of ‘triangle batches’ (which can be indexed into per-object or per-material groups as well). This allows us to work with subsets of the voxel data in much the same way we do with traditional polygonal meshes.

- The reflections in the March 2007 ‘Treo’ video are about 1/1000th as precise/fast as the raycasting we now use for the Ruby demo on R770/R700.

- One R770 GPU can render about 100+ viewports at the quality and size shown in the ‘Treo’ video. When scenes are entirely voxel based, the number of simultaneous viewports is less important than the total rendered area of all the viewports combined.

- The server side rendering system currently consists of boxes with 8x R770 GPUs each (8 GB VRAM, 1.5 kW of power per box).
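Reading between the lines of the list above, a voxel with many optional data layers might be laid out roughly like the sketch below. Every field name and size here is a guess for illustration, not OTOY's actual format.

```cpp
#include <optional>
#include <vector>

struct Vec2 { float u, v; };
struct Vec3 { float x, y, z; };

// A voxel with several optional attribute layers, loosely following the
// description above (diffuse albedo, Fresnel reflectance, normals, up to
// 8 UV sets, cached bounce look-up vectors). Illustrative guesses only.
struct VoxelSample {
    Vec3              albedo;   // diffuse albedo
    float             fresnel;  // Fresnel reflectance value
    Vec3              normal;
    std::vector<Vec2> uvSets;   // up to 8 UV coordinate sets
    // Cached look-up vectors for light bounces are optional: when absent,
    // the raycaster computes reflections procedurally (precise but slow);
    // when present, a static scene can be re-rendered very cheaply.
    std::optional<std::vector<Vec3>> cachedBounces;
};
```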


The full Ruby demo: http://www.youtube.com/watch?v=sWgQp_LL-Cg

High quality download: http://blip.tv/file/get/Ubergizmo-AMDR700RubyDemo193.mov

Friday, August 1, 2008

id, Voxels and Ray Tracing

According to this article, the full Ruby demo will be shown to the public at Siggraph 2008.

Apart from the photorealistic quality, my interest in this demo rests on two things: the GPU ray tracing and the voxel-based rendering. Never before have I seen a raytraced (CPU or GPU) scene of this scope and quality in realtime. Urbach has stated in the videos that his raytracing algorithm is not 100% accurate, but I nevertheless think it looks absolutely amazing.
At Siggraph 2008, there will be a panel discussion on realtime ray tracing, with Jules Urbach as the special guest. Hopefully, more info on the ray tracing part will surface there.


On to the voxels...
In March of this year, John Carmack stated in an interview that he was investigating a new rendering technique for his next-generation engine (id Tech 6), which involves raycasting into a sparse voxel octree. This has spurred renewed interest in voxel rendering, and parallels with the new Ruby demo have quickly been drawn.

Today's GPUs are already blazingly fast at polygon rendering and don't break a sweat in the multimillion-triangle scenes of Crysis. So there must be a good reason why some developers are spending time and energy on voxel rendering. John Carmack explains it like this in the interview:


It’s interesting that if you look at representing this data in this particular sparse voxel octree format it winds up even being a more efficient way to store the 2D data as well as the 3D geometry data, because you don’t have packing and bordering issues. So we have incredibly high numbers; billions of triangles of data that you store in a very efficient manner. Now what is different about this versus a conventional ray tracing architecture is that it is a specialized data structure that you can ray trace into quite efficiently and that data structure brings you some significant benefits that you wouldn’t get from a triangular structure. It would be 50 or 100 times more data if you stored it out in a triangular mesh, which you couldn’t actually do in practice.
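To give a feel for why raycasting into such a structure can be efficient, here is a heavily simplified CPU sketch of descending an octree along a ray. Real implementations use an iterative, front-to-back GPU traversal; everything below (names, layout, recursion) is illustrative only.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

struct Vec3 { float x, y, z; };
struct Ray  { Vec3 origin, dir; };  // dir components assumed non-zero
struct Box  { Vec3 lo, hi; };

struct Node {
    uint32_t firstChild; // index of first child in the node array
    uint8_t  childMask;  // bit i set -> octant i exists
    bool     isLeaf;
};

// Standard slab test: does the ray hit the box, and at what entry distance?
static bool hitBox(const Ray& r, const Box& b, float& tEntry) {
    float t0 = 0.0f, t1 = 1e30f;
    const float ro[3] = {r.origin.x, r.origin.y, r.origin.z};
    const float rd[3] = {r.dir.x, r.dir.y, r.dir.z};
    const float lo[3] = {b.lo.x, b.lo.y, b.lo.z};
    const float hi[3] = {b.hi.x, b.hi.y, b.hi.z};
    for (int i = 0; i < 3; ++i) {
        float tA = (lo[i] - ro[i]) / rd[i];
        float tB = (hi[i] - ro[i]) / rd[i];
        t0 = std::max(t0, std::min(tA, tB));
        t1 = std::min(t1, std::max(tA, tB));
    }
    tEntry = t0;
    return t0 <= t1;
}

// Bounding box of octant i (bit 0 = x half, bit 1 = y half, bit 2 = z half).
static Box childBox(const Box& b, int i) {
    Vec3 mid = {(b.lo.x + b.hi.x) * 0.5f,
                (b.lo.y + b.hi.y) * 0.5f,
                (b.lo.z + b.hi.z) * 0.5f};
    Box c = b;
    ((i & 1) ? c.lo.x : c.hi.x) = mid.x;
    ((i & 2) ? c.lo.y : c.hi.y) = mid.y;
    ((i & 4) ? c.lo.z : c.hi.z) = mid.z;
    return c;
}

// Recursive descent: find the nearest leaf the ray enters. Empty space is
// skipped wholesale because missing children are never visited.
static bool raycast(const std::vector<Node>& nodes, uint32_t idx,
                    const Box& box, const Ray& ray, float& tBest) {
    float t;
    if (!hitBox(ray, box, t) || t >= tBest) return false;
    const Node& n = nodes[idx];
    if (n.isLeaf) { tBest = t; return true; }
    bool hit = false;
    uint32_t next = n.firstChild;  // existing children are contiguous
    for (int i = 0; i < 8; ++i) {
        if (!(n.childMask & (1u << i))) continue;
        hit |= raycast(nodes, next++, childBox(box, i), ray, tBest);
    }
    return hit;
}
```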

Jon Olick, programmer at id Software, provided some interesting details about sparse voxel octree raycasting in this ompf thread. He will also give a talk on the subject at Siggraph.

In the ompf thread, there are also a number of interesting links to research papers about voxel octree raycasting:

A Single-Pass GPU Ray Casting Framework for Interactive Out-of-Core Rendering of Massive Volumetric Datasets, Enrico Gobbetti, Fabio Marton, and José Antonio Iglesias Guitián, 2008
http://www.crs4.it/vic/cgi-bin/bib-page.cgi?id=

Interactive GigaVoxels, Cyril Crassin, Fabrice Neyret, Sylvain Lefebvre, 2008
http://artis.imag.fr/Publications/2008/CNL08/

Ray tracing into voxels compressed into an octree
http://www.sci.utah.edu/~wald/Publications/2007///MROct/download//mroct.pdf

The Octree Texture, Sylvain Lefebvre
http://lefebvre.sylvain.free.fr/octreetex/

The difference between id Tech 6 and Otoy is the way the voxels are rendered: id's sparse voxel octree tech does voxel ray casting (primary rays only), while Otoy does voxel ray tracing, which allows raytraced reflections and possibly even raytraced shadows and photon mapping (see the sketch below).
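The distinction is easy to see in code: a pure raycaster shades the first hit and stops, while a raytracer spawns secondary rays from that hit. A minimal sketch, with stubbed-out helpers and hypothetical names:

```cpp
struct Vec3 { float x, y, z; };
struct Ray  { Vec3 origin, dir; };
struct Hit  { Vec3 point, normal, albedo; bool valid; };

// Stub: a real engine would traverse the voxel structure here.
Hit castRay(const Ray&) { return Hit{}; }

// Mirror a direction d about a unit normal n.
Vec3 reflect(const Vec3& d, const Vec3& n) {
    float k = 2.0f * (d.x * n.x + d.y * n.y + d.z * n.z);
    return {d.x - k * n.x, d.y - k * n.y, d.z - k * n.z};
}

// Ray *casting*, id Tech 6 style: primary rays only, shade the first hit.
Vec3 shadeCast(const Ray& primary) {
    Hit h = castRay(primary);
    return h.valid ? h.albedo : Vec3{0, 0, 0};
}

// Ray *tracing*, Otoy style: spawn secondary rays at each hit, so
// reflections (and, with more ray types, shadows) fall out naturally.
Vec3 shadeTrace(const Ray& r, int depth) {
    Hit h = castRay(r);
    if (!h.valid) return {0, 0, 0};
    Vec3 c = h.albedo;
    if (depth > 0) {
        Ray refl{h.point, reflect(r.dir, h.normal)};
        Vec3 b = shadeTrace(refl, depth - 1);
        c = {c.x + 0.5f * b.x,   // blend the bounce in with a fixed,
             c.y + 0.5f * b.y,   // arbitrary 50% reflectance weight
             c.z + 0.5f * b.z};
    }
    return c;
}
```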

Otoy, Transformers and Ray Tracing

After the Ruby/LightStage demo, four other videos appeared as part of an article about Otoy on TechCrunch. Urbach explains that he started experimenting with Renderman code on graphics hardware during the making of Cars in 2005. This work caught the interest of ILM, who gave Urbach the models from the Transformers movie to render in realtime. Urbach and his team made four commercials for the Transformers movie that were rendered and directed in realtime on graphics hardware. Afterwards, he was contacted by Sony to work on the Spider-Man movie.



The four videos:


Video 1 OTOY Demo

This video shows short clips of realtime-rendered Transformers sequences.



Video 2 Jules Urbach explains OTOY's real-time graphics rendering

In this video, Urbach talks about his experiments with GPU ray tracing in 2005, the Transformers trailers and the voxel raytracing for the Ruby demo. For the tests with Cars in 2005, he was able to do "realtime raytraced reflections with up to 20 bounces of light". He also implemented some realtime global illumination technique. For the new Ruby demo, he is actually "raytracing the entire scene", and "not using the vertex pipeline anymore". Thanks to the voxel rendering "the level of detail becomes infinite".



Video 3 OTOY Graphics Rendered in the Browser

This video shows the server side rendering capabilities of Otoy. It shows Urbach interacting with scenes from the Transformers trailers, which are rendered in realtime on his GPU servers and streamed over the net into the browser.

Urbach mentions "raytraced reflections on the windows". When he switches to nighttime, he says "in this particular demo, there's no baked lighting, nothing is precomputed", there are "hundreds of lights in the building rendered in realtime".

The demo runs on three graphics cards (3x RV770): one card renders the ILM Optimus, a second renders the G1 Optimus Prime, and a third renders the city and the raytraced reflections on the windows.



Video 4 Jules Urbach of OTOY Explains LightStage

Video about LightStage, slightly more elaborate than this one.



There is also a video of the full AMD Cinema 2.0 event, in which Urbach talks a bit about ray tracing on GPUs (from 41:00 to 47:00) and goes more in-depth during the Q&A session (from 72:00 to 88:00):



- Urbach has been talking to game publishers about integrating the relighting part of Otoy into existing game engines

- Otoy can do full raytracing, but also supports hybrid rendering. It can convert any polygonal mesh to voxels

- The Ruby demo does not use any polygons, only voxels

- For games, Urbach thinks hybrid rendering will be the way to go "for a very long time"

- This technology will require a different way of working from game developers. Basically, they're saying that you can make a photorealistic game, but the workload on the artist side will be astronomical

- In 2005, Urbach started out writing approximations to Renderman code during the making of Cars. At the time, he used cheats for ray tracing and reflections. In three years, GPUs have evolved so quickly that the latest hardware makes realtime ray tracing possible that is “99% accurate”

- Voxel data sets are huge, but with voxel-based rendering you can load only subsets of the voxel space, which is not possible with polygons (see the streaming sketch after this list). You can also choose which texture layers to load

- Compression and decompression of the voxel data is CPU bound. What takes 3 seconds to decompress on a CPU can be done at a “thousand frames per second” on a GPU.

- What's interesting, according to Urbach, is that in 2005 he started out writing approximations to ray tracing, whereas the latest generation of hardware allows him to do ray tracing that gets really close to the 100% point
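The "load only subsets of the voxel space" point is essentially on-demand streaming: because the structure is spatial, whole subtrees can stay on disk until a ray actually needs them. A toy sketch of that idea, with entirely hypothetical names:

```cpp
#include <cstdint>
#include <unordered_map>

// A toy model of streaming voxel subsets: subtrees are loaded on demand,
// keyed by node id, instead of loading the whole voxel set up front.
struct Subtree { /* voxel payload for one region of space */ };

class VoxelStreamer {
public:
    // Return the subtree covering a region, loading it on first use.
    const Subtree& request(uint64_t nodeId) {
        auto it = cache_.find(nodeId);
        if (it == cache_.end())
            it = cache_.emplace(nodeId, loadFromDisk(nodeId)).first;
        return it->second;
    }

private:
    // Stub: a real engine would read and (per the note above, ideally
    // GPU-)decompress a chunk of voxel data here.
    Subtree loadFromDisk(uint64_t) { return Subtree{}; }

    std::unordered_map<uint64_t, Subtree> cache_;
};
```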

Urbach also showed another Otoy demo at the AMD event, called Bug Snuff. It shows a photorealistic scene with a scorpion, rendered in realtime and directed by David Fincher. Really impressive stuff!

Lastly, the ompf thread where it all started: http://ompf.org/forum/viewtopic.php?f=6&t=882

Thanks to all the ompf members and guests who participated and contributed to the thread.