
Thursday, April 15, 2010

Real-time pathtracing demo shows future of game graphics

Yessss!!! I've been anticipating this for a long time: real-time raytraced high-quality dynamic global illumination which is practical for games. Until now, the image quality of every real-time raytracing demo that I've seen in the context of a game was deeply disappointing:

- Quake 3 raytraced (http://www.youtube.com/watch?v=bpNZt3yDXno),
- Quake 4 raytraced (http://www.youtube.com/watch?v=Y5GteH4q47s),
- Quake Wars raytraced (http://www.youtube.com/watch?v=mtHDSG2wNho) (there's a pattern in there somewhere),
- Outbound (http://igad.nhtv.nl/~bikker/projects.htm),
- Let there be light (http://www.youtube.com/watch?v=33yrCV25A14),
- the last Larrabee demo, which showed an extremely dull Quake Wars scene: a raytraced floating boat in a mountainous landscape with some flying vehicles roaring over. Intel only showed a completely motionless scene, presumably too afraid of revealing the low framerate when navigating (http://www.youtube.com/watch?v=b5TGA-IE85o),
- the Nvidia demo of the Bugatti at Siggraph 2008 (http://www.youtube.com/watch?v=BAZQlQ86IB4)

All of these demos lack one major feature: realtime dynamic global illumination. They just show Whitted-style raytracing, which makes the lighting look flat and dull, and which quality-wise cannot seriously compete with rasterization and its many tricks to fake GI (baked GI, SSAO, SSGI, instant radiosity, precomputed radiance transfer with spherical harmonics, Crytek's light propagation volumes, ...).

The above videos would make you believe that real-time, high-quality dynamic GI is still a long way off. But as the following video shows, it is much closer than you would think: http://www.youtube.com/watch?v=dKZIzcioKYQ

The technology demonstrated in the video was developed by Jacco Bikker (Phantom on ompf.org, who also developed the Arauna game engine, which uses realtime raytracing) and shows a glimpse of the future of graphics: real-time dynamic global illumination through pathtracing (probably bidirectional), computed on a hybrid architecture (CPU and GPU) achieving ~40 Mrays/sec on a Core i7 + GTX 260. There's a dynamic floating object, and each frame accumulates 8 samples/pixel before being displayed. There are caustics from the reflective ring, cube and cylinder, as well as motion blur. The beauty of path tracing is that it inherently provides photorealistic graphics: there's no extra coding effort required to get soft shadows, reflections, refractions and indirect lighting, it all works automagically (it also handles caustics, although not very efficiently). The photorealism is already there; now it's just a matter of speeding it up through code optimization, new algorithms (stochastic progressive photon mapping, Metropolis Light Transport, ...) and of course better hardware (CPU and GPU).
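To see why all those effects come for free, it helps to look at what a path tracer actually does per pixel. The sketch below is purely illustrative (not Jacco's code, and plain unidirectional path tracing rather than the bidirectional variant the demo probably uses); Ray, Hit, Rng and the declared helper functions are assumed stand-ins for a real scene-intersection and sampling layer:

// Purely illustrative sketch of the per-pixel work a path tracer does each frame.
struct Vec3 { float x = 0, y = 0, z = 0; };
inline Vec3 operator+(Vec3 a, Vec3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
inline Vec3 operator*(Vec3 a, Vec3 b) { return { a.x * b.x, a.y * b.y, a.z * b.z }; }
inline Vec3 operator*(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }

struct Ray { Vec3 origin, dir; };
struct Hit { Vec3 position, normal, emission, albedo; };
struct Rng;                                        // some random number generator

// Assumed helpers, not shown here:
bool trace(const Ray& ray, Hit& hit);              // nearest intersection with the scene
Vec3 skyColor(const Ray& ray);                     // environment light for missed rays
Ray  cameraRay(int x, int y, Rng& rng);            // jittered primary ray through pixel (x, y)
Vec3 randomHemisphereDir(const Vec3& n, Rng& rng); // random bounce direction around normal n

Vec3 pathTrace(Ray ray, Rng& rng)
{
    Vec3 radiance;                                 // light gathered along this path
    Vec3 throughput = { 1, 1, 1 };                 // how much energy the path still carries
    for (int bounce = 0; bounce < 5; ++bounce)
    {
        Hit hit;
        if (!trace(ray, hit))
            return radiance + throughput * skyColor(ray);      // path escaped into the environment
        radiance   = radiance + throughput * hit.emission;     // path reached a light source
        throughput = throughput * hit.albedo;                  // surface absorbs part of the energy
        ray = Ray{ hit.position, randomHemisphereDir(hit.normal, rng) }; // continue in a random direction
    }
    return radiance;
}

// Per pixel, per frame: average a handful of random paths (8 in the video).
Vec3 renderPixel(int x, int y, Rng& rng)
{
    Vec3 sum;
    const int samplesPerPixel = 8;
    for (int s = 0; s < samplesPerPixel; ++s)
        sum = sum + pathTrace(cameraRay(x, y, rng), rng);
    return sum * (1.0f / samplesPerPixel);
}

Every effect listed above falls out of this one loop: soft shadows, indirect lighting and caustics are just particular random paths that happen to reach a light source.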

The video is imo a proof of concept of the feasibility of realtime pathtraced games: despite the low resolution, low framerate, low geometric complexity and the noise, there is an undeniable beauty to the unified global lighting for static and dynamic objects. I like it very, very much. I think a Myst- or Outbound-like game would be ideally suited to this technology: it's slow paced, you often hold still to inspect the scene for clues (so it's very tolerant of low framerates), and it contains only a few dynamic objects. I can't wait to see the kind of games built with this technology. Photorealistic game graphics with dynamic, high-quality global illumination for everything have just come a major step closer to becoming reality.

UPDATE: I've found a good mathematical explanation for the motion blur you see in the video, which was achieved by averaging the samples of 4 frames (http://www.reddit.com/r/programming/comments/brsut/realtime_pathtracing_is_here/):
it is because there is too much variance in the lighting of this scene for the number of samples each frame takes to integrate the rendering equation (8; typically 'nice' results start at 100+ samples per pixel). Therefore you get noise which (if they implemented their pathtracer correctly) is unbiased. That in turn means the amount of noise is proportional to the inverse of the square root of the number of samples. By averaging over 4 frames, they halve the noise as long as the camera is not moving.
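Put as a formula (standard Monte Carlo behaviour, nothing specific to this demo): for an unbiased estimator the variance drops linearly with the sample count N, so the standard deviation, which is what you see as noise, drops with the square root of N:

\[
\sigma_N \;=\; \frac{\sigma_1}{\sqrt{N}}
\qquad\Longrightarrow\qquad
\frac{\sigma_{4\times 8\ \mathrm{spp}}}{\sigma_{8\ \mathrm{spp}}}
\;=\; \sqrt{\frac{8}{4\cdot 8}} \;=\; \frac{1}{2}
\]

In other words, averaging four 8 samples/pixel frames behaves like a single 32 samples/pixel frame and cuts the visible noise in half.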


UPDATE2: Jacco Bikker uploaded a new, even more amazing video to youtube showing a rotating light globally illuminating the scene with path tracing in real-time at 14-18 fps (frame time: 55-70 ms)!
http://www.youtube.com/watch?v=Jm6hz2-gxZ0&playnext_from=TL&videos=ZkGZWOIKQV8

The frame averaging trick must have been used here too, because 6 samples per pixel cannot possibly give such good quality.
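For completeness, the averaging trick itself is easy to sketch. The code below is my own illustration (not the demo's code), assuming the renderer hands over one noisy RGB frame buffer per frame; the averager keeps the last 4 and displays their mean, which smooths the noise while the view is stationary and shows up as ghosting/motion blur when things move:

// My own illustration of the frame-averaging trick (not the demo's code).
#include <array>
#include <cstddef>
#include <vector>

using Frame = std::vector<float>;   // packed RGB floats, width * height * 3

struct FrameAverager
{
    static constexpr int K = 4;     // average over 4 frames, as in the video
    std::array<Frame, K> history;
    int next = 0;                   // slot to overwrite next
    int filled = 0;                 // how many valid frames we have so far

    Frame push(const Frame& noisy)
    {
        history[next] = noisy;
        next = (next + 1) % K;
        if (filled < K) ++filled;

        Frame averaged(noisy.size(), 0.0f);
        for (int f = 0; f < filled; ++f)
            for (std::size_t i = 0; i < averaged.size(); ++i)
                averaged[i] += history[f][i];
        for (std::size_t i = 0; i < averaged.size(); ++i)
            averaged[i] /= float(filled);
        return averaged;            // the buffer that actually gets displayed
    }
};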

Monday, October 6, 2008

Dynamic voxels

In the past couple of weeks, I've learned that there are different methods to efficiently store voxels for GPU raycasting:

- Octrees

- Geometry images (Hoppe 2002, Carr et al., 2006)

- Spatial hashes

- Hybrid acceleration structures such as an octree with bricks (Crassin et al., 2008)
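To make the octree and brick options a bit more concrete, here is a rough sketch of how such a structure is often laid out for GPU traversal. The field names and sizes are my own illustration, not the exact layout from any of the papers above:

// Illustrative sketch of a compact sparse-voxel-octree node with optional bricks.
#include <cstdint>
#include <vector>

struct OctreeNode
{
    std::uint32_t firstChild;  // index of the first of up to 8 children in the node array
    std::uint8_t  childMask;   // bit i set  =>  child i exists
    std::uint8_t  leafMask;    // bit i set  =>  child i is a leaf holding voxel data
    std::uint16_t brickIndex;  // for brick-based schemes: index into the brick pool below
};

// A brick is a small dense block of voxels (e.g. 8x8x8) holding the actual
// material data; the octree only encodes where in space the bricks live.
struct VoxelBrick
{
    static constexpr int N = 8;
    std::uint32_t rgba[N * N * N];     // packed colour/opacity per voxel
};

struct SparseVoxelOctree
{
    std::vector<OctreeNode> nodes;     // node 0 is the root
    std::vector<VoxelBrick> bricks;    // referenced by OctreeNode::brickIndex
};

During raycasting the GPU walks this structure top-down and uses childMask to skip empty children, which is what makes traversal of mostly empty space cheap.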


Jules Urbach said in an article on TechCrunch (http://www.techcrunch.com/2008/08/20/the-truth-behind-liveplaces-photo-realistic-3d-world-and-otoys-rendering-engine/):


We store voxel data in several ways, including geometry maps (see our Siggraph or Iceland presentations, where we show this method applied to the Lightstage 5 structured light data, courtesy Andrew Jones ICT/Graphics lab)


Lightstage 5 is being used to capture performances of real actors as polygon-based animations, which are then converted to voxels and stored in geometry maps (or geometry images, see A Brief Overview of Geometry Maps). So there is at least one way to render characters and dynamic objects through voxel raycasting, without the need for hybrid techniques. The paper by Carr et al. (Fast GPU Ray Tracing of Dynamic Meshes using Geometry Images) shows that "interactive" raycasting of dynamic objects is possible "at no extra cost", although they use geometry images to store triangles rather than voxels. With this method, it's feasible to raycast extremely detailed characters in realtime.
Spatial hash maps could possibly be used for dynamic objects as well.
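For intuition, a geometry image is just a 2D texture whose texels store 3D surface positions (and usually normals), so animating the mesh means rewriting that texture every frame while the regular (u, v) grid structure stays fixed. A hypothetical CPU-side version of the lookup might look like this (names and layout are my own, not Carr et al.'s):

// Illustrative geometry-image lookup: a W x H texture where each texel stores
// the 3D surface position of the mesh at parameter (u, v).
#include <cstddef>
#include <vector>

struct Float3 { float x, y, z; };

struct GeometryImage
{
    int width, height;
    std::vector<Float3> positions;     // width * height texels, row-major

    // Nearest-neighbour fetch of the surface point at (u, v) in [0,1]^2.
    Float3 sample(float u, float v) const
    {
        int x = int(u * (width  - 1) + 0.5f);
        int y = int(v * (height - 1) + 0.5f);
        return positions[std::size_t(y) * width + x];
    }
};

Animating the mesh just means writing a new positions texture each frame; since the (u, v) grid itself never changes shape, the acceleration structure built over it is cheap to update, which is roughly where the "at no extra cost" claim comes from.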


In the TechCrunch article Jules Urbach gives some more info on the rendering methods behind OTOY and the Ruby voxel demo:

- The datasets from the BCN and Ruby city scenes contain up to 64 data layers per voxel, including diffuse albedo, fresnel reflectance values, irradiance data, UV coordinates (up to 8 sets), normals, and, for static scenes, look up vectors for 1-20 bounces of light from up to 252 evenly distributed viewpoints (it is important to note that this data is always 100% optional, as the raycaster can do this procedurally when the voxels are close and reflection precision is more important than speed; however, with cached reflectance data, you might see the scene rendering at 100s-1000s of fps when the scene isn’t changing).

- A note on raytracing vs. rasterization: amplifying the tree trunk in Fincher’s Bug Snuff demo to 28 million polys using the GPU tessellator turned out to be faster than rendering a 28 million voxel point cloud for this object. So there is a threshold where voxels become faster than rasterization, at about 100 million polys. At least in our engine, on R7xx GPUs, using full precision raycasting at 1280×720. Below that point, traditional rasterization using the GPU tessellator seems to be faster for a single viewport.

- The engine can convert a 1 million poly mesh into voxel data in about 1/200th second on R770 (60 fps on R600 and 8800 GTX). This is useful for baking dense static scenes that are procedurally generated once, or infrequently, on the GPU. That is why some of the OTOY demos require the GPU tessellator to look right.

- Hard shadows in OTOY were done using rasterization until we got R770 in May. Now hard shadows, like reflections, can be calculated using raycasting, although shadow masks are still very useful, and raycasting with voxel data can still give you aliasing.

- We can use the raycaster with procedurally generated data (perlin generated terrain or clouds, spline based objects etc.). At Jon Peddie’s Siggraph event, we showed a deformation applied in real time to the Ruby street scene. It was resolution independent, like a Flash vector object, so you could get infinitely close to it with no stair stepping effects, and likewise, the shadow casting would work the same way.

- The voxel data is grouped into the rough equivalent of ‘triangle batches’ (which can be indexed into per object or per material groups as well). This allows us to work with subsets of the voxel data in much the same way we do with traditional polygonal meshes.

- The reflections in the March 2007 ‘Treo’ video are about 1/1000th as precise/fast as the raycasting we now use for the Ruby demo on R770/R700.

- One R770 GPU can render about 100+ viewports at the quality and size shown in the ‘Treo’ video. When scenes are entirely voxel based, the number of simultaneous viewports is less important than the total rendered area of all the viewports combined.

- The server side rendering system currently comprises systems using 8x R770 GPUs (8 GB VRAM, 1.5 kW of power per box).
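The 1-million-poly-mesh-in-1/200th-of-a-second conversion mentioned above is essentially a voxelization pass. OTOY's GPU implementation is certainly far more sophisticated, but a deliberately naive CPU sketch of the idea (entirely my own illustration) is simple: scatter sample points across each triangle and mark the grid cells they land in.

// Deliberately naive, illustrative mesh voxelization (nothing like OTOY's
// actual GPU pass): sample points on each triangle with barycentric
// coordinates and mark the voxel cell each sample falls into.
#include <cstddef>
#include <cstdint>
#include <vector>

struct V3 { float x, y, z; };

struct VoxelGrid
{
    int res;                            // resolution per axis
    float cellSize;                     // world-space size of one voxel
    V3 origin;                          // world-space corner of voxel (0,0,0)
    std::vector<std::uint8_t> filled;   // res^3 occupancy flags

    VoxelGrid(int r, float cs, V3 o)
        : res(r), cellSize(cs), origin(o), filled(std::size_t(r) * r * r, 0) {}

    void mark(const V3& p)
    {
        int x = int((p.x - origin.x) / cellSize);
        int y = int((p.y - origin.y) / cellSize);
        int z = int((p.z - origin.z) / cellSize);
        if (x < 0 || y < 0 || z < 0 || x >= res || y >= res || z >= res) return;
        filled[(std::size_t(z) * res + y) * res + x] = 1;
    }
};

void voxelizeTriangle(const V3& a, const V3& b, const V3& c,
                      VoxelGrid& grid, int steps = 16)
{
    for (int i = 0; i <= steps; ++i)
        for (int j = 0; j <= steps - i; ++j)
        {
            float u = i / float(steps);
            float v = j / float(steps);
            float w = 1.0f - u - v;
            grid.mark({ a.x * u + b.x * v + c.x * w,
                        a.y * u + b.y * v + c.y * w,
                        a.z * u + b.z * v + c.z * w });
        }
}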


The full Ruby demo: http://www.youtube.com/watch?v=sWgQp_LL-Cg

High quality download: http://blip.tv/file/get/Ubergizmo-AMDR700RubyDemo193.mov