
Tuesday, October 30, 2012

Real-time GPU path tracing: Octane Render getting started videos

This is an awesome video showcasing the extreme speed at which Octane Render is able to produce photorealistic images with path tracing (and it's also a nice introduction to Octane). Coffee mills will never be the same after you've seen this:


And this one shows off the powerful material system in Octane, and it's all rendered at full photorealistic quality in real-time:

 

A short real-time test in Stonemason's Backstreets:

Friday, May 6, 2011

CBox Unbiased Truck


I've modified the scene in the Kajiya path tracer a bit more: it now consists of a Cornell box built out of axis-aligned boxes, with the (in)famous truck from Unbiased Truck Soccer:



Color bleeding from the red and green walls:



The screenshots were rendered on an 8600M GT (6 fps in the default view). On a GTS 450, the demo runs at 70 fps in the default view. It should run at >200 fps on a GTX 580 with 8 samples per pixel. This new path tracer is just incredible fun; I can't stop messing with it.

Executable and source code at http://code.google.com/p/tokap-the-once-known-as-pong/downloads/list


UPDATE: a more challenging lighting setup, with an open box illuminated only by the sky:



The truck seen from behind, indirectly lit by skylight bounced off the back and side walls. As expected with standard path tracing, the noise is a lot worse in this scenario. Bidirectional path tracing should converge faster using fewer samples.

Thursday, April 15, 2010

Real-time pathtracing demo shows future of game graphics

Yessss!!! I've been anticipating this for a long time: real-time raytraced, high-quality dynamic global illumination that is practical for games. Until now, the image quality of every real-time raytracing demo that I've seen in the context of a game has been deeply disappointing:

- Quake 3 raytraced (http://www.youtube.com/watch?v=bpNZt3yDXno),
- Quake 4 raytraced (http://www.youtube.com/watch?v=Y5GteH4q47s),
- Quake Wars raytraced (http://www.youtube.com/watch?v=mtHDSG2wNho) (there's a pattern in there somewhere),
- Outbound (http://igad.nhtv.nl/~bikker/projects.htm),
- Let there be light (http://www.youtube.com/watch?v=33yrCV25A14),
- the last Larrabee demo (http://www.youtube.com/watch?v=b5TGA-IE85o), showing an extremely dull Quake Wars scene: a raytraced floating boat in a mountainous landscape with some flying vehicles roaring over. Intel only showed a completely motionless scene, too afraid of revealing the low framerate when navigating,
- the Nvidia demo of the Bugatti at Siggraph 2008 (http://www.youtube.com/watch?v=BAZQlQ86IB4)

All of these demos lack one major feature: realtime dynamic global illumination. They just show Whitted raytracing, which makes the lighting look flat and dull and which, quality-wise, cannot seriously compete with rasterization (which uses many tricks to fake GI, such as baked GI, SSAO, SSGI, instant radiosity, precomputed radiance transfer and spherical harmonics, Crytek's light propagation volumes, ...).

The above videos would make you believe that real-time high-quality dynamic GI is still a long way off. But as the following video shows, that time is much closer than you would think: http://www.youtube.com/watch?v=dKZIzcioKYQ

The technology demonstrated in the video was developed by Jacco Bikker (Phantom on ompf.org, who also developed the Arauna game engine, which uses realtime raytracing) and shows a glimpse of the future of graphics: real-time dynamic global illumination through path tracing (probably bidirectional), computed on a hybrid architecture (CPU and GPU) achieving ~40 Mrays/sec on a Core i7 + GTX 260. There's a dynamic floating object, and each frame accumulates 8 samples/pixel before being displayed. There are caustics from the reflective ring, cube and cylinder, as well as motion blur. The beauty of path tracing is that it inherently provides photorealistic graphics: there's no extra coding effort required to get soft shadows, reflections, refractions and indirect lighting; it all works automagically (it also handles caustics, though not very efficiently). The photorealism is already there; now it's just a matter of speeding it up through code optimization, new algorithms (stochastic progressive photon mapping, Metropolis Light Transport, ...) and of course better hardware (CPU and GPU).
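To make that "automagically" concrete, here's a minimal toy diffuse path tracer (entirely my own sketch, not Bikker's code; the scene, the cosine-weighted bounce and every name in it are illustrative assumptions). The sky is the only light source, yet soft shadows, indirect lighting and color bleeding between the spheres and the floor simply fall out of the random bounces:

```python
# Toy diffuse path tracer: a sky light, a floor sphere and two colored spheres.
# Soft shadows, indirect light and color bleeding all come from the random
# bounces alone; the single recursive call below is the entire "GI system".
import math, random

def normalize(v):
    s = math.sqrt(sum(x * x for x in v))
    return tuple(x / s for x in v)

SPHERES = [  # (center, radius, albedo)
    ((0.0, -1001.0, 5.0), 1000.0, (0.75, 0.75, 0.75)),  # huge sphere acting as the floor
    ((-1.2, 0.0, 5.0), 1.0, (0.90, 0.20, 0.20)),         # red ball
    ((1.2, 0.0, 5.0), 1.0, (0.20, 0.90, 0.20)),          # green ball
]

def nearest_hit(o, d):
    """Closest sphere hit along the ray o + t*d (d must be unit length)."""
    best = None
    for c, r, alb in SPHERES:
        oc = tuple(o[i] - c[i] for i in range(3))
        b = sum(oc[i] * d[i] for i in range(3))
        disc = b * b - sum(x * x for x in oc) + r * r
        if disc > 0.0:
            t = -b - math.sqrt(disc)
            if t > 1e-4 and (best is None or t < best[0]):
                best = (t, c, alb)
    return best

def radiance(o, d, depth=0):
    hit = nearest_hit(o, d)
    if hit is None:
        return (1.0, 1.0, 1.0)     # the sky is the only light source
    if depth >= 4:
        return (0.0, 0.0, 0.0)     # cut the path after a few bounces
    t, c, alb = hit
    p = tuple(o[i] + t * d[i] for i in range(3))
    n = normalize(tuple(p[i] - c[i] for i in range(3)))
    # cosine-weighted diffuse bounce: normal plus a random unit vector
    while True:
        v = tuple(random.uniform(-1.0, 1.0) for _ in range(3))
        if 0.0 < sum(x * x for x in v) <= 1.0:
            break
    u = normalize(v)
    bounce = normalize(tuple(n[i] + u[i] for i in range(3)))
    li = radiance(p, bounce, depth + 1)    # indirect light: just recurse
    return tuple(alb[i] * li[i] for i in range(3))

# 8 samples per pixel, like each frame of the demo accumulates:
d = normalize((-1.2, -0.1, 5.0))           # aim roughly at the red ball
pixel = [0.0, 0.0, 0.0]
for _ in range(8):
    s = radiance((0.0, 0.0, 0.0), d)
    pixel = [pixel[i] + s[i] / 8.0 for i in range(3)]
print(pixel)                               # noisy estimate of that pixel's color
```

The single recursive call in radiance() is the whole lighting system; a rasterizer would need a separate technique for each of those effects.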

The video is imo a proof of concept of the feasibility of realtime pathtraced games: despite the low resolution, low framerate, low geometric complexity and the noise, there is an undeniable beauty to the unified global lighting for static and dynamic objects. I like it very, very much. I think a Myst- or Outbound-like game would be ideally suited to this technology: it's slow-paced, you often hold still to inspect the scene for clues (so it's very tolerant of low framerates), and it contains only a few dynamic objects. I can't wait to see the kind of games built with this technology. Photorealistic game graphics with dynamic, high-quality global illumination for everything are a major step closer to becoming reality.

UPDATE: I've found a good mathematical explanation for the motion blur you're seeing in the video, which was achieved by averaging the samples of 4 frames (http://www.reddit.com/r/programming/comments/brsut/realtime_pathtracing_is_here/):
it is because there is too much variance in the lighting in this scene for the number of samples each frame takes to integrate the rendering equation (8; typically 'nice' results start at 100+ samples per pixel). Therefore you get noise which (if they implemented their pathtracer correctly) is unbiased. This in turn means that the amount of noise is proportional to the inverse of the square root of the number of samples. By averaging over 4 frames, they halve the noise as long as the camera is not moving.
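Spelled out (my own restatement of the argument, not part of the reddit comment): with N independent radiance samples per pixel, each with variance σ², the Monte Carlo estimate behaves as

```latex
% N independent radiance samples L_1, ..., L_N per pixel, each with variance \sigma^2:
\hat{L}_N = \frac{1}{N}\sum_{i=1}^{N} L_i ,\qquad
\operatorname{Var}\!\left[\hat{L}_N\right] = \frac{\sigma^2}{N} ,\qquad
\text{noise} = \sqrt{\operatorname{Var}\!\left[\hat{L}_N\right]} = \frac{\sigma}{\sqrt{N}} .
% With a static camera, averaging 4 frames of 8 samples each gives N = 32,
% i.e. \sqrt{4} = 2 times less noise than a single 8-sample frame.
```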


UPDATE 2: Jacco Bikker uploaded a new, even more amazing video to YouTube, showing a rotating light globally illuminating the scene with path tracing in real-time at 14-18 fps (frame time: 55-70 ms)!
http://www.youtube.com/watch?v=Jm6hz2-gxZ0&playnext_from=TL&videos=ZkGZWOIKQV8

The frame averaging trick must have been used here too, because 6 samples per pixel cannot possibly give such good quality.

Saturday, August 2, 2008

Voxel ray tracing vs polygon ray tracing

Carmack's thoughts about ray tracing:


I think that ray tracing in the classical sense, of analytically intersecting rays with conventionally defined geometry, whether they be triangle meshes or higher order primitives, I’m not really bullish on that taking over for primary rendering tasks which is essentially what Intel is pushing. But, I do think that there is a very strong possibility as we move towards next generation technologies for a ray tracing architecture that uses a specific data structure, rather than just taking triangles like everybody uses and tracing rays against them and being really, really expensive. It involves ray tracing into a sparse voxel octree which is essentially a geometric evolution of the mega-texture technologies that we’re doing today for uniquely texturing entire worlds. It’s clear that what we want to do in the following generation is have unique geometry down to the equivalent of the texel across everything.

There are some interesting things to note in there:

- ray tracing in the classical sense, in which rays intersect with triangles, is far too expensive for use in games, even with next generation hardware

- the sparse voxel octree format permits unique geometry


Octrees can be used to accelerate ray tracing and store geometry in a compressed format at the same time.

Quote from a game developer (Rare) on the voxel octree:


Storing data in an octree is far more efficient than storing it using textures and polygons (it's basically free compression for both geometry and texture data). It's primarily cool because you stop traversing when the size of the pixel is larger than the projected cell, so you don't even need to have all your data in memory, but can stream it in on demand. This means that the amount of data truly is unlimited, or at least the limits are with the artists producing it. You only need a fixed amount of voxels loaded to view a scene, and that doesn't change regardless of how big the scene is. The number of voxels required is proportional to the number of pixels on the screen. This is true regardless of how much data you're rendering! This is not true for rasterization unless you have some magical per-pixel visibility and LOD scheme to cut down the number of pixels and vertices to process, which is impossible to achieve in practice. Plus ray casting automatically gives you exact information on what geometry needs to be loaded in from disk, so it's a "perfect" streaming system, whereas with rasterization it would be very difficult to incrementally load a scene depending on what's visible (because you need to load the scene before you know what's visible!).
If you want to model micrometer detail, go ahead, it won't be loaded into memory until someone zooms in close enough to see it. Voxels that are not intersected can be thrown out of memory. Of course you would keep some sort of cache and throw things out on a least recently used basis, but since it's hierarchical you can just load in new levels in the hierarchy only when you hit them.


Voxels have some very interesting benefits compared to polygons:

- It's a volumetric representation, so you can model very fine details and bumps without the need for bump mapping. Particle effects like smoke, fire and foam can be efficiently rendered without resorting to hacks. Voxels are also being used by some big Hollywood special effects studios to render hair, fur and grass.

- id wants to use voxels to render everything static as real geometry, without using normal maps.

- Voxels can store a color and a normal. For the renderer, textures and geometry are essentially the same.

- The position of a voxel is defined implicitly by the structure that holds it (the octree). Here's the good part: this structure represents both the primitives that need to be intersected and the spatial subdivision of those primitives. So, in contrast to triangle ray tracing, which needs a separate spatial subdivision structure (kd-tree, BVH, ...), voxels are already organized in a grid or an octree (which does not mean that other structures can't be used as well). So for voxel ray tracing, octrees are perfect.

- Voxels are very cheap primitives to intersect, much cheaper than triangles. This is probably their biggest benefit when choosing between voxel and polygon ray tracing.

- A voxel octree permits very natural multiresolution rendering. There's no need to go deeper into the octree when the size of a pixel is larger than the underlying cell, so you don't display detail that isn't necessary and you don't stream in data that isn't visible either (see the traversal sketch after this list).

- Voxels are extremely well suited for local effects (voxel ray casting). In contrast to triangle rasterization, there are no problems with transparency, refraction, ... There are also major benefits artwise: because voxels are volumetric, you can achieve effects like erosion, aging materials, wear and tear by simply changing the iso value.

- Ray casting voxels is much less sensitive to scene complexity than triangles

(partly translated from http://forum.canardplus.com/showpost.php?p=1257790&postcount=96)
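As a rough illustration of the traversal cut described above (stop refining once a cell's projected footprint is smaller than a pixel), here is a small sketch; the SvoNode layout, the pinhole projection formula and all names are my own simplifications, not any actual engine's data structure:

```python
# Rough sketch of the level-of-detail cut: stop descending the sparse voxel
# octree as soon as a cell's projected footprint is smaller than one pixel.
import math
from dataclasses import dataclass, field

@dataclass
class SvoNode:
    size: float          # world-space edge length of this cube
    color: tuple         # prefiltered (averaged) color of everything inside
    children: list = field(default_factory=list)   # empty = leaf, or not yet streamed in

def projected_pixels(node_size, distance, screen_height_px=1080, vfov_deg=60.0):
    """Approximate on-screen footprint, in pixels, of a cube of edge node_size
    seen at the given distance (small-angle pinhole approximation)."""
    pixels_per_radian = screen_height_px / math.radians(vfov_deg)
    return (node_size / max(distance, 1e-6)) * pixels_per_radian

def shade(node, distance):
    # LOD cut: once the cell is smaller than a pixel, its averaged color is enough,
    # so deeper levels are never visited and never need to be resident in memory.
    if not node.children or projected_pixels(node.size, distance) < 1.0:
        return node.color
    # A real traversal would pick the child the ray actually enters; as a
    # placeholder we simply descend into the first child.
    return shade(node.children[0], distance)

# usage: the same 1 m cube is refined up close but not when it is far away
leaf = SvoNode(0.125, (0.8, 0.3, 0.2))
root = SvoNode(1.0, (0.5, 0.5, 0.5), [SvoNode(0.5, (0.6, 0.4, 0.3), [leaf])])
print(shade(root, 2.0))      # close: descends to the fine voxel
print(shade(root, 2000.0))   # far: stops immediately at the root's averaged color
```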


Disadvantages of voxel ray tracing vs polygon ray tracing:

- Memory. Voxel data sets are huge relative to polygon data. But this doesn't have to be a problem, since all data can be streamed in. It does, however, create new challenges when the point of view changes rapidly and a lot of new data bricks have to be streamed in at once. Voxel sets have the benefit over polygons that subsets can be loaded in, which permits some sort of progressive refinement. Other possible solutions are: using faster hard disks or solid state drives to accelerate the streaming, limiting depth traversal during fast camera movement, or masking the streaming with motion blur or depth of field postprocessing (a small brick-cache sketch follows this list).

- Animation of voxels requires specialized tools

- Disadvantages of ray tracing in general: dynamic objects require the octree to be updated in realtime. However, there are solutions for dynamic objects which don't require updating the octree, such as building a deformation lattice around dynamic objects and bending the rays cast into the scene as they hit the lattice. id Tech 6 plans to tackle the problem of having many dynamic objects with hybrid rendering.
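The on-demand streaming with least-recently-used eviction mentioned in the Rare quote and in the memory point above could, in its simplest form, look like this (a minimal sketch with hypothetical names; a real renderer would stream compressed octree bricks from disk asynchronously):

```python
# Minimal on-demand voxel brick cache with least-recently-used eviction.
from collections import OrderedDict

class BrickCache:
    def __init__(self, capacity, load_brick):
        self.capacity = capacity          # max number of voxel bricks kept resident
        self.load_brick = load_brick      # callback that reads a brick from disk
        self.bricks = OrderedDict()       # brick_id -> voxel data, oldest first

    def get(self, brick_id):
        if brick_id in self.bricks:
            self.bricks.move_to_end(brick_id)      # mark as most recently used
            return self.bricks[brick_id]
        data = self.load_brick(brick_id)           # miss: stream it in
        self.bricks[brick_id] = data
        if len(self.bricks) > self.capacity:
            self.bricks.popitem(last=False)        # evict the least recently used brick
        return data

# usage: only bricks actually pierced by rays ever get loaded
cache = BrickCache(capacity=2, load_brick=lambda i: f"voxels of brick {i}")
for brick_id in [7, 7, 3, 9, 7]:          # ids produced by ray traversal
    cache.get(brick_id)
print(list(cache.bricks))                 # [9, 7] -> brick 3 was evicted
```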

More on dynamic raytracing:

Dynamic Acceleration Structures for Interactive Ray Tracing, Reinhard, E., Smits, B., and Hansen, C., in Proc. Eurographics Workshop on Rendering, pp. 299-306, June 2000. Summary: This system uses a grid data structure, allowing dynamic objects to be easily inserted or removed. The grid is tiled in space (i.e. it wraps around) to avoid problems with fixed boundaries. They also implement a hierarchical grid with data in both internal and leaf nodes; objects are inserted into the optimal level.

Towards Rapid Reconstruction for Animated Ray Tracing, Lext and Akenine-Moller, Eurographics 2001. Summary: Each rigid dynamic object gets its own grid acceleration structure, and rays are transformed into this local coordinate system. Surprisingly, they show that this scheme is not a big win for simple scenes, because in simple scenes it is possible to completely rebuild the grid each frame using only about a quarter of the runtime. But, this would probably not be true for a k-d or BSP tree.

Distributed Interactive Ray Tracing of Dynamic Scenes, Wald, Benthin, and Slusallek, Proc. IEEE Symp. on Parallel and Large-Data Visualization and Graphics (PVG), 2003. Summary: This system uses ray transformation (into object coordinate system) for rigid movement, and BSP rebuild for unstructured movement. A top-level BSP tree is rebuilt every frame to hold bounding volumes for the moving objects. Performance is still an issue for unstructured movement.

Interactive Space Deformation with Hardware Assisted Rendering, IEEE Computer Graphics and Applications, Vol 17, no 6, 1997, pp. 66-77. Summary: Instead of deforming objects directly, this system deforms the space in which they reside (using 1-to-1 deformations). During raytracing, the rays are deformed into the object space instead of deforming the objects into the ray space. However, the resulting deformed rays are no longer straight, so they must be discretized into short line segments to perform the actual ray-object intersection tests.
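The ray-transformation idea from the Lext/Akenine-Moller and Wald et al. papers above boils down to re-expressing each ray in a rigid object's local frame instead of rebuilding that object's acceleration structure every frame. A rough sketch under my own assumptions (4x4 object-to-world matrices, numpy, made-up function names):

```python
# Transform a world-space ray into a rigid object's local coordinate system,
# so the object's static (object-space) acceleration structure can be reused
# every frame without rebuilding it.
import numpy as np

def transform_ray_to_object_space(origin, direction, object_to_world):
    """Return the ray expressed in the object's local coordinate system."""
    world_to_object = np.linalg.inv(object_to_world)              # 4x4 homogeneous matrix
    local_origin = (world_to_object @ np.append(origin, 1.0))[:3]    # point: w = 1
    local_dir = (world_to_object @ np.append(direction, 0.0))[:3]    # vector: w = 0
    return local_origin, local_dir   # intersect the static object-space BVH/grid with these

# usage: an object rotated 90 degrees about Y and moved to x = 5
c, s = 0.0, 1.0
object_to_world = np.array([[  c, 0.0,   s, 5.0],
                            [0.0, 1.0, 0.0, 0.0],
                            [ -s, 0.0,   c, 0.0],
                            [0.0, 0.0, 0.0, 1.0]])
o, d = transform_ray_to_object_space(np.array([0.0, 0.0, 0.0]),
                                     np.array([1.0, 0.0, 0.0]),
                                     object_to_world)
print(o, d)   # the same ray, re-expressed in the object's own frame
```

Because the transform is rigid, the hit distance found in object space is also the world-space hit distance; the hit point is mapped back with object_to_world (and the normal with its rotation part).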


Ray casting free-form deformed-volume objects, Haixin Chen, Jürgen Hesser, Reinhard Männer. Summary: A collection of techniques is developed for ray casting free-form deformed-volume objects with high quality and efficiency. The known inverse ray deformation approach is combined with free-form deformation to bend the rays in the direction opposite to the deformation, producing an image of the deformed volume without generating an actually deformed intermediate volume. The local curvature is estimated and used for the adaptive selection of the length of the polyline segments that approximate the inversely deformed ray trajectories; longer polyline segments can thus be automatically selected in regions of small curvature, reducing deformation calculations without losing the spatial continuity of the simulated deformation. An efficient method is developed for estimating the local deformation function, and its Jacobian is used to adjust the opacity values and normal vectors computed from the original volume, guaranteeing that the deformed spatial structures are correctly rendered. Popular ray casting acceleration techniques, like early ray termination and space leaping, are incorporated into the deformation procedure, providing a speed-up factor of 2.34-6.56 compared to the non-optimized case.



More info on id Tech 6 and voxel ray casting in the ompf thread

Friday, August 1, 2008

id, Voxels and Ray Tracing

According to this article, the full Ruby demo will be shown to the public at Siggraph 2008.

My interest in this demo is, apart from the photorealistic quality, based on two things: the GPU ray tracing and the voxel-based rendering. Never before have I seen a raytraced (CPU or GPU) scene of this scope and quality in realtime. Urbach has stated in the videos that his raytracing algorithm is not 100% accurate, but nevertheless I think it looks absolutely amazing.
At Siggraph 2008, there will be a panel discussion on realtime ray tracing, where Jules Urbach will be the special guest. Hopefully, there will be more info on the ray tracing part then.


On to the voxels...
In March of this year, John Carmack stated in an interview that he was investigating a new rendering technique for his next-generation engine (id Tech 6), which involves raycasting into a sparse voxel octree. This has spurred renewed interest in voxel rendering, and parallels with the new Ruby demo were quickly drawn.

Today's GPUs are already blazingly fast when it comes to polygon rendering and don't break a sweat in the multimillion-triangle scenes of Crysis. So there must be a good reason why some developers are spending time and energy on voxel rendering. John Carmack explains it like this in the interview:


It’s interesting that if you look at representing this data in this particular sparse voxel octree format it winds up even being a more efficient way to store the 2D data as well as the 3D geometry data, because you don’t have packing and bordering issues. So we have incredibly high numbers; billions of triangles of data that you store in a very efficient manner. Now what is different about this versus a conventional ray tracing architecture is that it is a specialized data structure that you can ray trace into quite efficiently and that data structure brings you some significant benefits that you wouldn’t get from a triangular structure. It would be 50 or 100 times more data if you stored it out in a triangular mesh, which you couldn’t actually do in practice.

Jon Olick, programmer at id Software, provided some interesting details about the sparse voxel octree raycasting in this ompf thread. He will also give a talk on the subject at Siggraph.

In the ompf thread, there are also a number of interesting links to research papers about voxel octree raycasting:

A single-pass GPU ray casting framework for interactive out-of-core rendering of massive volumetric datasets, Enrico Gobbetti, Fabio Marton and José Antonio Iglesias Guitián, 2008
http://www.crs4.it/vic/cgi-bin/bib-page.cgi?id=

Interactive Gigavoxels, Cyril Crassin, Fabrice Neyret, Sylvain Lefebvre, 2008
http://artis.imag.fr/Publications/2008/CNL08/

Ray tracing into voxels compressed into an octree
http://www.sci.utah.edu/~wald/Publications/2007///MROct/download//mroct.pdf

The octree texture, Sylvain Lefebvre
http://lefebvre.sylvain.free.fr/octreetex/

The difference between id Tech 6 and Otoy is the way the voxels are rendered: id's sparse voxel octree tech is about voxel ray casting (primary rays only), while Otoy does voxel raytracing, which allows for raytraced reflections and possibly even raytraced shadows and photon mapping.

Otoy, Transformers and Ray Tracing

After the Ruby/LightStage demo, 4 other videos appeared as part of an article about Otoy on TechCrunch. Urbach explains that he started experimenting with Renderman code on graphics hardware during the making of Cars in 2005. This work caught the interest of ILM, who gave Urbach the models from the Transformers movie to render in realtime. Urbach and his team made 4 commercials for the Transformers movie that were rendered and directed in realtime on graphics hardware. Afterwards, he was contacted by Sony to work on the Spider-Man movie.



The 4 videos:


Video 1 OTOY Demo

This video shows short clips of realtime-rendered Transformers sequences.



Video 2 Jules Urbach explains OTOY's real-time graphics rendering

In this video, Urbach talks about his experiments with GPU ray tracing in 2005, the Transformers trailers and the voxel raytracing for the Ruby demo. For the tests with Cars in 2005, he was able to do "realtime raytraced reflections with up to 20 bounces of light". He also implemented a realtime global illumination technique. For the new Ruby demo, he is actually "raytracing the entire scene" and "not using the vertex pipeline anymore". Thanks to the voxel rendering, "the level of detail becomes infinite".



Video 3 OTOY Graphics Rendered in the Browser

This video shows the server-side rendering capabilities of Otoy. It shows Urbach interacting with scenes from the Transformers trailers, which are being rendered in realtime on his GPU servers and streamed over the net into the browser.

Urbach mentions "raytraced reflections on the windows". When he switches to nighttime, he says that "in this particular demo, there's no baked lighting, nothing is precomputed", and that there are "hundreds of lights in the building rendered in realtime".

The demo runs on three graphics cards (3x RV770): one card renders the ILM Optimus, the second card renders the G1 Optimus Prime, and the third card renders the city and the raytraced reflections on the windows.



Video 4 Jules Urbach of OTOY Explains LightStage

Video about LightStage, slightly more elaborate than this one.



There is also a video of the full AMD Cinema 2.0 event in which Urbach talks a bit about ray tracing on GPUs (from 41:00 to 47:00) and goes more in-depth during the Q&A session (from 72:00 to 88:00):



- Urbach has been talking to game publishers to start integrating the relighting part of Otoy in existing game engines

- Otoy can do full raytracing, but also supports hybrid rendering. It can convert any polygonal mesh to voxels

- The Ruby demo does not use any polygons, only voxels

- For games, Urbach thinks hybrid rendering will be the way to go "for a very long time"

- With this technology, game developers will need a different way of working. Basically, they're saying that you can make a photorealistic game, but the workload on the artist side will be astronomical

- In 2005, Urbach started out writing approximations to Renderman code during the making of Cars. At the time, he used cheats for ray tracing and reflections. In three years, GPUs have evolved so quickly that the latest hardware makes realtime ray tracing possible that is “99% accurate”

- Voxel data sets are huge, but with voxel based rendering you can load only subsets of the voxel space, which is not possible with polygons. You can also choose which texture layers to load

- Compression and decompression of the voxel data is CPU bound. What takes 3 seconds to decompress on a CPU can be done at a “thousand frames per second” on a GPU.

- What's interesting, according to Urbach, is that in 2005 he started out writing approximations to ray tracing, but the latest generation of hardware allows him to do ray tracing that gets really close to the 100% point





Urbach also showed another Otoy demo at the AMD event, called Bug Snuff. It shows a photorealistic scene with a scorpion, rendered in realtime and directed by David Fincher. Really impressive stuff!






Lastly, the ompf thread where it all started: http://ompf.org/forum/viewtopic.php?f=6&t=882

Thanks to all the ompf members and guests who participated and contributed to the thread.