Download executable: http://code.google.com/p/tokap-the-once-known-as-pong/downloads/list
Increasing the emission values (RGB) of the "sun" to 4, 4, 2 makes the caustics more obvious:
Quasi-random, more or less unbiased blog about real-time photorealistic GPU rendering
- Crysis 2 rendered in the cloud at the highest settings and streamed to an iPad using OTOY's tech
- Games can be rendered for 16 concurrent users with a single GPU
- (around the 16:00 mark) path tracing!!! A very short clip was shown where Jules manipulates an extremely high-detail model from the Transformers movie (created by ILM) on an iPhone in real time, rendered in the cloud with path tracing and displayed at 60 fps. Path tracing will scale to as many servers as are available. This will really revolutionize the way games and films are made. A blurry picture below:
- Software tools such as Blender will be delivered through the cloud with OTOY
- WebCL! The next logical step after WebGL, which will make the GPU computing power of the cloud accessible through a web browser. Very interesting.
- Operating systems, next-gen consoles, Blu-Ray discs will become irrelevant when all apps run in the cloud
- The same assets from the Gaiking movie (to be released next year) will be used in a Gaiking game that can only be played in the cloud, due to the massive computing resources it will require to render the graphics in real time. Tantalizing... :-D
Screen from the Gaiking teaser trailer:
"Rendered with PyBlenderSpud (RenderSpud Blender plugin). 16 samples/pixel path tracing at 720x480, 15-20 seconds per frame to render the 70-frame sequence."The image quality for just 16 samples/pixel looks great and 20 seconds per frame is not bad for that resolution, especially when keeping in mind that this is just CPU path tracing. If this would be optimized and ported to the GPU, it could probably reach rendertimes of 1 second per frame or less (at 720x480 and 16 samples/pixel) when using multiple GPUs.
"Thus, we don't expect Project Denver to appear before late 2012 or early 2013 - in line with Maxwell GPU architecture, which is expected to integrate Project Denver architecture and become the first shipping GPU which could boot an operating system. It would not be the first GPU to boot an operating system, though. According to several PR representatives, the company already managed to boot a special build of Linux using Fermi GPU, but resources for that were abandoned as it proved too much of a hassle."
"In theory, Project Denver cores inside the Maxwell GPU die should enjoy access to 2+TB/s of internal bandwidth and potentially beyond currently possible 320GB/s of external memory bandwidth (using 512-bit interface and high-speed GDDR5 memory). If nVidia delivers this architecture as planned, we might see quite a change in the market - given that neither CPUs from AMD or Intel don't have as high system bandwidth as contemporary graphics cards."
"I think Jacopo Pantaleoni's "HLBVH" paper at High Performance Graphics this year will be looked back on as a watershed for ray tracing of dynamic content. He can sort 1M utterly dynamic triangles into a quality acceleration structure at real-time rates, and we think there's more headroom for improvement. So to answer your question, with techniques like these and continued advances in GPU ray traversal, I would expect heavy ray tracing of dynamic content to be possible in a generation or two."