Sunday, October 19, 2014

Scratch-a-pixel and more

Having left Otoy some time ago and after enjoying a sweet-as holiday, it's time for new and exciting things. Lots of interesting rendering-related stuff has happened in the past months; below are some of the most fascinating developments in my opinion:

- starting off, there's an excellent online tutorial series on computer graphics (mostly ray tracing) for both beginners and experts called Scratch-a-Pixel. The authors are veterans from the VFX, animation and game industry with years of experience writing production rendering code such as RenderMan. The tutorials deal with all the features that are expected from a production renderer and contain a lot of background and insights into the science of light, as well as tips and tricks on how to write performant, well-optimized ray tracing code. Rendering concepts like the CIE xyY colorspace and esoteric mathematical subjects like discrete Fourier transforms, harmonics and integration of orthonormal polynomials are explained in an easy-to-digest manner. Most tutorials also come with C++ source code. At the moment some sections are missing or incomplete, but the author told me there's a revamp of the website coming very soon... 

- hybrid rendering (rasterization mixed with ray tracing) for games has finally arrived with the recent release of Unreal Engine 4.5, which supports ray traced soft shadows and ambient occlusion via signed distance fields (these can be faster to compute than traditional shadow mapping, but work only for static geometry): https://docs.unrealengine.com/latest/INT/Engine/Rendering/LightingAndShadows/RayTracedDistanceFieldShadowing/index.html 


A nice video of the technique in action: http://www.youtube.com/watch?v=4249b94KtyA
Like voxels and triangles, distance fields are another way to represent scene geometry. Just like voxels, distance fields approximate the scene geometry, which makes them more efficient to trace than triangles for low-frequency effects such as soft shadows, ambient occlusion and global illumination that don't require 100% geometric accuracy (their approximate nature also gives them inherent multiresolution characteristics). Inigo Quilez wrote a few interesting articles on rendering with distance fields back in 2008:

Free penumbra shadows for raymarching distance fields

More on distance fields:
Distance fields in Unreal Engine
Alex Evans from Media Molecule invented a neat trick to approximate AO and GI with distance fields in "Fast Approximations for Global Illumination for Dynamic Scenes"

There's also a very recent paper about speeding up sphere tracing for rendering and path tracing of signed distance fields: Enhanced Sphere Tracing
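For those who have never played with this: the core of both ideas fits in a handful of lines. Below is a minimal, hypothetical C++ sketch of plain sphere tracing through an SDF plus Quilez's soft shadow trick (the scene function, constants and step counts are made up for illustration; this is not code from UE4 or from any of the papers above):

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3  add(Vec3 a, Vec3 b)  { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3  mul(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }
static float length(Vec3 a)       { return std::sqrt(a.x * a.x + a.y * a.y + a.z * a.z); }

// Hypothetical scene: a unit sphere at the origin resting on a ground plane at y = -1.
float sceneSDF(Vec3 p)
{
    float sphere = length(p) - 1.0f;
    float plane  = p.y + 1.0f;
    return std::min(sphere, plane);
}

// Classic sphere tracing: stepping by the distance to the nearest surface
// is always a safe step for a signed distance field.
bool sphereTrace(Vec3 origin, Vec3 dir, float maxDist, float& tHit)
{
    float t = 0.0f;
    for (int i = 0; i < 128 && t < maxDist; ++i) {
        float d = sceneSDF(add(origin, mul(dir, t)));
        if (d < 1e-4f) { tHit = t; return true; }
        t += d;
    }
    return false;
}

// Quilez-style soft shadow: the penumbra factor is the smallest ratio of
// "distance to the nearest occluder" over "distance travelled along the
// shadow ray", scaled by k (smaller k = softer shadows). A blocked ray
// returns 0, i.e. full shadow.
float softShadow(Vec3 p, Vec3 lightDir, float k)
{
    float shadow = 1.0f;
    float t = 0.02f;                        // small offset to avoid self-shadowing
    for (int i = 0; i < 64 && t < 20.0f; ++i) {
        float d = sceneSDF(add(p, mul(lightDir, t)));
        if (d < 1e-4f) return 0.0f;         // hard occlusion
        shadow = std::min(shadow, k * d / t);
        t += d;
    }
    return shadow;                          // in [0,1], multiplies the direct light
}
```

The sphere tracing paper linked above essentially attacks the step count in loops like these (overrelaxation, better termination criteria), which is where all the time goes.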

- one of the most interesting Siggraph 2014 surprises must be the announcement from Weta (the New Zealand based visual effects studio that created the CG effects for blockbusters like The Lord of the Rings, King Kong, Avatar, Tintin and The Hobbit movies) that they are developing their own production path tracer called Manuka (the Maori name for New Zealand's healing tea tree), in conjunction with Gazebo, a physically plausible real-time GPU renderer. While Manuka has been used to render just a couple of shots in "The Hobbit: The Desolation of Smaug", it will be the main renderer for the next Hobbit film. More details are provided in this extensive fxguide article: http://www.fxguide.com/featured/manuka-weta-digitals-new-renderer/ Another surprise was Solid Angle (creators of Arnold) unveiling an OpenCL accelerated renderer prototype running on the GPU. There's not much info to be found apart from a comment on BlenderArtists.org by Solid Angle's Mike Farnsworth ("This is a prototype written by another Solid Angle employee (not Brecht), and it is not Arnold core itself. It's pretty obvious we're experimenting, though. We've been keeping a close eye on GPUs and have active communication with both AMD and Nvidia (and also obviously Intel). I wouldn't speculate on what the prototype is, or what Brecht is up to, because you're almost certainly going to be wrong.")

- Alex St John, ex-Microsoft and one of the creators of the DirectX API, has recently moved to New Zealand and aims to create the next standard API for real-time graphics rendering using CUDA GPGPU technology. More details on his blog: http://www.alexstjohn.com/WP/blog/. His post on his visit to Weta contains some great insights into the CUDA accelerated CG effects created for The Desolation of Smaug. 

- Magic Leap, an augmented reality company founded by a biomedical engineer, recently got an enormous investment from Google and is working with a team at Weta in New Zealand to create imaginative experiences. Info available on the net suggests they are developing a wearable device that projects 3D images directly onto the viewer's retina, seamlessly integrating them with the real-life scene by projecting multiple images with a depth offset. Combined with Google Glass it could create games that are grounded in the real world like this: http://vimeo.com/109214393 (the augmented reality objects are rendered with Octane Render). 

- the Lab for Animate Technologies at the University of Auckland in New Zealand is doing cutting-edge research into the first real-time, autonomously animated AI avatar: http://vimeo.com/97186687 
The facial animation is driven in real-time by artificial intelligence using concepts from computational neuroscience, and is based on a physiological simulation of the human brain that is incredibly deep and complex (I was lucky to get a behind-the-scenes look): it includes the information exchange pathways between the retina, the thalamic nuclei and the visual cortex, including all the feedback loops, and also mimics low-level single-neuron phenomena such as the release of neurotransmitters and hormones like dopamine, epinephrine and cortisol. All of these neurobiological processes together drive the avatar's thoughts, reactions and facial animation through a very detailed facial muscle system, which is probably the best in the industry (Mark Sagar, the person behind this project, was one of the original creators of the USC Lightstage and pioneered facial capturing and rendering for Weta on King Kong and Avatar). More info at http://thecreatorsproject.vice.com/blog/baby-x-the-intelligent-toddler-simulation-is-getting-smarter-every-day and https://www.youtube.com/watch?v=tfACJcgCGv0. It's one of the most impressive things I've ever seen, and it's actually happening now. 

Wednesday, January 22, 2014

Object-order ray tracing for fully dynamic scenes

Today, the GPU Pro blog posted a very interesting article about a novel technique which seamlessly unifies rasterization- and ray tracing-based rendering for fully dynamic scenes. The technique, entitled "Object-order Ray Tracing for Fully Dynamic Scenes", will be described in the upcoming GPU Pro 5 book (to be released on March 25, 2014 during the GDC conference) and was developed by Tobias Zirr, Hauke Rehfeld and Carsten Dachsbacher.

Abstract (taken from http://cg.ibds.kit.edu/ORTFDS.php)
This article presents a method for tracing incoherent secondary rays that integrates well with existing rasterization-based real-time rendering engines. In particular, it requires only linear scene access and supports fully dynamic scene geometry. All parts of the method that work with scene geometry are implemented in the standard graphics pipeline. Thus, the ability to generate, transform and animate geometry via shaders is fully retained. Our method does not distinguish between static and dynamic geometry. Moreover, shading can share the same material system that is used in a deferred shading rasterizer. Consequently, our method allows for a unified rendering architecture that supports both rasterization and ray tracing. The more expensive ray tracing can easily be restricted to complex phenomena that require it, such as reflections and refractions on arbitrarily shaped scene geometry. Steps in rendering that do not require the tracing of incoherent rays with arbitrary origins can be dealt with using rasterization as usual.

This is, to my knowledge, the first practical implementation of the so-called hybrid rendering technique, which mixes ray tracing and rasterization by plugging a ray tracer into an existing rasterization-based rendering framework and sharing the traditional graphics pipeline. Since no game developer in their right mind will switch to pure ray tracing overnight, this seems to be the most sensible and commercially viable approach to introduce real ray traced, high quality reflections of dynamic objects into game engines in the short term, without having to resort to complicated hacks like screen space ray tracing for reflections (as seen in e.g. Killzone Shadow Fall, UE4 tech demos and CryEngine) or cubemap arrays, which never really look right and come with a lot of limitations and artifacts. For example, in the screenshot of the new technique shown below you can see the reflection of the sky, which would simply be impossible with screen space reflections from this camera angle.
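For readers who haven't looked at how screen space reflections work under the hood, here's a rough, hypothetical C++ sketch of the usual depth-buffer ray march (this isn't code from any of the engines mentioned). It makes the limitation obvious: the reflected ray can only ever hit what was already rasterized into the current frame, so anything off screen, like the sky behind the camera, simply doesn't exist as far as the reflection is concerned.

```cpp
#include <optional>

struct Vec2 { float x, y; };
struct Vec3 { float x, y, z; };

// Minimal stand-in for data a deferred renderer already has lying around.
struct GBuffer {
    int width, height;
    const float* depth;                                  // linear view-space depth per pixel
    float depthAt(int x, int y) const { return depth[y * width + x]; }
};

// Hypothetical pinhole projection: camera looks down +z, focalLength in pixels.
static Vec3 projectToScreen(Vec3 v, int width, int height, float focalLength)
{
    return { width  * 0.5f + focalLength * v.x / v.z,
             height * 0.5f + focalLength * v.y / v.z,
             v.z };
}

// March the reflected ray in view space; at each step compare the ray's depth
// against the depth buffer. Returns the pixel whose shaded color should be
// reused as the reflection, or nothing if the ray never finds on-screen geometry.
std::optional<Vec2> traceScreenSpaceReflection(const GBuffer& g, float focalLength,
                                               Vec3 viewPos, Vec3 reflDir,
                                               int maxSteps = 64, float stepSize = 0.1f)
{
    for (int i = 1; i <= maxSteps; ++i) {
        Vec3 p = { viewPos.x + reflDir.x * stepSize * i,
                   viewPos.y + reflDir.y * stepSize * i,
                   viewPos.z + reflDir.z * stepSize * i };
        Vec3 s = projectToScreen(p, g.width, g.height, focalLength);
        int px = static_cast<int>(s.x), py = static_cast<int>(s.y);

        // The fundamental limitation: once the ray leaves the screen (or goes
        // behind the camera), there is no data left to hit, so the sky or any
        // off-screen object can never appear in the reflection.
        if (s.z <= 0.0f || px < 0 || py < 0 || px >= g.width || py >= g.height)
            return std::nullopt;

        if (g.depthAt(px, py) < s.z)        // the ray passed behind visible geometry
            return Vec2{ s.x, s.y };        // reuse the color already shaded at this pixel
    }
    return std::nullopt;                    // typical fallback: cubemap or sky lookup
}
```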


Probably the best thing about this technique is that it works with fully dynamic geometry (accelerating ray intersections by coarsely voxelizing the scene) and, judging from the abstract, with dynamically tessellated geometry as well, which is a huge advantage for DX11-based game engines. It's very likely that the PS4 is capable of real-time ray traced reflections using this technique, and when optimized it could be used not only for rendering reflections and refractions, but for very high quality soft shadows and ambient occlusion as well. 
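Judging purely from the abstract, the voxelization acts as a conservative acceleration structure that can be rebuilt every frame. As a sketch of the general flavour of such an approach, here's a standard 3D DDA walk through a coarse occupancy grid in C++ (my guess at the idea, not the authors' actual implementation):

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

struct Vec3 { float x, y, z; };

// A coarse, conservative voxelization of the scene: each cell only records
// whether any (possibly animated) triangles were binned into it this frame,
// so it is cheap to rebuild for fully dynamic geometry.
struct VoxelGrid {
    int nx, ny, nz;
    float cellSize;                          // world-space size of one voxel
    std::vector<uint8_t> occupied;           // nx * ny * nz occupancy flags
    bool at(int x, int y, int z) const { return occupied[(z * ny + y) * nx + x] != 0; }
};

// Amanatides & Woo style 3D DDA: walk from cell to cell along the ray and
// report the first occupied voxel; only the triangles binned into that cell
// would then need an exact intersection test. Assumes the origin lies inside the grid.
bool traverse(const VoxelGrid& g, Vec3 o, Vec3 d, int& hx, int& hy, int& hz)
{
    int ix = (int)std::floor(o.x / g.cellSize);
    int iy = (int)std::floor(o.y / g.cellSize);
    int iz = (int)std::floor(o.z / g.cellSize);
    int stepX = d.x > 0 ? 1 : -1, stepY = d.y > 0 ? 1 : -1, stepZ = d.z > 0 ? 1 : -1;

    // Parametric distance along the ray to the next cell boundary on each axis.
    auto firstCrossing = [&](float o1, float d1, int i, int step) {
        if (d1 == 0.0f) return 1e30f;
        float next = (i + (step > 0 ? 1 : 0)) * g.cellSize;
        return (next - o1) / d1;
    };
    float tMaxX = firstCrossing(o.x, d.x, ix, stepX);
    float tMaxY = firstCrossing(o.y, d.y, iy, stepY);
    float tMaxZ = firstCrossing(o.z, d.z, iz, stepZ);
    float tDx = d.x != 0.0f ? g.cellSize / std::fabs(d.x) : 1e30f;
    float tDy = d.y != 0.0f ? g.cellSize / std::fabs(d.y) : 1e30f;
    float tDz = d.z != 0.0f ? g.cellSize / std::fabs(d.z) : 1e30f;

    while (ix >= 0 && iy >= 0 && iz >= 0 && ix < g.nx && iy < g.ny && iz < g.nz) {
        if (g.at(ix, iy, iz)) { hx = ix; hy = iy; hz = iz; return true; }
        if (tMaxX < tMaxY && tMaxX < tMaxZ) { ix += stepX; tMaxX += tDx; }
        else if (tMaxY < tMaxZ)             { iy += stepY; tMaxY += tDy; }
        else                                { iz += stepZ; tMaxZ += tDz; }
    }
    return false;                            // ray left the grid without hitting anything
}
```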

The ultimate next step would be global illumination with path tracing for dynamic scenes, which is a definite possibility on very high-end hardware, especially when combined with another technique from a freshly released paper (by Ulbrich, Novak, Rehfeld and Dachsbacher) entitled Progressive Visibility Caching for Fast Indirect Illumination, which promises a 5x speedup for real-time progressively path traced GI by cleverly caching diffuse and glossy interreflections (a video can be found here). Incredibly exciting if true!

Monday, December 23, 2013

Real-time rendered animations with OctaneRender 1.5

OctaneRender 1.5 has some really powerful features like support for Alembic animations and a fully scriptable user interface. The Alembic file support allows for real-time rendered animations in the standalone version of Octane for scenes with both rigid and deformable animated geometry. Seeing your animations rendered in final quality in real-time with GI, glossy reflections and everything is a blast:


You may remember the actor in the following video from blockbuster movies like "Ultra high detailed dynamic character test" and "4968 daftly dancing dudes on Stanford bunny". Even in Octane, his dancing prowess remains unrivalled. The actor is made up of 66k triangles and there are 730 clones of him (48 million triangles in total). For every frame of the animation, Octane loads the animated geometry from an Alembic file, builds the scene and renders the animation sequence with a script, all in real-time. 



Some examples of rigid body animations:




With support for Alembic animations and Lua scripting, Octane now has a very solid foundation for animation rendering in place, allowing for some very cool stuff (yet to be announced) that can be done fully in real-time on a bunch of GPUs (inspired and fueled by earlier Brigade experiments). In 2014, Octane will blow minds like never before.

Friday, December 20, 2013

Lamborghini Test Drive (OctaneRender animation)

Recently I was blown away by a video posted by niuq.cam on the Octane forum, called "Lamborghini Test Drive", made as a tribute to celebrate the 50th anniversary of Lamborghini. The realism you can achieve with Octane is just batshit crazy, as evidenced by the video.

Try to spot the 7 differences with reality:


Some specs:

- the scene is 100% 3D, all rendered with Octane
- rendered on 4x GTX Titan
- render resolution 1280 x 538, Panavision format (2.39:1)
- average render time per frame: from 1 minute for the large shots with the cars to 15 minutes for the helmet shots by night
- over 5,000,000 triangles for both cars
- instancing used for the landscape

Tuesday, December 10, 2013

Real-time path tracing with OctaneRender 1.5

Just want to share a couple of real-time rendered videos made with the upcoming OctaneRender 1.5. The scene used in the videos is the same one that was used for the Brigade 3 launch videos. The striking thing about Octane is that you can navigate through this scene in real-time while having an instant final quality preview image. It converges in just a few seconds to a noise free image, even with camera motion blur enabled. It's both baffling and extremely fun. 

The scene geometry contains 3.4 million triangles without the Lamborghini model, and 7.4 million triangles with it (the Lamborghini alone has over 4 million triangles). All videos below were rendered in real-time on 4 GTX 680 GPUs. Because of the 1080p video capture, the framerate you see in the videos is less than half the framerate you get in real life, which is incredibly smooth. 



There are a bunch more real-time rendered videos and screenshots of the upcoming OctaneRender 1.5 in this thread on the Octane forum (e.g. on page 7).

Monday, November 11, 2013

Shiny Toy pathmarcher on Shadertoy

This looks incredible, raymarching with GI, glossy road, glossy car: https://www.shadertoy.com/view/ldsGWB

The future of real-time graphics is noisy!

New GPU path tracer announced

Jacco Bikker just announced a new GPU-based path tracer on the ompf forum. There's also a demo version available that you can grab from this post.

Tuesday, October 22, 2013

Le Brigade nouveau est arrivé!

Time for an update on Brigade 3 and what we've been working on: until now, we have mostly shown scenes with limited materials, i.e. either perfectly diffuse or perfectly specular surfaces. The reason we didn't show any glossy (blurry) reflections so far is because these generate a lot of extra noise and fireflies (overbright pixels), and because the glossy material from Brigade 2 was far from perfect. Over the past months, we have reworked Brigade's material system and replaced it with the one from OctaneRender, which contains an extraordinarily fast-converging and high quality glossy material. The sky system was also replaced with a custom physical sky where sky and sun color vary with the sun position. And there's a bunch of brand new custom post effects, tone mapping filters and real camera effects like fisheye lens distortion (without the need for image warping).

We've had a lot of trouble finding a good way to present the face-melting awesomeness that is Brigade 3 in video form, and we've tried both YouTube and Vimeo at different upload resolutions and sample counts (samples per pixel). Suffice it to say that both sites have ultra shitty video compression, turning all our videos into a blocky mess (although Vimeo is still much better than YouTube). We also decided to go nuts on glossy materials and Fresnel on every surface in this scene, which makes everything look a lot more realistic (in particular the Fresnel, which causes surfaces to look more or less reflective depending on the viewing angle), but the downside of this extra realism is a lot of extra noise.
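For anyone who hasn't run into the term before: Fresnel reflectance is exactly that viewing-angle dependence, and in practice most renderers use Schlick's approximation for it. A tiny illustrative C++ sketch (my own, not Brigade's or Octane's shading code):

```cpp
#include <algorithm>

// Schlick's approximation to Fresnel reflectance.
// f0 is the reflectance at normal incidence (roughly 0.04 for dielectrics like
// plastic or glass, much higher and tinted for metals); cosTheta is the cosine
// of the angle between the surface normal and the view direction. At grazing
// angles (cosTheta -> 0) every material approaches a perfect mirror, which is
// why glossy floors and car paint pick up much more reflection near the horizon.
float fresnelSchlick(float f0, float cosTheta)
{
    float c = std::clamp(1.0f - cosTheta, 0.0f, 1.0f);
    return f0 + (1.0f - f0) * c * c * c * c * c;        // (1 - cosTheta)^5
}

// Example: a dielectric (f0 = 0.04) seen head-on vs. at a grazing angle.
//   fresnelSchlick(0.04f, 1.0f)  ~= 0.04  (4% reflective)
//   fresnelSchlick(0.04f, 0.05f) ~= 0.78  (almost mirror-like)
```

The extra noise comes from the fact that these angle-dependent glossy reflections are sampled stochastically, so every surface now spawns reflection rays instead of just the few perfectly specular ones.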

So feast your eyes on the first videos of Brigade 3 (1280x720 render resolution):

Vimeo video (less video compression artefacts): https://vimeo.com/77192334

Youtube vids: http://www.youtube.com/watch?v=aKqxonOrl4Q


Another one using an Xbox controller:

The scene in the video is the very reason why I started this blog five years ago and is depicted in one of my very first blog posts from 2008 (see http://raytracey.blogspot.co.nz/2008/08/ruby-demo.html). The scene was created by Big Lazy Robot to be used in a real-time tech demo for ATI's Radeon HD 4870 GPU. Back then, the scene used baked lightmaps rendered with V-Ray for the diffuse lighting and an approximate real-time ray tracing technique for all reflective surfaces like cars and building windows. Today, more than five years later, we can render the same scene noise free using brute force path tracing on the GPU in less than half a second and we can navigate through the entire scene at 30 fps with a bit of noise (mostly apparent in shadowy areas). When I started this blog my dream was to be able to render that specific scene fully in real-time in photoreal quality and I'm really glad I've come very close to that goal. 

UPDATE: Screenshot bonanza! No less than 32 screenshots, each of them rendered for 0.5 - 1 second. The problem with Brigade 3 is that it's so much fun mucking around with the lighting, the time of day, depth of field and field of view with lens distortion. Moreover, everything looks so photoreal that it's extremely hard to stop playing and taking screenshots. It feels like you're holding a camcorder.


We plan to show more videos of Brigade 3 soon, so stay tuned... 

Update: I've uploaded the direct feed version of the second video to MEGA (a New Zealand based cloud storage service, completely anonymous, fast, no registration required and free, just excellent :). You can grab the file here: brigade3_purely_random_osumness (it's 2.40 GB)

Update 2: The direct feed version of the first video can be downloaded here: brigade3_launch_vid_HD.avi (2.90 GB). This video has a higher sample count per pixel per frame (and thus less noise and a lower framerate).