Quasi-random, more or less unbiased blog about real-time photorealistic GPU rendering
Thursday, December 10, 2009
SVO path tracer WIP
Inspired by the recent wave of GPU-accelerated renderers and path tracers (V-Ray RT,
mental images iray,
Nvidia OptiX,
Caustic's Brazil RT,
David Bucciarelli's SmallptGPU OpenCL path tracer (http://davibu.interfree.it/opencl/smallptgpu/smallptGPU.html),
albertoven's Raydiant (albertoven.com),
mxadd's CUDA path tracer (http://mxadd.org/CenterFrame.php?wherejump=projects),
javor's CUDA path tracer (http://javor.tech.officelive.com/tmp.aspx))
and the promise of infinite geometry through voxel rendering, I decided to take the best of both worlds and write an SVO path tracer as a hobby project ;-). My goal is a very simple demo showing a Cornell box containing an SVO-voxelized Stanford Dragon, interactively path traced (brute-force, unbiased indirect lighting) with CUDA (or OptiX, if SVO raycasting is easy enough to implement there). The whole scene must fit in video memory, no streaming (yet), because the path tracing algorithm would choke badly on it.
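To give a rough idea of what I'm aiming for, here's a sketch of the basic brute-force loop (illustrative CUDA, not working code from my project; intersectSVO and sampleHemisphere are placeholder helpers):

```cuda
#include <cuda_runtime.h>
#include <curand_kernel.h>

struct Ray { float3 o, d; };                       // origin, direction
struct Hit { float3 pos, n, albedo; bool valid; }; // surface sample

// Placeholder: the real version traverses the sparse voxel octree.
__device__ Hit intersectSVO(Ray r) { Hit h; h.valid = false; return h; }

// Placeholder: should return a cosine-weighted direction around n.
__device__ float3 sampleHemisphere(float3 n, curandState* rng) { return n; }

// One path per pixel per frame; results are accumulated and averaged
// over frames for progressive refinement.
__global__ void pathTrace(float3* accum, int w, int h, curandState* rngStates)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= w || y >= h) return;
    int idx = y * w + x;
    curandState rng = rngStates[idx];

    Ray ray = {};  // in the real thing: jittered camera ray through (x, y)
    float3 throughput = make_float3(1.f, 1.f, 1.f);
    float3 radiance   = make_float3(0.f, 0.f, 0.f);

    for (int bounce = 0; bounce < 5; ++bounce)  // fixed depth for the demo
    {
        Hit hit = intersectSVO(ray);
        if (!hit.valid) break;                  // ray escaped the Cornell box
        // emissive voxels (the ceiling light) would add to radiance here
        throughput.x *= hit.albedo.x;           // diffuse bounce: attenuate
        throughput.y *= hit.albedo.y;
        throughput.z *= hit.albedo.z;
        ray.o = hit.pos;
        ray.d = sampleHemisphere(hit.n, &rng);
    }
    accum[idx].x += radiance.x;                 // running sum, divided by the
    accum[idx].y += radiance.y;                 // frame count at display time
    accum[idx].z += radiance.z;
    rngStates[idx] = rng;
}
```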
I also want to do a comparison between a polygon version and an SVO version of this scene, hoping to demonstrate that path tracing voxels is significantly faster than path tracing polygons, thanks to the much simpler ray-voxel intersection calculation and the automagic multiresolution benefits (see Cyril Crassin's "Beyond Triangles: GigaVoxels Effects in Video Games").
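For reference, below is the textbook slab test for a ray against an axis-aligned box (i.e. a voxel or an octree node), just to illustrate how little arithmetic it needs; this is the standard technique, not code from my project:

```cuda
#include <cuda_runtime.h>

// Slab test: intersect the ray's parametric intervals against the three
// pairs of axis-aligned planes bounding the box. invDir = 1 / ray direction
// (precomputed once per ray). A handful of multiplies and min/max ops,
// with no cross products or barycentrics like a ray-triangle test needs.
__device__ bool rayBox(float3 o, float3 invDir, float3 bmin, float3 bmax,
                       float& tNear, float& tFar)
{
    float t0x = (bmin.x - o.x) * invDir.x, t1x = (bmax.x - o.x) * invDir.x;
    float t0y = (bmin.y - o.y) * invDir.y, t1y = (bmax.y - o.y) * invDir.y;
    float t0z = (bmin.z - o.z) * invDir.z, t1z = (bmax.z - o.z) * invDir.z;
    tNear = fmaxf(fmaxf(fminf(t0x, t1x), fminf(t0y, t1y)), fminf(t0z, t1z));
    tFar  = fminf(fminf(fmaxf(t0x, t1x), fmaxf(t0y, t1y)), fmaxf(t0z, t1z));
    return tFar >= fmaxf(tNear, 0.0f);  // hit if the interval is non-empty
}
```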
I'm still in the research phase, reading articles on (volume) path tracing and SVOs (Cyril Crassin's GigaVoxels paper is extremely informative, and I think the raycasting could simply be extended into a path tracing approach). More to come soon.
Monday, November 30, 2009
Square Enix's cloud gaming
Thursday, November 26, 2009
Soon...
I hope so goddammit ;-). Anything on the new Ruby, Fusion Render Cloud or that mysterious anime game would be more than fine.
In the meantime, there's an excellent article about cloud gaming in the November issue of Game Developer, written by Jake Cannell:
http://gamedeveloper.texterity.com/gamedeveloper/200911?pg=11#pg11 (you can read the last three pages without subscription)
Thursday, October 22, 2009
Nvidia's realtime ray tracing in the cloud: RealityServer and iray
mental images' RealityServer now has the option to render on a GPU cluster with the CUDA-based iray.
A very impressive interactive ray tracing demo, first shown publicly at Nvidia's GPU Tech Conference: http://www.megaupload.com/?d=RCYDVG6S (H.264, 180 MB)
A video of iray integrated in Google SketchUp: http://www.youtube.com/watch?v=i6LBhws8l-A
RealityServer is sold in different configurations, from 8 up to 100+ Tesla GPUs. The biggest cluster configurations are targeted at rendering online virtual worlds and online entertainment, and can serve multiple concurrent users per GPU.
More info:
http://www.hpcwire.com/features/NVIDIA-Pitches-GPU-Computing-in-the-Cloud-65217572.html
http://brightsideofnews.com/news/2009/10/20/nvidia-launches-gpu-based-realityserver-30.aspx?pageid=0
Nvidia also showed a demo of an interactively raytraced Bugatti with global illumination running on OptiX at GTC:
video: http://www.megaupload.com/?d=YAOSYWK1 (H.264, 82 MB)
Wednesday, October 14, 2009
New CryENGINE3 trailer
in HD: http://www.youtube.com/watch?v=7tPfM1QnPlo&hd=1
Again the trailer explicitly mentions "Integrated Voxel Objects". I don't think this refers to the voxel sculpting tool used for making things like caves in Sandbox Editor 2. I believe it's a reference to the SVO tech I discussed in the previous post, because it's shown in the trailer among other features specific to CryENGINE 3: procedural content, multi-core support, streaming infinite worlds, (advanced) AI, deferred lighting, realtime global illumination and CryENGINE 3 LiveCreate (whatever that is).
Tuesday, September 29, 2009
Sparse Voxel Octree in CryEngine 3
One thing that made me smile: "GPU rasterized version of SVOs & its realtime content creation only available in CryENGINE 3" is listed as a con. Nice joke Crytek!
Other slides list the pros and cons of point-based rendering, ray tracing and rasterization: http://www.4gamer.net/games/092/G009273/20090903001/
If Crytek continues innovating at this pace, there’s a good chance that CryEngine3 will become the Unreal Engine 3 of the next generation and will attract many licensees.
Tuesday, September 22, 2009
OTOY tech will power movie/game in 2012
http://www.brightsideofnews.com/news/2009/9/21/worlds-first-ray-traced-pc-game-and-movie-to-arrive-in-2012.aspx
Thursday, September 17, 2009
Friday, September 11, 2009
New Ruby/OTOY demo for R800
So far, there are only a few pictures, provided by neliz from Beyond3d.com,
and a short video, commented by Jules Urbach:
http://pc.watch.impress.co.jp/video/pcw/docs/315/056/html/17.flv.html
Hopefully more to come soon...
Tuesday, August 18, 2009
The Future of Game Graphics according to Crytek
Two interesting slides:
According to Cevat Yerli, their next generation engine will use a mix of ray tracing, rasterization, point-based rendering and SVOs (Google translation from http://www.golem.de/0908/69105.html):
Yerli also talked about the technique a year ago; other graphics programmers such as John Carmack and Jon Olick are researching it as well. According to Yerli, sparse voxel octrees will form the basis for the next version of the CryEngine, but it will only be ready in a few years.
From http://www.gamasutra.com/php-bin/news_index.php?story=24865 :
He then focused on the actual technical innovations that he feels will make a difference in graphics. For example, tech like point-based rendering is potentially faster than triangle-based rendering at certain higher qualities, and works well with levels of detail.
On the other hand, point-based rendering might define a certain super-high polygon look for games, Yerli said. However: "There's a lot of games today in the Top 10 which don't need that", he conceded, and content creation tools are almost exclusively based around triangles right now.
He also noted ray-tracing as a possible rendering method to move towards, and particularly recommended rasterization and sparse voxel octrees for rendering. Such principles will form "the core" of future technology for Crytek's next engine, Yerli said, and the goal is to "render the entire world" with the voxel data structure.
Concluding, Yerli suggested that, after 2013, there are opportunities with new APIs and hardware platforms to "mix and match" between multiple rendering models, with "a Renaissance of graphics programming", and visual fidelity on a par with movies such as Shrek and Ice Age rendered in real time.
Friday, August 14, 2009
Carmack keynote at QuakeCon 09
Carmack considers cloud computing an interesting way forward for different game types. He thinks that a reduction of the latency to 50ms is achievable. However, he believes that the current internet infrastructure still needs a lot of work before fast-paced "twitchy" shooters like Quake are possible.
Two live blogging reports:
From http://www.vg247.com/2009/08/13/quakecon-press-conference-liveblog-today-at-930pm-bst/ :
- Oh boy, someone's asking about Cloud Computing.
- Carmack says it's "wonderful."
- Talking about how parallel processing is where it's at now.
- But there are physical limits.
- Especially in terms of how much power computers/consoles are drawing.
- Cloud Computing prevents server-side cheating.
- Still some serious disadvantages when it comes to fast-twitch games like Quake, though.
- With 50 millisecond lag, though, anything's possible.
- Some games even have that much lag internally.
- Carmack thinks Cloud Computing could be a significant force in a few years -- perhaps even a decade.
From http://kotaku.com/5336589/the-john-carmack-keynote-liveblogging-quakecon :
- Question about cloud computing and OnLive.
- Carmack says cloud computing is wonderful.
- But Carmack says that with cloud computing for gaming, you start winding up against power limits.
- So he says common computing resources may be helpful, because the power bricks on something like a 360 are showing a looming power problem.
- Latency is the killer, Carmack says.
- Says The Sims would work with a cloud setup.
- But thinks twitch gaming will be the last kind of game that could work.
- Says that cloud computing would limit cheating.
- Thinks you could get games down to 50 millisecond lags as they're streamed via cloud computing.
- Wouldn't be shocked if in ten years cloud computing is a significant paradigm for some kinds of games.
On OnLive-type services: latency is the issue, but a lot more classes of games than people think could be feasible, an example being The Sims; twitch games like Quake would be the hardest. The upside is that client-side cheating vanishes. The key will be optimizing network stacks for reasonable latency. He definitely thinks it's not a crazy idea and that it has very interesting potential.
http://forum.beyond3d.com/showthread.php?t=54817
Sunday, August 9, 2009
The race for real-time ray tracing (Siggraph 2009)
Nvidia: OptiX, mental images (RealityServer, iray)
Intel: Larrabee (LRB)
AMD: nothing AMD-specific announced yet
Caustic Graphics: CausticRT, Brazil RT, integration in 3ds Max Design 2010, LightWork Design, Robert McNeel & Associates, Realtime Technology AG (RTT AG), Right Hemisphere and Splutterfish
Then there was also this extremely impressive demonstration of V-Ray RT for GPUs, which caught many by surprise:
video: https://www.youtube.com/watch?v=DJLCpS107jg
and http://www.spot3d.com/vrayrt/gpu20090725.mov
V-Ray RT rendering on a GTX 285 using CUDA: a Cornell box with 5 bounces of physically correct global illumination at 40 fps, and an 800k-polygon Colosseum with 5 GI bounces at around 4 fps (with progressive rendering). Since it will be ported to OpenCL, GPUs from AMD and Intel will be able to run it as well.
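As an aside, "progressive rendering" here means the image is refined over time by averaging one new sample per pixel per frame. A minimal sketch of that accumulation step (my own illustration, nothing to do with V-Ray RT's actual code) looks like this:

```cuda
#include <cuda_runtime.h>

// Progressive refinement: add this frame's sample to a running sum and
// display the running average; the image sharpens as frames accumulate.
__global__ void accumulate(float3* accum, const float3* frameSample,
                           float3* display, int frame, int numPixels)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= numPixels) return;
    accum[i].x += frameSample[i].x;
    accum[i].y += frameSample[i].y;
    accum[i].z += frameSample[i].z;
    float inv = 1.0f / (float)(frame + 1);   // frames rendered so far
    display[i] = make_float3(accum[i].x * inv,
                             accum[i].y * inv,
                             accum[i].z * inv);
}
```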
The Chaos Group and mental images presentations indicate that rendering is going to move from the CPU to the GPU very soon and will become increasingly realtime.
Caustic Graphics would like to see their cards end up in a next-gen console, as they target gaming as well. During their presentation at the Autodesk booth, they also mentioned cloud gaming as an option: put a lot of Caustic accelerators in a server and create a fully raytraced game rendered server side. In fact, Chaos Group could do the same: they could use their V-Ray RT GPU tech in a cloud rendering environment and make a photorealistic game doing realtime raytracing on a bunch of GPUs (V-Ray RT GPU supports multi-GPU setups and distributed rendering), with fully accurate and dynamic GI rendered on the fly. And if they don't do it, I'm sure mental images will with iray and RealityServer.
There were also some interesting presentations about the future of realtime graphics and alternative rendering pipelines, which all suggested a bigger focus on ray tracing and REYES and indicated that pure rasterization will become less important in the not so distant future.
On the game development front, Tim Sweeney (and of course Carmack with the SVO stuff) is exploring ray tracing/ray casting for his next generation engines: http://news.cnet.com/8301-13512_3-10306215-23.html
and
http://graphics.cs.williams.edu/archive/SweeneyHPG2009/TimHPG2009.pdf
According to Sweeney, next generation rendering pipelines could include a mix of ray tracing, REYES, voxel raycasting and other volume rendering techniques, all implemented in a GPGPU language: REYES for characters and other dynamic objects, voxels for the static environment and foliage, raytracing for reflection and refraction and maybe some kind of global illumination.
Friday, August 7, 2009
John Carmack talking about cloud computing...
http://www.youtube.com/watch?v=N_Ge-35Ld70
This 10-minute video is the last part of a three-part video interview. Carmack talks a little bit about the sparse voxel octree raycasting research he's excited to get back to. Then he wanders off to cloud computing, and his excitement visibly goes through the roof.
From http://www.eurogamer.net/articles/digitalfoundry-carmack-next-gen-blog-entry :
"The big question is, are we going to be able to do a ray-casting primitive for a lot of things?" he ponders. "Certainly we'll still be doing a lot of conventional stuff like animated characters and things like that very likely will be drawn not incredibly differently from how they're drawn now. Hopefully we'll be able to use some form of sparse voxel octree representation cast stuff for some of the things in the world that are gonna be rigid-bodied... maybe we'll have deformations on things like that. But that's a research project I'm excited to get back to in the relatively near future. We can prototype that stuff now on current hardware and if we're thinking that... this type of thing will be ten times faster on the hardware that ends up shipping, we'll be able to learn a lot from that."
However, while he predicts that the leaps in cutting edge console technology are set to continue (certainly there is no hint from him that Microsoft or Sony will follow a Wii-style strategy of simply adding minor or incremental upgrades to their existing hardware), we are swiftly reaching the point where platform holders will be unable to win their battles against the laws of physics.
"We talk about these absurd things like how many teraflops of processing and memory that are going into our game machines," Carmack says, speculating off-hand that the next gen consoles will have at least 2GB of internal RAM. "It's great and there's going to be at least another generation like that, although interestingly we are coasting towards some fundamental physical limits on things. We've already hit the megahertz wall and eventually there's going to be a power density wall from which you won't get more processing out there..."
That being the case, he speculates that the game-makers could move into different directions to provide new game experiences and at that point, the almost mythical cloud computing concept could make an impact.
"There'll be questions of whether we shift to a cloud computing infrastructure... lots of interesting questions about whether you have the computing power in your living room versus somewhere else," he says, noting that while latency is a fundamental issue, the sheer scope of storage available online opens up intriguing possibilities. "Certainly the easier aspect of that is 'net as storage' where it's all digital distribution and you could wind up doing an idTech 5-like thing... and blow it up to World of Warcraft size so you need a hundred petabytes of storage in your central game system. We can do that now! It's not an absurd thing to talk about. Games are already in the tens of millions of dollars in terms of budget size and that's probably going to continue to climb there. The idea of putting millions of dollars into higher-sized storage... it's not unreasonable to at least consider."
Sunday, August 2, 2009
OTOY at Siggraph 2009 on Aug 3
SIGGRAPH 2009 Panel Explores Eye-Definition Computing and the Future of Digital Actors:
http://finance.yahoo.com/news/SIGGRAPH-2009-Panel-Explores-bw-367156942.html?x=0&.v=1
Interview with David Perry of Gaikai on some of the technical details (scaling, latency) of his cloud gaming service:
http://www.gamesindustry.biz/articles/david-perry-part-two
I've mentioned the blog EnterTheSingularity in a previous post, and its author Jake Cannell keeps writing very interesting and elaborate blog posts on voxels and cloud gaming, such as:
http://enterthesingularity.blogspot.com/2009/07/voxel-tracing.html
http://enterthesingularity.blogspot.com/2009/07/next-generation-of-gaming.html
Friday, July 10, 2009
GigaVoxels at Siggraph
Beyond Triangles: GigaVoxels Effects in Video Games
Some screens of raycasted voxel scenes made with the GigaVoxels technique here.
I also found an interesting blog (via Timothy Farrar's blog) called EnterTheSingularity, which features a very interesting and in-depth post about the "perceived" latency of cloud gaming services:
http://enterthesingularity.blogspot.com/2009_04_01_archive.html
The author (Jake Cannell, programmer on Mercenaries 2) has been thinking through the idea of cloud games quite well.
Thursday, July 2, 2009
Gaikai, another cloud gaming service
http://games.venturebeat.com/2009/07/01/gaikai-demos-how-to-serve-games-over-the-internet/
Perry said the games can run on less than a megabit a second of Internet bandwidth. Most DSL connections offer around three megabits a second, coming downstream into the home. In the demo, Perry said, the server was about 800 miles away, which results in a round trip (ping rate) of 21 milliseconds. That’s a split second and not really noticeable. For a server much closer, the ping rate is about 10 milliseconds. While both OnLive and Otoy used custom servers with graphics chips, Perry said his service can run across low-end custom servers.
Perry contends the ping rate is fast enough to play most games. He showed racing games such as Mario 64 and other games such as World of Warcraft and Eve Online, all playing without downloaded software. He also showed how he could use Adobe Photoshop across the server.
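As a sanity check on those ping numbers (my own back-of-the-envelope math, not from the article): light in optical fiber travels at roughly 200,000 km/s, about two thirds of c, so 800 miles puts a hard physical floor of around 13 ms on the round trip, and routing/switching overhead accounts for the rest of the 21 ms:

```cuda
#include <cstdio>

int main()
{
    const double fiberSpeedKmS = 200000.0;       // ~2/3 of c in optical fiber
    const double distanceKm    = 800.0 * 1.609;  // 800 miles in kilometers
    // Round trip: there and back again, converted to milliseconds.
    double roundTripMs = 2.0 * distanceKm / fiberSpeedKmS * 1000.0;
    printf("propagation floor: %.1f ms of the measured 21 ms ping\n",
           roundTripMs);
    return 0;
}
```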
Wednesday, July 1, 2009
Naughty Dog's Uncharted 2 does cloud computing
Excerpt from the live chat:
2:37 | [Comment From Pound_lb003] What crazy technical terms can you throw at us, that the ND Engine 2.0 boasts?
2:38 | [Comment From wsowen02] How varied are the environments this time?
2:40 | [Comment From Luke] You told us the snow is going to be the best snow we ever saw. We got some footage of that snow in trailers, but can you tell us more about how this snow is going to work? By the way, you're an awesome company, Naughty Dog.
2:40 | Evan Wells: Screen Space Ambient Occlusion, deferred rendering, cloud computing, parallax mapping, high dynamic range tonemapping, per-object motion blur, cascade shadows, sub-surface scattering simulation...
Source: http://blog.us.playstation.com/2009/06/30/uncharted-2-live-chat-with-naughty-dog
update: Interesting info about PS Cloud:
http://playstationlifestyle.net/2009/06/29/sony-drumming-up-a-digital-storm-with-playstationcloudcom/
Ever since the trademark filing of PS Cloud, speculation has been buzzing. The trademark covers “entertainment services, namely, providing an on-line video game that users may access through the internet”
We have found more evidence of PlayStation Cloud being accessible “through the internet”…
While doing some research, I performed a whois lookup on the domain playstationcloud.com. The results of my query showed that SCEA San Diego Studios is the owner of the domain.
SCEA San Diego recently mentioned, while speaking to Gamasutra, that they would be "exclusively" working on PSN game development. This goes against all speculation so far pegging PlayStation Cloud as a social networking service. Of course, Sony San Diego could very well be developing a service or application for the PSN with PlayStation Cloud.
Whatever PS Cloud is, it seems as though it’s going to be coming from Sony’s San Diego Studio.
Tuesday, June 30, 2009
Crysis on a cell phone
http://www.techcrunch.com/2009/06/22/exclusive-otoy-goes-mobile-turns-your-cell-phone-into-a-powerful-gaming-rig/
A portable game platform with dedicated game controls such as the PSP connected via WiFi to a 3G phone seems more comfortable and mobile than the setup in the video. In my opinion this is just a proof of concept and I think that the real crowd pleaser will be the photorealistic virtual world with LightStaged characters.
Tuesday, June 16, 2009
Two new OTOY videos
http://www.techcrunch.com/2009/06/16/videos-otoy-in-action-you-have-to-see-this/
The first video shows a guy playing Left 4 Dead and Crysis on his HD TV, hooked up to his laptop, which is connected to the OTOY server through broadband access. He can switch instantly between both games while playing, very impressive. According to the TechCrunch article, EA is partnering with OTOY.
The second video shows GTA4 being played in a web browser while running in the cloud. According to the tester, there is some lag, but it's very playable. Personally, I think GTA4 is not the best game to show off server side rendering because it runs on a terribly unoptimized, buggy and laggy engine. There's no way you can tell if the lag is due to the crappy engine or to the connection. Unfortunately, there's no info about the geographical distance between the player and the cloud. What it does show is that a GTA game with LightStage quality characters and environments could definitely be possible and playable when rendered in the cloud. In fact, I asked Jules this very question yesterday and he confirmed to me that it was indeed possible.
update: Many people can't believe that OTOY can render so many instances on a single GPU. I checked my notes, and as Jules explained it to me, he can run 10 instances of a high-end game (like Crysis) and up to 100 instances of a low-end game per GPU. The GPU has a lot of idle, unused resources in between rendering frames for the same instance, and OTOY efficiently uses this idle time to render extra instances. The games shown in the videos (Crysis, Left 4 Dead, GTA IV) are of course traditionally rendered. When using voxel ray tracing, OTOY scales even better.
OTOY can switch between rasterizing and voxel raycasting, because it uses a point cloud as input; depending on the scene complexity, one is faster than the other. The scorpion demo (the Bug Snuff movie), for example, was first rendered as voxels, but rasterizing it turned out to be faster. Last year's Ruby demo was completely raytraced (the voxel rendering is not limited to raycasting, but uses shadow rays and reflection rays as well, so it can be considered true raytracing).
A quantum leap of faith
Just a couple of weeks ago, I was still wondering what the technical specifications of the next generation of consoles would look like. But after yesterday… frankly, I don't give a damn anymore. The promise of OTOY and server side rendering is even bigger than I initially thought. In fact, it's huge, and that's probably an understatement. In one interview, Jules said that it "is comparable to other major evolutions of film: sound, color, cinemascope, 70mm, THX, stereoscopic 3D, IMAX, and the like." I think it's even bigger than that, and it has the potential to shake up and "transform" the entire video game industry.
Server side rendering opens up possibilities for game developers that are really hard to wrap your head around. Every game developer has learned to work inside the limitations of the hardware (memory, polygon and texture budgets, limited numbers of lights and dynamic objects, scene size and so on), and these budgets only double in size every 12 to 18 months. Now imagine that artists and level designers could make use of unlimited computational resources and no longer had to worry about technical budgets. They could make the scene as big as they want, with extreme detail (procedurally generated at the finest level) and with as much lighting information and as many texture layers as they desire. That's exactly what server side rendering combined with OTOY's voxel ray tracing might offer. It requires a shift in the minds of game developers and game publishers that could be considered a quantum leap of faith. The only limitation is their imagination (besides time and money, of course), and anything you see in offline rendered CG could be possible in real-time.
Jules is also working on tools to facilitate the creation of 3D environments and keep development budgets reasonable. One of those tools is a portable LightStage, which is (as far as I understood) a cut-down version of the normal LightStage that can be mounted on a moving car to capture whole streets and cities and convert them into a 3D point cloud. It's much better than LIDAR, because it captures lighting and texture information as well. Extremely cool if it works.
Because the server keeps the whole game scene in memory, and because of the way the voxel ray tracing works, OTOY and the render cloud can scale very easily to tens of thousands of users. Depending on the resolution, OTOY can run 10 to 100 instances of a game scene on one GPU, and you can interconnect an unlimited number of GPUs.
The best thing about the server side rendering idea is that everyone is a winner: IHVs, ISVs, game publishers and, most importantly, the gamers themselves (for a number of reasons I talked about in one of my previous posts).
In conclusion, I guess every PC gamer has dreamt at some point about a monster PC with terabytes of RAM and thousands of GPUs working together, a million unified shaders combined. Until recently, no one in their right mind would build such a monster, because economically it makes no sense to spend a huge load of cash on the development of a game that would make full use of such enormous horsepower but could only be played by one person at a time. But with the rapid spread of broadband internet access, suddenly a whole lot of people are able to play on that monster PC, and it becomes economically viable to make such an extremely high quality game. I think OTOY will be the first to achieve this goal. Following the trend of office applications increasingly being run in the cloud, server side rendering is going to be the next step in the evolution of the video game industry, and it will make "client-side hardware" look like an outdated concept. Jules told me he thinks that in the future, the best looking games will be rendered server side and that there's no way expensive local hardware (on the client side) will be able to compete. I for one can't wait to see what OTOY will bring in the near future.
Saturday, June 6, 2009
Has it been 1 year already?
One full year, I cannot believe it. AMD has released every previous Ruby demo to the public well within a year of the introduction of the hardware: Ruby Double Cross on the Radeon X800, Ruby Dangerous Curves on the X850, Ruby The Assassin on the X1800, and Ruby WhiteOut on the HD 2900. So it made perfect sense that the voxelized Ruby would come out within a few months of the unveiling of the Radeon 4870. Even Dave Baumann said on the Beyond3d forum that the demo would be released for the public to play with.
So what went wrong? Did ATI decide to hold back the demo, or was it Jules Urbach? I think the initial plan was to release the demo at some point, but the voxel technology wasn't finished and took longer to develop than expected. To enjoy the demo at a reasonable framerate, it had to run on two 4870 cards or on the dual-GPU 4870 X2, so only the very high end of the consumer market would be able to run it. The previous Ruby demos were made by RhinoFX; this was the first time OTOY made one. Either way, if AMD is making another Ruby demo (with or without OTOY, but I prefer with), it has to look better than the last one, and they'd better release it within a reasonable amount of time.
Something else that crossed my mind: OTOY is now being used to create a virtual world community (LivePlace/CitySpace), and I think OTOY's technology would be a perfect match for PlayStation Home. Virtual world games are much more tolerant of lag than fast-paced shooters or racers; I think even a lag of 500 milliseconds would be doable. Imagine you're playing a game on your PS3: once you're done playing, you automatically end up in Home, rendered in the cloud. Sony has trademarked PS Cloud (http://www.edge-online.com/news/sony-trademarks-ps-cloud), and I wouldn't be surprised if Sony moved the rendering for PS Home from the client to the server side sooner or later.
Friday, June 5, 2009
A possible faster-than-light solution for the latency problem?
I have totally embraced the cloud computing idea. I hope OTOY and OnLive can pull it off and create a paradigm shift from client to cloud rendering. The main problem seems to be lag. Apart from the extra lag introduced by encoding/decoding the video stream on the server/client side respectively, which should be no more than a couple of milliseconds, there is lag due to the time the input/video signal needs to travel the distance between client and server, which can amount to several tens or even hundreds of milliseconds, because information cannot travel faster than the speed of light (via photons or electromagnetic waves). Quantum physicists have measured quantum correlations (entanglement) propagating at more than 10,000 times the speed of light, but they agreed that sending information faster than light was not possible, because the contents of quantum-entangled photons cannot be controlled. Very recently, however, Graeme Smith, a researcher at IBM, proposed a way to "transmit large amounts of quantum information", described in the following paper, published in February 2009:
http://domino.research.ibm.com/comm/research_people.nsf/pages/graemesm.Main.html/$FILE/NonPrivate.pdf
http://tech.slashdot.org/article.pl?sid=08/08/06/0043220
If his theory holds true, IBM or someone else could make a computer peripheral based on quantum technology (sort of like an Apple AirPort) that communicates large amounts of data instantaneously! Distance between client and cloud would no longer be a problem and transmission lag would be non-existent! It would make playing server-side rendered games an almost lag-free experience and the ultimate alternative to costly, power-hungry consoles.
Saturday, May 30, 2009
Ruby 2009 and the future of games
This presentation from Jules Urbach shows what the next Ruby demo could look like:
http://developer.amd.com/gpu_assets/ATIGPGPUComputingFusionRenderCloudGDC09CompatibilityMode.pdf
http://business.outlookindia.com/newolb/article.aspx?101987
However, OTOY CEO, Jules Urbach, is optimistic: "We can put any game on the cloud. Eventually, we can virtualise everything, including consoles, hardware and even Grand Theft Auto games. In three to five years, consoles will look different. Perhaps, consoles would be based on data on a cloud." Meanwhile, consumers across the US will test the prototype in the latter half of this year. Marketing plans, purchase points and so on would be developed later, depending on their verdict.
The idea of server side rendering or cloud computing of games as proposed by OTOY, OnLive and others is really starting to grow on me. At first it didn't really seem that interesting or practical, but the more I think about it, the more I can see its potential. Some random thoughts:
- No more console cycles: gamers don't need to upgrade their hardware every 4-6 years, and game developers don't have to wait in frustration for the next round of consoles to be able to use new features such as DX11 compute shaders or tessellation
- Game developers can use the latest and greatest GPUs and algorithms instantly, without worrying about developing for a "lowest common denominator" (like Valve did with Half-Life 2)
- Piracy, which killed PC game development, would be much harder (as it is now with Steam)
- Performance problems would be a thing of the past: just add a few hundred extra GPUs to the cloud and the framerate is butter-smooth again. There's no limit to the complexity and visual fidelity of a game: it all depends on the willingness of the developer to invest in the server hardware
- No more costly multi-platform ports: the cloud hardware is the only platform that needs to be targeted
- The cloud can be upgraded whenever newer CPU/GPU hardware becomes available, and memory and SSDs can be added at any time
- It would also mean the end of the chicken-and-egg problem: small console install bases during the launch window scare developers away, but with server side rendering, everyone with a reasonably fast internet connection and a screen is a potential customer
- Most importantly: no more console wars (at least not of the hardware kind), and no need to steal exclusives like Microsoft loves doing
- Games will no longer be tied to one specific platform, and will as a result be reviewed by press and gamers with less bias and have a better chance to sell
- No more hardware problems, red rings, repairs and warranty refunds on the client side
- Very cheap, almost free "microconsoles", set-top boxes, ... instead of a $600 launch price, which of course stifles growth of the install base
- The install base for cloud games already exists and is huge: all owners of an Xbox 360, PS3, PC, Mac or set-top box with broadband access
- Games no longer have to be bought in stores or downloaded and installed, but can be played from the moment the game is running in the cloud
- Bigger potential for episodic game content (like the Half-Life 2 episodes on Steam)
- No more DVD royalties from developers to console makers (multiple DVDs for Rage on the 360)
- No more limit on the size of the game content (if you want to make a 500 GB game and the server has enough memory, go ahead)
- no more "console makers" in the classical sense: MS, Sony, Nintendo will not be making high-tech fully featured consoles anymore, but just simple set-top box like consoles or none at all. Their main focus will become online services instead of hardware. Of course Sony will still sell boatloads of TV's and Blu-ray players. Every electronics manufacturer that can make set-top boxes will be able to make a "next-gen" console.
- GPU makers Nvidia and ATI will sell more GPUs to businesses (the render cloud owners, i.e. game publishers) and GPU sales to consumers will drastically decrease. This trend is already taking place: with the rapid death of the PC games market, fewer GPUs are sold directly to consumers and more to console manufacturers. On top of that, they will not be stuck selling the same GPU for six years in a row.
In short, there are far fewer restrictions on game development if the cloud server is big enough.
The question everyone seems to ask with respect to cloud rendering: what about lag? OTOY and OnLive seem to have the answer: new and vastly improved compression algorithms, which reportedly cut the encoding latency to a few milliseconds, imperceptible to the player. So I think that problem is solved. An unsolved question is how to make money off cloud games.
Obviously, if cloud rendering becomes mainstream, it will completely transform the current game console landscape. Every game publisher will be able to run its own game cloud (think about Valve and Steam but on a global scale) and offer their own online service with an online store, without having to pay royalties to console makers (just as it was in the Good Ol' PC days).
Saturday, April 4, 2009
More OTOY videos
http://neuronspark.com/videos/rendering-in-the-cloud/
An early concept video of LivePlace from March 2007, commented by Jules Urbach. It doesn't use voxel raycasting. It shows scenes created by JJ Palomo of 3DBlasphemy, who is currently at Big Lazy Robot VFX and working together with OTOY:
http://metanomics.net/otoy
Thursday, April 2, 2009
Voxels, id tech 6, stuff
It's the übersexy Imrod, modeled in ZBrush by Dmitry Parkin:
http://www.youtube.com/watch?v=VpEpAFGplnI
There's a Siggraph presentation about SVOs by Jon Olick as well:
http://s08.idav.ucdavis.edu/olick-current-and-next-generation-parallelism-in-games.pdf
GigaVoxels: high-quality sparse voxel octree raycasting by Cyril Crassin:
http://www.youtube.com/watch?v=HScYuRhgEJw
And an awesome blog about voxel rendering:
http://voxels.blogspot.com
Ruby, Ruby, Rubaaaaaaaaaaayyyyy
So, almost 10 months have passed since the first unveiling of the Ruby/OTOY/Cinema 2.0/whatever demo. AMD still hasn't released the demo. For shame.
Since my last post in October, there was a little media fart about OTOY's server side rendering around CES09 in January. They're gonna build a 1-petaflop supercomputer out of ATI cards.
Recently, OTOY showed up at GDC as well. They showed a completely lifelike CG model of Ruby's head, made with LightStage. Only the upper half for now. So damn sexy...
Linky for the pic: http://www.pcper.com/comments.php?nid=6967 (skip to 50:00 and turn off the volume, cos the show host doesn't know crap about what he's talking about).