Intel Researchers Consider Ray-Tracing for Mobile Devices 120
An anonymous reader points out an Intel blog discussing the feasibility of Ray-Tracing on mobile hardware. The required processing power is reduced enough by the lower resolution on these devices that they could realistically run Ray-Traced games. We've discussed the basics of Ray-Tracing in the past. Quoting:
"Moore's Law works in favor of Ray-Tracing, because it assures us that computers will get faster - much faster - while monitor resolutions will grow at a much slower pace. As computational capabilities outgrow computational requirements, the quality of rendering Ray-Tracing in real time will improve, and developers will have an opportunity to do more than ever before. We believe that with Ray-Tracing, developers will have an opportunity to deliver more content in less time, because when you render things in a physically correct environment, you can achieve high levels of quality very quickly, and with an engine that is scalable from the Ultra-Mobile to the Ultra-Powerful, Ray-Tracing may become a very popular technology in the upcoming years."
Re:Inverse Moore's Law (Score:2, Informative)
Summary is misleading (Score:3, Informative)
Re:prog10 (Score:5, Informative)
Re:Inverse Moore's Law (Score:5, Informative)
Current rasterization approaches use a lot of approximations, it's true, but they can get away with that because in interactive graphics, most things don't need to look perfect. There's been a lot of cool work done lately with interactive ray tracing, but for anything other than very simple renderings (mostly-static scenes with no global illumination and hard shadows), ray tracers *also* rely on a bunch of approximations. They have to: getting a "perfect", physically correct result is just not a process that scales well. (Check out The Rendering Equation on Wikipedia or somewhere else if you're interested; there's an integral over the hemisphere in there that has to be evaluated, which can recursively turn into a multi-dimensional integral over many hemispheres. Without cheating, the evaluation of that thing is going to kick Moore's Law's ass for a long, long time.)
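For reference, the hemisphere integral being described is the rendering equation (Kajiya, 1986), which defines the outgoing radiance at a surface point:

```latex
L_o(x, \omega_o) = L_e(x, \omega_o)
  + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (\omega_i \cdot n)\, d\omega_i
```

The recursion mentioned above comes from the $L_i$ term: the incoming radiance from direction $\omega_i$ is itself the outgoing radiance $L_o$ of whatever surface point that direction hits, so each bounce spawns another hemisphere integral.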
By the way, the claim that with a "physically correct environment, you can achieve high levels of quality very quickly" doesn't really make much sense. What's a "physically correct environment" and what is it about rasterization that can't render one? How are we defining "high levels of quality" here? And "very quickly" is just not something that applies much to ray tracers at the moment, especially in the company of "physically correct".
Real time raytracing with POV-Ray (Score:4, Informative)
Raytracing is not the holy grail of graphics (Score:5, Informative)
Re:Summary is misleading (Score:3, Informative)
Doubling the number of transistors on an LCD does not double the resolution (as you pointed out); it only multiplies each dimension by the square root of 2. Doubling the number of transistors on a CRT does nothing (well, maybe it gives you a more impressive OSD). But even limiting it to LCDs, the claim does not hold up. Display resolution does not follow Moore's Law. If it did, a 30" LCD from just three years ago would have been 1280x800, and the current MacBook would be around 1900x1200.
The reason for this is not that Moore's Law doesn't apply to LCDs, it probably does. What's happening is that instead of using that technology increase solely to make ever higher resolution displays, it's used to make ever cheaper and higher quality displays at the same, or marginally improved, resolutions.
The thing you can directly measure with LCDs with regards to Moore's Law is dot pitch. Every 18 months or so (let's say 2 years as that's the outside figure), pixel density would double, shrinking the dot pitch by a factor of the square root of 2. That means the display elements in your OS would shrink over time: something that was 1" square in 2000 would now be 0.25" square. And that's just since 2000. Go back another 8 years, and those 1" square icons would have to have been 4" across and 4" tall!
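To make that arithmetic concrete, here is a back-of-the-envelope sketch (the function name and the 2-year doubling period are my own assumptions for illustration):

```python
import math

def icon_size(inches_in_2000: float, year: int, doubling_years: float = 2.0) -> float:
    """Physical size a fixed-pixel-count icon would have in a given year,
    if pixel density doubled every `doubling_years` (so linear dimensions
    shrink by sqrt(2) per doubling)."""
    doublings = (year - 2000) / doubling_years
    return inches_in_2000 / math.sqrt(2) ** doublings

# A 1" icon in 2000 would be 0.25" by 2008 (4 doublings -> 4x linear density)...
print(icon_size(1.0, 2008))
# ...and would have had to be 4" back in 1992.
print(icon_size(1.0, 1992))
```

Displays plainly have not shrunk their pixels at anything like that rate, which is the parent's point.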
What you're noticing is that high-end games seem to match high-end displays at similar frame rates. This is not because display technology is keeping up with the silicon that drives your games; it's because game companies make use of every available CPU and GPU cycle until a certain approximate frame rate is reached.
Re:Inverse Moore's Law (Score:2, Informative)
Pixar is not the holy grail of graphics (Score:2, Informative)
Re:Summary is misleading (Score:2, Informative)
http://en.wikipedia.org/wiki/Eye#Acuity [wikipedia.org]
http://www.dansdata.com/gz029.htm [dansdata.com]
Piggy-backing on Dan's hand waving, 300 dpi at 1 foot is a decent rule of thumb, and waving my own hands, 1 foot is a reasonable minimum distance for a handheld device (I don't imagine most people holding something any closer than this for long periods of time; opinions may vary). So for a screen that is 5 x 10 inches, the benefits of going past 1500 x 3000 pixels rapidly diminish, especially for video/animation. For smaller screens, the pixel count is (obviously) even lower. So unless you need extraordinary resolution on a large screen, current pixel counts are pretty close to 'enough', especially for screens that don't occupy huge portions of your field of view. That means you don't need to factor increases in resolution (especially large, continuous increases) into the comparison.
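The rule of thumb above reduces to one multiplication; a trivial sketch (function name invented here):

```python
def max_useful_pixels(width_in: float, height_in: float, dpi: int = 300) -> tuple:
    """Pixel dimensions beyond which gains rapidly diminish, using the
    hand-wavy '300 dpi at 1 foot' rule of thumb for handheld viewing."""
    return (round(width_in * dpi), round(height_in * dpi))

# A 5" x 10" handheld screen tops out around 1500 x 3000 pixels.
print(max_useful_pixels(5, 10))
```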
So we are at least on the threshold where increases in resolution are done 'because we can' rather than 'because there are obvious benefits', for lots of devices. Plenty of people already don't see a whole lot of benefit in the move to HDTV; Ultra-HDTV or whatever is going to be an even harder sell, as the difference will only show up at very close distances or on very large screens (and plenty of people already have the largest screen that they want as furniture).
High resolution text is probably orthogonal to a discussion about ray tracing, and it seems to be the biggest current motivation for increasing display resolution.
Re:Raytracing is not the holy grail of graphics (Score:4, Informative)
Actually Pixar has switched to Ray Tracing. Cars was ray traced [pixar.com] [PDF]. Skimming through the whitepapers on the Pixar site [pixar.com], it's clear ray tracing was also used extensively in Ratatouille.
Even so, what Pixar is doing in feature films isn't particularly relevant to real-time ray tracing on mobile devices.
it's about memory, not performance or realism (Score:4, Informative)
It's worth pointing out (and it's mentioned in the paper you cite) that the main reason Pixar hasn't been doing much ray tracing until now is not performance or realism, but memory requirements. They need to render scenes that are too complex to fit in a single computer's memory. Scanline rendering is a memory-parallel algorithm; ray tracing is not. So they're forced to split the scene up into manageable chunks and render them separately with scanline algorithms.
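A toy illustration of what "memory-parallel" means here: a scanline/rasterization pipeline only needs, per screen tile, the geometry that projects into that tile, so a huge scene can be split across machines. A traced ray can bounce anywhere, so every node potentially needs the whole scene. All names below are my own; this is a sketch, not a real renderer.

```python
def bucket_by_tile(bboxes, tile_w, tile_h):
    """Map tile (col, row) -> indices of primitives whose screen-space
    bounding box (xmin, ymin, xmax, ymax) overlaps that tile."""
    tiles = {}
    for i, (xmin, ymin, xmax, ymax) in enumerate(bboxes):
        for col in range(int(xmin // tile_w), int(xmax // tile_w) + 1):
            for row in range(int(ymin // tile_h), int(ymax // tile_h) + 1):
                tiles.setdefault((col, row), []).append(i)
    return tiles

# Two small triangles far apart on a 128x128 screen: with 64x64 tiles,
# each tile's render job only needs one of them in memory.
boxes = [(2, 2, 10, 10), (100, 100, 120, 120)]
print(bucket_by_tile(boxes, 64, 64))  # {(0, 0): [0], (1, 1): [1]}
```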
This isn't an issue for games, which are going to be run on a single machine (perhaps with multiple cores, but they share memory).
Re:Inverse Moore's Law (Score:4, Informative)
And about scalability, you're right, of course; ray tracing does scale better with scene complexity than rasterization does, and as computing power increases it will make more and more sense to use ray tracing. However, the ray tracing vs. rasterization argument has been going on for decades now, and while ray tracing researchers always seem convinced that ray tracing is going to suddenly explode and pwn the world, it hasn't happened yet and probably won't for the foreseeable future. Part of it is just market entrenchment: there are ray tracing hardware accelerators, sure, but who has them? And although I've never worked with one, I'd imagine they'd have to be a bit limited, just because ray tracing is a much more global algorithm than rasterization... I can't see how it'd be easy to cram it into a stream processor with anywhere near as much efficiency as you could with a rasterizer. On the other hand, billions are invested into GPU design every year, and even the crappiest computers have one nowadays. With GPUs getting more and more powerful and flexible by the year, and ray tracing basically having to rely on CPU power alone, the balance isn't going to radically shift anytime soon.
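A crude way to see the scene-complexity scaling argument: with an acceleration structure (BVH or kd-tree), a ray tracer does roughly logarithmic work per ray in the triangle count, while a rasterizer has to touch every triangle each frame. The cost models below are invented for illustration; only the growth rates matter, not the constants.

```python
import math

def raster_cost(n_triangles: int) -> float:
    # A rasterizer processes every triangle each frame: O(n).
    return float(n_triangles)

def raytrace_cost(n_triangles: int, n_pixels: int) -> float:
    # With a BVH, each primary ray traverses ~log2(n) nodes: O(p log n).
    return n_pixels * math.log2(n_triangles)

pixels = 320 * 240  # a small mobile screen
for n in (10**4, 10**6, 10**8):
    print(n, raster_cost(n), round(raytrace_cost(n, pixels)))
```

Going from 10^4 to 10^8 triangles multiplies the rasterizer's work by 10^4 but only doubles the ray tracer's, which is why the crossover favors ray tracing as scenes get denser.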
For the record, although I do research with both, I prefer ray tracing. It's conceptually simple, it's elegant, and you don't have to do a ton of rendering passes to get simple effects like refraction (which are a real PITA for rasterization). But when these articles come around (as they periodically do on Slashdot) claiming that rasterization is dead and ray tracing is the future of everything, I have to laugh. That may happen but not for a good long while.
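As an illustration of that conceptual simplicity: the core of a ray tracer is just "find the nearest hit along a ray", which for a sphere is one quadratic. A minimal sketch (no shading or bounces; all names are mine):

```python
import math

def intersect_sphere(origin, direction, center, radius):
    """Return the nearest positive hit distance along a ray, or None.
    `direction` must be a unit vector; standard quadratic solution."""
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c  # the quadratic's 'a' is 1 for a unit direction
    if disc < 0:
        return None  # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

# Cast one ray straight down the z-axis at a unit sphere 5 units away:
print(intersect_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # 4.0
```

Compare that with the pile of passes a rasterizer needs for refraction, and the appeal is obvious; the hard part is doing this billions of times per second.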
Games get the best stuff? (Score:2, Informative)
I'd prefer companies focus on decent vector graphics for applications before trying to move directly to ray tracing for games.
Really, nothing pushes hardware, er... harder, than games. Application GUI implementation is still in the stone age, even on mobile devices.