Graphics Software

Intel Researchers Consider Ray-Tracing for Mobile Devices

An anonymous reader points out an Intel blog discussing the feasibility of Ray-Tracing on mobile hardware. The lower resolutions of these devices reduce the required processing power enough that they could realistically run Ray-Traced games. We've discussed the basics of Ray-Tracing in the past. Quoting: "Moore's Law works in favor of Ray-Tracing, because it assures us that computers will get faster - much faster - while monitor resolutions will grow at a much slower pace. As computational capabilities outgrow computational requirements, the quality of rendering Ray-Tracing in real time will improve, and developers will have an opportunity to do more than ever before. We believe that with Ray-Tracing, developers will have an opportunity to deliver more content in less time, because when you render things in a physically correct environment, you can achieve high levels of quality very quickly, and with an engine that is scalable from the Ultra-Mobile to the Ultra-Powerful, Ray-Tracing may become a very popular technology in the upcoming years."
  • by hvm2hvm ( 1208954 ) on Sunday March 02, 2008 @09:59AM (#22615170) Homepage
    Nah, most games these days tend to focus on graphical and sound effects rather than playability. The trend is similar to the movies Hollywood makes en masse, which have pretty good effects but lousy plots. Most games I've played on a mobile have low-quality graphics, but playability makes them worthwhile. What good is raytracing going to do if the game is hard to control or understand? Many mobile devices don't even have good support for multiple simultaneous keypresses.
  • by DigitAl56K ( 805623 ) on Sunday March 02, 2008 @10:50AM (#22615308)

    Moore's Law works in favor of Ray-Tracing, because it assures us that computers will get faster - much faster - while monitor resolutions will grow at a much slower pace.
    Where did this "assurance" come from? Display resolutions grow as quickly as the latest games can run smoothly at the leading-edge dimensions. Moore's law is about doubling processing power, but doubling the display resolution means quadrupling the number of pixels, so you may find the relationship is in fact much closer than you'd think.
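
    A quick back-of-the-envelope sketch of that arithmetic (the starting resolution and the 18-month doubling period are illustrative assumptions, not figures from the article):

```python
# Assumed reading of Moore's law: compute doubles every 18 months.
doubling_period_years = 1.5
years = 6.0
compute_factor = 2 ** (years / doubling_period_years)   # 16x over 6 years

# Doubling both display dimensions quadruples the pixel count.
start_w, start_h = 1280, 800                             # hypothetical panel
pixel_factor = (start_w * 2) * (start_h * 2) / (start_w * start_h)

print(f"compute grows ~{compute_factor:.0f}x over {years:.0f} years")
print(f"doubling each display dimension needs {pixel_factor:.0f}x the pixels")
```

    So under those assumptions per-pixel compute still grows, just by a much smaller factor than the raw transistor count suggests, which is the parent's point.
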
  • Re:prog10 (Score:5, Informative)

    by ByteSlicer ( 735276 ) on Sunday March 02, 2008 @11:04AM (#22615382)
    Man, I had to google that before I got it [cmdrtaco.net].
  • by Slarty ( 11126 ) on Sunday March 02, 2008 @11:17AM (#22615442) Homepage
    For games, at least, shadows don't need to be perfect. Neither do reflection and (especially) refraction. The goal is all about rendering something that looks plausible, not perfect (although it's a bonus if you can get it). Most people (and especially gamers) just aren't going to notice if the shadows or caustics or what-not are a tiny bit "off".

    Current rasterization approaches use a lot of approximations, it's true, but they can get away with that because in interactive graphics, most things don't need to look perfect. It's true that there's been a lot of cool work done lately with interactive ray tracing, but for anything other than very simple renderings (mostly-static scenes with no global illumination and hard shadows), ray tracers *also* rely on a bunch of approximations. They have to: getting a "perfect", physically correct result is just not a process that scales well. (Check out The Rendering Equation on Wikipedia or somewhere else if you're interested; there's an integral over the hemisphere in there that has to be evaluated, which can recursively turn into a multi-dimensional integral over many hemispheres. Without cheating, the evaluation of that thing is going to kick Moore's law's ass for a long, long time.)

    By the way, the claim that with a "physically correct environment, you can achieve high levels of quality very quickly" doesn't really make much sense. What's a "physically correct environment" and what is it about rasterization that can't render one? How are we defining "high levels of quality" here? And "very quickly" is just not something that applies much to ray tracers at the moment, especially in the company of "physically correct". :-)
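
    For reference, the integral being described there is the rendering equation (Kajiya, 1986), usually written something like the following (notation varies from author to author):

```latex
% Outgoing radiance at surface point x in direction omega_o: emitted light
% plus incoming light reflected by the BRDF f_r, integrated over the
% hemisphere Omega around the surface normal n.
L_o(x, \omega_o) = L_e(x, \omega_o)
  + \int_{\Omega} f_r(x, \omega_i, \omega_o) \, L_i(x, \omega_i) \, (\omega_i \cdot n) \, \mathrm{d}\omega_i
```

    The recursion comes from the fact that the incoming radiance L_i at x is just the outgoing radiance L_o of whatever surface is visible along omega_i, so evaluating the integral exactly means evaluating more integrals at every bounce.
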
  • by Grard Menfin ( 1178135 ) on Sunday March 02, 2008 @11:19AM (#22615452)
    For those interested in real-time raytracing, the latest beta version of POV-Ray [povray.org] has a neat (but experimental) RTR feature. The source is now available for Windows and Unix/Linux. There are also demo scenes available (and another demo scene with pre-baked textures can be found here [oyonale.com]).
  • by igomaniac ( 409731 ) on Sunday March 02, 2008 @11:26AM (#22615504)
    If you want to know the future of real-time graphics, look at what Pixar and other animation and special effects houses are doing. None of them are using ray-tracing except to achieve specific effects in specific circumstances. The fact is that global illumination combined with scanline renderers simply produces better pictures with lower computational requirements.
  • by node 3 ( 115640 ) on Sunday March 02, 2008 @11:57AM (#22615654)
    Moore's Law says the number of transistors in a certain area at a certain cost will double about every 18 months. This effectively seems to double computer speed every 18 months.

    Doubling the number of transistors on an LCD does not double the resolution (as you pointed out), it only multiplies each dimension by the square root of 2. Doubling the number of transistors on a CRT does nothing (well, maybe it gives you a more impressive OSD). But even limiting it to LCDs, it does not hold up. Display resolution does not follow Moore's Law. If it did, then just three years ago a 30" LCD would have been 1280x800, or the current MacBook would be around 1900x1200.

    The reason for this is not that Moore's Law doesn't apply to LCDs; it probably does. What's happening is that instead of using that technology increase solely to make ever-higher-resolution displays, it's used to make ever cheaper and higher quality displays at the same, or marginally improved, resolutions.

    The thing you can directly measure with LCDs with regard to Moore's Law is pixel density (dot pitch). Every 18 months or so (let's say 2 years, as that's the outside figure), pixel density would increase by a factor of the square root of 2, i.e., the dot pitch would shrink. That means that the display elements in your OS would shrink over time, and something that was 1" square in 2000 would now be 0.25" square. And that's just since 2000. Go back another 8 years, and displays would have been so coarse that those 1" square icons would have been 4" across and 4" tall!

    Display resolutions grow as quickly as the latest games can run smoothly at the leading-edge dimensions.
    That is outright false, as you are implying that graphics quality is not increasing beyond pixel resolution (since that's the point you are trying to disprove). In other words, if display resolution were keeping up with CPU power pretty much in step, there would be no increase in polygon count, texture quality, etc.; all that would be happening is that we'd be playing the original Doom at the same Doom quality, just at a higher resolution (or, if you want to start with a 3D-card-rendered game, UT or take your pick of game from that era). But the fact is, game quality is increasing beyond just the pixel count.

    What you're noticing is that high-end games seem to match high-end displays at similar frame rates. This is not because display technology is keeping up with the silicon that drives your games. It's because game companies make use of every available CPU and GPU cycle until a certain approximate frame rate is reached.
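
    To put rough numbers on that (the panel size, starting resolution, and doubling cadence below are assumptions for illustration only):

```python
import math

def scaled_resolution(width, height, years, doubling_period=1.5):
    """If pixel count doubled every `doubling_period` years (a Moore's-law-like
    cadence), each linear dimension would grow by sqrt(2) per period."""
    linear = math.sqrt(2) ** (years / doubling_period)
    return round(width * linear), round(height * linear)

# Hypothetical 30" panel at 2560x1600 in 2008:
print(scaled_resolution(2560, 1600, years=-3))  # ~3 years earlier -> (1280, 800)
print(scaled_resolution(2560, 1600, years=8))   # 8 years later -> roughly 16000x10000
```

    Nothing like that last figure is on anyone's roadmap, which is the point: the extra transistors go into cheaper, better panels at similar resolutions rather than into resolutions that track Moore's Law.
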
  • by a_claudiu ( 814111 ) on Sunday March 02, 2008 @12:23PM (#22615780)
    Regarding the scalability of ray tracing vs. rasterization, see: ftp://download.intel.com/technology/itj/2005/volume09issue02/art01_ray_tracing/vol09_art01.pdf [intel.com]
  • by Anonymous Coward on Sunday March 02, 2008 @12:36PM (#22615852)
    This is a common meme, but it is mistaken. I'm sure you've noticed that Pixar's movies aren't yet photorealistic. Raytracing *is* the holy grail of graphics; in its most sophisticated form it basically amounts to a simulation of the actual physics of light propagation, and with Monte Carlo methods it can be solved, producing images that can truly be said to be indistinguishable from reality. The reason Pixar doesn't use it is that, believe it or not, Pixar has constraints on their rendering time. They can't spend days to render a single frame; they need to get movies out the door. But given a couple dozen more years of Moore's law, raytracing would be everyone's rendering algorithm of choice. It's simply the most general, flexible, and simple rendering algorithm possible, so, absent computational constraints, it's what everyone would use. I'd say that qualifies it as the "holy grail" of graphics, wouldn't you?
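
    To make the Monte Carlo point concrete, here is a toy estimator for the hemisphere integral a path tracer has to evaluate at every bounce (the incoming-light function and material numbers are made up, and a real path tracer would recurse by tracing a ray in each sampled direction instead of calling a fixed function):

```python
import math, random

def sample_hemisphere():
    """Uniformly sample a direction on the unit hemisphere around the z-axis."""
    u, v = random.random(), random.random()
    z = u                                    # cos(theta), uniform in [0, 1)
    r = math.sqrt(max(0.0, 1.0 - z * z))
    phi = 2.0 * math.pi * v
    return (r * math.cos(phi), r * math.sin(phi), z)

def incoming_radiance(direction):
    """Stand-in for the light arriving from `direction` (made-up values)."""
    return 1.0 if direction[2] > 0.5 else 0.2

def reflected_radiance(samples=100_000, albedo=0.8):
    """Monte Carlo estimate of the integral of brdf * L_i * cos(theta)."""
    brdf = albedo / math.pi                  # ideal diffuse (Lambertian) surface
    pdf = 1.0 / (2.0 * math.pi)              # density of uniform hemisphere sampling
    total = 0.0
    for _ in range(samples):
        d = sample_hemisphere()
        total += brdf * incoming_radiance(d) * d[2] / pdf
    return total / samples

print(reflected_radiance())  # hovers around 0.64 for these inputs; error falls ~1/sqrt(N)
```
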
  • by maxume ( 22995 ) on Sunday March 02, 2008 @12:38PM (#22615872)
    Sort of. You only need as many pixels as the eye can see at the distance the display is used at (and maybe some extra for leaning in). If you jump through some hoops, you can come up with a resolution for a given distance:

    http://en.wikipedia.org/wiki/Eye#Acuity [wikipedia.org]
    http://www.dansdata.com/gz029.htm [dansdata.com]

    Piggy-backing on Dan's hand waving, 300 dpi at 1 foot is a decent rule of thumb, and, waving my own hands, 1 foot is a reasonable minimum distance for a handheld device (I don't imagine most people holding something any closer than this for long periods of time; opinions may vary). So for a screen that is 5 x 10 inches, the benefits of going past 1500 x 3000 pixels rapidly diminish, especially for video/animation. For smaller screens, the pixel count is (obviously) even lower. So if you aren't in need of extraordinary resolution on a large screen, current pixel counts are pretty close to 'enough', especially for screens that don't occupy huge portions of your field of view, so you don't need to factor increases (especially large, continuous increases) in resolution into the comparison.

    So we are at least on the threshold where increases in resolution are done 'because we can' rather than 'because there are obvious benefits', for lots of devices. Plenty of people already don't see a whole lot of benefit in the move to HDTV; Ultra-HDTV or whatever is going to be an even harder sell, as the difference will only show up at very close distances or on very large screens (and plenty of people already have the largest screen that they want as furniture).

    High resolution text is probably orthogonal to a discussion about ray tracing, and it seems to be the biggest current motivation for increasing display resolution.
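
    Plugging that rule of thumb into a few screen sizes (the sizes and viewing distances here are guesses for illustration, not measurements):

```python
def required_pixels(width_in, height_in, viewing_distance_ft, dpi_at_1ft=300.0):
    """Scale the 300-dpi-at-1-foot rule of thumb inversely with viewing distance."""
    dpi = dpi_at_1ft / viewing_distance_ft
    return round(width_in * dpi), round(height_in * dpi)

print(required_pixels(10, 5, 1.0))   # large handheld at 1 ft   -> (3000, 1500)
print(required_pixels(3, 2, 1.0))    # small phone screen       -> (900, 600)
print(required_pixels(40, 22, 8.0))  # ~46" TV viewed from 8 ft -> (1500, 825)
```

    The TV row is the HDTV point: at typical couch distances, 1920x1080 is already at or beyond what the rule of thumb says most eyes can resolve.
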
  • by Aidtopia ( 667351 ) on Sunday March 02, 2008 @01:22PM (#22616086) Homepage Journal

    Actually, Pixar has switched to ray tracing. Cars was ray traced [pixar.com] [PDF]. Skimming through the whitepapers on the Pixar site [pixar.com], it's clear ray tracing was also used extensively in Ratatouille.

    Even so, what Pixar is doing in feature films isn't particularly relevant to real-time ray tracing on mobile devices.

  • by j1m+5n0w ( 749199 ) on Sunday March 02, 2008 @02:48PM (#22616584) Homepage Journal

    It's worth pointing out (and it's mentioned in the paper you cite) that the main reason Pixar hasn't been doing much ray tracing until now is not performance or realism, but memory requirements. They need to render scenes that are too complex to fit in a single computer's memory. Scanline rendering is a memory-parallel algorithm; ray tracing is not. So they're forced to split the scene up into manageable chunks and render them separately with scanline algorithms.

    This isn't an issue for games, which are going to be run on a single machine (perhaps with multiple cores, but they share memory).

  • by Slarty ( 11126 ) on Sunday March 02, 2008 @02:58PM (#22616630) Homepage
    Sure, the rendering equation isn't ray tracing specific (it's a core graphics equation, independent of any one image generation method) but it's much easier to directly apply in ray tracing. There aren't many rasterization techniques that even attempt to solve it... the goal usually is just to add some ambient light effects which look like a plausible attempt at global illumination. AFAIK, even the latest, greatest game engines still stop short at something like baked-in ambient occlusion or screen-space darkening using the depth buffer. It looks cool, but physically accurate it ain't. It's much more natural to get "perfect" results in ray tracing, but that was kinda my point: getting those accurate results is pretty costly. If people don't notice the difference, why bother? Stick with the cheap approximation.

    And about scalability, you're right, of course; ray tracing does scale better with scene complexity than rasterization does, and as computing power increases it will make more and more sense to use ray tracing. However, the ray tracing vs. rasterization argument has been going on for decades now, and while ray tracing researchers always seem convinced that ray tracing is going to suddenly explode and pwn the world, it hasn't happened yet and probably won't for the foreseeable future. Part of it is just market entrenchment: there are ray tracing hardware accelerators, sure, but who has them? And although I've never worked with one, I'd imagine they'd have to be a bit limited, just because ray tracing is a much more global algorithm than rasterization... I can't see how it'd be easy to cram it into a stream processor with anywhere near as much efficiency as you could with a rasterizer. On the other hand, billions are invested into GPU design every year, and even the crappiest computers have one nowadays. With GPUs getting more and more powerful and flexible by the year, and ray tracing basically having to rely on CPU power alone, the balance isn't going to radically shift anytime soon.

    For the record, although I do research with both, I prefer ray tracing. It's conceptually simple, it's elegant, and you don't have to do a ton of rendering passes to get simple effects like refraction (which are a real PITA for rasterization). But when these articles come around (as they periodically do on Slashdot) claiming that rasterization is dead and ray tracing is the future of everything, I have to laugh. That may happen but not for a good long while.
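
    For what it's worth, here's roughly how small that conceptually-simple core is. This is a toy sketch with made-up scene values, nothing like a production tracer (no acceleration structure, no shadow rays, one hard-coded light):

```python
import math

def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def add(a, b): return (a[0] + b[0], a[1] + b[1], a[2] + b[2])
def dot(a, b): return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
def scale(a, s): return (a[0] * s, a[1] * s, a[2] * s)
def normalize(a): return scale(a, 1.0 / math.sqrt(dot(a, a)))

# Scene: (center, radius, diffuse_brightness, mirror_reflectivity) -- all invented.
SPHERES = [((0.0, 0.0, -3.0), 1.0, 0.8, 0.3),
           ((1.5, 0.5, -4.0), 0.5, 0.4, 0.6)]
LIGHT_DIR = normalize((1.0, 1.0, 0.5))   # single directional light (assumed)

def intersect(origin, direction):
    """Return (t, sphere) for the closest sphere hit along the ray, or None."""
    best = None
    for sphere in SPHERES:
        center, radius, _, _ = sphere
        oc = sub(origin, center)
        b = 2.0 * dot(oc, direction)
        c = dot(oc, oc) - radius * radius
        disc = b * b - 4.0 * c
        if disc < 0.0:
            continue
        t = (-b - math.sqrt(disc)) / 2.0
        if t > 1e-4 and (best is None or t < best[0]):
            best = (t, sphere)
    return best

def trace(origin, direction, depth=0):
    """Shade the closest hit: a diffuse term plus one recursive mirror bounce."""
    hit = intersect(origin, direction)
    if hit is None or depth > 2:
        return 0.1                                    # background / ambient level
    t, (center, _, diffuse, mirror) = hit
    point = add(origin, scale(direction, t))
    normal = normalize(sub(point, center))
    color = diffuse * max(0.0, dot(normal, LIGHT_DIR))
    if mirror > 0.0:                                  # reflection is just recursion
        refl = sub(direction, scale(normal, 2.0 * dot(direction, normal)))
        color += mirror * trace(point, normalize(refl), depth + 1)
    return color

# "Render" a tiny ASCII image: one primary ray per character.
for y in range(12):
    row = ""
    for x in range(36):
        d = normalize(((x - 18) / 18.0, (6 - y) / 9.0, -1.0))
        row += " .:-=+*#%@"[min(9, int(trace((0.0, 0.0, 0.0), d) * 9))]
    print(row)
```
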
  • by Waccoon ( 1186667 ) on Sunday March 02, 2008 @03:25PM (#22616792)

    I'd prefer companies focus on decent vector graphics for applications before trying to move directly to ray tracing for games.

    Really, nothing pushes hardware, er... harder, than games. Application GUI implementation is still in the stone age, even on mobile devices.
