Imagine a scene in 3D. All the objects are made from geometric structures whose basic building block is the triangle. By stringing together vast chains of triangles, you can build spheres, cylinders, blocks, and just about any other structure, and with the tools available to game artists today, you can use triangles to build very detailed objects, including people.

In the raster pipeline, these triangles go through a number of steps in which each triangle, one at a time, is analyzed, plotted, colored, lit, textured, and painted on the screen. The end result is a fully realized 3D scene, and today some very convincing special effects can be added through the use of “shaders”, which are basically special programs written to change the way the render pipeline draws particular pieces of the scene. Today, rasterized video games are everywhere, and almost all of them offload some of the computational work to Graphics Processing Units, or GPUs.

Read on at Intel's blog.
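The per-triangle loop described above can be sketched in a few lines. This is a toy software rasterizer, not how a GPU is actually wired: it scans every pixel and paints the ones that fall inside the triangle, using a common edge-function inside test. A real pipeline would add clipping, depth testing, and shading on top of this.

```python
# Toy rasterizer sketch: paint the pixels of one triangle into a
# small framebuffer. Names and structure are illustrative only.

def edge(ax, ay, bx, by, px, py):
    # Signed area test: positive when (px, py) lies to the left of edge a->b.
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize_triangle(tri, width, height, framebuffer, color):
    (x0, y0), (x1, y1), (x2, y2) = tri  # counter-clockwise winding assumed
    for y in range(height):
        for x in range(width):
            px, py = x + 0.5, y + 0.5  # sample at the pixel center
            # The pixel is inside when it is on the same side of all three edges.
            w0 = edge(x1, y1, x2, y2, px, py)
            w1 = edge(x2, y2, x0, y0, px, py)
            w2 = edge(x0, y0, x1, y1, px, py)
            if w0 >= 0 and w1 >= 0 and w2 >= 0:
                framebuffer[y][x] = color

# Fill a right triangle covering half of an 8x8 framebuffer.
fb = [[0] * 8 for _ in range(8)]
rasterize_triangle([(0, 0), (8, 0), (0, 8)], 8, 8, fb, 1)
```

Scanning every pixel for every triangle is what makes the brute-force version slow; real rasterizers restrict the loop to each triangle's bounding box and let the GPU run many such tests in parallel.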
Ray-tracing, on the other hand, models a scene in terms of the rays of light that pass from each pixel into the eye of the viewer, rather than in terms of triangles. The scene still contains many triangles, but this “geometry” is abstracted into data structures that resemble “trees”: you can travel along the trunk, onto smaller and smaller branches, until finally arriving at the “leaves”. This breaks the overall complexity of the scene down into simpler and simpler pieces.
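The tree idea can be sketched as follows. Each node carries a bounding box; a ray only descends into branches whose box it actually hits, so most of the scene's geometry is never examined at all. The names here (`Node`, `hits_box`) are illustrative, not taken from any real engine, and the box test is the standard slab method.

```python
# Toy bounding-box tree ("trunk -> branches -> leaves") for ray traversal.

class Node:
    def __init__(self, box, children=None, triangles=None):
        self.box = box                    # ((xmin, ymin, zmin), (xmax, ymax, zmax))
        self.children = children or []    # inner branches
        self.triangles = triangles or []  # leaves hold the actual geometry

def hits_box(origin, direction, box, eps=1e-9):
    # Slab test: clip the ray against each pair of axis-aligned planes.
    lo, hi = box
    tmin, tmax = 0.0, float("inf")
    for axis in range(3):
        d = direction[axis]
        if abs(d) < eps:
            # Ray parallel to this slab: must already lie between the planes.
            if not (lo[axis] <= origin[axis] <= hi[axis]):
                return False
        else:
            t1 = (lo[axis] - origin[axis]) / d
            t2 = (hi[axis] - origin[axis]) / d
            tmin = max(tmin, min(t1, t2))
            tmax = min(tmax, max(t1, t2))
    return tmin <= tmax

def traverse(node, origin, direction, found):
    # Prune whole subtrees whose bounding box the ray misses.
    if not hits_box(origin, direction, node.box):
        return
    found.extend(node.triangles)
    for child in node.children:
        traverse(child, origin, direction, found)

# Two leaves far apart; a ray aimed at the first never visits the second.
leaf_a = Node(((0, 0, 0), (1, 1, 1)), triangles=["tri_a"])
leaf_b = Node(((10, 0, 0), (11, 1, 1)), triangles=["tri_b"])
root = Node(((0, 0, 0), (11, 1, 1)), children=[leaf_a, leaf_b])
hits = []
traverse(root, (0.5, 0.5, -1.0), (0.0, 0.0, 1.0), hits)
```

Because each missed box discards an entire subtree, the work per ray grows roughly with the depth of the tree rather than with the total triangle count, which is where the efficiency described above comes from.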
This hierarchy makes the rendering mechanism far more efficient, since most of the scene can be skipped for any given ray. Consider, for example, the performance that Daniel Pohl was able to get in his Quake IV port to the Intel ray-tracing engine.
The current state of real time ray-tracing
Posted on Monday, October 15 2007 @ 3:05 CEST by Thomas De Maesschalck