Intersecting With the Objects in Your Scene
The first and simplest step in a ray tracer is finding the objects in your scene and showing them on the screen. For each pixel in your image, you send out a ray from the “eye” (the position of the virtual camera) through that pixel to determine which object, if any, it intersects. If the ray hits multiple objects, we simply take the one closest to the “eye.” The black areas are where the rays didn’t find any objects, while the colored circles are made up of pixels where the rays did. While there is no lighting, shadow, reflection, or texture yet, at least we have a sense of the way objects are layered: objects nearer to us sit on top of those farther away.
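The post doesn’t show the intersection math, but for spheres it reduces to solving a quadratic. Here is a minimal sketch of that test (the function name and tuple-based vectors are my own choices, not from the original):

```python
import math

def intersect_sphere(origin, direction, center, radius):
    """Return the distance along the ray to the nearest hit, or None.

    `origin` and `direction` are 3-tuples; `direction` is assumed
    to be normalized, so the quadratic's leading coefficient is 1.
    """
    # Vector from the ray origin to the sphere center.
    oc = tuple(o - c for o, c in zip(origin, center))
    # Coefficients of |origin + t*direction - center|^2 = radius^2.
    b = 2.0 * sum(d * v for d, v in zip(direction, oc))
    c = sum(v * v for v in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None  # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0  # nearer of the two roots
    return t if t > 0 else None

# A ray from the origin straight down the z-axis toward a sphere at z=5.
print(intersect_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # → 4.0
```

Keeping only the smallest positive root is what gives us “the closest object to the eye” when this test is run against every object in the scene.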
Adding Some Lights
Now that we know where our objects are in the scene, it is a simple upgrade to figure out how light plays off of them. It turns out that the light we see is, for the most part, independent of where you are viewing it from (specular highlights do depend on the viewer, but we aren’t that fancy yet). This leaves only two important factors: the surface normal at our intersection point and the direction of the incoming light. Using the angle between these two, we can determine how much light falls on a particular pixel in the image. This makes the 2D image we computed above look much more realistic by adding a sense of depth.
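This “angle between normal and light” rule is standard Lambertian diffuse shading, and it is a one-liner: the cosine of the angle is just a dot product. A sketch, with my own naming:

```python
import math

def lambert(normal, light_dir):
    """Diffuse intensity from the angle between surface normal and light.

    Both arguments are normalized 3-tuples; `light_dir` points from
    the surface toward the light.
    """
    d = sum(n * l for n, l in zip(normal, light_dir))
    return max(0.0, d)  # surfaces facing away from the light get nothing

# Light hitting a surface head-on, then at 60 degrees off the normal.
print(lambert((0, 0, 1), (0, 0, 1)))                          # → 1.0
print(lambert((0, 0, 1), (0, math.sin(math.pi / 3), 0.5)))    # → 0.5
```

The clamp to zero matters: without it, surfaces turned away from a light would subtract brightness instead of simply staying dark.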
In the above image, there are three lights positioned around your left shoulder. Although we do see their light, we are currently seeing too much of it: even if the lights were placed a million miles away, we would see the same amount. Obviously, this is not how light works in real life, so we have to add some factor to scale it down as the light moves farther away. We also need to account for situations where an object is blocking the light source; in other words, we need to worry about shadows. Shadows are a rather simple add-on, given the functionality already present. This time, instead of sending a ray from our “eye,” we send it from the point of intersection with the object toward the light. If we happen to find something in the way, we ignore that light. The resulting image looks much more realistic.
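Both fixes slot into one routine: cast a shadow ray toward the light, and if nothing blocks it, scale the diffuse term by a distance falloff. A sketch under my own assumptions (I use inverse-square falloff, and assume an `intersect(origin, direction, obj)` helper that returns a hit distance or `None`, like the sphere test above; neither name is from the original):

```python
import math

def light_contribution(hit_point, light_pos, normal, scene, intersect):
    """Diffuse light reaching `hit_point`, or 0.0 if it is in shadow."""
    to_light = tuple(l - p for l, p in zip(light_pos, hit_point))
    dist = math.sqrt(sum(v * v for v in to_light))
    direction = tuple(v / dist for v in to_light)
    # Shadow ray: any object between the point and the light blocks it.
    # (A real tracer offsets the origin slightly to avoid self-shadowing.)
    for obj in scene:
        t = intersect(hit_point, direction, obj)
        if t is not None and t < dist:
            return 0.0
    # Lambert term scaled down by inverse-square distance falloff.
    diffuse = max(0.0, sum(n * d for n, d in zip(normal, direction)))
    return diffuse / (dist * dist)

# Unblocked light at distance 2, straight along the normal: 1 / 2^2.
print(light_contribution((0, 0, 0), (0, 0, 2), (0, 0, 1), [], None))  # → 0.25
```

Note the `t < dist` check: an object on the far side of the light shouldn’t cast a shadow onto this point.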
The image above is looking better, but it still looks a little flat. The spheres all look like they are painted with matte paint: no reflection or shininess at all. We won’t tackle the first yet, but we can rather easily tackle the second. As we mentioned before, our current lighting scheme doesn’t depend on the viewer’s position. However, as we also mentioned, there is one important component that does: specular highlights. When you look at a shiny surface, the highlights are a function of where the light is, the orientation of the surface, and where you are as the viewer of the scene. With this in mind, we can compute the amount of light that should form this specular highlight. The result is shown below.
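The post doesn’t say which highlight model it uses; the classic choice is the Phong specular term, which is bright where the mirror-reflection of the light lines up with the viewer. A sketch (the `shininess` exponent is an assumed, tweakable parameter):

```python
def phong_specular(normal, light_dir, view_dir, shininess=32):
    """Phong specular term for one light.

    All vectors are normalized 3-tuples; `light_dir` points toward the
    light and `view_dir` toward the eye. Larger `shininess` values give
    a smaller, tighter highlight.
    """
    n_dot_l = sum(n * l for n, l in zip(normal, light_dir))
    # Reflect the light direction about the normal: r = 2(n·l)n - l.
    reflected = tuple(2 * n_dot_l * n - l for n, l in zip(normal, light_dir))
    r_dot_v = sum(r * v for r, v in zip(reflected, view_dir))
    return max(0.0, r_dot_v) ** shininess

# A viewer sitting exactly on the mirror direction sees the full highlight.
print(phong_specular((0, 0, 1), (0, 0, 1), (0, 0, 1)))  # → 1.0
```

This is the first term in our shading that uses `view_dir` at all, which is exactly why highlights move across a surface as you walk around it.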
Reflections and Some Texture
For the sake of not making this post too long, I have combined these last two steps. Though these spheres take it to the extreme, most objects in real life reflect light from other objects. Even my desk reflects a little bit of what is sitting on it, though it may be hard to see at times. Since the spheres in this scene are very shiny (as we can tell from the sharp specular highlights), it follows that the reflections should be quite clear. In fact, the way the ray tracer has been coded, the reflections will be perfect. Few objects in real life reflect this perfectly, but adding the ability to render fuzzier reflections slows down the rendering substantially, so we have not dealt with that here.
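Those perfect reflections come from bouncing the viewing ray off the surface and tracing it again recursively. The bounce direction itself is one formula; a sketch with my own naming:

```python
def reflect(direction, normal):
    """Mirror-reflect an incoming ray direction about a surface normal.

    Both are normalized 3-tuples. This yields the perfectly sharp
    reflections described above; fuzzy reflections would jitter this
    direction randomly and average the results of many rays.
    """
    d_dot_n = sum(d * n for d, n in zip(direction, normal))
    return tuple(d - 2 * d_dot_n * n for d, n in zip(direction, normal))

# A ray heading down at 45 degrees bounces off a floor facing up.
print(reflect((0.707, 0, -0.707), (0, 0, 1)))  # → (0.707, 0.0, 0.707)
```

A recursive tracer then calls itself with the reflected ray and blends the returned color into the surface color, usually with a depth limit so mutually reflecting spheres like these don’t recurse forever.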
The second thing we have done is wrap an image around each sphere in order to give them a bit more texture than they had above. The resulting image looks much more realistic: the added texture gives the spheres a metallic look and feel, which is much better suited to the kind of reflections and highlights we are generating. The final result is shown below.
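Wrapping an image around a sphere means converting each hit point into texture coordinates. The post doesn’t specify its mapping; the usual one is latitude/longitude, sketched here:

```python
import math

def sphere_uv(point, center):
    """Map a point on a sphere's surface to (u, v) texture coordinates.

    Standard latitude/longitude mapping: u wraps around the equator and
    v runs from the north pole (0) to the south pole (1), both in [0, 1].
    """
    x, y, z = (p - c for p, c in zip(point, center))
    r = math.sqrt(x * x + y * y + z * z)
    u = 0.5 + math.atan2(z, x) / (2 * math.pi)
    v = 0.5 - math.asin(y / r) / math.pi
    return u, v

# The sphere's "north pole" maps to the top edge of the image.
print(sphere_uv((0, 1, 0), (0, 0, 0)))  # → (0.5, 0.0)
```

Sampling the texture image at `(u, v)` then replaces the sphere’s flat base color in the shading computed earlier, while the lighting, shadows, and reflections stay exactly as they were.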