

Writing mental ray® Shaders

A Perceptual Introduction


About this book

The word "render" isn't unique to the vocabulary of computer graphics. We can talk about a "watercolor rendering," a "musical rendering" or a "poetic rendering." In each of these, there is a transformation from one domain to another: from the landscape before the painter to color on paper, from musical notation to sound, from the associations in a poet's mind to a book of poetry. Figure 1.1: Czar's Waiting Room, Main Railway Station, Helsinki, Eliel Saarinen, 1910, watercolor. But the type of rendering that may come closest to what we mean when we talk about rendering in computer graphics is in architecture. Geometric blueprints and technical specifications of building materials are transformed in the architectural rendering into a picture of the building 1 Introduction as it will appear when construction is complete. In addition to the designs of the building's geometry and its visual characteristics, the artist chooses a point of view to depict the scene in perspective. This is a transformation of a description of imagined space into a picture of that space. In a watercolor by architect Eliel Saarinen (Figure 1.1), the effect of light on marble is demonstrated in a way that would be lost in even a careful reading of blueprints and descriptions of materials. A mere brushstroke of a particular color in a particular place paradoxically transforms the dull matte appearance of watercolor into the sheen of polished stone.

Table of Contents

Frontmatter

Introduction

Chapter 1. Introduction
Abstract
The word “render” isn’t unique to the vocabulary of computer graphics. We can talk about a “watercolor rendering,” a “musical rendering” or a “poetic rendering.” In each of these, there is a transformation from one domain to another: from the landscape before the painter to color on paper, from musical notation to sound, from the associations in a poet’s mind to a book of poetry.

Structure

Frontmatter
Chapter 2. The structure of the scene
Abstract
How do we describe the scene to be rendered in a manner that mental ray can understand? Notice that even in asking this question we’re already engaged in a metaphor—mental ray doesn’t understand anything; there is input and the program produces output. But to simplify our creation of the input, it would be better if it related to our own intentions behind the picture, not to the pattern of bits that will be read (another metaphor) by mental ray.
Chapter 3. The structure of a shader
Abstract
As we’ll see in the course of this book, shaders in mental ray are used throughout the rendering process and for very different purposes. For most shader types, there is a standard structure for the inputs and outputs that you can rely on when you are writing a new shader. To begin to make sense of that structure, we’ll start with a really simple idea for a shader and expand what it can do until we arrive at one of the basic structures for shaders in mental ray.
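That basic structure, for most shader types, is a C function with three arguments. A sketch of its general shape (the names here are illustrative, and the result type varies with the shader type):

#include "shader.h"

/* parameters as they would be declared for the shader in the scene file */
struct any_shader {
    miColor color;
};

/* the common three-argument shape shared by most shader types:
   result: the shader's output (miColor here; displacement shaders
           use miScalar, and a few types differ)
   state:  the rendering state maintained by mental ray
   paras:  the shader's parameters from the scene file */
DLLEXPORT miBoolean any_shader(
    miColor *result, miState *state, struct any_shader *paras);
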
Chapter 4. Shaders in the scene
Abstract
In the previous two chapters, we saw the overall structure of the scene file and the structure of an individual shader. In this chapter, we’ll take a look at how they fit together—how shaders in the scene can be named, how their parameters and results can be connected, and how a set of shaders can be bundled together into larger structures that simplify the creation of complex shader effects.

Color

Frontmatter
Chapter 5. A single color
Abstract
As we saw in Chapter 3, a shader calculates some output value based on a standard set of input data and the parameters chosen for the shader. Now we’ll see how these concepts are implemented in C.
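A minimal complete version of such a constant-color shader might look like the following sketch (the shader name and parameter struct are assumptions, not the chapter's exact code):

#include "shader.h"

struct one_color {
    miColor color;              /* the single color parameter */
};

DLLEXPORT int one_color_version(void) { return 1; }

DLLEXPORT miBoolean one_color(
    miColor *result, miState *state, struct one_color *paras)
{
    /* mi_eval_color resolves the parameter, whether it is a constant
       in the scene file or the output of another shader */
    *result = *mi_eval_color(&paras->color);
    return miTRUE;
}
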
Chapter 6. Color from orientation
Abstract
In the previous chapter, we defined a shader that produced a constant color. Typically, the color of a surface will depend upon a number of factors: its shape, where it is, images that are used to define its color, and, as we’ll see in the next part of the book, Light, a simulation of the physics of illumination.
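One simple orientation-dependent shader blends between two colors based on how directly the surface faces the viewer. A sketch using state->dot_nd, the dot product of the ray direction and the surface normal (names and parameters are illustrative):

#include <math.h>
#include "shader.h"

struct facing_color {
    miColor facing;             /* color where the surface faces the eye */
    miColor edge;               /* color where the surface turns away */
};

DLLEXPORT int facing_color_version(void) { return 1; }

DLLEXPORT miBoolean facing_color(
    miColor *result, miState *state, struct facing_color *paras)
{
    miColor *facing = mi_eval_color(&paras->facing);
    miColor *edge   = mi_eval_color(&paras->edge);

    /* |N . D| is 1 head-on and falls to 0 at a silhouette edge */
    miScalar f = (miScalar)fabs(state->dot_nd);

    result->r = edge->r + f * (facing->r - edge->r);
    result->g = edge->g + f * (facing->g - edge->g);
    result->b = edge->b + f * (facing->b - edge->b);
    result->a = 1;
    return miTRUE;
}
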
Chapter 7. Color from position
Abstract
As we saw in the last chapter, the state parameter passed to shaders contains information about the current point being rendered. The state includes the current position of that point, stored in the state structure as state->point, of type miVector. We can compare this position to three-dimensional points passed as parameters to a shader to vary the color of an object.
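A sketch of that idea: choose between two colors depending on whether the shaded point lies above or below a reference point passed as a parameter (all names here are illustrative):

#include "shader.h"

struct split_height {
    miVector point;             /* reference point for the split */
    miColor  above;
    miColor  below;
};

DLLEXPORT int split_height_version(void) { return 1; }

DLLEXPORT miBoolean split_height(
    miColor *result, miState *state, struct split_height *paras)
{
    miVector *p = mi_eval_vector(&paras->point);

    /* state->point is the position currently being rendered */
    if (state->point.y > p->y)
        *result = *mi_eval_color(&paras->above);
    else
        *result = *mi_eval_color(&paras->below);
    result->a = 1;
    return miTRUE;
}
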
Chapter 8. The transparency of a surface
Abstract
In previous chapters, we have used variables in the state struct to calculate the color of the surface. We’ve also used a few of the mental ray library functions, like the mi_eval functions for parameters, and some utility functions, like mi_vector_dot. In this chapter, we use a function that examines the rendering environment through ray tracing.
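The central call for transparency is mi_trace_transparent, which continues the ray behind the current surface and returns what it finds there. A sketch of a simple transparency shader built around it (parameter names are assumptions):

#include "shader.h"

struct see_through {
    miColor  color;             /* the surface's own color */
    miScalar transparency;      /* 0 = opaque, 1 = fully transparent */
};

DLLEXPORT int see_through_version(void) { return 1; }

DLLEXPORT miBoolean see_through(
    miColor *result, miState *state, struct see_through *paras)
{
    miColor  *color  = mi_eval_color(&paras->color);
    miScalar  t      = *mi_eval_scalar(&paras->transparency);
    miColor   behind = {0, 0, 0, 0};

    if (t > 0)
        /* continue the ray behind this surface */
        mi_trace_transparent(&behind, state);

    /* blend the surface color with whatever lies behind it */
    result->r = (1 - t) * color->r + t * behind.r;
    result->g = (1 - t) * color->g + t * behind.g;
    result->b = (1 - t) * color->b + t * behind.b;
    result->a = 1;
    return miTRUE;
}
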
Chapter 9. Color from functions
Abstract
So far, we’ve defined single colors for the surfaces of objects and used other aspects of the object (its orientation, for example) to create a variation of color. We can also use a mathematical function to define how the color varies across the object. For its arguments, this function will use numerical values associated with some aspect of the surface. In this case we say that there is a mapping from some surface parameter to the color value we are going to use in our shader.
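A sketch of such a mapping: a sine function of the first texture coordinate, scaled by a frequency parameter, blending between two colors (illustrative names; the chapter's own functions differ):

#include <math.h>
#include "shader.h"

struct sine_stripes {
    miColor  a;
    miColor  b;
    miScalar frequency;         /* stripes per unit of texture space */
};

DLLEXPORT int sine_stripes_version(void) { return 1; }

DLLEXPORT miBoolean sine_stripes(
    miColor *result, miState *state, struct sine_stripes *paras)
{
    miColor  *ca   = mi_eval_color(&paras->a);
    miColor  *cb   = mi_eval_color(&paras->b);
    miScalar  freq = *mi_eval_scalar(&paras->frequency);

    /* map the first texture coordinate through a sine to [0,1] */
    miScalar u = state->tex_list ? state->tex_list[0].x : 0;
    miScalar s = (miScalar)(0.5 + 0.5 * sin(u * freq * 2.0 * 3.14159265358979));

    result->r = ca->r + s * (cb->r - ca->r);
    result->g = ca->g + s * (cb->g - ca->g);
    result->b = ca->b + s * (cb->b - ca->b);
    result->a = 1;
    return miTRUE;
}
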
Chapter 10. The color of edges
Abstract
To draw edges and create other non-photorealistic effects, mental ray provides four different shaders that work together to specify separate phases of contour rendering. In the previous chapters, all our shader functions used the same argument signature, with the rendering state and parameters from the scene as input to a single C function that calculated the result. The set of four contour shader types are the major exception in mental ray to this basic pattern.

Light

Frontmatter
Chapter 11. Lights
Abstract
In the previous chapters, we specified the color of an instance with a shader in the instance’s material. We can think of a shader as having a contract, the requirements for the shader given its use in the scene file. We define shaders for our simulation of lights, too, and the contract for the light shader is to define the color that is produced by the light.
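In its simplest form, that contract can be satisfied by a light shader that just returns a color parameter. A sketch (names are illustrative):

#include "shader.h"

struct plain_light {
    miColor color;              /* the color the light produces */
};

DLLEXPORT int plain_light_version(void) { return 1; }

/* light shader contract: set *result to the light's color */
DLLEXPORT miBoolean plain_light(
    miColor *result, miState *state, struct plain_light *paras)
{
    *result = *mi_eval_color(&paras->color);
    return miTRUE;
}
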
Chapter 12. Light on a surface
Abstract
In our discussion of light shaders in the last chapter, we used a simple material shader that took the light into account to define the color of the surface. In this chapter, we’ll use our light shader with a variety of material shaders that approximate the effect of diffuse and mirror-like reflections from a surface.
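A sketch of the shape of such a material shader: loop over the lights attached as parameters, sample each with mi_sample_light, and accumulate a Lambertian diffuse term plus a Phong specular term. The struct follows the common i_light/n_light/light array convention; the names are illustrative:

#include "shader.h"

struct lit_surface {
    miColor  diffuse;
    miColor  specular;
    miScalar exponent;          /* Phong specular exponent */
    int      i_light;           /* first light in the array */
    int      n_light;           /* number of lights */
    miTag    light[1];          /* array of light instances */
};

DLLEXPORT int lit_surface_version(void) { return 1; }

DLLEXPORT miBoolean lit_surface(
    miColor *result, miState *state, struct lit_surface *paras)
{
    miColor  *kd       = mi_eval_color(&paras->diffuse);
    miColor  *ks       = mi_eval_color(&paras->specular);
    miScalar  spec_exp = *mi_eval_scalar(&paras->exponent);
    int       i        = *mi_eval_integer(&paras->i_light);
    int       n        = *mi_eval_integer(&paras->n_light);
    miTag    *light    = mi_eval_tag(paras->light) + i;

    result->r = result->g = result->b = 0;
    result->a = 1;

    for (; n--; light++) {
        int      samples = 0;
        miColor  sum, lc;
        miVector dir;
        miScalar dot_nl;

        sum.r = sum.g = sum.b = 0;
        while (mi_sample_light(&lc, &dir, &dot_nl, state, *light, &samples)) {
            miScalar s = mi_phong_specular(spec_exp, state, &dir);
            if (dot_nl > 0) {
                sum.r += (kd->r * dot_nl + ks->r * s) * lc.r;
                sum.g += (kd->g * dot_nl + ks->g * s) * lc.g;
                sum.b += (kd->b * dot_nl + ks->b * s) * lc.b;
            }
        }
        /* average the light's samples before adding its contribution */
        if (samples) {
            result->r += sum.r / samples;
            result->g += sum.g / samples;
            result->b += sum.b / samples;
        }
    }
    return miTRUE;
}
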
Chapter 13. Shadows
Abstract
Chapter 11, Lights, introduced the idea of a shader contract that defines the requirements for shaders of a given type. For example, to include the attenuation of a light by shadowing for direct illumination, a light shader calls the library function mi_trace_shadow.
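A sketch of where that call fits: a light shader produces its color, then lets mi_trace_shadow attenuate it by any occluders between the light and the illuminated point (illustrative names):

#include "shader.h"

struct shadowed_light {
    miColor color;
};

DLLEXPORT int shadowed_light_version(void) { return 1; }

DLLEXPORT miBoolean shadowed_light(
    miColor *result, miState *state, struct shadowed_light *paras)
{
    *result = *mi_eval_color(&paras->color);

    /* attenuate the light by whatever shadow-casting objects lie
       between the light and the point being shaded */
    return mi_trace_shadow(result, state);
}
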
Chapter 14. Reflection
Abstract
In traditional descriptions of reflection in computer graphics, the word “specular” has been used in a very general way to talk about the shiny quality of a surface. For example, the original paper on the Phong model uses “specular” in contrast to the diffuse component calculated by Lambert’s cosine law. Implementations of the Phong model also typically use “specular” to mean all non-diffuse reflection components. In mental ray, we reserve the word “specular” for mirror-like reflections, and use the word “glossy” for scattering reflective surfaces, like brushed aluminum or tarnished silver. In this sense, “glossy reflections” occur in a continuum between diffuse reflections at one end and specular reflections at the other.
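For the mirror-like (specular) end of that continuum, the core calls are mi_reflection_dir and mi_trace_reflection. A sketch of a simple mirror material (names are illustrative):

#include "shader.h"

struct mirror_surface {
    miColor  base;              /* surface color under the reflection */
    miScalar reflectivity;      /* 0 = none, 1 = perfect mirror */
};

DLLEXPORT int mirror_surface_version(void) { return 1; }

DLLEXPORT miBoolean mirror_surface(
    miColor *result, miState *state, struct mirror_surface *paras)
{
    miColor  *base = mi_eval_color(&paras->base);
    miScalar  k    = *mi_eval_scalar(&paras->reflectivity);
    miColor   refl = {0, 0, 0, 0};
    miVector  dir;

    /* mirror direction of the incoming ray around the normal */
    mi_reflection_dir(&dir, state);

    /* trace the reflection; fall back to the environment if the
       ray leaves the scene */
    if (k > 0 && !mi_trace_reflection(&refl, state, &dir))
        mi_trace_environment(&refl, state, &dir);

    result->r = (1 - k) * base->r + k * refl.r;
    result->g = (1 - k) * base->g + k * refl.g;
    result->b = (1 - k) * base->b + k * refl.b;
    result->a = 1;
    return miTRUE;
}
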
Chapter 15. Refraction
Abstract
In Chapter 8, we explored the cumulative effects of multiple layers of transparency. In that shader, we assumed that the ray from the eye continues in a straight line through all the object instances to which our shader was attached.
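Refraction bends that ray at the surface according to the indices of refraction. A sketch of the central calls, mi_refraction_dir and mi_trace_refraction (the names and the assumption of air outside the object are illustrative):

#include "shader.h"

struct glassy {
    miScalar ior;               /* index of refraction of the object */
};

DLLEXPORT int glassy_version(void) { return 1; }

DLLEXPORT miBoolean glassy(
    miColor *result, miState *state, struct glassy *paras)
{
    miScalar ior = *mi_eval_scalar(&paras->ior);
    miVector dir;

    /* bend the ray; assume air (ior 1.0) outside the object */
    if (mi_refraction_dir(&dir, state, 1.0, ior)) {
        mi_trace_refraction(result, state, &dir);
    } else {
        /* total internal reflection: mirror the ray instead */
        mi_reflection_dir(&dir, state);
        mi_trace_reflection(result, state, &dir);
    }
    return miTRUE;
}
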
Chapter 16. Light from other surfaces
Abstract
In all of our previous shaders that dealt with a simulation of light, we were only considering surfaces lit by direct illumination, in which there is a direct path from the light to that surface. This is only part of a full simulation of light in the physical world. We must also consider light that has first reflected from another surface and its contribution to the final rendered color of an object, that object’s indirect illumination. In discussions of rendering, the terms local illumination and global illumination are often used to differentiate between the relatively simple problem of determining a direct path from a surface to a light source and the much more involved methods required when light is bouncing everywhere within a scene.
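When global illumination or final gathering is enabled, a material shader can include that indirect contribution through mi_compute_irradiance, which gathers the light arriving at the current point from other surfaces. A hedged sketch of how it might combine with a diffuse color (names are illustrative):

#include "shader.h"

struct indirect_diffuse {
    miColor diffuse;
};

DLLEXPORT int indirect_diffuse_version(void) { return 1; }

DLLEXPORT miBoolean indirect_diffuse(
    miColor *result, miState *state, struct indirect_diffuse *paras)
{
    miColor *kd = mi_eval_color(&paras->diffuse);
    miColor  irrad;

    /* light arriving indirectly, from other surfaces */
    mi_compute_irradiance(&irrad, state);

    result->r = kd->r * irrad.r;
    result->g = kd->g * irrad.g;
    result->b = kd->b * irrad.b;
    result->a = 1;
    return miTRUE;
}
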

Shape

Frontmatter
Chapter 17. Modifying surface geometry
Abstract
So far, our shaders have only been calculating color values—the type of the result argument has been a pointer to miColor. Many shaders depend upon the shape of a surface to calculate the way in which light determines the resulting color value. By modifying the surface geometry in a shader, we can affect the calculation of that color. In a displacement shader, the result is a miScalar that defines how far along the surface normal the current point should be moved. Displacement shaders can provide geometric detail at a scale that would be very difficult to model directly.
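A sketch of the displacement contract: the result is a single miScalar distance along the normal, here driven by a sine of the texture coordinates (the amplitude, frequency, and names are illustrative):

#include <math.h>
#include "shader.h"

struct sine_displace {
    miScalar amplitude;
    miScalar frequency;
};

DLLEXPORT int sine_displace_version(void) { return 1; }

/* displacement shaders return an miScalar: the distance to move
   the current point along the surface normal */
DLLEXPORT miBoolean sine_displace(
    miScalar *result, miState *state, struct sine_displace *paras)
{
    miScalar amp  = *mi_eval_scalar(&paras->amplitude);
    miScalar freq = *mi_eval_scalar(&paras->frequency);

    if (state->tex_list)
        *result = amp * (miScalar)sin(state->tex_list[0].x * freq);
    return miTRUE;
}
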
Chapter 18. Modifying surface orientation
Abstract
In the last chapter, we used displacement shaders to modify the surface geometry, with the normals adjusted by mental ray to account for the new orientation of the surface. In this chapter, we’ll look at a technique that doesn’t change the geometry of the surface as we did with displacement mapping, but only modifies the normal’s description of the surface’s orientation. Because we can use this technique to map a set of orientation changes to the surface, and these changes can create a bumpy look to the surface, this technique has traditionally been called bump mapping—we are creating a mapping from positions on the surface to a change of the apparent orientation at that point.
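A sketch of the bump-mapping idea in its simplest form: perturb state->normal, renormalize it, and update the cached dot product before shading. The perturbation function and names are illustrative; a real bump shader would derive the perturbation from a texture:

#include <math.h>
#include "shader.h"

struct wavy_bump {
    miScalar amplitude;
    miScalar frequency;
};

DLLEXPORT int wavy_bump_version(void) { return 1; }

DLLEXPORT miBoolean wavy_bump(
    miColor *result, miState *state, struct wavy_bump *paras)
{
    miScalar amp  = *mi_eval_scalar(&paras->amplitude);
    miScalar freq = *mi_eval_scalar(&paras->frequency);

    if (state->tex_list) {
        /* perturb the normal as a function of texture space */
        state->normal.x += amp * (miScalar)sin(state->tex_list[0].x * freq);
        state->normal.y += amp * (miScalar)cos(state->tex_list[0].y * freq);
        mi_vector_normalize(&state->normal);

        /* keep the cached normal-dot-direction consistent */
        state->dot_nd = mi_vector_dot(&state->normal, &state->dir);
    }

    /* shade with the modified normal; here, a simple facing-ratio gray */
    result->r = result->g = result->b = (miScalar)fabs(state->dot_nd);
    result->a = 1;
    return miTRUE;
}
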
Chapter 19. Creating geometric objects
Abstract
So far, all of our mental ray API library functions have come from the library declared in the header file shader.h, like the illumination model functions in Chapter 12 to calculate the specular component—mi_phong_specular, mi_blinn_specular, and friends. Shaders can also call functions in mental ray’s geometry shader API, declared in geoshader.h. These functions mirror the language of the mi scene file, and can be used to define geometry shaders that create geometric data. Like displacement shaders, the role of the geometry shader isn’t to “shade.” Geometry shaders add elements to the geometric component of the scene database constructed by mental ray before the actual rendering begins. Some objects lend themselves to procedural construction, either because they are based on pre-existing data (CAD data that is translated into mental ray’s object representation) or because they use quasi-random techniques to model natural phenomena, like plants and trees.
Chapter 20. Modeling hair
Abstract
In the previous chapter, we created geometric objects not by parsing a scene file, but through calls to the functions defined in the geometry shader API and declared in geoshader.h. At first blush, it hardly seems worth the trouble to make a handful of triangles with all the complexity of a geometry shader—a program written in a scripting language like Python could generate a scene file that would accomplish the same thing.

Space

Frontmatter
Chapter 21. The environment of the scene
Abstract
In Chapter 14 we used an environment shader to specify a color for reflecting rays that left the scene without striking any other objects. Environment shaders simply return a color, but in this chapter we will also provide optional initialization and cleanup functions that will assist in the environment shader’s calculations. These optional shader functions can be defined for any shader type.
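Those optional functions follow a naming convention: for a shader named myenv, mental ray looks for myenv_init and myenv_exit. A sketch of the skeleton (the per-instance logic is illustrative):

#include "shader.h"

struct myenv {
    miColor color;
};

DLLEXPORT int myenv_version(void) { return 1; }

DLLEXPORT void myenv_init(
    miState *state, struct myenv *paras, miBoolean *inst_init_req)
{
    if (!paras)
        /* once per shader: ask for a per-instance call as well */
        *inst_init_req = miTRUE;
    else {
        /* once per shader instance: precompute from parameters here */
    }
}

DLLEXPORT void myenv_exit(miState *state, struct myenv *paras)
{
    /* release anything allocated in the matching init call */
}

/* the environment shader itself: return a color for rays that
   leave the scene */
DLLEXPORT miBoolean myenv(
    miColor *result, miState *state, struct myenv *paras)
{
    *result = *mi_eval_color(&paras->color);
    return miTRUE;
}
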
Chapter 22. A visible atmosphere
Abstract
In previous chapters, we have attached shaders to the instances of geometric objects and to the camera. As a ray proceeds from our eye (the camera), we intersect geometric surfaces. Shaders associated with that surface can specify its color and the modification of its geometric properties. However, for natural phenomena like fog and smoke, no similar geometric surfaces exist. Or at least, no geometric surfaces that we could effectively model exist—we hardly want to create individual geometric objects for the enormous number of microscopic water particles that a cloud contains.
Chapter 23. Volumetric effects
Abstract
In the last chapter, we attached a volume shader to the camera, giving us a final opportunity to modify the eye ray color before the sample is stored for later filtering to create a pixel. In this chapter, we’ll attach volume shaders to objects, transforming them into three-dimensional regions within which we can control the modification of the background color as the ray passes through the object.
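A sketch of a volume shader that fades the incoming ray color toward a fog color based on state->dist, the distance the ray traveled through the volume (the exponential falloff and names are assumptions):

#include <math.h>
#include "shader.h"

struct fog_volume {
    miColor  color;             /* the fog color */
    miScalar density;
};

DLLEXPORT int fog_volume_version(void) { return 1; }

/* volume shaders receive the ray's current color in *result and
   modify it for the distance traveled through the volume */
DLLEXPORT miBoolean fog_volume(
    miColor *result, miState *state, struct fog_volume *paras)
{
    miColor  *fog     = mi_eval_color(&paras->color);
    miScalar  density = *mi_eval_scalar(&paras->density);

    if (state->dist > 0) {
        /* fraction of the background absorbed over the distance */
        miScalar f = 1 - (miScalar)exp(-density * state->dist);
        result->r += f * (fog->r - result->r);
        result->g += f * (fog->g - result->g);
        result->b += f * (fog->b - result->b);
    }
    return miTRUE;
}
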

Image

Frontmatter
Chapter 24. Changing the lens
Abstract
In most of the scenes we’ve been rendering, we easily slip into a photographic metaphor of image-making, with the camera producing an objective record of the objects before it. In this chapter, we’ll modify the behavior of the camera itself by changing its initial creation of rays in a lens shader.
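The core of a lens shader is a call to mi_trace_eye with a ray origin and direction. A sketch that simply casts the unmodified eye ray, marking the point at which any distortion would be introduced (names are illustrative):

#include "shader.h"

struct plain_lens {
    int dummy;                  /* no parameters needed for this sketch */
};

DLLEXPORT int plain_lens_version(void) { return 1; }

DLLEXPORT miBoolean plain_lens(
    miColor *result, miState *state, struct plain_lens *paras)
{
    /* start from the camera's ray for this sample... */
    miVector origin = state->org;
    miVector dir    = state->dir;

    /* ...modify origin and dir here to change the lens behavior... */

    /* ...then cast the (possibly modified) eye ray into the scene */
    return mi_trace_eye(result, state, &origin, &dir);
}
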
Chapter 25. Rendering image components
Abstract
During the course of rendering, mental ray stores information in a set of predefined frame buffers. The typical output statement in the camera block writes color image data to a file on disk. You can define additional frame buffers for use in shaders, storing data in them and accessing that data in other shaders.
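Writing to a user frame buffer from a shader is a single call. A sketch that stores the surface normal alongside the usual color result, assuming a user frame buffer with index 0 has been declared in the options block (the names and the buffer index are assumptions):

#include "shader.h"

struct store_normal {
    miColor color;
};

DLLEXPORT int store_normal_version(void) { return 1; }

DLLEXPORT miBoolean store_normal(
    miColor *result, miState *state, struct store_normal *paras)
{
    miColor n;

    /* pack the surface normal into a color for the extra buffer */
    n.r = state->normal.x;
    n.g = state->normal.y;
    n.b = state->normal.z;
    n.a = 1;
    mi_fb_put(state, 0, &n);    /* user frame buffer 0 */

    *result = *mi_eval_color(&paras->color);
    return miTRUE;
}
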
Chapter 26. Modifying the final image
Abstract
All of our scenes have saved the result of rendering to a file on disk with an output statement in the camera block. This was also the mechanism we used to write out frame buffer data in the previous chapter. Writing files is a special case of output shaders, a process that executes after the samples have been filtered to create pixels.
Backmatter
Metadata
Title: Writing mental ray® Shaders
Author: Andy Kopra
Copyright Year: 2008
Publisher: Springer Vienna
Electronic ISBN: 978-3-211-48965-9
Print ISBN: 978-3-211-48964-2
DOI: https://doi.org/10.1007/978-3-211-48965-9