
2000 | Book

Rendering with mental ray®

Author: Thomas Driemeyer

Publisher: Springer Vienna

Book Series: mental ray® Handbooks


About this book

mental ray is the leading rendering engine for generating photorealistic images, built into many 3D graphics applications. This book, written by the mental ray software project leader, gives a general introduction into rendering with mental ray, as well as step-by-step recipes for creating advanced effects, and tips and tricks for professional users. A comprehensive definition of mental ray’s scene description language and the standard shader libraries are included and used as the basis for all examples.

Table of Contents

Frontmatter
Introduction
Abstract
This book contains a general and comprehensive introduction to rendering with mental ray®, based on a complete definition of its input scene format. It is intended for beginners to learn about rendering techniques supported by mental ray, as well as for advanced users who need information on how to achieve certain effects with mental ray while maintaining maximum performance. It can also be read as an introductory course into computer graphics, with an emphasis on rendering and its underlying concepts.
Thomas Driemeyer
Chapter 1. Overview
Abstract
This chapter presents a brief overview of the features and concepts used by mental ray.
Thomas Driemeyer
Chapter 2. Scene Construction
Abstract
The purpose of the mental ray rendering software is the generation of images from scene descriptions. A scene description is a high-level 3D “blueprint” of elements such as geometric objects, lights, and a camera that looks at the scene. Scenes can be created by writing an appropriate text file using a text editor, but in general scenes will be too complex for that and are created by modeling, animation, and CAD tools instead. However, this book uses simple scenes that can be typed in with a text editor, or taken from the sample scenes provided with the mental ray distribution.
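A hand-typed scene of this kind might look roughly like the following skeleton in the mi scene description language. This is an illustrative sketch, not one of the book's sample scenes; the names and numeric values are made up, and the geometry, light, and material definitions that a complete scene needs are omitted for brevity:

```
# Minimal .mi scene skeleton: options, camera, camera instance,
# root group, and a render statement (illustrative values).
options "opt"
    samples  -1 2
    contrast 0.1 0.1 0.1
end options

camera "cam"
    output "rgba" "pic" "out.pic"
    focal 50
    aperture 44
    resolution 400 400
end camera

instance "cam_inst" "cam"
    transform  1 0 0 0        # world-to-camera matrix: the camera
               0 1 0 0        # sits at world z = +5, looking down
               0 0 1 0        # the negative z axis
               0 0 -5 1
end instance

instgroup "rootgrp"
    "cam_inst"                # object and light instances would be
end instgroup                 # listed here as well

render "rootgrp" "cam_inst" "opt"
```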
Thomas Driemeyer
Chapter 3. Cameras
Abstract
Rendering a scene means looking at the scene from the viewpoint of a camera, calculating the view, and recording it in an image file. The camera is part of the scene, just like lights and geometric objects, and has its own instance that defines its position and orientation in 3D space, again just like lights and objects. Unlike objects, however, the camera instance must be attached to the root group, to prevent multiple instancing of the camera. The scene cannot be rendered from multiple viewpoints simultaneously. It is, however, possible to define multiple cameras and attach their camera instances to the root group; the one to be used must be passed to the render statement. See the simple cube example on page 26 for a minimal scene setup.
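A rough sketch of this arrangement in the mi scene language (the element names here are illustrative): two camera instances are attached to the root group, and the render statement selects which one to render through:

```
# Two cameras in one scene; render picks one (illustrative names).
instance "front_cam_inst" "front_cam" end instance
instance "top_cam_inst"   "top_cam"   end instance

instgroup "rootgrp"
    "front_cam_inst" "top_cam_inst" "scene_inst"
end instgroup

render "rootgrp" "front_cam_inst" "opt"   # front camera's view
```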
Thomas Driemeyer
Chapter 4. Surface Shading
Abstract
This chapter discusses surface properties of geometric objects. When the camera sees a geometric object in the scene, the mental ray rendering software needs to determine the color of every point on the object. In addition to the object color this may include:
  • illumination: objects lit by light sources appear brighter, and unlit objects and objects in shadow appear darker. See page 46.
  • texture mapping: instead of a constant object color, images can be wrapped around objects, like wallpaper or decals. Textures can be image files or procedural textures. See page 51.
  • environment mapping: this is a reflection simulation. Reflections of other objects are not visible, only reflections from an “environment” that wraps the entire scene like wallpaper on the inside of an infinite sphere around the scene. See page 99.
  • reflection: reflections allow general mirror effects. Reflections of other objects in the scene are seen on the surface. See page 102.
  • transparency and refraction: this allows see-through objects. See page 107.
  • bump mapping: alters the surface normal to make the object surface appear as if it had more geometric detail than it actually does. See page 79.
Thomas Driemeyer
Chapter 5. Light and Shadow
Abstract
All scene examples in the preceding Surface Shading chapter defined light sources that illuminate objects in certain ways. A light source provides light whose effect is taken into account when illuminating surfaces or volumes. Two main kinds of illumination are distinguished:
  • local illumination
    is handled by material and volume shaders by computing the color of an illuminated surface or volume simply by considering the direction, strength, and color of incoming light and how it is reflected towards the viewer. Incoming light always comes directly from the light source (which can optionally be blocked by occluding objects, causing shadows). The examples in chapter 4.1 on page 46 demonstrate various kinds of local illumination.
  • global illumination
    complements local illumination by allowing material shaders to take into account indirect illumination from other lighted objects in the scene. It cannot be computed by directly adding up incoming light from light sources but requires a special preprocessing step that emits photons from light sources and follows them as they bounce around in the scene. This allows simulating natural effects that go far beyond local illumination. Global illumination is described in chapter 7.6 on page 173.
Thomas Driemeyer
Chapter 6. Volume Rendering
Abstract
The material shaders introduced on page 43 determine the color of the surface of a geometric object. The mental ray rendering software can also take into account the space between objects, with procedural volume shaders that control what happens when looking through it. In the simplest case, this can be a uniform fog that fades distant objects towards white. Volume shading is also the method of choice for
  • anisotropic fog, non-uniform fog banks
  • smoke, clouds
  • fire
  • visible light beams
  • fur and feathers
and all other effects that either have no solid substance or are otherwise difficult to model geometrically. In principle, volume shaders can be used for any interaction with rays, including geometric models. It is possible to write volume shaders that act as sub-renderers in their domain of space, performing object and volume intersection tests like mental ray does. However, in practice volume shaders are used for effects like those in the above list; mental ray can handle solid object intersections much more efficiently.
Thomas Driemeyer
Chapter 7. Caustics and Global Illumination
Abstract
All previous scenes used local illumination: every rendered point on an object surface or in a volume computes the amount of light that reaches it by querying each light source in turn, and adding up the contributions. This method only considers direct illumination because the light travels directly, in a straight path, from the light source to the illuminated point. Occlusion by shadow-casting objects in that path is taken into account.
Thomas Driemeyer
Chapter 8. Motion Blur
Abstract
Physical cameras expose their film for some period of time, during which moving objects in the scene may change their position, orientation, size, or shape. Moving objects leave a blurry “trail” on the film. In particular, the edges of moving objects become semi-transparent because the object was only occupying the blurred region for some fraction of the shutter open time. Motion blurring simulates these effects.
Thomas Driemeyer
Chapter 9. Contours
Abstract
Contour lines, also called ink lines, are used in cartoon animation to provide visual cues to distinguish objects and accentuate their shape, illumination, and spatial relations. Contour rendering in the mental ray rendering software supplements standard color rendering. It works by making decisions during rendering about where and how contour lines should be placed, and then, during postprocessing, drawing contours based on the results. Drawing can take place on top of the rendered color image, or in a blank color frame buffer (if only the contours are desired), or to a PostScript file. All stages of contour rendering are programmable with shaders.
Thomas Driemeyer
Chapter 10. Shaders and Phenomena
Abstract
Shaders are plug-in modules that are used in materials, light sources, cameras, and other elements to control a wide range of effects, from surface material, volume, and camera lens properties to compositing (figure 10.1). Custom shaders can be custom-written in C or C++ and loaded by the mental ray rendering software at runtime. See page 12 for a list of shader types.
Thomas Driemeyer
Chapter 11. Postprocessing and Image Output
Abstract
The two main uses for image files are textures and output images. Textures are described in detail on page 51. Output images are the result of a rendering operation, usually an RGBA color image containing the rendered image, but it is also possible to generate multiple output images and image types other than RGBA color.
Thomas Driemeyer
Chapter 12. Geometric Objects
Abstract
Geometry modeling is a complex domain. Normally, special modeling programs are required to create precise geometry data. This chapter provides a detailed description of modeling geometry with simple mi scene file objects, including some underlying mathematical concepts. It can be skipped on first reading.
Thomas Driemeyer
Chapter 13. Instancing and Grouping
Abstract
Instances are scene elements that place other elements such as objects, lights, cameras, and subgroups in the right place in 3D space where they can be rendered. Every instance references exactly one element to be instanced, plus additional information:
  • The element to be instanced may either be the quoted name of an object, light, camera, or instance group, or a geometry shader introduced with the geometry keyword. In the geometry shader case, the shader is called at scene preprocessing time just before rendering, and is expected to generate an object, light, camera, or instance group with which preprocessing can then proceed. Procedural elements are deleted automatically after rendering.
  • The transform keyword allows specification of a matrix that converts the parent coordinate space above the instance to the space of the instanced element. This matrix is optional but will nearly always be used because it provides the relation between world space (at the top of the scene graph) and object space (at the bottom of the scene graph). The camera is also placed in 3D space with an instance, so the camera instance transformation matrix converts world space to camera space.
  • The motion transform statement does a very similar thing, except that it specifies the motion transformation matrix, which gives rise to motion blur. If the motion matrix is omitted, it effectively defaults to an identity matrix. The alternate form motion off cancels any motion transformation inherited from above, which effectively nails the instanced element in place in the 3D world, as far as blurring is concerned, even if it belongs to a moving sub-scene. See chapter 8 for details on motion blurring.
  • The tag statement sets a label in the instance, which can be used for identification purposes. The mental ray rendering software does not use it but makes it available to shaders, which can alter their behavior based on the label.
  • The data statement allows attaching user data to the instance. User data is not used by mental ray but can be accessed by shaders. See page 240 for details. If the null keyword is used instead of the name of a user data block, any existing user data block reference is removed from the instance; this is useful for incremental changes. If more than one data statement is specified, they are chained in the order specified. Shaders can traverse the chain.
  • The material statement allows a material to be attached to the instance. If the name of a material (not a material shader) is given, it replaces any inherited material and propagates it down the instanced element or subgraph. For this and the other kinds of inheritance, see chapter 14.
  • The hide statement allows disabling the instance. If set to on, the instance and its contents are ignored, as if it and its reference in the parent instance group had been removed. This is useful for quick preview rendering of parts of the scene without massive changes to the database.
  • The visible, shadow, trace, caustic, and globillum flags are used for flag inheritance. They are propagated down the scene graph and override similar flags in geometric objects. See chapter 14 for details.
  • Finally, parameters may be attached to an instance much like parameters can be defined for named shaders. If an instance does not require parameters, it is not necessary, and in fact not efficient, to use an opening parenthesis directly followed by a closing parenthesis as is done in shader definitions, because that would store an empty parameter block instead of omitting the parameter block entirely. Like shader parameters, instance parameters must be declared. The declaration used is the declaration of the inheritance shader specified in the options block, and it is the same for all instances in the scene.
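Several of the statements above can be combined freely in a single instance. A rough sketch (the names, matrix values, and tag label are illustrative, not from the book):

```
# An instance combining transform, motion transform, material
# override, tag, and flags (illustrative values).
instance "cube_inst" "cube"
    transform         1 0 0 0  0 1 0 0  0 0 1 0  -2.0 0 0 1
    motion transform  1 0 0 0  0 1 0 0  0 0 1 0  -2.2 0 0 1
    material "red_mtl"      # replaces any inherited material
    tag 7                   # label made available to shaders
    hide off
    visible on
    shadow on
end instance
```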
Thomas Driemeyer
Chapter 14. Inheritance
Abstract
The previous section introduced multiple instancing, with instances that move a cube to multiple different locations in world space. This is only one of the two main purposes of instances. They also support inheritance. There are four different kinds that can be individually and independently specified in an instance:
  • Material inheritance propagates materials down the scene hierarchy to objects that do not specify their own materials.
  • Tagged material inheritance propagates material arrays down the scene hierarchy to objects that specify indices instead of materials. The indices select materials from the inherited material array.
  • Parameter inheritance allows attaching arbitrary typed parameters to instances, and propagating them down the scene hierarchy in configurable ways.
  • Flag inheritance propagates the visible, trace, shadow, caustics, and global illumination flags down the scene hierarchy.
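Material inheritance, the first kind, can be sketched as follows (the names are illustrative): a material named on a group instance reaches every object below that group which does not define its own material.

```
# "wood_mtl" propagates down to every object below "table_grp"
# that does not carry its own material (illustrative names).
instance "table_inst" "table_grp"
    material "wood_mtl"
end instance
```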
Thomas Driemeyer
Chapter 15. Incremental Changes and Animations
Abstract
An animation consists of a sequence of frames. For example, to make a ball move from the left to the right, a sequence of images must be played in rapid succession, with the ball starting on the left side in the first frame and moving successively farther to the right in each successive image. The previous chapters showed how to generate a single image from a scene file. In principle, an animation can be generated by rendering a large number of such scenes, but this is not the most efficient method.
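The more efficient method is to change the scene incrementally: after the first frame is rendered, only the modified elements are redefined with the incremental keyword, and the scene is rendered again. A rough sketch of two frames of the moving ball (names and matrix values are illustrative):

```
# Frame 1: define the ball instance and render.
instance "ball_inst" "ball"
    transform  1 0 0 0  0 1 0 0  0 0 1 0  -3 0 0 1
end instance
render "rootgrp" "cam_inst" "opt"

# Frame 2: redefine only the instance (moved right) and render again.
incremental instance "ball_inst" "ball"
    transform  1 0 0 0  0 1 0 0  0 0 1 0  -2 0 0 1
end instance
render "rootgrp" "cam_inst" "opt"
```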
Thomas Driemeyer
Chapter 16. Using and Creating Shader Libraries
Abstract
This chapter is advanced material that can be skipped on first reading.
Thomas Driemeyer
Chapter 17. Parallelism
Abstract
Parallelism refers to the ability to use more than one processor at a time. There are two types: thread parallelism and network parallelism.
Thomas Driemeyer
Chapter 18. The Options Block
Abstract
Every scene file must have an options block that specifies various operational modes of the mental ray rendering software. More than one options block may exist, but only one can be named in a render statement that initiates rendering. This section lists all available options. The page numbers in the tables refer to more detailed explanations in other chapters.
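A small options block might look like the following sketch (the option values are illustrative; the complete set of options is listed in this chapter's tables):

```
# A few common options (illustrative values).
options "opt"
    samples  -1 2             # min/max recursive sampling levels
    contrast 0.1 0.1 0.1      # spatial contrast threshold (RGB)
    trace depth 2 2 4         # reflection, refraction, sum limits
    shadow on
    object space
end options
```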
Thomas Driemeyer
Chapter 19. Quality and Performance Tuning
Abstract
This chapter provides guidelines for building high-quality scenes that can be rendered most efficiently with the mental ray rendering software, and provides hints for finding and fixing problems. It assumes familiarity and some experience with mental ray. This chapter can be skipped on first reading.
Thomas Driemeyer
Chapter 20. Troubleshooting
Abstract
This section lists common problems and their solutions.
Thomas Driemeyer
Backmatter
Metadata
Title
Rendering with mental ray®
Author
Thomas Driemeyer
Copyright Year
2000
Publisher
Springer Vienna
Electronic ISBN
978-3-7091-3697-3
Print ISBN
978-3-211-83403-9
DOI
https://doi.org/10.1007/978-3-7091-3697-3