
*David McAllister, NVIDIA*

The spatial bidirectional reflectance distribution function (SBRDF) represents the reflectance of a surface at each different point, for each incoming and each outgoing angle. This function represents the appearance of the surface in a very general, detailed way. This chapter discusses a compact SBRDF representation as well as rendering approaches for SBRDFs illuminated by discrete lights or by environment maps.

An SBRDF is a combination of texture mapping and the bidirectional reflectance distribution function (BRDF). A texture map stores reflectance, or other attributes, that vary spatially over a 2D surface. A BRDF stores reflectance at a single point on a surface as it varies over all incoming and outgoing angles. See Figure 18-1.

Figure 18-1 The SBRDF Domain

Real surfaces usually have significant variations, both spatially over the surface and angularly over all light and view directions. Thus, to represent most surfaces more realistically, we should combine BRDFs with texture mapping. This combination is the SBRDF. See McAllister 2002 for more details.

The most straightforward SBRDF representation is to store, at each texel of a texture map, all the parameters of that point's BRDF. Any BRDF representation could be used, but one that's compact and works well for hardware rendering is the Lafortune representation (Lafortune 1997). This representation consists of a sum of terms:

$$f(\omega_i, \omega_o) = r_d + \sum_j r_{s,j}\,\bigl[C_{x,j}\,\omega_{i,x}\,\omega_{o,x} + C_{y,j}\,\omega_{i,y}\,\omega_{o,y} + C_{z,j}\,\omega_{i,z}\,\omega_{o,z}\bigr]^{n_j}$$

where $r_d$ is the diffuse reflectance. The terms in the summation are specular lobes. Each lobe has an albedo $r_{s,j}$, a shape given by the scalars $C_{x,j}$, $C_{y,j}$, and $C_{z,j}$, and a specular exponent $n_j$.

This equation can be thought of as a generalized dot product, in which the three terms are scaled arbitrarily:

$$\bigl[C_x\,\omega_{i,x}\,\omega_{o,x} + C_y\,\omega_{i,y}\,\omega_{o,y} + C_z\,\omega_{i,z}\,\omega_{o,z}\bigr]^{n}$$

See Figure 18-2.

Figure 18-2 A BRDF (Green) Composed of Three Lobes (Blue) and a Diffuse Component (Orange)
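To make the lobe arithmetic concrete, here is a minimal scalar sketch of evaluating a Lafortune BRDF at one surface point. The function name and single-channel simplification are illustrative only, not from the chapter's code:

```python
def lafortune_brdf(wi, wo, r_d, lobes):
    """Evaluate a Lafortune BRDF at one point, in local surface coordinates.

    wi, wo : unit incoming / outgoing directions as (x, y, z) tuples.
    r_d    : diffuse reflectance (a single scalar channel for brevity).
    lobes  : list of (r_s, (Cx, Cy, Cz), n) specular lobes.
    """
    val = r_d
    for r_s, (cx, cy, cz), n in lobes:
        # The "generalized dot product": each component scaled by Cx, Cy, Cz.
        d = cx * wi[0] * wo[0] + cy * wi[1] * wo[1] + cz * wi[2] * wo[2]
        if d > 0.0:               # a lobe contributes only on its positive side
            val += r_s * d ** n
    return val

# A Phong-like lobe aimed about the mirror direction (Cx = Cy = -1, Cz = 1),
# lit and viewed from straight overhead, so the lobe is at its peak:
peak = lafortune_brdf((0.0, 0.0, 1.0), (0.0, 0.0, 1.0),
                      0.2, [(1.0, (-1.0, -1.0, 1.0), 10.0)])
print(peak)
```

With both directions along the normal, the sheared dot product is 1, so the result is the diffuse term plus the full lobe albedo.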

The Lafortune representation is evaluated in local surface coordinates. The *x* axis and *y* axis are in the direction of anisotropy (the scratch or thread direction), and the *z* axis is the normal. Defining *C* as the diagonal matrix $\mathrm{diag}(C_x, C_y, C_z)$ lets each lobe be written as $[C\,\omega_i \cdot \omega_o]^n$. Setting $C_x = C_y = -1$ and $C_z = 1$, for example, aims the lobe about the mirror-reflection direction, yielding a standard Phong lobe.

The Lafortune representation's flexibility to aim and scale each scattering lobe is key to successfully using only a few lobes to approximate the BRDFs. This property also enables the glossy environment mapping technique described in Section 18.4. The Lafortune representation is well suited to the shape BRDFs typically have. It's compact and is capable of representing interesting BRDF properties, such as the increasing reflectance of Fresnel reflection, off-specular peaks, and retroreflection.

The *r* albedo values are RGB triples, but the $C_x$, $C_y$, $C_z$, and *n* values are scalars, which keeps the per-texel storage compact.

The SBRDF data can be measured from real surfaces (McAllister 2002) or be painted by an artist using a custom-written paint program. The simplest painting approach is to use a palette of existing BRDFs and paint them into an SBRDF texture map. Palette entries can blend existing BRDFs, can be sampled from the SBRDF database (McAllister 2003), or can be defined using a dialog box to set the *r*, $C_x$, $C_y$, $C_z$, and *n* parameters directly.

The SBRDF is loaded into texture memory the same way for either rendering approach. The diffuse color, $r_d$, is stored in one texture map (with its alpha channel available for alpha kill or other use). For each lobe, the shape parameters $C_x$, $C_y$, $C_z$, and *n* are stored in one texture map, and the lobe albedo $r_s$ is stored in another.
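The shape texture packs signed values into unsigned texels, which is why the shaders below scale and bias what they fetch. The sketch that follows mirrors that scheme in Python; the helper names are hypothetical, but the `SCL`, `BIAS`, and `EXSCL` constants match the shader listings:

```python
SCL = 1.0 / (96.0 / 256.0)   # = 256/96, so decoded values span about +/- 1.3333
BIAS = SCL / 2.0             # centers the range on zero
EXSCL = 255.0                # the exponent n is stored as n/255

def encode_shape(cx, cy, cz, n):
    """Pack lobe parameters into 0..1 texel values (e.g. for an RGBA8 map)."""
    to01 = lambda c: (c + BIAS) / SCL
    return (to01(cx), to01(cy), to01(cz), n / EXSCL)

def decode_shape(r, g, b, a):
    """Unpack, mirroring the shader:
    texel * (SCL, SCL, SCL, EXSCL) - (BIAS, BIAS, BIAS, 0)."""
    from01 = lambda t: t * SCL - BIAS
    return (from01(r), from01(g), from01(b), a * EXSCL)

# Round trip for a mirror-like lobe with exponent 40:
print(decode_shape(*encode_shape(-1.0, -1.0, 1.0, 40.0)))
```

The round trip recovers the original parameters up to floating-point rounding; in the real pipeline the encode side also quantizes to 8 bits.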

The Cg shader for illuminating SBRDF surfaces by standard point or directional lights is simple, as shown in Listing 18-1. It is similar to a shader for the standard Phong model and hardware lights. One difference is that all BRDF parameters, rather than just the diffuse color, are sampled from texture maps. Another is that the evaluation of the dot product occurs in local surface coordinates, so that its components can be scaled by $C_x$, $C_y$, and $C_z$.

Figure 18-3 Rendering with Discrete Lights

Listing 18-1 The SBRDF Shader for Discrete Lights

```
#define NUM_LOBES 3
#define NUM_LIGHTS 2
// Store Cx, Cy, Cz on range -1.3333 .. 1.3333.
#define SCL (1.0 / (96.0 / 256.0))
#define BIAS (SCL / 2.0)
#define EXSCL 255.0   // Scale exponent to be on 0 .. 255.

// Rasterize the view vector and all the light vectors.
// Pass the light colors as uniform parameters.
struct fromrast {
  float2 TexUV : TEXCOORD0;                 // Surface texcoords
  float3 EyeVec : TEXCOORD1;                // Vector to eye (local space)
  float3 LightVec[NUM_LIGHTS] : TEXCOORD2;  // Lights (local space)
};

float4 main(fromrast I,
            uniform sampler2D tex_dif,              // Diffuse
            uniform sampler2D tex_lshp[NUM_LOBES],  // Lobe shape
            uniform sampler2D tex_lalb[NUM_LOBES],  // Lobe albedo
            uniform float4 Expos,
            uniform float3 LightCol[NUM_LIGHTS]) : COLOR
{
  // Load the BRDF parameters from the textures
  float4 lobe_shape[NUM_LOBES];
  float4 lobe_albedo[NUM_LOBES];
  for (float p = 0; p < NUM_LOBES; p++) {
    lobe_shape[p] = f4tex2D(tex_lshp[p], I.TexUV.xy) *
                    float4(SCL, SCL, SCL, EXSCL) -
                    float4(BIAS, BIAS, BIAS, 0.0);
    lobe_albedo[p] = f4tex2D(tex_lalb[p], I.TexUV.xy);
  }
  float4 dif_albedo = f4tex2D(tex_dif, I.TexUV.xy);

  // Vector to eye in local space.
  float3 toeye = normalize(I.EyeVec.xyz);

  // Accumulate exitant radiance off surface from each light
  float3 exrad = float3(0, 0, 0);
  for (int l = 0; l < NUM_LIGHTS; l++) {
    // Vector to light in local space.
    float3 tolight = normalize(I.LightVec[l].xyz);

    // Evaluate the SBRDF for this point and direction pair
    float3 refl = dif_albedo.xyz;
    for (float p = 0; p < NUM_LOBES; p++) {
      // Shear eye vector
      float3 Cwr_local = lobe_shape[p].xyz * toeye;
      float thedot = dot(Cwr_local, tolight);
      refl = refl + lit(thedot, thedot, lobe_shape[p].w).z * lobe_albedo[p].xyz;
    }

    // Irradiance for this light (incident radiance times NdotL).
    float NdotL = max(0, tolight.z);
    float3 irrad = LightCol[l] * NdotL;
    exrad += refl * irrad;  // Reflectance times irradiance is exitant radiance.
  }

  float4 final_col = exrad.xyzz * Expos.xxxx;  // Set HDR exposure.
  final_col.w = dif_albedo.w;                  // Put alpha from the map into pixel.
  return final_col;
}
```


Besides being illuminated with point or directional lights, SBRDFs can be illuminated with incident light from all directions by using environment maps. The key is to convolve the environment map with a portion of the BRDF before rendering. For most BRDF representations, this must be done separately for each different BRDF. But because one SBRDF might have a million different BRDFs, doing so would be impossible.

Instead, we simply convolve the environment map with Phong lobes that have a selection of different specular exponents—for example, *n* = 0, 1, 4, 16, 64, and 256*.* These maps can be stored in the various layers of a cube mipmap, with the *n* = 0 map at the coarsest mipmap level, and the *n* = 256 map at the finest mipmap level. The *n* value of the SBRDF texel then indicates the level of detail (LOD) value used to sample the appropriate mipmap level of the cube map.
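The exponent-to-LOD mapping above can be sketched numerically. This assumes the storage layout just described: *n* drops by a factor of 4 per coarser mip level, with *n* = 256 at level 0 and *n* = 0 at level 5. The function name is illustrative:

```python
import math

MAX_N = 256.0    # exponent stored at the finest mip level
COARSEST = 5     # level holding the n = 0 (constant-lobe) map

def lod_from_exponent(n):
    """Map a Phong exponent to the mipmap LOD of its convolved map.
    Each coarser level divides n by 4, so lod = log4(MAX_N / n)."""
    if n < 1.0:
        return COARSEST
    return min(COARSEST, (math.log2(MAX_N) - math.log2(n)) / 2.0)

print(lod_from_exponent(256))  # 0.0: the sharpest lobe uses the finest level
print(lod_from_exponent(16))   # 2.0
print(lod_from_exponent(0))    # 5: fully diffuse-like lobes use the coarsest level
```

Fractional LODs (for exponents between the stored ones) let trilinear filtering blend between adjacent convolved maps.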

An alternative representation of the set of convolved environment maps is a single 3D texture, with the *s* and *t* map dimensions mapping to a parabolic map (Heidrich and Seidel 1999) and the *r* dimension mapping to *n*.
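For the 3D-texture alternative, the *s* and *t* coordinates come from the parabolic parameterization of Heidrich and Seidel 1999, while *r* indexes the exponent as in the mipmap scheme. A sketch of the front-hemisphere (s, t) mapping, with an illustrative function name:

```python
def parabolic_st(d):
    """Map a unit direction on the z >= 0 hemisphere into the unit square
    using the front paraboloid of Heidrich and Seidel 1999."""
    x, y, z = d
    k = 2.0 * (1.0 + z)
    return (x / k + 0.5, y / k + 0.5)

print(parabolic_st((0.0, 0.0, 1.0)))   # the +z pole maps to the center (0.5, 0.5)
```

Directions at the hemisphere's rim (z = 0) land on the boundary of the inscribed disk, so the corners of the square go unused; a second paraboloid covers the back hemisphere.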

The derivation begins by expressing the radiance of the pixel being shaded, $L(\omega_r)$, as separate diffuse and specular terms of the Lafortune BRDF:

$$L(\omega_r) = r_d \int_{\Omega_i} L_i(\omega_i)\,(\mathbf{N}\cdot\omega_i)\,d\omega_i \;+\; \sum_j r_{s,j} \int_{\Omega_i} L_i(\omega_i)\,[C_j\,\omega_r \cdot \omega_i]^{n_j}\,(\mathbf{N}\cdot\omega_i)\,d\omega_i$$

with $\Omega_i$ representing integration over the incident hemisphere. The incident radiance, $L_i(\omega_i)$, is given by the environment map.

The diffuse term integrates to $r_d\,D(\mathbf{N})$, where $D(\mathbf{N}) = \int_{\Omega_i} L_i(\omega_i)\,(\mathbf{N}\cdot\omega_i)\,d\omega_i$ is the diffuse convolution of the environment. $D(\mathbf{N})$ is independent of the BRDF, so it is precomputed once for all objects that are to reflect the environment map $L_i(\omega_i)$.

Each specular lobe is approximated by a lookup in the direction of its peak, $p_j(\omega_r) = C_j\,\omega_r$: the view vector with each component scaled by the lobe's shape parameters.

The environment map convolved with the specular lobe is:

$$S(\omega_p, n) = \frac{\int_{\Omega_i} L_i(\omega_i)\,(\omega_p \cdot \omega_i)^{n}\,d\omega_i}{\int_{\Omega_i} (\omega_p \cdot \omega_i)^{n}\,d\omega_i}$$

This equation creates the convolved environment maps, given the original environment map. As mentioned, the map is parameterized both on the incident kernel peak direction $\omega_p$ and on the exponent *n*.

Figure 18-4 The Incident Lobe Peak for

Listing 18-2 Preconvolving the Environment Map

```
float3 S(float3 Wp, float n)
{
  float3 Sum = 0.xyz;
  float WgtSum = 0;
  for (int si = 0; si < 6; si++) {
    for (int yi = 0; yi < YMAX; yi++) {
      for (int xi = 0; xi < XMAX; xi++) {
        float3 Wi = VecFromCubeMap(xi, yi, si);
        float dp = dot(Wp, Wi);
        if (dp < 0)
          continue;
        float lobe_shape = pow(dp, n);
        float3 FN = FaceNormal(si);
        // Scale irradiance by length of cube vector to
        // compensate for irregular cube map distribution.
        float3 Irrad = Li(Wi) * dp / length2(Wi);
        Sum += Irrad * lobe_shape;
        WgtSum += lobe_shape;
      }
    }
  }
  // Compute volume of lobe
  return Sum / WgtSum;
}

// Convolve the cube map Li with a Phong lobe of exponent n
CubeMap PrecomputeEnvMap(CubeMap Li, float n)
{
  CubeMap Smap;
  // Loop over all the pixels of the cube map
  for (int s = 0; s < 6; s++) {
    for (int y = 0; y < YMAX; y++) {
      for (int x = 0; x < XMAX; x++) {
        float3 Wp = VecFromCubeMap(x, y, s);
        Smap(x, y, s) = S(Wp, n);
      }
    }
  }
  return Smap;
}
```
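The normalization by the lobe's weight sum has a useful consequence: for a constant environment, the convolved value equals that constant regardless of the exponent. The sketch below checks this with a handful of axis directions standing in for cube-map texels; it is a simplification that drops the per-texel solid-angle weighting, and its names are illustrative:

```python
def convolve_phong(Li, dirs, wp, n):
    """Normalized Phong convolution, sum(Li * dot^n) / sum(dot^n),
    clamped to the positive hemisphere of wp (mirroring S())."""
    s, w = 0.0, 0.0
    for d in dirs:
        dp = sum(a * b for a, b in zip(wp, d))
        if dp <= 0.0:
            continue
        k = dp ** n
        s += Li(d) * k
        w += k
    return s / w

# Six face directions standing in for cube-map samples:
dirs = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

# For a constant environment the normalization makes S independent of n:
const_env = lambda d: 2.5
print(convolve_phong(const_env, dirs, (0, 0, 1), 10.0))  # 2.5
```

Varying the exponent sharpens or widens the kernel, but only redistributes weight; the weight sum in the denominator keeps the result an average of the environment's radiance.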

The final formulation used in hardware rendering is:

$$L(\omega_r) \approx r_d\,D(\mathbf{N}) + \sum_j r_{s,j}\,(\mathbf{N}\cdot C_j\omega_r)\,\lVert C_j\omega_r\rVert^{n_j}\; S\!\left(\frac{C_j\omega_r}{\lVert C_j\omega_r\rVert},\, n_j\right)$$

The $\lVert C_j\omega_r\rVert^{n_j}$ factor arises because *S* is computed with a normalized $\omega_p$, so the incident radiance must still be scaled by the magnitude of the lobe. This equation is only an approximation, because the irradiance falloff $(\mathbf{N}\cdot\omega_i)$ over the whole lobe is approximated by its value at the lobe's peak direction.

So with one environment map lookup and a few multiplies per lobe, we can render any number of independent BRDFs illuminated by the same set of arbitrary environment maps.

Listing 18-3 shows the sample code for rendering with preconvolved environment maps. It calls the function `f3texCUBE_RGBE_Conv(env_tex, dir, n)`, which samples the preconvolved environment map at the given direction with an LOD computed based on *n*. See Figure 18-5 for the result.

Figure 18-5 Rendering with an Environment Map

To compactly represent incident radiance values greater than 1.0, the RGBE representation can be used, with the decoding to `float3` also performed within this function.
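The RGBE decode inside that helper is simple: the shared exponent byte scales all three 8-bit mantissas. A Python sketch of the standard shared-exponent scheme (the function names here are illustrative, not the shader helpers):

```python
import math

def float_to_rgbe(r, g, b):
    """Encode an HDR RGB triple into 8-bit RGBE (shared-exponent) form."""
    v = max(r, g, b)
    if v < 1e-32:
        return (0, 0, 0, 0)
    m, e = math.frexp(v)              # v = m * 2**e, with 0.5 <= m < 1
    scale = m * 256.0 / v
    return (int(r * scale), int(g * scale), int(b * scale), e + 128)

def rgbe_to_float(r, g, b, e):
    """Decode RGBE back to float RGB, as a shader would after the fetch."""
    if e == 0:
        return (0.0, 0.0, 0.0)
    f = math.ldexp(1.0, e - (128 + 8))
    return (r * f, g * f, b * f)

print(rgbe_to_float(*float_to_rgbe(4.0, 2.0, 1.0)))
```

Values that are powers of two round-trip exactly; in general the format trades a little relative precision in the dimmer channels for a vastly larger dynamic range than plain RGBA8.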

Listing 18-3 The SBRDF Environment Map Shader

```
#define NUM_LOBES 2
// Store Cx, Cy, Cz on range -1.3333 .. 1.3333.
#define SCL (1.0 / (96.0 / 256.0))
#define BIAS (SCL / 2.0)
#define EXSCL 255.0   // Scale exponent to be on 0 .. 255.

float3 ApplyLobe(float3 toeye,
                 float4 lobe_shape,
                 float3 lobe_albedo,
                 float3 Tan, float3 BiN, float3 Nrm,
                 uniform samplerCUBE tex_s)
{
  // Reflect the eye vector in local surface space to get p(wr)
  float3 Cwr_local = lobe_shape.xyz * toeye;

  // Transform lobe peak to world space before env. lookup
  float3 Cwr_world = ToWorld(Cwr_local, Tan, BiN, Nrm);

  // Sample the cube map at the lobe peak direction.
  float sharpness = lobe_shape.w;   // This is n.
  float3 incrad = f3texCUBE_RGBE_Conv(tex_s, Cwr_world, sharpness);

  // (length^2)^(n*0.5) = length^n
  float lobelen = pow(dot(Cwr_world, Cwr_world), sharpness * 0.5);

  // Approximate the irradiance falloff at all points
  // by that of the peak dir.
  // Cwr_local.z is N dot Cwr in local space.
  float3 irrad = incrad * (Cwr_local.z * lobelen);

  float3 radiance = irrad * lobe_albedo;
  return radiance;
}

struct fromrast {
  float2 TexUV : TEXCOORD0;   // Surface texcoords
  float3 EyeVec : TEXCOORD1;  // Vector to eye (local space) - needs normalization
  float3 NrmVec : TEXCOORD2;  // Normal (world space) - needs normalization
  float3 TanVec : TEXCOORD3;  // Tangent (world space) - needs normalization
};

float4 main(fromrast I,
            uniform sampler2D tex_dif,              // Diffuse
            uniform sampler2D tex_lshp[NUM_LOBES],  // Lobe shape
            uniform sampler2D tex_lalb[NUM_LOBES],  // Lobe albedo
            uniform samplerCUBE tex_envd,           // Cube diffuse env map
            uniform samplerCUBE tex_envs,           // Cube specular env map
            uniform float4 Expos) : COLOR
{
  // Preload all the BRDF parameters.
  float4 lobe_shape[NUM_LOBES];
  float4 lobe_albedo[NUM_LOBES];
  for (int p = 0; p < NUM_LOBES; p++) {
    lobe_shape[p] = f4tex2D(tex_lshp[p], I.TexUV.xy) *
                    float4(SCL, SCL, SCL, EXSCL) -
                    float4(BIAS, BIAS, BIAS, 0.0);
    lobe_albedo[p] = f4tex2D(tex_lalb[p], I.TexUV.xy);
  }
  float4 dif_albedo = f4tex2D(tex_dif, I.TexUV.xy);

  float3 Nrm = normalize(I.NrmVec.xyz);
  float3 Tan = normalize(I.TanVec.xyz);
  float3 BiN = cross(Nrm, Tan);

  // Vector to eye in local space.
  float3 toeye = normalize(I.EyeVec.xyz);

  // Accumulate exitant radiance off surface due to each lobe
  float3 exrad = float3(0, 0, 0);
  for (int l = 0; l < NUM_LOBES; l++)
    exrad += ApplyLobe(toeye, lobe_shape[l], lobe_albedo[l].xyz,
                       Tan, BiN, Nrm, tex_envs);

  // Add the diffuse term. tex_envd contains the irradiance
  // for a surface with normal direction Nrm.
  float3 irrad = f3texCUBE_RGBE(tex_envd, Nrm);
  exrad = exrad + dif_albedo.xyz * irrad;

  float4 final_col = exrad.xyzz * Expos.xxxx;  // Set HDR exposure.
  final_col.w = dif_albedo.w;                  // Put alpha from map into the pixel.
  return final_col;
}
```

The performance of the discrete light shader is remarkably good considering the great generality of SBRDFs. It consists mainly of a dot product and a `lit()` computation for each lobe and for each light.
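The `lit()` intrinsic that the discrete-light shader leans on bundles the diffuse and specular clamps into one operation; only its *z* component is used here. A sketch of that component in Python (the function name is illustrative):

```python
def lit_z(ndotl, ndoth, m):
    """The .z component of Cg's lit(ndotl, ndoth, m):
    max(ndoth, 0)**m when ndotl > 0, else 0."""
    if ndotl <= 0.0:
        return 0.0
    return max(ndoth, 0.0) ** m

# Passing the same sheared dot product for both arguments, as the shader
# does, yields the clamped lobe term dot**n:
print(lit_z(0.5, 0.5, 2.0))    # 0.25
print(lit_z(-0.1, -0.1, 2.0))  # 0.0: back-facing lobes contribute nothing
```

Using one intrinsic for the clamp-and-raise step keeps the per-lobe cost to roughly a dot product plus a `lit()`, which is why the shader scales well with lobe and light counts.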

Likewise, the performance of the environment map shader is quite good considering that it allows every texel to have a completely different BRDF but still be illuminated by environment maps. The core of this shader is simply the environment map texture lookup, but unfortunately the computation of the lookup coordinates requires a matrix transform. For both shaders, much of the math can easily be performed at half precision with no visual differences.

For simplicity, I omitted spatially varying direction of anisotropy from this chapter. However, the details can be found in McAllister 2002, together with sample code for convolving environment maps, the vertex shaders corresponding to these fragment shaders, and more.

Heidrich, Wolfgang, and Hans-Peter Seidel. 1999. "Realistic, Hardware-Accelerated Shading and Lighting." In *Proceedings of SIGGRAPH 99*, pp. 171–178.

Lafortune, E. P. F., S.-C. Foo, et al. 1997. "Non-Linear Approximation of Reflectance Functions." In *Proceedings of SIGGRAPH 97*, pp. 117–126.

McAllister, David K. 2002. "A Generalized Surface Appearance Representation for Computer Graphics." Ph.D. dissertation, Department of Computer Science, University of North Carolina at Chapel Hill. Available online at http://www.cs.unc.edu/~davemc/Pubs.html

McAllister, David K., and Anselmo A. Lastra. 2003. "The SBRDF Home Page." Web site. http://www.cs.unc.edu/~davemc/SBRDF

*The author would like to thank Ben Cloward, who modeled and textured the hotel lobby scene and acquired the HDR radiance map of the hotel lobby, and Dr. Anselmo Lastra, who was the author's dissertation advisor and contributed to all aspects of the work.*

Library of Congress Control Number: 2004100582

Copyright © 2004 by NVIDIA Corporation.

