Computer graphics lighting

From Wikipedia, the free encyclopedia

Computer graphics lighting is the collection of techniques used to simulate light in computer graphics scenes. While lighting techniques offer flexibility in the level of detail and functionality available, they also operate at different levels of computational demand and complexity. Graphics artists can choose from a variety of light sources, models, shading techniques, and effects to suit the needs of each application.

Light sources

Light sources allow for different ways to introduce light into graphics scenes.[1][2]

Point

Point sources emit light from a single point in all directions, with the intensity of the light decreasing with distance.[3] An example of a point source is a standalone light bulb.[4]
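
The inverse-square falloff of a point source can be sketched as follows. This is a minimal illustrative helper, not a standard API; real engines often substitute a tunable attenuation polynomial (constant, linear, and quadratic terms) for the pure physical law.

```python
import math

def point_light_intensity(light_pos, light_power, surface_pos):
    """Intensity arriving at surface_pos from an idealized point source.

    Follows the inverse-square law: intensity falls off with the
    square of the distance from the source.
    """
    d = math.dist(light_pos, surface_pos)
    return light_power / (4 * math.pi * d * d)

# Doubling the distance quarters the received intensity.
near = point_light_intensity((0, 0, 0), 100.0, (1, 0, 0))
far = point_light_intensity((0, 0, 0), 100.0, (2, 0, 0))
print(round(near / far, 6))  # → 4.0
```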

A directional light source illuminating a terrain

Directional

A directional source (or distant source) uniformly lights a scene from one direction.[4] Unlike a point source, the intensity of light produced by a directional source does not change with distance over the scale of the scene, as the directional source is treated as though it is extremely far away.[4] An example of a directional source is sunlight on Earth.[5]

Spotlight

A spotlight produces a directed cone of light.[6] The light becomes more intense closer to the spotlight source and to the center of the light cone.[6] An example of a spotlight is a flashlight.[5]
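
The cone behavior can be sketched with a hypothetical falloff helper: points inside the cone receive light scaled by how close they are to the axis, and points outside the cutoff angle receive none. The exponent-based falloff here is one common convention, assumed for illustration.

```python
import math

def spotlight_factor(spot_dir, to_surface, cutoff_deg, exponent=8.0):
    """Angular falloff of a spotlight (illustrative helper).

    Light is brightest along the cone axis and fades toward the
    cutoff angle; outside the cone the factor is zero.
    """
    def norm(v):
        length = math.sqrt(sum(c * c for c in v))
        return tuple(c / length for c in v)

    d = norm(spot_dir)
    s = norm(to_surface)
    cos_angle = sum(a * b for a, b in zip(d, s))
    if cos_angle < math.cos(math.radians(cutoff_deg)):
        return 0.0  # surface point lies outside the cone
    return cos_angle ** exponent  # larger exponents give a tighter hotspot

print(spotlight_factor((0, -1, 0), (0, -1, 0), 30))  # on-axis → 1.0
print(spotlight_factor((0, -1, 0), (1, -1, 0), 30))  # 45° off-axis → 0.0
```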

Area

Area lights are 3D objects which emit light. Whereas point lights and spotlights are treated as infinitesimally small points, area lights are treated as physical shapes.[7] Area lights produce softer shadows and more realistic lighting than point lights and spotlights.[8]

Ambient

Ambient light sources illuminate objects even when no other light source is present.[6] The intensity of ambient light is independent of direction, distance, and other objects, meaning the effect is completely uniform throughout the scene.[6] This source ensures that objects are visible even in complete darkness.[5]

Lightwarp

A lightwarp is a technique in which an object in the scene refracts light based on the direction and intensity of the light. The light is then warped using an ambient diffuse term over a range of the color spectrum, and may be reflectively scattered to produce a greater depth of field, as well as refracted. The technique is used to produce a distinctive rendering style and can limit the overexposure of objects. Games such as Team Fortress 2 use the technique to create a cartoonish, cel-shaded look.[9]

HDRI

HDRI stands for high-dynamic-range image. An HDRI is a 360° image wrapped around a 3D scene, typically depicting an outdoor setting with the sun as the light source in the sky. The textures of the model can reflect the direct and ambient light and colors from the HDRI.[10]

Lighting interactions

In computer graphics, the overall effect of a light source on an object is determined by the combination of the object's interactions with it, usually described by at least three main components.[11] The three primary lighting components (and corresponding interaction types) are diffuse, ambient, and specular.[11]

Decomposition of lighting interactions

Diffuse

Diffuse lighting (or diffuse reflection) is the direct illumination of an object by an even amount of light interacting with a light-scattering surface.[4][12] After light strikes an object, it is reflected as a function of the surface properties of the object as well as the angle of incoming light.[12] This interaction is the primary contributor to the object's brightness and forms the basis for its color.[13]
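The dependence on the angle of incoming light is usually captured by Lambert's cosine law: the diffuse contribution is proportional to the dot product of the surface normal and the light direction. A minimal sketch, assuming unit-length vectors and a hypothetical diffuse coefficient `kd`:

```python
import math

def diffuse_intensity(normal, to_light, light_intensity, kd=1.0):
    """Lambertian diffuse term: kd * I * max(0, N·L).

    `normal` and `to_light` are assumed to be unit vectors; `kd` is the
    surface's diffuse reflection coefficient.
    """
    n_dot_l = sum(n * l for n, l in zip(normal, to_light))
    return kd * light_intensity * max(0.0, n_dot_l)

# Light hitting the surface head-on vs. at a 60° angle of incidence.
print(diffuse_intensity((0, 1, 0), (0, 1, 0), 1.0))                  # → 1.0
print(diffuse_intensity((0, 1, 0), (math.sqrt(3) / 2, 0.5, 0), 1.0)) # → 0.5
```

The `max(0, …)` clamp prevents surfaces facing away from the light from receiving negative illumination.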

Ambient

As ambient light is directionless, it interacts uniformly across all surfaces, with its intensity determined by the strength of the ambient light sources and the properties of objects' surface materials, namely their ambient reflection coefficients.[13][12]

Specular

The specular lighting component gives objects shine and highlights.[13] This is distinct from mirror effects because other objects in the environment are not visible in these reflections.[12] Instead, specular lighting creates bright spots on objects based on the intensity of the specular lighting component and the specular reflection coefficient of the surface.[12]

Illumination models

Lighting models are used to replicate lighting effects in rendered environments where light is approximated based on the physics of light.[14] Without lighting models, replicating lighting effects as they occur in the natural world would require more processing power than is practical for computer graphics.[14] The purpose of a lighting, or illumination, model is to compute the color of every pixel, or the amount of light reflected, for the different surfaces in the scene.[15] There are two main illumination models: object oriented lighting and global illumination.[16] They differ in that object oriented lighting considers each object individually, whereas global illumination maps how light interacts between objects.[16] Currently, researchers are developing global illumination techniques to more accurately replicate how light interacts with its environment.[16]

Object oriented lighting

Object oriented lighting, also known as local illumination, is defined by mapping a single light source to a single object.[17] This technique is fast to compute, but often is an incomplete approximation of how light would behave in the scene in reality.[17] It is often approximated by summing a combination of specular, diffuse, and ambient light of a specific object.[14] The two predominant local illumination models are the Phong and the Blinn-Phong illumination models.[18]

Phong illumination model

One of the most common reflection models is the Phong model.[14] The Phong model assumes that the intensity of each pixel is the sum of the intensity due to diffuse, specular, and ambient lighting.[17] This model takes into account the location of the viewer to determine specular light using the angle at which light reflects off the object.[18] The cosine of the angle is taken and raised to a power decided by the designer.[17] With this, the designer can decide how wide a highlight they want on an object; because of this, the power is called the shininess value.[18] The shininess value is determined by the roughness of the surface: a mirror would have a value of infinity, while the roughest surface might have a value of one.[17] This model creates a more realistic-looking white highlight based on the perspective of the viewer.[14]
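The Phong specular term described above can be sketched as follows: the light direction is reflected about the surface normal, the cosine of the angle between that reflection and the view direction is taken, and the result is raised to the shininess power. All vectors are assumed to be unit length, and `ks` is a hypothetical specular coefficient.

```python
def phong_specular(normal, to_light, to_viewer, shininess, ks=1.0):
    """Phong specular term: ks * max(0, R·V) ** shininess.

    R is `to_light` reflected about the surface normal; all direction
    vectors are assumed to be unit length.
    """
    n_dot_l = sum(n * l for n, l in zip(normal, to_light))
    # Reflect the light direction about the normal: R = 2(N·L)N - L.
    r = tuple(2 * n_dot_l * n - l for n, l in zip(normal, to_light))
    r_dot_v = sum(rc * vc for rc, vc in zip(r, to_viewer))
    return ks * max(0.0, r_dot_v) ** shininess

# The viewer sits exactly along the mirror direction: maximal highlight.
n, l = (0, 1, 0), (0.6, 0.8, 0)
v = (-0.6, 0.8, 0)  # mirror of l about the normal
print(round(phong_specular(n, l, v, shininess=32), 6))  # → 1.0
```

Increasing `shininess` narrows the highlight, since the cosine falls off more steeply when raised to a larger power.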

Blinn-Phong illumination model

The Blinn-Phong illumination model is similar to the Phong model in that it uses specular light to create a highlight on an object based on its shininess.[19] It differs from the Phong model in that it uses a halfway vector, lying midway between the directions to the light source and the viewer, which is compared against the surface normal.[14] This model is used in order to have accurate specular lighting and reduced computation time.[14] The process takes less time because finding the reflected light vector's direction is a more involved computation than calculating the halfway vector.[19] While this is similar to the Phong model, it produces different visual results, and the specular reflection exponent, or shininess, might need modification in order to produce a similar specular reflection.[20]
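A sketch of the Blinn-Phong variant, for comparison with the Phong term above: instead of a per-fragment reflection vector, it normalizes the sum of the light and view directions and dots that halfway vector with the surface normal.

```python
import math

def blinn_phong_specular(normal, to_light, to_viewer, shininess, ks=1.0):
    """Blinn-Phong specular term: ks * max(0, N·H) ** shininess.

    H is the normalized halfway vector between the light and view
    directions; this avoids computing a reflection vector per fragment.
    """
    h = tuple(l + v for l, v in zip(to_light, to_viewer))
    h_len = math.sqrt(sum(c * c for c in h))
    h = tuple(c / h_len for c in h)
    n_dot_h = sum(n * c for n, c in zip(normal, h))
    return ks * max(0.0, n_dot_h) ** shininess

n, l, v = (0, 1, 0), (0.6, 0.8, 0), (-0.6, 0.8, 0)
# H points straight up the normal here, so the highlight is maximal.
print(round(blinn_phong_specular(n, l, v, shininess=32), 6))  # → 1.0
```

Because N·H falls off more slowly than R·V for the same geometry, a larger exponent is typically needed to match the width of a Phong highlight, as noted above.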

Global illumination

Global illumination differs from local illumination because it calculates light as it would travel throughout the entire scene.[16] This lighting is based more heavily in physics and optics, with light rays scattering, reflecting, and indefinitely bouncing throughout the scene.[21] There is still active research being done on global illumination as it requires more computational power than local illumination.[22]

Ray tracing

Image rendered using ray tracing

Light sources emit rays that interact with various surfaces through absorption, reflection, or refraction.[3] An observer of the scene sees any light ray that reaches their eyes; a ray that does not reach the observer goes unnoticed.[23] It is possible to simulate this by having all of the light sources emit rays and then computing how each of them interacts with all of the objects in the scene.[24] However, this process is inefficient, as most of the light rays would never reach the observer and would waste processing time.[25] Ray tracing solves this problem by reversing the process, instead sending view rays from the observer and calculating how they interact until they reach a light source.[24] Although this approach uses processing time more effectively and produces a light simulation that closely imitates natural lighting, ray tracing still has high computation costs due to the large number of light rays that reach the viewer's eyes.[26]
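The reversed process can be sketched with a toy tracer for a single sphere: a view ray is intersected with the geometry, and the hit point is shaded with a Lambertian term toward the light. This is a deliberately minimal illustration (no shadows, reflections, refraction, or recursion), not a full ray tracer.

```python
import math

def trace(origin, direction, sphere_center, sphere_radius, light_pos):
    """Cast one view ray; return a grayscale shade in [0, 1].

    Intersect the ray with a sphere, then shade the hit point with a
    Lambertian term toward the light.
    """
    # Ray-sphere intersection: solve |o + t*d - c|^2 = r^2 for t.
    oc = tuple(o - c for o, c in zip(origin, sphere_center))
    b = 2 * sum(d * x for d, x in zip(direction, oc))
    c_term = sum(x * x for x in oc) - sphere_radius ** 2
    disc = b * b - 4 * c_term  # direction assumed unit length, so a == 1
    if disc < 0:
        return 0.0  # ray misses: background
    t = (-b - math.sqrt(disc)) / 2
    hit = tuple(o + t * d for o, d in zip(origin, direction))
    normal = tuple((h - sc) / sphere_radius for h, sc in zip(hit, sphere_center))
    to_light = tuple(lp - h for lp, h in zip(light_pos, hit))
    ll = math.sqrt(sum(x * x for x in to_light))
    to_light = tuple(x / ll for x in to_light)
    return max(0.0, sum(n * x for n, x in zip(normal, to_light)))

# A ray straight down the z-axis hits the sphere at (0, 0, 4),
# facing back toward a light at the ray origin.
shade = trace((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0, (0, 0, 0))
print(round(shade, 6))  # → 1.0
```

A full renderer repeats this per pixel and recurses at reflective or refractive hits, which is where the computational cost described above accumulates.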

Radiosity

Radiosity takes into account the energy given off by surrounding objects as well as the light source.[16] Unlike ray tracing, which depends on the position and orientation of the observer, radiosity lighting is independent of the view position.[25] Radiosity requires more computational power than ray tracing, but can be more useful for scenes with static lighting because it only has to be computed once.[27] The surfaces of a scene can be divided into a large number of patches; each patch radiates some light and affects the other patches, so a large set of equations must be solved simultaneously to obtain the final radiosity of each patch.[26]
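The set of equations has the form B_i = E_i + ρ_i Σ_j F_ij B_j, where B is patch radiosity, E is emission, ρ is reflectance, and F_ij is the form factor from patch i to patch j. A toy solver for a two-patch scene, using simple fixed-point (Jacobi) iteration under assumed form factors:

```python
def solve_radiosity(emission, reflectance, form_factors, iterations=100):
    """Iteratively solve B_i = E_i + rho_i * sum_j F_ij * B_j.

    `form_factors[i][j]` is the fraction of light leaving patch i that
    arrives at patch j. Iteration converges because reflectances are
    below one. (A toy solver for illustration only.)
    """
    n = len(emission)
    b = list(emission)  # initial guess: emitted light only
    for _ in range(iterations):
        b = [emission[i]
             + reflectance[i] * sum(form_factors[i][j] * b[j] for j in range(n))
             for i in range(n)]
    return b

# Two facing patches: patch 0 emits, patch 1 only reflects.
emission = [1.0, 0.0]
reflectance = [0.5, 0.5]
form_factors = [[0.0, 0.2],   # 20% of patch 0's light reaches patch 1
                [0.2, 0.0]]
b = solve_radiosity(emission, reflectance, form_factors)
print([round(x, 4) for x in b])  # → [1.0101, 0.101]
```

Note that the solution is a property of the scene alone; no viewer position appears anywhere, which is why the result can be computed once and reused for any view.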

Photon mapping

Photon mapping was created as a two-pass global illumination algorithm that is more efficient than ray tracing.[28] Its basic principle is tracking photons released from a light source through a series of stages.[28] In the first pass, photons are released from a light source and bounce off the first objects they strike; a map of where the photons are located is then recorded.[22] The photon map contains both the position and direction of each photon, whether it bounces or is absorbed.[28] The second pass happens during rendering, where the reflections are calculated for different surfaces.[29] In this process, the photon map is decoupled from the geometry of the scene, meaning rendering can be calculated separately.[22] It is a useful technique because it can simulate caustics, and pre-processing steps do not need to be repeated if the view or objects change.[29]
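The two-pass structure can be sketched with a heavily simplified toy: pass one scatters photons from a point light onto a floor plane and records the hits, and pass two estimates irradiance at a query point from the local density of stored photons. A real implementation (as in Jensen's algorithm) stores direction and power per photon in a kd-tree and uses proper BRDFs; everything here is an illustrative assumption.

```python
import math
import random

def first_pass(light_pos, floor_y, n_photons, seed=0):
    """Pass 1: emit photons in a downward cone and record floor hits."""
    rng = random.Random(seed)
    photon_map = []
    for _ in range(n_photons):
        # Random direction biased downward (a crude emission model).
        dx, dz = rng.uniform(-0.5, 0.5), rng.uniform(-0.5, 0.5)
        t = (floor_y - light_pos[1]) / -1.0          # ray/plane intersection
        hit = (light_pos[0] + t * dx, floor_y, light_pos[2] + t * dz)
        photon_map.append(hit)                        # store photon position
    return photon_map

def second_pass(photon_map, point, radius, power_per_photon):
    """Pass 2: estimate irradiance at `point` from nearby photon density."""
    hits = sum(1 for p in photon_map if math.dist(p, point) <= radius)
    return hits * power_per_photon / (math.pi * radius ** 2)

pmap = first_pass(light_pos=(0, 2, 0), floor_y=0.0, n_photons=10_000)
# Irradiance directly under the light vs. far away from it.
under = second_pass(pmap, (0, 0, 0), 0.5, 1e-3)
far = second_pass(pmap, (5, 0, 0), 0.5, 1e-3)
print(under > far)  # → True
```

The decoupling mentioned above is visible here: `pmap` depends only on the light and geometry, so the second pass can be re-run for any viewpoint without re-emitting photons.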

Polygonal shading

Polygonal shading is part of the rasterization process where 3D models are drawn as 2D pixel images.[18] Shading applies a lighting model, in conjunction with the geometric attributes of the 3D model, to determine how lighting should be represented at each fragment (or pixel) of the resulting image.[18] The polygons of the 3D model store the geometric values needed for the shading process.[30] This information includes vertex positional values and surface normals, but can contain optional data, such as texture and bump maps.[31]

An example of flat shading
An example of Gouraud shading
An example of Phong shading

Flat shading

Flat shading is a simple shading model with a uniform application of lighting and color per polygon.[32] The color and normal of one vertex is used to calculate the shading of the entire polygon.[18] Flat shading is inexpensive, as lighting for each polygon only needs to be calculated once per render.[32]

Gouraud shading

Gouraud shading is a type of interpolated shading where the values inside of each polygon are a blend of its vertex values.[18] Each vertex is given its own normal consisting of the average of the surface normals of the surrounding polygons.[32] The lighting and shading at that vertex is then calculated using the average normal and the lighting model of choice.[32] This process is repeated for all the vertices in the 3D model.[2] Next, the shading of the edges between the vertices is calculated by interpolating between the vertex values.[2] Finally, the shading inside of the polygon is calculated as an interpolation of the surrounding edge values.[2] Gouraud shading generates a smooth lighting effect across the 3D model's surface.[2]
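The interior interpolation step can be sketched with barycentric weights over a triangle: the lighting model has already produced a color at each vertex, and each interior fragment simply blends those three colors. The helper name and the triangle data are illustrative.

```python
def gouraud_shade(vertex_colors, bary):
    """Interpolate already-shaded vertex colors with barycentric weights.

    In Gouraud shading the lighting model runs once per vertex;
    interior fragments just blend the resulting vertex colors.
    """
    w0, w1, w2 = bary
    c0, c1, c2 = vertex_colors
    return tuple(w0 * a + w1 * b + w2 * c for a, b, c in zip(c0, c1, c2))

# Vertex colors computed by some lighting model beforehand.
colors = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
print(gouraud_shade(colors, (1, 0, 0)))          # at a vertex → that color
print(gouraud_shade(colors, (1/3, 1/3, 1/3)))    # triangle centre: even blend
```

Because lighting is evaluated only at vertices, a sharp highlight that falls entirely inside a polygon can be missed; this limitation motivates Phong shading below.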

Phong shading

Phong shading, similar to Gouraud shading, is another type of interpolative shading that blends between vertex values to shade polygons.[21] The key difference between the two is that Phong shading interpolates the vertex normal values over the whole polygon before it calculates its shading.[32] This contrasts with Gouraud shading which interpolates the already shaded vertex values over the whole polygon.[21] Once Phong shading has calculated the normal of a fragment (pixel) inside the polygon, it can then apply a lighting model, shading that fragment.[32] This process is repeated until each polygon of the 3D model is shaded.[21]
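The key difference can be shown in a few lines: the vertex normals are interpolated and renormalized first, and only then is the lighting model (a simple diffuse term here, as an assumption) evaluated for the fragment.

```python
import math

def phong_shade(vertex_normals, bary, to_light):
    """Interpolate normals, renormalize, then run the lighting model.

    Unlike Gouraud shading, the lighting calculation happens per
    fragment, after the normal has been interpolated.
    """
    w = bary
    n = tuple(sum(w[k] * vertex_normals[k][i] for k in range(3))
              for i in range(3))
    length = math.sqrt(sum(c * c for c in n))
    n = tuple(c / length for c in n)  # renormalize the blended normal
    # Diffuse lighting model evaluated per fragment (unit light direction).
    return max(0.0, sum(a * b for a, b in zip(n, to_light)))

normals = [(0, 1, 0), (0, 1, 0), (1, 0, 0)]  # one vertex tilted sideways
# A fragment near the tilted vertex gets a normal leaning toward +x,
# so its diffuse term under overhead light is reduced.
print(round(phong_shade(normals, (0.1, 0.1, 0.8), (0, 1, 0)), 4))
```

Evaluating the lighting per fragment is what lets Phong shading capture highlights that fall in the middle of a polygon, at the cost of running the lighting model far more often.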

Lighting effects

A reflective material demonstrating caustics

Caustics

Caustics are an effect of light reflected and refracted in a medium with curved interfaces, or reflected off a curved surface.[33] They appear as ribbons of concentrated light and are often seen when looking at bodies of water or glass.[34] Caustics can be implemented in 3D graphics by blending a caustic texture map with the texture map of the affected objects.[34] The caustics texture can either be a static image that is animated to mimic the effect, or a real-time calculation of caustics onto a blank image.[34] The latter is more complicated and requires backwards ray tracing to simulate photons moving through the environment of the 3D render.[33] In a photon mapping illumination model, Monte Carlo sampling is used in conjunction with ray tracing to compute the intensity of light caused by the caustics.[33]

Reflection mapping

Reflection mapping (also known as environment mapping) is a technique which uses 2D environment maps to create the effect of reflectivity without using ray tracing.[35] Since the appearance of a reflective object depends on the relative positions of the viewer, the object, and the surrounding environment, graphics algorithms produce reflection vectors to determine how to color the object based on these elements.[36] By using 2D environment maps rather than fully rendered 3D objects to represent the surroundings, reflections on objects can be determined using simple, computationally inexpensive algorithms.[35]
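The core of the technique is two cheap steps: compute the reflection vector R = D − 2(D·N)N, then use it to index into the 2D map. The equirectangular (latitude-longitude) lookup below is one common map layout, assumed for illustration; cube maps and sphere maps are equally common.

```python
import math

def reflect(direction, normal):
    """Reflect a view direction about a unit surface normal:
    R = D - 2(D·N)N."""
    d_dot_n = sum(d * n for d, n in zip(direction, normal))
    return tuple(d - 2 * d_dot_n * n for d, n in zip(direction, normal))

def env_lookup(reflection):
    """Map a reflection vector to (u, v) coordinates in a hypothetical
    equirectangular 2D environment map (latitude-longitude layout)."""
    x, y, z = reflection
    u = (math.atan2(z, x) / (2 * math.pi)) % 1.0
    v = math.acos(max(-1.0, min(1.0, y))) / math.pi
    return u, v

# A ray going straight down reflects straight up off a floor facing +y.
r = reflect((0, -1, 0), (0, 1, 0))
print(r)  # → (0, 1, 0)
print(env_lookup(r))  # samples the top of the environment map
```

No scene geometry is intersected at any point, which is what makes the approach so much cheaper than ray-traced reflections (and also why reflections of nearby objects are only approximate).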

Particle systems

Particle systems use collections of small particles to model chaotic, high-complexity events, such as fire, moving liquids, explosions, and moving hair.[37] Particles which make up the complex animation are distributed by an emitter, which gives each particle its properties, such as speed, lifespan, and color.[37] Over time, these particles may move, change color, or vary other properties, depending on the effect.[37] Typically, particle systems incorporate randomness, such as in the initial properties the emitter gives each particle, to make the effect realistic and non-uniform.[37][38]
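The emitter-and-update cycle can be sketched as follows; the `Emitter` class, its property ranges, and the dictionary-based particle records are all illustrative choices, not a standard API.

```python
import random

class Emitter:
    """A minimal particle emitter: spawns particles with randomized
    initial properties, then ages and moves them each update."""

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.particles = []

    def emit(self, count):
        for _ in range(count):
            self.particles.append({
                "pos": [0.0, 0.0],
                "vel": [self.rng.uniform(-1, 1), self.rng.uniform(1, 3)],
                "life": self.rng.uniform(1.0, 2.0),  # seconds remaining
            })

    def update(self, dt):
        for p in self.particles:
            p["pos"][0] += p["vel"][0] * dt
            p["pos"][1] += p["vel"][1] * dt
            p["life"] -= dt
        # Remove particles whose lifespan has expired.
        self.particles = [p for p in self.particles if p["life"] > 0]

e = Emitter()
e.emit(100)
for _ in range(30):      # simulate 3 seconds in 0.1 s steps
    e.update(0.1)
print(len(e.particles))  # all lifespans were ≤ 2 s, so none survive → 0
```

Real systems layer effect-specific behavior onto this loop, such as gravity, color fade, or spawning new particles on death, but the emit/update/cull cycle stays the same.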

References

  1. ^ "Light: The art of exposure". GarageFarm. 2020-11-11. Retrieved 2020-11-11.
  2. ^ a b c d e "Intro to Computer Graphics: Lighting and Shading". www.cs.uic.edu. Retrieved 2019-11-05.
  3. ^ a b "Intro to Computer Graphics: Lighting and Shading". www.cs.uic.edu. Retrieved 2019-11-05.
  4. ^ a b c d "Lighting in 3D Graphics". www.bcchang.com. Retrieved 2019-11-05.
  5. ^ a b c "Understanding Different Light Types". www.pluralsight.com. Retrieved 2019-11-05.
  6. ^ a b c d "Intro to Computer Graphics: Lighting and Shading". www.cs.uic.edu. Retrieved 2019-11-05.
  7. ^ Lagarde, Sebastien; de Rousiers, Charles (Summer 2014). Moving Frostbite to Physically Based Rendering 3.0. SIGGRAPH.
  8. ^ Pharr, Matt; Humphreys, Greg; Wenzel, Jakob (2016). Physically Based Rendering: From Theory to Implementation (3rd ed.). Morgan Kaufmann. ISBN 978-0128006450.
  9. ^ Vergne, Romain; Pacanowski, Romain; Barla, Pascal; Granier, Xavier; Schlick, Christophe (February 19, 2010). "Radiance Scaling for Versatile Surface Enhancement". Proceedings of the 2010 ACM SIGGRAPH symposium on Interactive 3D Graphics and Games. ACM. pp. 143–150. doi:10.1145/1730804.1730827. ISBN 9781605589398. S2CID 18291692 – via hal.inria.fr.
  10. ^ https://visao.ca/what-is-hdri/#:~:text=High%20dynamic%20range%20images%20are,look%20cartoonish%20and%20less%20professional.
  11. ^ a b "Lighting in 3D Graphics". www.bcchang.com. Retrieved 2019-11-05.
  12. ^ a b c d e Pollard, Nancy (Spring 2004). "Lighting and Shading" (PDF).
  13. ^ a b c "Lighting in 3D Graphics". www.bcchang.com. Retrieved 2019-11-05.
  14. ^ a b c d e f g "LearnOpenGL - Basic Lighting". learnopengl.com. Retrieved 2019-11-08.
  15. ^ "Intro to Computer Graphics: Lighting and Shading". www.cs.uic.edu. Retrieved 2019-11-08.
  16. ^ a b c d e "Global Illumination" (PDF). Georgia Tech Classes. 2002.
  17. ^ a b c d e Farrell. "Local Illumination". Kent University.
  18. ^ a b c d e f g "Computer Graphics: Shading and Lighting". cglearn.codelight.eu. Retrieved 2019-10-30.
  19. ^ a b James F. Blinn (1977). "Models of light reflection for computer synthesized pictures". Proc. 4th annual conference on computer graphics and interactive techniques: 192–198. CiteSeerX 10.1.1.131.7741. doi:10.1145/563858.563893
  20. ^ Jacob's University, "Blinn-Phong Reflection Model", 2010.
  21. ^ a b c d Li, Hao (2018). "Shading in OpenGL" (PDF).
  22. ^ a b c Li, Hao (Fall 2018). "Global Illumination" (PDF).
  23. ^ "Introducing the NVIDIA RTX Ray Tracing Platform". NVIDIA Developer. 2018-03-06. Retrieved 2019-11-08.
  24. ^ a b Reif, J. H. (1994). "Computability and Complexity of Ray Tracing" (PDF). Discrete and Computational Geometry.
  25. ^ a b Wallace, John R.; Cohen, Michael F.; Greenberg, Donald P. (1987). "A Two-pass Solution to the Rendering Equation: A Synthesis of Ray Tracing and Radiosity Methods". Proceedings of the 14th Annual Conference on Computer Graphics and Interactive Techniques. SIGGRAPH '87. New York, NY, USA: ACM: 311–320. doi:10.1145/37401.37438. ISBN 9780897912273.
  26. ^ a b Greenberg, Donald P. (1989-04-14). "Light Reflection Models for Computer Graphics". Science. 244 (4901): 166–173. Bibcode:1989Sci...244..166G. doi:10.1126/science.244.4901.166. ISSN 0036-8075. PMID 17835348. S2CID 46575183.
  27. ^ Cindy Goral, Kenneth E. Torrance, Donald P. Greenberg and B. Battaile, "Modeling the interaction of light between diffuse surfaces", Computer Graphics, Vol. 18, No. 3. (PDF)
  28. ^ a b c Wann Jensen, Henrik (1996). "Global Illumination using Photon Maps" (PDF). Rendering Techniques ’96: 21–30. Archived 2008-08-08 at the Wayback Machine.
  29. ^ a b "Photon Mapping - Zack Waters". web.cs.wpi.edu. Retrieved 2019-11-08.
  30. ^ "Introduction to Computer Graphics, Section 4.1 -- Introduction to Lighting". math.hws.edu.
  31. ^ "Vertex Specification - OpenGL Wiki". www.khronos.org. Retrieved 2019-11-06.
  32. ^ a b c d e f Foley. "Illumination Models and Shading" (PDF).
  33. ^ a b c "GPU Gems". NVIDIA Developer. Retrieved 2019-10-30.
  34. ^ a b c "Caustics water texturing using Unity 3D". www.dualheights.se. Retrieved 2019-11-06.
  35. ^ a b "Computer Graphics: Environment Mapping". cglearn.codelight.eu. Retrieved 2019-11-01.
  36. ^ Shen, Han-Wei. "Environment Mapping" (PDF).
  37. ^ a b c d Bailey, Mike. "Particle Systems" (PDF).
  38. ^ "Particle Systems". web.cs.wpi.edu. Retrieved 2019-11-01.