Modelling Multiple Light Sources –

Realistically Rendering Multiple Lights with Variable Intensities.

 

 

 

Submitted in partial fulfillment of the requirements for the Bachelor of Science (Honours) degree of Rhodes University.

 

 

 

Melissa Ridderhof

November 2002

 

Acknowledgements

 

 

To my supervisor, Shaun Bangay, for showing me very simple solutions to very complex problems.

 

To my boyfriend Ivan, for providing support, for solving my Linux problems and for pointing out bugs in my code.

 

To Grant Miller, for listening to me complain, and for extensive advice and proofreading of my write-up.


 

Abstract

 

Lighting is a significant factor in rendering a realistic 3D scene to a 2D screen. Standard lighting techniques, however, restrict the number of lights available to eight, which is sufficient for most situations. Some scenes, though, require more lights to be rendered correctly.

 

In this paper, we explore the techniques used for lighting a scene, and the solutions proposed for handling multiple light sources. We restrict our solutions to providing the correct intensity of light, shading and shadowing. We implement a standard solution, virtualised light sources (VLS), and an application-computing solution based on Binary Space Partition (BSP) trees.

 

The VLS system is relatively simple to implement and provides the expected results – correct intensity and shading but low-quality shadows, with short start-up times and reasonable frame rates. The BSP system is more complicated and cannot cope with very complex scenes. It provides excellent shadowing as well as the correct intensity and shading. Its start-up times are far longer, but the frame rate is comparable.

 

Multiple lighting can be handled in a variety of ways, with different trade-offs. The optimal solution would be application specific.


Table of Contents

 

 

1      Chapter 1: Introduction
2      Chapter 2: Rendering a Three-dimensional Scene
2.1       The Mathematics of 3D Objects
2.1.1        Points
2.1.2        Lines
2.1.2.1     Intersection of Two Lines
2.1.2.2     Angle Between Lines
2.1.3        Planes
2.1.3.1     Position of Point to Plane
2.1.3.2     Distance from Plane
2.1.3.3     Intersection of Line and Plane
2.1.4        Cones
2.1.4.1     Point within Cone
2.1.4.2     Angle between Ray and Cone Edge
2.2       Hidden Surface Removal Techniques
2.2.1        Backface Culling
2.2.2        Painter’s Algorithm
2.2.3        Z-Buffer Algorithm
2.2.4        Scan-Line Algorithm
2.2.5        Area Subdivision Algorithms
2.3       Types of Light
2.3.1        Emitted Light
2.3.2        Ambient Light
2.3.3        Diffuse Light
2.3.4        Specular Light
2.4       Shading Techniques
2.4.1        Gouraud Shading
2.4.2        Phong Shading
2.4.3        Light Based Tessellation
2.5       Shadow Generation Techniques
2.5.1        Projection Shadows
2.5.2        Modified Hidden Surface Removal
2.5.3        Shadow Volumes
2.6       Non Real Time Systems
2.6.1        Ray Tracing
2.6.2        Radiosity
2.7       Review
3      Chapter 3: Designing for Multiple Light Sources
3.1       System Requirements
3.2       Possible Multiple Light Solutions
3.2.1        Application-Computing Lighting
3.2.2        Lightmaps
3.2.3        Virtualised Light Sources
3.2.4        Real-time Radiosity
3.2.5        Vertex Ray Tracing
3.3       Systems Chosen
3.4       Design Elements Common to both Systems
3.4.1        Primitives
3.4.1.1     OneVertex
3.4.1.2     Triangle
3.4.1.3     Colour
3.4.1.4     LightDescription
3.4.2        Subdividing Polygons
3.4.3        Storing a Polygon Mesh
3.5       Structure
3.6       Review
4      Chapter 4: Virtualised Light System
4.1       Overview
4.2       Standard Algorithms
4.3       Adapting to Handle Variable Intensities
4.3.1        Light Model
4.4       Adding Shadows
4.4.1        Polygon Lies Between
4.5       Final System
4.5.1        Problems Encountered
4.5.2        Class Diagram
4.6       Possible Extensions
4.7       Review
5      Chapter 5: BSP System
5.1       Overview
5.2       Standard Algorithms
5.2.1        Constructing a BSP Tree
5.2.2        Traversing a BSP Tree
5.3       Adapting to Handle Variable Intensities
5.3.1        Lighting Model
5.3.2        Light BSP Tree
5.4       Adding Shadows
5.4.1        Project To Plane
5.4.2        Finding Overlap Vector
5.5       Some Time Optimisations
5.5.1        Converting Light BSP Trees into PolygonLists
5.5.2        Fast Clone
5.5.3        Not Calculating below an Intensity
5.6       Final System
5.6.1        Problems Encountered
5.6.2        Class Diagram
5.7       Possible Extensions
5.8       Review
6      Chapter 6: Results
6.1       Shading
6.1.1        VLS
6.1.2        BSP
6.2       Shadowing
6.2.1        VLS
6.2.2        BSP
6.3       Time Taken
6.3.1        Start-up Time
6.3.2        Frame Rate
6.4       Some BSP Scenes
6.5       Review
7      Chapter 7: Conclusion
7.1       Rendering a Lit Scene
7.2       Multiple Light Source Solutions
7.3       Comparison of Systems Implemented
7.4       Future Research
References
Appendix A : Frame Rate Data
Appendix B : Poster

 


List of Figures

 

Figure 2-1 Intersection of Two Lines
Figure 2-2 Intersection of Line and Plane
Figure 2-3 A Cone
Figure 3-1 OneVertex Class Diagram
Figure 3-2 Triangle Class Diagram
Figure 3-3 Colour Class Diagram
Figure 3-4 LightDescription Class Diagram
Figure 3-5 Subdividing Polygons
Figure 3-6 Generic Class Structure
Figure 4-1 Jordan Curve Theorem
Figure 4-2 VLS Class Structure
Figure 5-1 BSP Class Structure
Figure 5-2 Reflection
Figure 6-1 VLS Shading
Figure 6-2 BSP Shading
Figure 6-3 VLS Without Shadows
Figure 6-4 VLS With Shadows
Figure 6-5 BSP Without Shadows
Figure 6-6 BSP With Shadows
Figure 6-7 Frame Rate
Figure 6-8 BSP Intersecting Polygons
Figure 6-9 BSP Multi-Coloured Polygons

 

 


 

1         Chapter 1: Introduction

 

Computer graphics has come a long way from Pong and Logo. Now we have immersive three-dimensional games, hyper-real teapots, and full-length computer-animated movies. The drive in computer graphics has always been for better quality graphics at greater speeds. Pac-Man was perfectly adequate at a few frames per second, but connoisseurs of 3-D games such as Quake or Unreal Tournament are now unsatisfied with frame rates of 60 frames per second. We are fast approaching a requirement for accuracy and speeds beyond the threshold of human perception. Hardware has taken great leaps forward to support this demand, moving much of the processing to the graphics card. Hardware acceleration is a pre-requisite for many recent games. In fact, 3-D chips frequently use these 3-D games as benchmarks. Alongside the improvement of hardware, extensive research has been performed in developing and improving algorithms to render graphics.

 

One of the prevalent tendencies has been to remove the responsibility of calculation from the programmer to the package, and then from the package to the hardware. This simplifies application development, and more importantly, speeds up run times. But hardware cannot support infinite operations and some rule-of-thumb restrictions have developed. One of these is that cards are only required to support eight lights. Graphics APIs such as OpenGL echo this restriction in the interest of maintaining high speeds. There is nothing stopping a manufacturer from supporting more than eight lights, but this may well be a wasted resource – in the vast majority of cases, far fewer lights than that are needed. Clusters of lights – multiple bulbs in one fitting – can generally be approximated as a single light source with no loss of quality. In addition, brighter lights tend to drown out the effects of others nearby, and these secondary lights can often be safely ignored.

 

However, there are cases where more than eight lights are genuinely needed. For instance, a theatre may have as many as a hundred lights in a limited space and with comparable intensities, all of them with a significant effect on an object. Using this scenario as motivation, this project investigates methods of rendering scenes with multiple light sources. OpenGL, a popular graphics API, has been used throughout. We compare the results from the standard virtualised light source technique (explained in detail below) with those from a system that takes the effect of all lights into account. The scenario is limited to a static scene, but is required to cope with user-manipulated changing intensities of lights. Both systems are investigated with regard to the accuracy of the shading and shadowing, the start-up time and the frame rate.

 

Chapter 2 introduces the basic theory of rendering three-dimensional scenes.  We investigate some of the mathematics used in three-dimensional scenes. We then explore the options available for manipulating and lighting objects in three dimensions and rendering them to a two-dimensional screen, with specific focus on the techniques used in the two systems.

 

Chapter 3 discusses various solutions for multiple light source scenarios. We determine various design considerations, and explore the design of aspects that are common to both systems.

 

Chapter 4 explores the specific design and implementation of a standard technique – virtualised light sources.

 

Chapter 5 explores the specific design and implementation of the new system developed using BSP Trees.

 

Chapter 6 presents the output of the two systems. We examine the intensity of light and shading produced and the quality of the shadowing. We compare the time taken in each case, and look at a few scenes produced by the BSP system.

 

Chapter 7 draws some conclusions about systems with multiple light sources, and indicates areas for further study.

 


 

2         Chapter 2: Rendering a Three-dimensional Scene

 

Any three-dimensional scene must be rendered to the screen in such a way that the user will find it satisfyingly true to life. This involves various conceptual steps. Firstly, we consider what exists in a scene. Objects have to be placed in their correct positions with the correct dimensions. Secondly, we consider what we see of a scene. Surfaces that are obscured by closer surfaces need to be removed from the view plane – a process known as hidden surface removal. Lastly, we consider how we see it. This is the process this project focuses on – the scene needs to be lit.

 

Lighting plays a very large part in making a two dimensional representation appear truly three-dimensional. There are many different factors that contribute to creating a correctly lit scene. They can be broadly divided into local lighting concerns – the effect of a light on an object – and global lighting concerns – the effect of objects on other objects in the presence of a light.

-                     Local lighting concerns include the intensity of the light when it reaches the surface of the object, the correct shading of the object to represent its geometrical structure, and specular highlights – the reflection of the light off smooth surfaces.

-                     Global lighting concerns include shadowing of one object by another, reflections of other objects, and refraction of light through semi-transparent objects.

 

We first look at various geometrical structures present in a three-dimensional scene. We then consider some techniques used to realise realistic real time systems, including hidden surface removal, shading techniques and shadow generation techniques. We conclude this chapter with a discussion of non-real time systems.

 

 


 

2.1      The Mathematics of 3D Objects

 

Polygonal representation is the most popular method for representing objects in computer graphics. It is the method supported by OpenGL, and as such will be the only representation considered here. Polygonal representation consists of a structure of vertices, each indicating a point in three-dimensional space. These vertices define polygons. OpenGL requires polygons to be convex and planar in order to render them correctly, and therefore triangles are commonly used – they meet these restrictions automatically. This form obviously matches angular objects better than curved objects, but the ease of manipulation outweighs any drawbacks. In addition, by using interpolative shading and increasing the number of polygons, a polygon mesh can become indistinguishable from a curved surface. The basic manipulations of polygons, such as translation, scaling, rotation, etc. are well understood, and can be found in any book on three-dimensional graphics, such as Alan Watt’s Fundamentals of Three-Dimensional Computer Graphics [Watt]. Some of the calculations used in the systems implemented in this project are less common, and have been included here for clarity.

 

2.1.1      Points

This is the basic unit of an object. A point is usually a vertex of a more complex polygon. It comprises x, y and z components, specifying its position along each of the three axes. Many graphic systems define x and y as the screen co-ordinates, and z as the depth, but the axes can be completely arbitrary without affecting manipulations. Manipulations of a polygon can be performed on each vertex independently.

 

2.1.2      Lines

A line (or more accurately, a line segment) is a polygon of two vertices, but the term is often used in the sense of an edge – the line joining two adjacent vertices of a larger polygon. A line can be fully described by its two end points; a ray (a line with a starting point that continues into infinity) is defined by a starting point and a direction.

 

2.1.2.1       Intersection of Two Lines

Figure 2-1 Intersection of Two Lines

 

The point of intersection between two line segments is frequently needed. Determining this in three dimensions is only slightly more complex than the two dimensional equivalent. We outline this algorithm with reference to the above diagram (Figure 2-1):

 

-         Let P0, P1, Q0, and Q1 denote the end points of the two line segments.

-         Then vectors u = P1-P0, v = Q1-Q0, and w = P0-Q0.

-         If the lines are parallel (u and v in same or opposite directions), no intersection occurs.

-         Let p, q, and r indicate an assignment of the x, y and z-axes such that the projections of the two lines onto the p-q plane are not parallel.

-         Calculate the intersection parameters:

o       Let d = u.p*v.q – u.q*v.p

o       s = (v.p*w.q – v.q*w.p)/d

o       t = (u.p*w.q – u.q*w.p)/d

-         Calculate the intersection point, inter:

o       inter = P0 + s*u (equivalently, Q0 + t*v).

-         If (P0.r + s*u.r) – (Q0.r + t*v.r) is not equal to zero, the lines do not meet in the third dimension and no intersection occurs.

-         If the intersection is not within the line segments, no intersection occurs.
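
The steps above translate directly into code. A minimal C++ sketch follows, assuming a simple Vec3 type with the usual operators; the tolerance value is illustrative, and the p and q axes are fixed as x and y for brevity (a full implementation would select them as described above).

#include <cmath>
#include <optional>

struct Vec3 { double x, y, z; };
static Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 operator*(Vec3 a, double k) { return {a.x * k, a.y * k, a.z * k}; }

// Intersection of segments P0-P1 and Q0-Q1, following the steps above.
std::optional<Vec3> intersectSegments(Vec3 P0, Vec3 P1, Vec3 Q0, Vec3 Q1) {
    const double eps = 1e-9;                        // illustrative tolerance
    Vec3 u = P1 - P0, v = Q1 - Q0, w = P0 - Q0;
    double d = u.x * v.y - u.y * v.x;               // determinant in the p-q plane
    if (std::fabs(d) < eps) return std::nullopt;    // parallel in this projection
    double s = (v.x * w.y - v.y * w.x) / d;         // parameter along P0-P1
    double t = (u.x * w.y - u.y * w.x) / d;         // parameter along Q0-Q1
    if (s < 0 || s > 1 || t < 0 || t > 1)           // outside the segments
        return std::nullopt;
    if (std::fabs((P0.z + s * u.z) - (Q0.z + t * v.z)) > eps)
        return std::nullopt;                        // no intersection in the r dimension
    return P0 + u * s;
}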

 

2.1.2.2       Angle Between Lines

We will also often need to find the angle between two lines. We discuss a simplified algorithm used to accomplish this:

-         Let a be the normalised vector of one line, and b be the normalised vector of the second line.

-         Then anglebetween = arccos(dot product of a and b).

 

2.1.3      Planes

Infinite planes are flat surfaces that extend in all directions. While rarely present in a scene, they are invaluable for various calculations. A plane can be fully described by a point on a plane, and a vector perpendicular to the plane. This can be compactly expressed in the form a, b, c, d, where a, b, and c are the components of the normalised normal vector, and a*x + b*y + c*z + d = 0. This form allows for other values to be easily calculated.

 

We now outline some of the more pertinent calculations that are performed on planes. We use these algorithms extensively in this project, primarily in constructing a Binary Space Partition (BSP) tree and in determining whether an object is exposed to a light source or whether it is obscured.

 

2.1.3.1       Position of Point to Plane

The position of a point relative to a plane is needed frequently. To determine where a point P is in relation to the plane, find:

-         u = a*P.x + b*P.y +c*P.z + d

-         If u > 0

o       The point lies in front of the plane.

-         If u = 0

o       The point is in the plane

-         If u < 0

o       The point is behind the plane

 

2.1.3.2       Distance from Plane

Another vital piece of information, used in splitting polygons, is the distance from a point to the plane. The shortest distance between a point P and the plane is simply:

-         Abs (a*P.x + b*P.y + c*P.z + d)
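
Both of these plane tests reduce to a few lines. A minimal C++ sketch, assuming the plane is stored as (a, b, c, d) with a normalised normal; names are illustrative:

#include <cmath>

struct Vec3 { double x, y, z; };
struct Plane { double a, b, c, d; };   // a*x + b*y + c*z + d = 0, (a, b, c) normalised

// Signed test: positive in front of the plane, zero on it, negative behind.
double classify(const Plane& p, Vec3 P) {
    return p.a * P.x + p.b * P.y + p.c * P.z + p.d;
}

// Shortest distance from P to the plane; valid because the normal is unit length.
double distanceToPlane(const Plane& p, Vec3 P) {
    return std::fabs(classify(p, P));
}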

 

2.1.3.3       Intersection of Line and Plane

The intersection of a line and a plane is important not only in creating a BSP Tree, but also in determining whether segments are shadowed.

Figure 2-2 Intersection of Line and Plane

 

With reference to the above diagram (Figure 2‑2) we discuss the algorithm necessary to find the point of intersection between a line segment and a plane:

-         If the line is parallel to the plane, no intersection occurs.

-         If the end points q and r do not lie on opposite sides of the plane (or the ray points away from the plane), no intersection occurs.

-         Calculate the intersection point, inter:

o       Let direction = q – r.

o       Let nDotD = dot product of normal and direction.

o       Let nDotQ = dot product of normal and point q.

o       inter = q + direction*(-(d + nDotQ)/nDotD)  (note that inter, q and direction are vectors; d, nDotQ, and nDotD are scalars).
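
A C++ sketch of this procedure, with the Vec3 type, the dot() and classify() helpers and the tolerance as illustrative assumptions:

#include <cmath>
#include <optional>

struct Vec3 { double x, y, z; };
struct Plane { double a, b, c, d; };
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static double classify(const Plane& p, Vec3 v) { return p.a * v.x + p.b * v.y + p.c * v.z + p.d; }

// Intersection of the segment q-r with the plane, per the steps above.
std::optional<Vec3> intersectPlane(const Plane& p, Vec3 q, Vec3 r) {
    Vec3 normal = {p.a, p.b, p.c};
    Vec3 direction = {q.x - r.x, q.y - r.y, q.z - r.z};
    double nDotD = dot(normal, direction);
    if (std::fabs(nDotD) < 1e-9) return std::nullopt;             // parallel to the plane
    if (classify(p, q) * classify(p, r) > 0) return std::nullopt; // both ends on one side
    double nDotQ = dot(normal, q);
    double t = -(p.d + nDotQ) / nDotD;
    return Vec3{q.x + direction.x * t, q.y + direction.y * t, q.z + direction.z * t};
}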

 

2.1.4      Cones

Cones are the next geometrical primitives we discuss. These are pertinent to our project because they are used to model the effect of the light source.

Figure 2-3 A Cone

 

A 3D cone starts from a point and spreads outward uniformly, into infinity. It is defined by a starting point, a direction, and the angle of spread (see figure 2-3).

 

We now discuss the operations that we perform on cones and make use of in our project. Our discussion is centred on the fact that we consider the cone to be well suited to representing a light source.

 

2.1.4.1       Point within Cone

From the perspective of our project, we need to determine whether an object is within the cone of a light source. To determine whether a point is within a cone:

-         Determine the vector between the point and the starting point of the cone.

-         Determine the angle between this vector and the direction of the cone.

-         If this angle is less than the spread angle, the point is within the cone. If it is greater, then it is not.

 

2.1.4.2       Angle between Ray and Cone Edge

This algorithm is relevant when we need to consider how close a ray is to the edge of the cone, in order to determine how much fall-off has occurred.

 

To find the angle between the edge of the cone and a ray from the starting point through a given point:

-         Check that the point is in front of the cone, else return an error.

-         Determine direction  - the vector between the point and the starting point of the cone.

-         Determine the angle between the direction and the direction of the cone.

-         Return the difference between this angle and the spread angle.
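
Both cone tests share the same underlying angle computation. A hedged C++ sketch, where the Cone structure and function names are assumptions (the direction is taken to be normalised and the spread angle in radians):

#include <cmath>

struct Vec3 { double x, y, z; };
static Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 normalise(Vec3 a) { double l = std::sqrt(dot(a, a)); return {a.x / l, a.y / l, a.z / l}; }

struct Cone { Vec3 start; Vec3 direction; double spread; };

// Angle between the cone axis and the ray from the start point through P.
static double angleToAxis(const Cone& c, Vec3 P) {
    return std::acos(dot(normalise(P - c.start), normalise(c.direction)));
}

// Section 2.1.4.1: P is inside the cone if the angle is within the spread.
bool pointInCone(const Cone& c, Vec3 P) {
    return angleToAxis(c, P) <= c.spread;
}

// Section 2.1.4.2: difference between the ray's angle and the spread angle,
// used to measure how close the ray is to the cone edge for fall-off.
double angleFromEdge(const Cone& c, Vec3 P) {
    return c.spread - angleToAxis(c, P);
}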

 

 

2.2      Hidden Surface Removal Techniques

 

When a three-dimensional scene is projected onto a two-dimensional screen, care must be taken to ensure that closer objects obscure objects that are further away, and not the other way round. This process is known as hidden surface removal. It is a well-explored field, and some of the major solutions are described here.

 

The algorithms that we discuss here are significant in creating real time systems, and present the alternatives available for a modified hidden surface removal algorithm to generate shadows, discussed in section 2.5.2.

 

2.2.1      Backface Culling

This is the practice of removing all faces on an object with normals pointing away from the viewer – the ‘back’ faces of the object. It is not a complete solution in itself, but is an easy test to perform and can speed up another algorithm significantly.

 

2.2.2      Painter’s Algorithm

This is conceptually the simplest of the algorithms, and paradoxically one of the hardest to implement.  The idea is to draw the objects furthest from view first, and then the objects closer by – like an oil painter on a canvas. One simply overwrites the obscured sections of the old polygon with the new polygon; no calculations are necessary. In order to do this, the polygons must first be sorted in a back-to-front depth order. However, if polygons are naively sorted by an average or farthest depth, there will be cases where the representation is incorrect – this becomes strikingly apparent when two objects are each partially in front of and partially behind the other. [Hill, pg 592]

 

These problems can be solved by using octrees or BSP Trees to sort the polygons. Octrees are hierarchical data structures, specifying occupancy of cubic volumes of space (also known as voxels). When projected to 2D screen space, a quad tree is generated. Binary Space Partition (BSP) trees divide the space adaptively and are thus potentially more efficient than octrees. In these systems, the ‘problem polygons’ will automatically be subdivided in the creation of the trees, and painter’s algorithm will yield the correct results in all cases. [Watt, pg 190] 

 

2.2.3      Z-Buffer Algorithm

Also known as the depth buffer algorithm, this technique is very popular. The technique can be inefficient and is very memory intensive, but makes up for these concerns in its lack of complexity – it is simple and fast, and requires no sorting of polygons.

 

If the scene is regarded as being a three-dimensional space, with the x and y co-ordinates making up the screen face, the equivalent z value indicates the depth of the object from the screen. For every pixel to be rendered, the z-value is interpolated and compared to the value in the z-buffer. If the value is less than the stored value, the new object is closer to the viewer than the old object. The pixel is overwritten with the new value appropriate to the polygon, and the new z-value is entered into the buffer. [Watt, pg 22]
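
The per-pixel test itself is compact. A minimal sketch, assuming a rasteriser that produces (x, y, depth, colour) samples; all structure and function names are illustrative:

#include <limits>
#include <vector>

struct Sample { int x, y; double depth; unsigned colour; };

// One z-buffer pass over the samples produced by rasterising the polygons.
void zBufferPass(const std::vector<Sample>& samples, int width,
                 std::vector<double>& zbuf, std::vector<unsigned>& frame) {
    for (const Sample& s : samples) {
        int i = s.y * width + s.x;
        if (s.depth < zbuf[i]) {   // new fragment is closer to the viewer
            zbuf[i] = s.depth;     // record the new nearest depth
            frame[i] = s.colour;   // overwrite the pixel
        }
    }
}

// Before rendering, every entry is initialised to the far plane:
//   std::vector<double> zbuf(width * height, std::numeric_limits<double>::max());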

 

The amount of memory dedicated to each pixel in the buffer limits the accuracy with which depth can be determined – if 8 bits are used, surfaces closer than 1/256 of the total range to each other may not be distinguished, and the wrong face may be drawn to screen. One solution for reducing memory requirements involves limiting the size of the buffer to a fraction of the screen, and rendering each fraction separately. [Hill, pg 594-596]

 

The z-buffer is often supported in hardware, making this a very attractive option for rendering packages, and is the technique used by OpenGL.

 

2.2.4      Scan-Line Algorithm

This algorithm is adapted from a rendering technique that takes advantage of linear spatial coherence to reduce memory traffic. The next pixel to be rendered is very often the same colour as the pixel that has just been rendered. A colour is loaded, and each subsequent pixel is flooded with that colour until the run of the polygon ends. In the standard form, when a pixel that is contained by more than one polygon is encountered, a user-defined priority level is used to determine which polygon is rendered. When being used as a hidden surface removal technique, this priority reflects the relative depth of the two polygons at that pixel. [Hill, pg 597]

 

2.2.5      Area Subdivision Algorithms

Also known as Warnock algorithms, these methods take advantage of area coherence rather than simply linear coherence. The visible area is subdivided until each unit of area is ‘simple’ to render [Hearn & Baker, pg 268]. Each region is tested in turn to see if it meets the requirements for simplicity. If it is simple, it is drawn immediately. If not it is subdivided, and the process is repeated.

 

The various algorithms are variations on how ‘simple’ is defined and how the regions are subdivided. Quadrant subdivision, for example, defines ‘simple’ as a region with one face or no faces involved, or a region less than one pixel in area, and subdivides a problem region into four equal quadrants. Other definitions of ‘simple’ include:

-                     A region that is empty or less than one pixel in area – this simplifies the testing, but leads to very high levels of subdivision.

-                     A region with one dominant face (a face that is everywhere closer than other faces), or an empty region, or a region less than one pixel in area – testing for simplicity is much more complex, but subdivisions are limited.

[Hill, pg 600-607].

 

 

2.3      Types of Light

 

In this section we discuss the various categories of light that combine to determine how an object is lit in a scene. We consider the appearance of an object in a rendered scene to be produced by the light that it emits, ambient light in the scene, diffuse light (diffuse reflection of light from a light source) and specular light (specular reflection of light from a light source).

 

Here, we provide a short introduction to this common taxonomy.

 

2.3.1      Emitted Light

This is the light that is actually being emitted by the object. The object appears as a light source (although most systems allow the user to define emitters that will not be considered as light sources for other objects in the scene, and do not count towards the eight light maximum). An emitter is not affected by any other lights in the scene.

 

2.3.2      Ambient Light

Ambient light is a catchall term for the amount of light that is present in the scene that comes from no particular direction. Every lit surface reflects and scatters the light in all directions, in turn illuminating other surfaces. Each light that is added to a scene will therefore increase the ambient light of the scene. This chaotic light affects all surfaces roughly equally, and thus is simply added to each object.  [Bangay, pg 130] 

 

2.3.3      Diffuse Light

Diffuse light is the light that comes from a particular source and is then reflected in all directions. The intensity of diffuse light is a function of the relative angles of the surface and the direction of the light to the surface – a surface that is face on to the light will be very well illuminated, while a surface that is edge on may not be illuminated at all. It is independent of the viewpoint.

 

2.3.4      Specular Light

This is light that is reflected off a surface in a particular direction. The shinier the surface, the more apparent this effect is. It is a function of the angle of the viewer to the surface, as well as the surface to the light, and is thus viewpoint dependent.
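
These four categories are typically combined per vertex. The sketch below shows one conventional combination, along the lines of the OpenGL model referenced in section 4.3.1; the structures and helpers are illustrative assumptions, and attenuation is omitted for brevity:

#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };
static Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 operator*(Vec3 a, double k) { return {a.x * k, a.y * k, a.z * k}; }
static Vec3 mul(Vec3 a, Vec3 b) { return {a.x * b.x, a.y * b.y, a.z * b.z}; } // per channel
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 normalise(Vec3 a) { return a * (1.0 / std::sqrt(dot(a, a))); }

struct Light { Vec3 position, ambient, diffuse, specular; };
struct Material { Vec3 emitted, ambient, diffuse, specular; double shininess; };

// Colour at vertex P with normal N, seen from eye.
Vec3 shadeVertex(Vec3 P, Vec3 N, Vec3 eye, const Material& m,
                 const std::vector<Light>& lights, Vec3 globalAmbient) {
    Vec3 colour = m.emitted + mul(globalAmbient, m.ambient);   // emitted + ambient
    Vec3 V = normalise(eye - P);
    for (const Light& l : lights) {
        Vec3 L = normalise(l.position - P);
        double diff = std::max(0.0, dot(N, L));                // diffuse: view independent
        Vec3 R = N * (2.0 * dot(N, L)) - L;                    // reflection of L about N
        double spec = diff > 0.0
            ? std::pow(std::max(0.0, dot(R, V)), m.shininess)  // specular: view dependent
            : 0.0;
        colour = colour + mul(l.ambient, m.ambient)
                        + mul(l.diffuse, m.diffuse) * diff
                        + mul(l.specular, m.specular) * spec;
    }
    return colour;
}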

 

 

2.4      Shading Techniques

 

Shading indicates the difference in the effect of a light over the surface of an object. The use of shading in our project is vitally important in rendering objects in the presence of light sources realistically.

 

Constant shading calculates a single intensity for each polygon in the polygon mesh, and applies this to the whole polygon, creating a very artificial effect [Bangay, pg 141]. More popular techniques, such as Gouraud and Phong shading use interpolative methods. We consider these methods in greater detail below.

 

2.4.1      Gouraud Shading

The intensity of light at a vertex is a function of the orientation of the vertex normal with respect to the light source and the eye. These values are interpolated linearly across the polygon interior. Specular highlights cannot be handled properly, and Mach bands can result because of discontinuities in derivatives across polygon boundaries. The quality of the image can be improved by increasing the number of polygons used to represent an object. [Watt & Watt, pg 23-24]
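
At its core, Gouraud shading is repeated linear interpolation. A minimal sketch of the scanline step, with illustrative names:

// Intensity at pixel x on a scanline spanning [xLeft, xRight]. iLeft and
// iRight were themselves interpolated down the polygon's edges from the
// vertex intensities.
double gouraudIntensity(int x, int xLeft, int xRight, double iLeft, double iRight) {
    if (xRight == xLeft) return iLeft;                       // degenerate span
    double t = double(x - xLeft) / double(xRight - xLeft);   // 0 at left, 1 at right
    return iLeft + (iRight - iLeft) * t;                     // linear interpolation
}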

 

2.4.2      Phong Shading

This interpolates the normal vectors across the interior of the polygon, and calculates the lighting equation at each pixel. This gives accurate highlights, and reduces the Mach banding effect, but is much slower than Gouraud shading. [Watt, pg 24]

 

2.4.3      Light Based Tessellation

Tessellation is the process of adaptive subdivision in order to refine the image. Instead of requiring the user to subdivide the object sufficiently for lighting to be accurate, the algorithm tessellates problem areas itself. [Andrews]

 

 

2.5      Shadow Generation Techniques

 

The next consideration in creating realistic lighting is that of shadows. Every object in a scene that is lit will cast a shadow. With this in mind we need to critically evaluate the various techniques involved in their generation.

 

Rather than attempting a true global light solution, we consider performance-enhancing techniques in which shadows are added to a scene algorithmically. Some major techniques that attempt this include:

 

2.5.1      Projection Shadows

This technique treats shadows as if they are themselves polygons. The silhouette edge of the object is projected from the light source onto another object. This is relatively easy to do if the second object is a flat surface, and becomes rapidly more complex with increasing complexity of the second object. The colour of this shadow polygon is also difficult to determine. [Bangay, pg 188]

 

2.5.2      Modified Hidden Surface Removal

The light point is treated as if it were the viewpoint. Standard hidden surface removal algorithms are then applied. All surfaces that are not visible to that light are marked as being shadowed by that light.

 

A popular version of this is the shadow z-buffer - the depth information is calculated as with normal z-buffer techniques, taking the light as viewpoint. During rendering, if the equivalent z value in the lighting space is not visible, the object is rendered with diminished intensity. [Watt & Watt, pg166-172]

 

In a scan-line algorithm, all polygons in the scene are projected onto a sphere surrounding the point light. If a polygon does not overlap with any other polygon, it is rendered as normal. If another polygon completely covers it, the polygon is rendered with appropriately reduced intensity. If it is partly covered, the polygon is subdivided, and each subdivision is tested.

 

Lights in these algorithms are treated as ideal points, and thus the shadows produced have hard edges. [Watt & Watt, pg 158-160].

 

2.5.3      Shadow Volumes

This considers the volume of space that is shadowed, and tests objects to see whether they fall within this volume. A shadow volume is projected from the silhouette edge of an object and forms a semi-infinite cone pointing away from the light. Finding the silhouette edge of the object can only be easily done with polygonal objects that have not been generated parametrically, and therefore this technique is usually only applied in these cases. By generating a separate shadow volume from each vertex of an area light source, soft shadows can be reasonably well approximated. [Watt & Watt, pg 160-166].

 

 

2.6      Non Real Time Systems

 

Our discussion up until now has centred on techniques that are designed to render a scene in real time. Here, we change the focus of our attention to models that are more realistic. They are consequently more resource intensive than their real time counterparts.

 

These are systems that try to approximate full global lighting solutions. They attempt to render scenes using physical models of how light works. They are distinguished by their very long rendering times – usually hours, and sometimes days, for complex scenes. The results are generally of far superior quality to the real-time approaches discussed in this paper.

 

We now examine two of the most significant algorithms used to generate realistic and accurately lit scenes.

 

2.6.1      Ray Tracing

A ray is projected from every pixel on the display screen into the scene to see where it collides with objects. At a collision, two light rays are generated – one reflected off the object and one transmitted through the object. Further rays are generated from the point to every light source to see whether that point is visible to the light, with only the visible lights used to calculate the colour of the object, effectively creating shadows. This process is iterated to a certain depth, and the pixel colour is generated as a combination of the colours of the objects encountered by the rays. This method is used by POV-Ray and Radiance.

 

This method treats surfaces as perfectly smooth, and thus generates unnaturally sharp reflections. [Watt & Watt, pg 219-222]

 

2.6.2      Radiosity

The term radiosity refers to energy leaving a surface per unit area per unit time. The radiosity technique iterates through each patch (surface element), calculating energy received and emitted, effectively tracing the path of a light ray as it bounces around the world [Hall, pg 70-71] until the system reaches equilibrium. Light sources are simply surfaces with an initial non-zero radiosity. This method replaces the ambient factor with an accurate calculation of diffuse lighting. It handles colour bleeding – when a surface is lit indirectly, it tends to take on some of the colour of the objects the light is reflecting off. It renders shadows in a true and accurate manner, and is view independent. However, because it treats light intensity as being constant over the area of the patch, it handles specular highlights badly. It is also massively time intensive, often an order of magnitude greater than ray tracing. It works best in generating man-made environments where the view-independence is a significant advantage – a walk through of a proposed building for example. [Watt & Watt, pg 266-268]

 

 

2.7      Review

 

A large quantity of vector mathematics is used in manipulating 3-D objects. Structures such as points, lines, planes and cones are used to represent the data. Information such as intersections and the position of a point relative to another structure can be determined. Hidden surface removal techniques are used to ensure that polygons are rendered in the correct order with respect to the viewpoint. Many algorithms exist, of which the z-buffer is the most popular and is supported in hardware.

 

Lighting concerns include local effects – intensity, shading, and highlights – and global effects – shadowing, reflection and refraction. The intensity of light can be divided into emitted light, ambient light, diffuse light and specular light, each of which is handled separately by a lighting engine. Shading can be either flat, Gouraud or Phong. Gouraud is most commonly used because it is more realistic than flat, but not as time intensive as Phong. Phong handles highlights better than Gouraud. Shadows can be generated by creating new polygons that indicate the shadow, by using a hidden surface removal algorithm with the light source as origin to indicate the shadowed areas, or by testing each polygon to see whether it falls within the shadow of another polygon. Reflection and refraction, in addition to the other effects mentioned above, are handled by non real time systems. Ray-tracing tends to produce unnaturally sharp reflections. Radiosity is the only technique to handle colour-bleeding, but does not handle highlights well. Both are extremely time-intensive.

 

Having introduced the fundamental principles involved in rendering a three-dimensional scene, we now continue on to formulate the design for multiple light source solutions.


3         Chapter 3: Designing for Multiple Light Sources

 

In this chapter we discuss our approach to the problem of modelling multiple light sources. By necessity, we introduce this chapter with a discussion of the minimum requirements for such a system. Once we have defined our problem domain, we consider possible solutions. This allows us to fully explain the systems we chose to implement in this project. We chose to implement two solutions, our custom BSP Tree solution and the VLS solution as our control. We continue our discussion by considering common design elements in these systems. We then conclude this chapter by discussing the generic multiple light source class structure used.

 

 

3.1      System Requirements

 

We start this section with a discussion of the requirements for a lighting system, taking into consideration local and global lighting aspects.

 

A complete lighting system would handle all the local and global concerns discussed in the previous chapter, namely:

1.                  Intensity of light at a point

2.                  Shading

3.                  Highlights

4.                  Shadowing

5.                  Reflections

6.                  Refractions

 

However, even non-real time systems can rarely cope with all these requirements – radiosity cannot handle highlights, ray-tracing cannot handle diffuse reflections. In the interest of keeping the lighting solution close to real-time, only the most relevant features are implemented. The intensity of the light as it reaches the surface is obviously a pre-requisite for any lighting solution. Shading demonstrates the 3-D nature of the objects, and is therefore also required. Shadowing provides valuable verisimilitude. If we restrict the system to only containing matt, opaque objects, features 3, 5, and 6 can be safely ignored without much loss of image quality. Further, by not considering these issues, the system becomes view-independent.

 

Now that we have defined our problem domain, we can consider possible solutions.

 

 

3.2      Possible Multiple Light Solutions

 

True multiple lighting situations are rare. Cases where more than eight lights exist are usually solved by using lightmaps or virtualised light sources. Recently, there have been attempts to modify global lighting solutions to produce real-time results.

 

Our investigation into this problem revealed five possible solutions. We discuss each of these in some detail before explaining our decisions about which to implement.

 

3.2.1      Application-Computing Lighting

The effect of the light is computed by the application and applied to the vertices as colours. An advantage of this scheme is that unlimited lights can be introduced. The main disadvantage is that hardware acceleration cannot be used.

 

 This approach offers quality at the cost of speed.

 

3.2.2      Lightmaps

Lightmaps are texture maps that contain information about the light pattern cast by a particular light. This creates an intensity scale, which is then transformed to match the object/scene and blended with the existing textures. Lightmaps can provide smoother light solutions than Gouraud shading and there is no limit as to how many can be added, although each one does introduce an extra time penalty. They are most effective in fairly static scenes. [Johansson]

 

3.2.3      Virtualised Light Sources

This is a per-object calculation of the eight brightest or closest lights. The lights that most affect each object are enabled dynamically as the scene is rendered. There is extra overhead time introduced by this, but the technique takes advantage of any hardware acceleration. The technique is only successful if eight or fewer lights truly make a significant contribution to the object being rendered. In actual practice, the number of effective lights in a scene is usually significantly lower than eight. [Kilgard]

 

3.2.4      Real-time Radiosity

Radiosity has no restrictions as to number of lights, and thus a real-time version would provide a complete solution for a multiple light source scene. Radiosity is well known for being time-consuming. By reducing the accuracy of the radiosity calculations, the process can be dramatically sped up. The complexity used would become a trade-off between a solution that is fast and a solution that is satisfyingly ‘real enough’ to the viewer. A modification of environmental mapping techniques has been proposed, but not implemented:

-         Place a cube around the object origin, and render the scene onto each surface (ignoring the presence of the object itself).

-          Blur the cube view to take into account distortion, fading, and angle.

-         Map to UV co-ordinates, as in normal environmental mapping.

-         Use this to texture map the cube view onto the object, treating pixel colour values as lighting values, adding lighting values from direct light sources.

-         Render the scene.

[Vegdahl]

 

3.2.5      Vertex Ray Tracing

Vertex ray tracing differs from ray tracing in that instead of tracing rays from every point on an object, the algorithm only traces rays from the vertices. If the number of polygons in the scene is reasonably limited, this can speed up the ray-tracing process to the point that it becomes real time.

 

 

 

3.3      Systems Chosen

 

We implement two solutions. The first is chosen as a control, and the second is chosen as one that offers the greatest capacity for us to innovatively solve the problem of multiple light sources.

 

As virtualised light sources is the most common solution, we implement it as a standard for comparison. This system automatically handles intensity, shading and highlights, and simply needs to be modified to include shadowing. 

 

We also chose an application-computing solution to generate high-quality images, as this solution offers the most scope for innovation. The intensity and shading can be handled with an appropriate lighting model. A modified hidden surface removal algorithm is chosen as the most elegant method of shadowing. The z-buffer algorithm and the scan-line algorithm need to be supported in hardware to be effective, and are therefore inappropriate. Painter’s algorithm is finally decided on as being potentially faster than area subdivision algorithms. The application-computing system is therefore based on a BSP Tree structure in order to shadow using this method.

 

Our decision to ignore lightmaps, real-time radiosity and vertex ray tracing is largely based on the inflexibility of the algorithms. These algorithms offer little in the way of creative development, and we felt it more relevant to the development of computer graphics to focus our attention on a relatively uncharted solution.

 

 

3.4      Design Elements Common to both Systems

 

We now discuss the design of the common elements within the VLS system and the custom application-computing (BSP) solution.

 

The graphic primitives, and the techniques of storing them, are common to both of the systems chosen and are briefly detailed here. We have categorized these common elements into three parts - primitives, subdividing polygons and storing polygon meshes. We now discuss these in greater detail.

 

3.4.1      Primitives

The primitives used in the systems consist of:

-         Geometrical structures: Point3D, Vector3D, Plane, and Cone, explored in detail in section 2.1 above.

-         Storage structures: OneVertex, Triangle, Colour, and LightDescription, explored below.

3.4.1.1       OneVertex

 

Figure 3-1 OneVertex Class Diagram

This simply stores the position and the colour of a vertex.

 

3.4.1.2       Triangle

 

Figure 3-2 Triangle Class Diagram

This is an index to the vertices making up a triangle – see section 3.4.3 below.


 

3.4.1.3       Colour

 

Figure 3-3 Colour Class Diagram

This stores the r, g, b, and a values that specify a colour, and provides some additional functionality.

 

3.4.1.4       LightDescription

 

Figure 3-4 LightDescription Class Diagram

This contains some values that are chosen to reflect the nature of spotlights.

 

3.4.2      Subdividing Polygons

One polygon may have very different light patterns across its full area, and thus to render a more accurate scene, the polygons may have to be subdivided into segments.

Figure 3-5 Subdividing Polygons

 

To create segments:

 

            If (polygon is a triangle)

                        Add to segment list

            Else

                        // Divide polygon into triangles

                        For i= 0 to number of vertices –2

                                    New triangle has vertices 0, i+1, i+2                      

                                    Add triangle to segment list

While (still subdividing triangles)

                        For all triangles

                                    If area > maximum area

                                                // Subdivide triangle – see figure 3-5

                                                Original vertices  = one,two,three

                                                Halfway points between vertices = four (between one and two), five (between one and three), six (between two and three)

                                                Triangle 1 has vertices one, four, five

                                                Triangle 2 has vertices two, four, six

                                                Triangle 3 has vertices three, six, five

                                                Triangle 4 has vertices four, five, six

                                                Add triangles to segment list

 

The colour information also has to be carried across, and scaled to reflect the position of the new vertices. By converting all polygons into triangles, splitting polygons becomes simpler, and the definition of points is automatically cyclic, convex and planar.
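
A C++ sketch of the midpoint subdivision of figure 3-5, using the indexed-triangle storage described in section 3.4.3; names are illustrative, and the colour averaging mentioned above is noted but not shown:

#include <vector>

struct Vec3 { double x, y, z; };
static Vec3 midpoint(Vec3 a, Vec3 b) { return {(a.x + b.x) / 2, (a.y + b.y) / 2, (a.z + b.z) / 2}; }

struct Tri { int v[3]; };

// Split one triangle into four, appending the three midpoints as new vertices.
// Colour values would be averaged in the same way as positions.
void subdivide(Tri t, std::vector<Vec3>& verts, std::vector<Tri>& out) {
    int one = t.v[0], two = t.v[1], three = t.v[2];
    int four = (int)verts.size(); verts.push_back(midpoint(verts[one], verts[two]));
    int five = four + 1;          verts.push_back(midpoint(verts[one], verts[three]));
    int six  = four + 2;          verts.push_back(midpoint(verts[two], verts[three]));
    out.push_back({one, four, five});
    out.push_back({two, four, six});
    out.push_back({three, six, five});
    out.push_back({four, five, six});
}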

 

3.4.3      Storing a Polygon Mesh

The aim of any storage structure is to maximise speed of access and minimise space taken.

 

Many different formats exist for the storage of 3D objects. The simplest, like the POV-Ray RAW triangle format, store the position of every vertex in every triangle [Anger]. However, the vast majority of structures store vertices separately, and then reference them when specifying the triangle. Describing polygons as an index into a vertex list has a number of advantages:

1.                  Less memory is used.

2.                  Vertex transformations occur only once.

3.                  Shared vertices are easier to find.

This is the representation used by OFF, VRML and other 3D formats [Bangay, pg 233].

 

Most structures include some colour or texture information, support other primitives (polygons, lines), and may also include layer information, moveable parts, and so on. Most of this functionality is superfluous for these lighting systems – the systems are restricted to static scenes. Triangles are indicated for the reasons mentioned above, as well as the simplicity of manipulating them in this format. The format chosen for the segment list is therefore:

A vector of triangles:

-                     Three integer values indexing the appropriate vertices

A vector of vertices:

-                     The position and colour information of that vertex.
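
A minimal C++ sketch of this format, reusing the OneVertex and Triangle names from section 3.4.1; the exact fields are illustrative:

#include <vector>

struct OneVertex { double x, y, z; float r, g, b, a; };   // position and colour
struct Triangle  { int v0, v1, v2; };                     // indices into the vertex vector

struct SegmentList {
    std::vector<OneVertex> vertices;
    std::vector<Triangle>  triangles;
};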

 

 

 

3.5      Structure

 

This section shows the abstract structure of our system. We outline the major classes and their interaction.

 

The object model of a multiple lighting solution reflects the physical objects in the scene – the lights and the polygons. Figure 3-6 shows a simplified diagram indicating the various classes used to render a scene – LightSceneVisualRepresentation renders the LightScene. The LightScene is composed of lights and polygons. The polygons have been subdivided into segments that are stored as a vector of vertices and a vector of triangles.

 

Figure 3-6 Generic Class Structure

 

 

3.6      Review

 

If all local and global lighting effects are implemented, the resulting solution would be far from real-time. However, if we only consider matt opaque objects, then we only need to consider intensity, shading and shadowing to create a satisfying scene. Many possible solutions exist to render multiple light source scenes, of which virtualised light sources is the most commonly used. We chose this solution to implement as a standard of comparison. We also chose an application-computing solution to generate high quality images. This system is based on BSP Trees in order to facilitate shadowing. Some design elements are common to both systems – the primitives used and the methods for subdividing polygons. An object model for any multiple light source solution shows a light scene as an aggregate of lights and polygons, subdivided into triangles.

This concludes our discussion of the problem of modelling multiple light sources and our approach to its solution. We have outlined the elements common to the systems we chose to implement, and discussed the structure of the test program, also common to both solutions.

 

In the following two chapters, we explore in detail each of our selected implementations. We start with the standard solution (VLS) and continue with the custom-application solution (BSP).


 

4         Chapter 4: Virtualised Light System

 

In this chapter we discuss the implementation of a Virtualised Light System (VLS). This algorithm is a per-object calculation of the eight brightest or closest lights. We give a more detailed overview of the technique, and discuss standard algorithms used to implement this technique. We explain adaptations we have made to the system to handle variable intensities and we discuss how to add shadows. Following this we give a synopsis of the final system, discussing the various problems we encountered, and conclude with possible extensions to the system.

 

 

4.1      Overview

 

The steps involved in creating a scene using virtualised light sources are outlined below.

 

During set-up:

-         Enter all polygons and lights.

-         Divide the polygons into segments.

-         For each segment, calculate the eight closest lights that can illuminate that surface.

 

At run time:

-         For each segment, enable the relevant lights with the corresponding intensity value.

 

The OpenGL-supported z-buffer is used for hidden surface removal, the OpenGL lighting model for intensity and shading (described in section 4.3.1), and shadow feelers (a ray cast from the point to the light) for determining shadowing.

 

 

4.2      Standard Algorithms

 

The eight closest lights are calculated prior to the actual running of the system.

 

            For all segments

                        For all lights

                                    Find distance from segment to light

                        Until eight lights found

                                    Find closest remaining light

                                    If not shadowed

                                                Set as next closest light

 

The triangle class is extended to index the eight closest lights to the segment, substantially increasing the size of this structure.
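
A hedged C++ sketch of this pre-calculation for a single segment; the per-light distances and shadow results are assumed to have been computed already (the shadow test is described in section 4.4), and all names are illustrative:

#include <algorithm>
#include <vector>

struct LightRef { int index; double distance; };

// Returns the indices of up to eight closest unshadowed lights for a segment.
std::vector<int> closestLights(const std::vector<double>& distance,
                               const std::vector<bool>& shadowed) {
    std::vector<LightRef> candidates;
    for (int i = 0; i < (int)distance.size(); ++i)
        if (!shadowed[i]) candidates.push_back({i, distance[i]});
    std::sort(candidates.begin(), candidates.end(),
              [](const LightRef& a, const LightRef& b) { return a.distance < b.distance; });
    if (candidates.size() > 8) candidates.resize(8);   // the OpenGL light limit
    std::vector<int> result;
    for (const LightRef& c : candidates) result.push_back(c.index);
    return result;
}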

 

 

4.3      Adapting to Handle Variable Intensities

 

Because intensity values are variable, it is impossible to determine which are the brightest lights at a point in the pre-calculation. Therefore, we simply determine the eight closest lights.

 

4.3.1      Light Model

The method used by OpenGL defines:

 

Vertex colour = light emitted + global ambient scaled by ambient property + ambient, diffuse and specular contributions from all light sources, attenuated as necessary.

 

The attenuation factor is a quadratic expression that expresses how rapidly light intensity falls off over distance away from the light. The contribution is further modified by a spotlight effect – the fall off over distance from the centre of the light beam. [Woo et al., pg 211-241]
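
As an illustrative sketch, the two factors take roughly the following form (coefficient and parameter names are assumptions; the quadratic fall-off and exponentiated spotlight effect follow the model described in [Woo et al.]):

#include <cmath>

// Distance attenuation: constant, linear and quadratic coefficients.
double attenuation(double d, double kConstant, double kLinear, double kQuadratic) {
    return 1.0 / (kConstant + kLinear * d + kQuadratic * d * d);
}

// Spotlight effect: cosAngle is the cosine of the angle between the spot
// direction and the ray from the light to the vertex; cosCutoff is the
// cosine of the spread angle.
double spotlightEffect(double cosAngle, double cosCutoff, double exponent) {
    if (cosAngle < cosCutoff) return 0.0;   // vertex lies outside the beam
    return std::pow(cosAngle, exponent);    // fall-off from the beam centre
}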

 

 

4.4      Adding Shadows

 

When calculating the eight closest lights, the light is checked to see if it is visible from that segment.

 

To determine if the segment is shadowed:

 

            For all other polygons in the scene

                        Generate the plane of the other polygon

Find the intersection between the plane and the line joining the point to the light

If the intersection point is within the other polygon

            The segment is shadowed.

 

By not considering those lights that are not visible to the polygon, the light will not be activated when rendering segments that are shadowed. This technique is inaccurate in that it treats the segment as a point rather than an area, and therefore a segment that is actually partly shadowed will be rendered as either fully lit or fully shadowed. However, the quality of the resulting image can be improved by increasing the number of segments in an object.

 

4.4.1      Polygon Lies Between

When a shadow feeler is cast from the segment to the light, we need to test every other polygon to see whether it lies between the segment and the light source. After we have determined the intersection of the feeler and the plane of the polygon, we need to find out whether this point lies within the polygon.

Figure 4-1 Jordan Curve Theorem

 

To determine if the intersection point lies within the polygon:

Find the largest component of the polygon’s normal

Project to the plane of the other two components

Place point at origin

For each edge

If edge crosses the positive axis of the second component 

// That is, if the two end points lie on either side of the axis

            Increment count of intersections

            If intersections =1

                        The point lies within the polygon

            Else

                        The point lies outside the polygon

 

In general, a point is within a polygon if the number of intersections is odd – by the Jordan curve theorem (see figure 4-1 above) [Bangay, pg 167]. In this system, however, polygons are restricted to being convex, so only one intersection is possible if the point is within the polygon.
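
A C++ sketch of this crossing test, assuming the polygon has already been projected to two dimensions and translated so that the intersection point sits at the origin; names are illustrative:

#include <vector>

struct Vec2 { double x, y; };

// Count crossings of the positive x-axis; odd means the origin is inside.
bool pointInPolygon(const std::vector<Vec2>& poly) {
    int crossings = 0;
    for (std::size_t i = 0; i < poly.size(); ++i) {
        Vec2 a = poly[i];
        Vec2 b = poly[(i + 1) % poly.size()];               // next vertex, cyclically
        if ((a.y > 0) != (b.y > 0)) {                       // edge straddles the axis
            double xCross = a.x + (b.x - a.x) * (-a.y) / (b.y - a.y);
            if (xCross > 0) ++crossings;                    // crossed on the positive side
        }
    }
    return (crossings % 2) == 1;   // Jordan curve theorem
}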

 

 

4.5      Final System

 

Our final system implements the algorithm to calculate the eight closest lights with the adaptation for variable intensities and shadowing. We now discuss some of the more important problems encountered in creating this system, and examine the resulting class structure of this solution. 

 

4.5.1      Problems Encountered

 

The VLS system is relatively simple to implement, and translated well from theory. It took some trial and error to convert the LightDescription to the appropriate OpenGL equivalents. The most problematic part was calculating the fall-off – this was eventually approximated as a scaled ratio of the null radius and the full radius.

 

A further problem was that, at run time, random segments lit up at full intensity or went black for a frame. This was never tracked down, but as the pattern altered every frame, it was assumed the problem did not lie within the application code.

4.5.2      Class Diagram

Figure 4-2 VLS Class Structure

 

4.6      Possible Extensions

 

As it stands, the system will already produce specular highlights.

 

Transparency, without refraction, would simply involve making use of the alpha values already supported in OpenGL. The object would, however, still cast a full shadow. Adding refraction could be done by blending the appropriate colour values of the polygons that are seen through the object with the values of the object itself at set-up time.

 

Reflection could be handled with a simplified ray-tracing algorithm – a ray is shot out from every segment and tested to see whether it intersects another polygon. The colour values could then be blended at the appropriate ratio. For refraction and reflection, the viewpoint must be known at start-up.

 

 

4.7      Review

 

The VLS system extends the segment list to include information about the eight closest lights. Shadow feelers – rays that determine whether any polygon lies between the point and the light source – are used to exclude lights that do not light that segment. The system is generally easy to implement, although some unexplained hardware errors occurred. It could possibly be extended to include refraction and reflection, at the cost of view independence.

 

 

This concludes our discussion of the VLS system. We move on to our custom application-computing solution, the BSP system.

5         Chapter 5: BSP System

 

The Binary Space Partition (BSP) system is the custom solution we chose to implement because of the wealth of customisation possibilities it offers.

 

We follow much the same approach in this discussion as we followed with the VLS solution. We start by giving an overview of our solution, and discuss the standard algorithms involved. Following that, we discuss adapting our solution to handle variable intensities and consider how to add shadows. Additionally, we discuss some time optimisations. In our discussion of the final system we consider the problems we encountered and outline the class structure. Lastly, we discuss possible extensions.

 

 

5.1      Overview

 

The steps involved in creating a scene using Binary Space Partition Trees are outlined below.

 

During set-up:

-         Enter all polygons and lights.

-         Sort polygons into a BSP Tree.

-         Divide polygons into segments.

-         For each light, clone the BSP Tree:

o       Modify colour values to reflect effect of that light at full intensity.

o       Use Painter’s algorithm to indicate shadows, and modify colour values.

 

At run time:

-         Scale light BSP Trees by light intensities.

-         Combine all light trees and render.

 

The lighting model used is described in section 5.3.1; a BSP Tree is used for hidden surface removal, and the Painter’s algorithm is used to find shadows.

 

 

5.2      Standard Algorithms

 

The most substantial data structure used in the implementation of this system is the Binary Space Partition (BSP) tree. The BSP Tree is a hierarchical structure of recursive subdivisions: a three-dimensional space is divided in two by a plane (a two-dimensional space by a line, and so on), and the resulting tree is a powerful sorting and classification structure.

 

BSP Trees provide an elegant method for hidden surface removal. Polygons can be returned in a back to front order to be used in a Painter’s algorithm (see section 2.2.2 above). Polygons can also be returned in a front-to-back order for scan-line algorithms (each pixel may only be written to once). BSP Trees can also be used to compute analytic visibility, accelerate ray tracing, perform collision detection and compute shadows.  [Wade et al.]

 

5.2.1      Constructing a BSP Tree

Constructing a BSP Tree consists of

-                     Selecting a partitioning plane

-                     Partitioning the polygons with the plane

-                     Recursing with both sets.

 

The stopping point for recursion is application specific: either a maximum depth, or a maximum number of polygons in each leaf node [Wade et al.].

 

For this system, the first polygon of the set is used as the partitioning plane. For the most effective use of the tree, recursion ends only when all polygons are stored within internal nodes.

 

To construct a tree:

    Initialise a node with the first polygon as the partitioning plane

    Add the first polygon to the coincident polygon list

    For all remaining polygons

        If the polygon is coincident with the plane

            Add it to the coincident polygon list

        Else if the polygon lies completely in front of the plane

            Add it to the front list

        Else if the polygon lies completely behind the plane

            Add it to the back list

        Else // the polygon spans the plane

            Split the polygon

            Add the front piece to the front list

            Add the back piece to the back list

    Create the front node with the front list

    Create the back node with the back list

 

This algorithm does a large amount of list handling, and ignores the end case. In practice, it is easier to create a node with only the partitioning polygon, and default front and back nodes to null. Each subsequent polygon is then added to the tree in a separate method.
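
This practical approach can be sketched in Java as follows. The geometry types are assumed minimal interfaces, not the classes from the actual system.

import java.util.ArrayList;
import java.util.List;

// Assumed minimal geometry types.
enum Side { COINCIDENT, IN_FRONT, BEHIND, SPANNING }

interface Polygon { Plane plane(); }

interface Plane {
    Side classify(Polygon p);     // which side of this plane p lies on
    Polygon[] split(Polygon p);   // returns { frontPiece, backPiece }
}

class BspNode {
    Plane partition;                                    // plane of the first polygon
    List<Polygon> coincident = new ArrayList<Polygon>();
    BspNode front, back;                                // null until needed

    BspNode(Polygon first) {
        partition = first.plane();
        coincident.add(first);
    }

    // Add one polygon at a time, creating child nodes on demand.
    void add(Polygon p) {
        switch (partition.classify(p)) {
            case COINCIDENT:
                coincident.add(p);
                break;
            case IN_FRONT:
                if (front == null) front = new BspNode(p); else front.add(p);
                break;
            case BEHIND:
                if (back == null) back = new BspNode(p); else back.add(p);
                break;
            case SPANNING:
                Polygon[] pieces = partition.split(p);
                if (front == null) front = new BspNode(pieces[0]); else front.add(pieces[0]);
                if (back == null) back = new BspNode(pieces[1]); else back.add(pieces[1]);
                break;
        }
    }
}

Defaulting the children to null and creating them lazily avoids the list handling of the batch algorithm above.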

 

To split the polygon across the plane:

    For each vertex

        If the vertex lies in front of the plane

            Add it to the front piece

        Else

            Add it to the back piece

        If this vertex and the next lie on opposite sides of the plane

            Find the intersection of the edge joining them with the plane

            Add the intersection to the front piece

            Add the intersection to the back piece

 

If the original polygon is specified cyclically, then so are the two resulting polygons.
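
A minimal Java sketch of the splitting loop follows, assuming a plane type with a signed distance test and an edge–plane intersection helper; the names are illustrative.

import java.util.List;
import javax.vecmath.Point3d;

interface SplittingPlane {
    double signedDistance(Point3d p);          // positive in front of the plane
    Point3d intersect(Point3d a, Point3d b);   // where edge a-b crosses the plane
}

class PolygonSplitter {
    // Vertices are visited cyclically, so both pieces come out cyclic too.
    static void split(Point3d[] verts, SplittingPlane plane,
                      List<Point3d> frontPiece, List<Point3d> backPiece) {
        for (int i = 0; i < verts.length; i++) {
            Point3d current = verts[i];
            Point3d next = verts[(i + 1) % verts.length];
            double dCurrent = plane.signedDistance(current);
            double dNext = plane.signedDistance(next);

            // Assign the current vertex to one side.
            if (dCurrent >= 0) frontPiece.add(current);
            else backPiece.add(current);

            // If the edge to the next vertex crosses the plane, the
            // intersection point belongs to both pieces.
            if ((dCurrent >= 0) != (dNext >= 0)) {
                Point3d cut = plane.intersect(current, next);
                frontPiece.add(cut);
                backPiece.add(cut);
            }
        }
    }
}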


 

5.2.2      Traversing a BSP Tree

Polygons can be returned in a back to front or front to back order from a certain viewpoint. The following algorithm is for returning polygons in a back to front order:

 

If the viewpoint is in front of the plane

Return polygons from back node

Return polygons from this node

Return polygons from front node

Else

Return polygons from front node

Return polygons from this node

Return polygons from back node

 

The algorithm for front to back order simply reverses this if statement.
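
In Java, the traversal might look like the sketch below, added to the BspNode sketch of section 5.2.1; inFront(Point3d) is an assumed point-side test on the partitioning plane. Swapping which child is visited first yields the front-to-back order.

// Append the polygons of this subtree to 'out' in back-to-front order.
void backToFront(Point3d viewpoint, List<Polygon> out) {
    boolean viewerInFront = partition.inFront(viewpoint);
    BspNode farSide = viewerInFront ? back : front;
    BspNode nearSide = viewerInFront ? front : back;

    if (farSide != null) farSide.backToFront(viewpoint, out);
    out.addAll(coincident);                   // polygons on the plane itself
    if (nearSide != null) nearSide.backToFront(viewpoint, out);
}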

 

 

5.3      Adapting to Handle Variable Intensities

 

5.3.1      Lighting Model

A very simple lighting model is set up to light the scene. Lights are assumed to be spotlights, defined by a position, a direction, two angles – the full radius, within which the light is at full intensity, and the null radius, outside which the light is at null intensity – and a colour.

 

The lighting equation:

    If the point lies outside the null cone

        intensity = 0

    Else if the point lies within the null cone but outside the full cone

        intensity = (angle between the vertex and the null edge) / (angle between the full edge and the null edge)

    Else // the point lies within the full cone

        intensity = 1

    L = vector from the vertex to the light source

    N = normal of the surface

    Dist = distance between the point and the light source

    Modifier = intensity * reflection coefficient * (N dot L) / (Dist / arbitrary constant)
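
A minimal Java sketch of this model is given below, using javax.vecmath vectors. The Spotlight holder and the attenuation constant are assumptions; the write-up specifies only "an arbitrary constant".

import javax.vecmath.Point3d;
import javax.vecmath.Vector3d;

// Hypothetical holder for the spotlight parameters described above.
class Spotlight {
    Point3d position;
    Vector3d direction;    // normalised spot direction
    double fullRadius;     // angle of full intensity, in radians
    double nullRadius;     // angle beyond which intensity is zero, in radians
}

class LightingModel {
    static final double ATTENUATION_CONSTANT = 100.0;   // assumed value

    static double modifier(Point3d vertex, Vector3d normal, Spotlight light,
                           double reflectionCoefficient) {
        Vector3d toLight = new Vector3d();
        toLight.sub(light.position, vertex);             // L
        double dist = toLight.length();                  // Dist
        toLight.normalize();

        // Angle between the spot direction and the ray to the vertex.
        Vector3d toVertex = new Vector3d(toLight);
        toVertex.negate();
        double angle = light.direction.angle(toVertex);

        double intensity;
        if (angle >= light.nullRadius) {
            intensity = 0.0;                             // outside the null cone
        } else if (angle >= light.fullRadius) {
            // Linear ramp between the full edge and the null edge.
            intensity = (light.nullRadius - angle)
                      / (light.nullRadius - light.fullRadius);
        } else {
            intensity = 1.0;                             // inside the full cone
        }

        double nDotL = Math.max(0.0, normal.dot(toLight));
        return intensity * reflectionCoefficient * nDotL
             / (dist / ATTENUATION_CONSTANT);
    }
}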

 

5.3.2      Light BSP Tree

The BSP Tree that has been generated for the scene is cloned for every light in the scene. The effect of this light at full intensity on each segment in the scene is calculated and stored in the tree. Shadows are then added (see below). At run time, this tree is then scaled by the intensity of the light. All light trees are then combined, and the scene is rendered.
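
The run-time step reduces to a scale-and-accumulate over per-segment colours. The sketch below assumes each light tree has already been reduced to an array of segment colours, and that the system's Colour class offers the hypothetical helpers scaled and add.

import java.util.List;

class SceneCombiner {
    static Colour[] combine(List<Colour[]> lightColours, double[] intensities) {
        int n = lightColours.get(0).length;
        Colour[] result = new Colour[n];
        for (int i = 0; i < n; i++) result[i] = new Colour(0, 0, 0);

        for (int l = 0; l < lightColours.size(); l++) {
            Colour[] tree = lightColours.get(l);
            for (int i = 0; i < n; i++) {
                // Scale by the light's current intensity and accumulate.
                result[i].add(tree[i].scaled(intensities[l]));
            }
        }
        return result;
    }
}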

 

 

5.4      Adding Shadows

 

The next step in our implementation is the inclusion of shadows. Our implementation is outlined as follows: the polygons are retrieved from the BSP Tree in a back to front order, with the light treated as the viewpoint. Each polygon is then projected from the light to the plane of each polygon behind it. The overlap indicates the shadow cast by the front polygon on the back polygon. The colour values of the segments within this overlap are set to zero, for that particular light BSP Tree.

 

5.4.1      Project To Plane

Various algorithms were attempted for projecting the polygon to the plane of the shadowed polygon. The first attempt makes extensive use of quaternions. The area shadowed is approximated as the area covered by dropping the polygon perpendicular to the target plane:

 

Move polygon such that target point becomes the origin

For each point in original polygon

Rotate point such that target plane becomes parallel to x-y plane // using quaternions

Project point onto x-y plane (set z values to zero)

Rotate back

Move polygon back

 

The second attempt produced exactly the same results, but somewhat more elegantly:

 

For each point on original polygon

Generate ray from the point along the polygon normal

Find intersection between ray and target plane

Set corresponding point of target polygon as intersection

 

This algorithm, however, can easily be modified to eliminate the approximation. By generating the ray from the point to the light source (instead of along the polygon normal), the projection can be made accurate. This is the final algorithm used in the system.
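
A minimal Java sketch of the final projection follows; the ray–plane intersection helper is assumed.

import javax.vecmath.Point3d;
import javax.vecmath.Vector3d;

interface TargetPlane {
    Point3d intersect(Point3d origin, Vector3d direction);   // ray-plane test
}

class ShadowProjector {
    // Project each vertex of the occluding polygon along the line from
    // the light through that vertex onto the shadowed polygon's plane.
    static Point3d[] project(Point3d[] occluder, Point3d lightPos, TargetPlane plane) {
        Point3d[] shadow = new Point3d[occluder.length];
        for (int i = 0; i < occluder.length; i++) {
            Vector3d direction = new Vector3d();
            direction.sub(occluder[i], lightPos);   // from the light to the vertex
            shadow[i] = plane.intersect(lightPos, direction);
        }
        return shadow;
    }
}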

 

5.4.2      Finding Overlap Vector

The polygon being shadowed is passed the area that is shadowed. It then iterates through each segment, testing whether that segment falls within the shadowed area (in fact, each vertex is tested separately). Originally, an attempt was made to pass only the overlap between the projected polygon and the shadowed polygon. The major difficulty occurred in trying to generate an overlap polygon that was cyclic in definition. The algorithm as it stood before it was finally abandoned was:

 

For each edge (a, b) of first polygon

For each edge (c,d) of second polygon

If the two lines intersect and at least one point lies within the other polygon

If one point of each line lies within the other polygon //that is, corners overlap

If a not in overlap vector

add a

Else if b not in overlap vector

add b

If intersection not in overlap vector

add intersection

If c not in overlap vector

add c

Else if d not in overlap vector

add d.

Else if both a and b lie within other polygon

 

And so on for another hundred lines, but it still did not result in a cyclic definition of points in all cases. However, it was found that since the algorithm to create shadows tests only points that are within that polygon, if the entire shadowing polygon is passed, the same effect is achieved.

 

 

5.5      Some Time Optimisations

 

5.5.1      Converting Light BSP Trees into PolygonLists

Rather than storing and transferring the full BSP Tree for each light, each light BSP Tree is immediately converted to a vector of polygons in a predetermined order. This list takes less time to pass between classes than the full BSP Tree.

 

5.5.2      Fast Clone

When a polygon is scaled by the intensity of a light, the values of the polygon are altered. The polygon must therefore be cloned as many times as there are active lights every frame. This is a very time-intensive process, and therefore a modified clone technique is used that provides the required functionality but does not take as long.

 

The following modifications are used:

1.                  Vertex positions of the polygon are not cloned, but passed as references.

2.                  In the segment list, the elements in the triangle list are passed as references.

3.                  In the segment list, in the elements of the vertex list, positions are passed as references.

 

In effect, only the colour information of each polygon is in fact cloned. This speeds up the process significantly.
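
A minimal sketch of the fast clone, assuming illustrative field names and a copy constructor on the system's Colour class:

import javax.vecmath.Point3d;

class LitPolygon {
    Point3d[] vertices;   // shared by reference; never modified per frame
    Colour[] colours;     // per-vertex colours, rescaled every frame

    LitPolygon fastClone() {
        LitPolygon copy = new LitPolygon();
        copy.vertices = this.vertices;                 // reference, not a clone
        copy.colours = new Colour[colours.length];
        for (int i = 0; i < colours.length; i++) {
            copy.colours[i] = new Colour(colours[i]);  // deep copy of colour only
        }
        return copy;
    }
}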

 

5.5.3      Not Calculating below an Intensity

Lights whose intensities are very low (taken as less than 5%) will have no significant effect on the scene. By not scaling and adding the light BSP Tree for these lights, the frame rate of a sparsely lit scene is dramatically improved. Obviously, if all lights are in use, no speed up occurs.

 

 

5.6      Final System

 

5.6.1      Problems Encountered

When a polygon is passed to a node, we first have to check whether the node is null and must be initialised, or whether it already exists. Because polygons are passed down frequently in various parts of the construction code, this functionality is removed to a separate method. In the interests of code protection, however, Java does not allow direct manipulation of references: reassigning a reference passed as a parameter has no effect on the caller's variable. A workaround involving an integer key and a switch statement had to be used.
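
A minimal sketch of the workaround, reusing the BspNode sketch of section 5.2.1: the parent node plus an integer key select which child reference to initialise or extend. The key values are illustrative.

class NodeHelper {
    static final int FRONT = 0;
    static final int BACK = 1;

    // Java cannot reassign the caller's reference through a parameter,
    // so the caller passes the parent and a key instead of the child.
    static void addToChild(BspNode parent, int key, Polygon p) {
        switch (key) {
            case FRONT:
                if (parent.front == null) parent.front = new BspNode(p);
                else parent.front.add(p);
                break;
            case BACK:
                if (parent.back == null) parent.back = new BspNode(p);
                else parent.back.add(p);
                break;
        }
    }
}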

 

Because of floating point inaccuracies, polygons that just touch are sometimes subdivided into the full polygon and a second polygon of zero width. Polygons that are projected to another plane and then back again are also slightly misplaced. A partial solution is to limit the working accuracy to significantly below the available accuracy: all double values are rounded off to four decimal places where appropriate, and if a point is very close to the plane, it is considered to be in the plane.
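
A minimal sketch of the tolerance handling; the epsilon value matches the four-decimal rounding described above.

class Precision {
    static final double EPSILON = 1e-4;

    static double round(double value) {
        return Math.round(value * 10000.0) / 10000.0;   // four decimal places
    }

    static boolean onPlane(double signedDistance) {
        return Math.abs(signedDistance) < EPSILON;      // treat as coincident
    }
}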

 

Java passes objects to methods by reference; only simple primitives such as integers and doubles are passed by value. Therefore, in the instances where values have to be changed temporarily, it is necessary to pass a clone of the object. To implement cloning, a new object has to be created, and the values of the original have to be entered into the new object. Of course, these values themselves need to be cloned in turn. Some care had to be taken to ensure each ‘layer’ is cloned correctly: we create an object, clone it, change the values of the clone, and then compare the two to ensure the original remains unchanged.

 

The system is very memory intensive, and cannot handle very complex scenes on an average user's computer. The original goal of one hundred lights cannot be met. However, we have successfully rendered a simple scene with up to fifty lights.


5.6.2      Class Diagram

Figure 5-1 BSP Class Structure

5.7      Possible Extensions

 

To implement the other lighting effects, the view-independent nature of the system must be sacrificed. If the position of the viewer is passed to the segment list in the method initialiseLitSegments, the lighting model could easily be adjusted to handle specular highlights.

 

Transparency, without any refraction, can be handled by adding alpha values to the Colour class. The shadowing method could be modified to allow for partial shadows – instead of reducing those values to zero, shadowed segments could be scaled by an appropriate factor. To create refraction, the polygons behind the transparent object (from the point of view of the user) would have to be shifted by the appropriate amount and then blended with the object colour. This would have to be done after all the polygons are entered into the scene, but before lighting is considered.

 

Figure 5-2 Reflection

 

Reflections could be handled in a similar way to shadowing. Along the correct angle (see figure 5-2), polygons could be projected onto the plane of the reflective polygon. Those that overlap can be blended with the existing polygon at a suitable ratio, depending on the reflectiveness of the material.

 

 

5.8      Review

 

The BSP System uses a BSP structure to sort the polygons, and then uses the Painter’s algorithm to set the appropriate segments as shadowed. Some time optimisations, such as a modified clone method and ignoring low intensity lights, are used to speed up the system. Some complications arose from the restrictions and limitations of Java, but these could generally be worked around. The system could possibly be extended to include highlights, refraction and reflection, at the cost of view-independence.

 

This concludes our discussion of the two systems we chose to implement, the VLS and BSP systems. We now take a step back from these low level discussions and critically consider the results of our project.


 

6         Chapter 6: Results

 

This chapter discusses the results of our work on this project. We contrast our two systems when we investigate the success of the shading and shadowing techniques employed. We then evaluate the time taken to render a scene, and give a few examples of scenes rendered by the BSP system.

 

 

6.1      Shading

 

To test whether the different faces of an object are rendered correctly, both systems were tested with a box. To avoid floating point errors, the box floats slightly above the surface of the plane, rather than sitting directly on top of it.


 

6.1.1      VLS

 

Figure 6-1 VLS Shading

 

The box in figure 6-1 is rendered correctly – surfaces perpendicular to the direction of the light (the floor and the top of the box) are brightly lit, while the surface parallel to the direction of the light (the front face of the box) is dimly lit.


 

6.1.2      BSP

 

Figure 6-2 BSP Shading

 

In figure 6-2, we can see that the box is rendered correctly, in the same way as the VLS system, as expected.

 

 

6.2      Shadowing

 

This scene consists simply of one polygon raised above a plane, with differently coloured light sources shining from different directions. It is used to test whether shadowing works correctly.

6.2.1      VLS

 

Figure 6-3 VLS Without Shadows

 

First, we show what the scene looks like without shadowing. In figure 6-3 we can see that areas of low intensity lighting are not rendered correctly – it is clear which segments have not employed which lights, for example, on the top of the polygon.

 

Figure 6-4 VLS With Shadows

 

In figure 6-4 shadowing is in evidence – the full shadow slanting to the left and the blue shadow slanting to the right. Only a faint pink shadow occurs, because few of the unshadowed segments in that area employ that light anyway. The shadow is irregular, but the quality could be improved (at the cost of extra time) by segmenting the polygons further.


 

6.2.2      BSP

 

Figure 6-5 BSP Without Shadows

 

Again, we demonstrate what the scene looks like without shadows. In figure 6-5 we can see that the lighting is smooth, with low intensity areas lit correctly.

 

Figure 6-6 BSP With Shadows

 

Shadowing occurs correctly in figure 6-6 – the full shadow slanting left, the blue shadow slanting right, and the pink shadow slanting forwards. Edges are smooth and sharp, as desired.

 


 

6.3      Time Taken

 

In this section we discuss the time needed to render a scene. Our discussion starts with the time taken to initialise the system, and progresses to the frame rate achieved by our systems.

 

6.3.1      Start-up Time

The VLS System takes ten seconds to start up. The BSP System takes three minutes and six seconds to start up, which is more than eighteen times longer than the VLS System. The VLS system's scene may be altered at reasonable time cost, but changing the scene in the BSP system is prohibitively expensive.

 

6.3.2      Frame Rate

 

Figure 6-7 Frame Rate

 

The VLS System's frame time advances fairly regularly – lights 5 and 14 are the two centre lights, which affect proportionally more segments than the other lights, explaining the jumps. Each extra light adds approximately 0.12 seconds per frame. By extrapolation, 100 lights would take 12.5 seconds per frame.

 

The BSP System starts at a higher frame rate than the VLS system, but is more affected by an increasing number of lights: each extra light adds approximately 0.15 seconds per frame, so 100 lights would take 15 seconds per frame.

 

Both time dependencies are roughly linear, as expected.

 


 

6.4      Some BSP Scenes

 

Figure 6-8 BSP Intersecting Polygons

 

Figure 6-8 demonstrates intersecting polygons. The polygons are lit by ten sidelights, blue from the left and purple from the right.

 

Figure 6-9 BSP Multi-Coloured Polygons

 

Figure 6-9 demonstrates a multi-coloured polygon lying at a slant. It is lit by twenty-five white lights from directly above.

 

 

6.5      Review

 

Both systems handle the requirements of intensity and shading correctly. Shadowing is handled with limited success by the VLS system, and with more success by the BSP system. The start-up time of the VLS System is significantly less than that of the BSP System, but their frame rates are of a comparable order.

 

Having completed our discussion of the findings, we now draw some conclusions about the project.


7         Chapter 7: Conclusion

 

 

7.1      Rendering a Lit Scene

 

To represent a 3-D scene, the objects must be placed in their correct positions, rendered in the correct order to a two-dimensional screen, and lit correctly. There are six major categories of lighting effects – the intensity of a light, shading, highlights, shadowing, refractions and reflections. A full solution would implement all these effects, but if only matt, opaque objects are considered, then highlights, refractions and reflections can be ignored.

 

 

7.2      Multiple Light Source Solutions

 

The range of solutions to these problems is vast. There is a trade-off not only in quality versus speed, but also in the type of quality – real-time radiosity offers colour bleeding but no specular highlights, while light maps potentially offer both, but no reflections or refractions. Some solutions, such as light maps and the BSP solution described here, are static. Virtualised light sources is the most commonly used solution, and is implemented here as a standard for comparison. A better quality image can be created with an application-computing solution, and a system using Binary Space Partition Trees to create shadows is implemented. The optimum solution will clearly be application specific.

 

 

7.3      Comparison of Systems Implemented

 

In comparing the two systems, we are presented with the classic quality versus speed trade-off. The BSP system provides better quality and more accurate shadowing, but the VLS system is marginally faster. Neither system is truly real-time, but a few seconds to render a scene would not be too much of a delay for a user whose only other option is ray-tracing.

 

 

7.4      Future Research

 

Future projects could implement the multiple light solutions not investigated here – lightmaps, real-time radiosity and vertex ray tracing.

 

The current systems could be expanded to deal with the other lighting concerns – specular highlights, semi-transparent objects and reflections, as described in sections 4.6 and 5.7.


References

 

[Andrews]

Andrews, Jeff. Light Based Tessellation: Application Note, http://www.gamedev.net/reference/articles/article1504.asp.

[Anger]

Anger, Steve. POVray raw triangle format, http://astronomy.swin.edu.au/~pbourke/geomformats/pov/povraw.html, 1996.

[Bangay]

Bangay, Shaun. Computer Graphics Course Notes, 2001.

[Hall]

Hall, Roy. Illumination and Color in Computer Generated Imagery. New York: Springer-Verlag, 1988.

[Hearn & Baker]

Hearn, D. & Baker, M.P. Computer Graphics. New Jersey: Prentice-Hall, 1986.

[Hill]

Hill, F.S. Computer Graphics. New York: Macmillan Publishing Company, 1990.

[Johansson]

Johansson, Tobias. Lightmaps, http://polygone.flipcode.com/tut_lightmap.htm, 2000.

[Kilgard]

Kilgard, Mark. Virtualized OpenGL Light Sources for Efficient Multi-Light Source Lighting, http://www.opengl.org/developers/code/mjktips/VirtualizedLights/VirtualizedLights.html.

[Vegdahl]

Vegdahl. Realtime Radiosity, http://www.gamedev.net/reference/programming/features/rtradiosity/.

[Wade et al.]

Wade et al. Binary Space Partitioning Trees FAQ, http://www.faqs.org/faqs/graphics/bsptree-faq.

[Watt & Watt]

Watt, Alan & Watt, Mark. Advanced Animation and Rendering Techniques: Theory and Practice. New York: ACM Press, 1992.

[Watt]

Watt, Alan. Fundamentals of Three-Dimensional Computer Graphics. London: Addison-Wesley Publishing Company, 1989.

[Woo et al.]

Woo, M., Neider, J., Davis, T. & Shreiner, D. OpenGL Programming Guide: The Official Guide to Learning OpenGL, Version 1.2, 3rd Ed. Indianapolis: Silicon Graphics, Inc., 1999.

 

Appendix A: Frame Rate Data

 

Frame Rate (seconds per frame)

Number of Lights    VLS System [s]    BSP System [s]
       1                 0.7               0.4
       2                 0.8               0.5
       3                 0.9               0.6
       4                 1.0               0.9
       5                 1.3               1.1
       6                 1.4               1.2
       7                 1.5               1.4
       8                 1.6               1.5
       9                 1.7               1.6
      10                 1.8               1.8
      11                 1.9               2.0
      12                 2.0               2.1
      13                 2.1               2.2
      14                 2.3               2.4
      15                 2.4               2.5
      16                 2.6               2.7
      17                 2.7               2.9
      18                 2.8               3.0