VRPhysicsEnvironment – A Framework For Collision Detection and Physical Modelling in a Virtual Environment

submitted in partial fulfilment of the requirements for the degree of Bachelor of Science (Honours)

by Colin Dembovsky


Abstract

Physical modelling in Virtual Reality is a new and expanding field. Most such applications are either graphically rich but offer little usability and poor object interaction, or physically correct but visually unappealing.

We design a framework, VRPhysicsEnvironment, that is at once graphically rich, physically correct and rich in object interactions.

In order to achieve this framework, we identify three major components to such a project: collision detection, collision handling and a hierarchy of virtual objects.

After presenting the current research on these topics, we show how we use an existing algorithm to implement fast collision detection using oriented bounding boxes. We enclose each object inside a box of arbitrary orientation. By searching for planes which separate these boxes (or establishing their absence) we can determine quickly when two boxes collide, and hence when the objects that these boxes enclose collide.

To add an object to the environment, we would otherwise have to add an interaction function for this object and every other existing object. Realising that this makes extending the environment slow and arduous, we create a generalised collision handling function that is independent of object type – adding objects therefore no longer requires the addition of these object interaction functions.

The generalised collision handling function treats all collisions as changes in the velocities of, and forces on, the two objects involved. By exchanging momentum and force, it is possible to model collisions accurately.

We create an application that allows users to place objects in their initial states and then run and watch the simulation in real time. However, the user has no control over any of the objects during the simulation – to address this we create a second application, virtual table tennis.

We add two user controlled objects to the environment – a table tennis bat and a hand.

After documenting the algorithms and implementations we use, we highlight some of the major problems inherent in the system, such as the difficulty of modelling continuous events in discrete time intervals and the impact this has on the system. We also show areas in which the system can be improved and extended in future work.


CR categories: I.6.8 [Computer Graphics]: Types of Simulation - Discrete event; I.3.5 [Computer Graphics]: Computational Geometry and Object Modelling - Physically based modelling

Keywords: collision detection, generic collision handling, physical modelling, oriented bounding box


Acknowledgements

Shaun Bangay – supervisor and mentor; thanks for everything – especially reading this dissertation so often!

Holger Winnemöller – thanks for all the discussions and the late nights at the lab…

Matt Mundell – thanks for all the criticism…

VRSIG – thanks to all you guys for the great ideas (especially table tennis!)


Table of Contents

1. Introduction 10
   1.1 Physical Modelling 10
   1.2 Document Structure 11
2. Literature Survey - Related Work 13
   2.1 Bounding Volumes 13
      2.1.1 Bounding Spheres 14
      2.1.2 Axis Aligned Bounding Boxes (AABB's) 15
      2.1.3 Oriented Bounding Boxes (OBB's) 16
   2.2 Collision Detection Algorithms and Packages 17
      2.2.1 RAPID 17
      2.2.2 PQP 18
      2.2.3 I-Collide 18
      2.2.4 V-Collide 19
      2.2.5 SOLID 19
      2.2.6 Choosing the most suitable package 20
   2.3 Physical Simulation 20
      2.3.1 AERO 21
         2.3.1.1 Collision detection in AERO 22
         2.3.1.2 Collision handling in AERO 22
         2.3.1.3 Gravity and Air Friction in AERO 23
         2.3.1.4 AERO and this Project 23
      2.3.2 Isaac 24
         2.3.2.1 Isaac's Architecture 24
            2.3.2.1.1 The Simulation Core 24
            2.3.2.1.2 Dynamics 25
            2.3.2.1.3 Geometry 26
            2.3.2.1.4 Control 27
   2.4 Other Animation Software 27
      2.4.1 JACK 28
   2.5 CoRgi 28
      2.5.1 CoRgi Hierarchy 28
   2.6 Summary 29
3. Collision Detection 30
   3.1 A First Attempt 30
   3.2 Box or Sphere? 31
   3.3 Fitting a Box to an Object 32
   3.4 The Old CoRgi 32
   3.5 Collision Detection in the New CoRgi 32
      3.5.1 The Theory 33
      3.5.2 The Practice 36
   3.6 Summary 37
4. Motion 38
   4.1 The Flow of Events in the Environment 38
   4.2 The Thread Routine 39
      4.2.1 The Algorithm 39
   4.3 Dynamics 40
      4.3.1 The Algorithm 41
   4.4 Kinematics 41
      4.4.1 The Algorithm 42
   4.5 Wrapping Up the Motion Code 43
   4.6 Summary 44
5. VRPhysicsEntities – A Taxonomy 45
   5.1 Object Orientation 45
   5.2 Virtual Objects 45
   5.3 The Object Hierarchy 46
   5.4 VRPhysicsEntity – the Base Class 49
      5.4.1 The Attributes 50
      5.4.2 Creating Objects 50
   5.5 VRPhysicsWall 51
   5.6 VRPhysicsBall 51
   5.7 VRPhysicsConveyor 51
   5.8 VRPhysicsAGPad 51
   5.9 VRPhysicsEnvironment 51
      5.9.1 Inside the VRPhysicsEnvironment 52
   5.10 The Applications 52
   5.11 Summary 53
6. Collision Handling 54
   6.1 Generalisation 54
   6.2 The Algorithm 55
   6.3 GetMomentum 57
      6.3.1 VRPhysicsAGPad and VRPhysicsWall 57
      6.3.2 VRPhysicsBall 57
      6.3.3 VRPhysicsConveyor 57
   6.4 GiveMomentum 58
      6.4.1 VRPhysicsBall 59
      6.4.2 The Rest 59
   6.5 GetPhysForce 60
   6.6 GivePhysForce 60
   6.7 Summary 60
7. Virtual Table Tennis 61
   7.1 The Scenario 61
   7.2 VRVelocityPolhemusInputDevice 62
   7.3 VRTTBatPhysActor 64
   7.4 VRPhysicsBall 65
   7.5 VRTTHand 67
   7.6 Summary 67
8. Results - The Systems In Action 69
   8.1 VRPhysicsApp in the Flesh 69
   8.2 VRTTApp in the Flesh 73
   8.3 Where VRPhysicsEnvironment Succeeds 76
      8.3.1 Visually Appealing 76
      8.3.2 Real Time 76
      8.3.3 Physically Correct 77
   8.4 Summary 77
9. Problems and Extensibility 78
   9.1 Discrete Time Intervals 78
      9.1.1 The Problem 78
      9.1.2 The Solutions 79
         9.1.2.1 The Geometric Approach 79
         9.1.2.2 The Analytic Approach 80
   9.2 Gaining Energy 82
   9.3 Problems With User Controlled Objects 82
   9.4 Extensibility 83
      9.4.1 Modifications 83
         9.4.1.1 Adding New Objects 83
         9.4.1.2 Improving Collision Detection 84
      9.4.2 A GUI Addition 85
   9.5 Summary 86
10. Conclusion 88
   10.1 Future Work 89
11. List of Figures 90
12. References 92
13. Appendices 95
   13.1 GetAllCollision Method (section 3.5.2) 95
   13.2 ThreadRoutine (section 4.2.1) 96
   13.3 Dynamics (section 4.3.1) 96
   13.4 Kinematics (section 4.4.1) 96
   13.5 DealWithCollision (section 6.2) 97
   13.6 VRPhysicsConveyor::GetMomentum (section 6.3.3) 98
   13.7 VRPhysicsBall::GiveMomentum (section 6.4.1) 98
   13.8 VRVelocityPolhemusInputActor::HandleData (section 7.2) 99
   13.9 VRPhysicsBall::GiveMomentum (section 7.4) 100
14. Disc Appendix 101


1. Introduction

Virtual Reality (VR) is an area of computer science that is only beginning to be harnessed. It can unleash our imaginations, since it allows us to create any environment we wish. However, virtual reality is incredibly computationally intensive, and advanced hardware and software are required to fully immerse a user in an environment. Many existing environments are graphically rich, but objects in them do not obey simple laws such as gravity; again, achieving such effects accurately involves a great deal of calculation. This project sets up a framework of objects which obey the laws of physics in a virtual environment. Solid objects should not pass through one another, and objects need to ‘display mass’ by falling under gravity and exerting forces on one another. The objects need to display these characteristics in real time; thus any and all algorithms and techniques used need to be fast and robust.

We present our solution to this problem – a framework of physics entities which obey the laws of physics. We show how we implement a fast collision detection algorithm using oriented bounding boxes. Also shown is collision handling and the framework which is designed to be easily extensible.

1.1 Physical Modelling

Physical modelling is a rapidly growing field. Many large and complicated systems can now be accurately modelled using computers. This modelling allows researchers to more fully understand and test systems. For instance, physical modelling is used extensively in such diverse fields as computer aided surgery, heat flow modelling in the pulp and paper industry, the physical modelling of scientific instruments for the purposes of calibration and many other areas. Since computers are becoming faster and cheaper, physical modelling is becoming more real-time than ever before. This means that there is an increasing need for fast, accurate and robust techniques of modelling.

Physical modelling is being integrated more and more into virtual reality. Virtual Reality can be defined as an artificial environment simulated with computer hardware and software presented to the user in such a way that it appears and feels like the user is immersed in the environment [1]. In order for physical modelling to be useful it must be done in an environment that appears realistic to the user. For instance solid objects should not pass through one another.

There are two significant phases to physical modelling in Virtual Reality (VR): first, calculating the position of each object according to the rules of the environment, and secondly, rendering the scene. Each new rendering is called a frame.

This project serves as a pilot for a physical modelling environment in the CoRgi system. We have chosen to focus on the first phase of physical modelling in VR – the calculations required to position each object in each new frame.

The calculation phase of physical modelling can further be subdivided into two distinct domains – the rules governing motion (again subdivided into dynamics and kinematics) and collision detection (which includes the notion of collision handling). These form a significant portion of the work done in this project. The remainder of the work is implicit in the implementation of the environment – namely a framework of objects which behave in a clearly defined way in this physical modelling environment.

The design of this framework needs to be robust enough to allow easy extensibility. It also needs to be designed in such a way that functions such as collision detection and handling are as general as possible – that they are independent of types of objects. This adds to the ease of extensibility.

1.2 Document Structure

The following chapters show how we create the framework of objects for the modelling environment, improve the existing collision detection algorithm in the CoRgi system, and implement generic collision handling and motion of objects. This project is implementation focussed and as such contains much mathematics and code. We then show how to extend the CoRgi system and what improvements could be made to it.

Chapter 2 is a survey of current research in the field of physical modelling. Chapters 3 through 6 show the design and implementation of the VRPhysicsEnvironment and how current research aids our choices and methodology. Chapter 3 presents the theory and practice of the collision detection algorithm we use, while Chapter 4 shows how motion is achieved in virtual reality. Chapter 5 gives a rounded view of the VRPhysicsEnvironment by presenting a taxonomy of the objects in the environment. This overview aids understanding of Chapter 6, which shows how collision handling is implemented. Chapter 7 discusses a different application of VRPhysicsEnvironment – virtual table tennis – in detail to highlight some important concepts and ideas. Chapter 8 shows some screen shots and discusses how well the final implementation of the system works. Chapter 9 then deals with known problems and limitations of the project, as well as possible extensions and improvements and how to go about implementing them. Chapter 10 closes this dissertation with a brief discussion of the project.

2. Literature Survey - Related Work

There is much work being done in the area of physical modelling. Much of this work involves developing fast and accurate collision detection algorithms. These algorithms are varied. This chapter focuses on several different algorithms, one of which we implement in our system.

Although this project is not directly concerned with the rendering of each frame, the visual representation of each object is vital to collision detection. Usually the visual representation of an object in a VR environment is a list of three dimensional (3D) points, or vertices, grouped into polygons. When polygons of any two objects intersect, the objects appear to pass through one another. Hence timely and accurate detection of when polygons overlap (or are about to overlap) is the essence of collision detection.

In this chapter we discuss some key issues prevalent in this project: bounding volumes are introduced and compared; some current research in collision detection is highlighted and some physical simulation concepts are introduced.

2.1 Bounding Volumes

Objects in a VR environment often have large and complicated visual representations; in order to increase the speed of a particular collision detection algorithm, the polygons are grouped and enclosed by a bounding volume that approximates their structure. Bounding volumes can be used in two ways – either as a complete approximation in itself, or as a first test.

When used as an approximation, once two bounding volumes collide the polygons (or objects) they enclose are said to collide. When used as a first test, further tests are performed after determining that two bounding volumes have collided in order to determine which polygons within the bounding volumes (if any) have actually collided. Obviously if the bounding volumes are disjoint (are not colliding) then the polygons they enclose are necessarily disjoint.

The most commonly used bounding volumes are spheres, axis aligned bounding boxes (AABB’s) and oriented bounding boxes (OBB’s); discretely oriented polytopes are more complicated bounding volumes and are not considered here [2]. Furthermore, more than one bounding volume can be used to enclose an object – usually a tree of bounding volumes is built around each object, and clearly a larger number of bounding volumes will more tightly (more accurately) enclose the object’s polygons. The building of these trees is an interesting problem in itself, but it is not considered in this project: we use only one bounding volume per object. The collision detection algorithms work with individual bounding volumes; a tree simply links an object to more than one of them, and once any two bounding volumes are found to intersect, the two objects they ‘belong to’ are colliding. Hence trees are not an essential feature of the system, but could be added later to improve the accuracy of the collision detection.

2.1.1 Bounding Spheres

Bounding spheres are often chosen because of their relative simplicity – a sphere is independent of orientation (that is, a sphere undergoing any rotation about its centre remains the same) and can be represented by a 3D position (the sphere centre) and a radius. This representation makes the calculation of intersecting spheres relatively easy. Consider the following example:

Figure 1 - Two spheres, A and B, can easily be tested for intersection.

Given two spheres, A and B, we can represent them by centres $(x_a, y_a)$ and $(x_b, y_b)$ respectively and radii $r_a$ and $r_b$ respectively. The distance between the centres is given by $d = \sqrt{(x_a - x_b)^2 + (y_a - y_b)^2}$. If this distance is less than the sum of the radii, then the spheres are intersecting. Hence the spheres intersect iff $\sqrt{(x_a - x_b)^2 + (y_a - y_b)^2} \leq r_a + r_b$.
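A minimal C++ sketch of this test (hypothetical names; the 2D case of Figure 1, comparing squared distances to avoid the square root):

#include <cmath>

struct Sphere2D { double x, y, r; };  // centre (x, y) and radius r

// Spheres intersect iff the distance between centres does not
// exceed the sum of the radii. Comparing squared quantities
// avoids computing the square root.
bool spheresIntersect(const Sphere2D& a, const Sphere2D& b)
{
    double dx = a.x - b.x;
    double dy = a.y - b.y;
    double rsum = a.r + b.r;
    return dx * dx + dy * dy <= rsum * rsum;
}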

2.1.2 Axis Aligned Bounding Boxes (AABB’s)

After the bounding sphere, axis aligned bounding boxes are the next simplest form of bounding volume. The boxes are usually represented by a 3D vertex for the origin of the box and three lengths - the sides of the box. All the boxes in the environment are aligned to a fundamental origin with a fixed set of axes – for instance the X, Y and Z axes. Then each side has a vector in the direction of an axis (X, Y or Z) and the length of this vector is the length of the side.

AABB’s are not independent of the orientation of the object enclosed. Consider a cylinder aligned with the y-axis. The AABB would fit the cylinder fairly well in this orientation – if the cylinder were rotated through 45 degrees, the AABB would increase in size considerably.

The calculation required to test whether two AABB’s are intersecting is simple. For example, if we have two boxes, C and D, with lengths cx and cy and dx and dy respectively from their origins (xc, yc) and (xd, yd) (here chosen as the bottom left corner, although this choice is arbitrary as long as it is applied consistently in the system), then the test looks as follows in C++ (see Figure 2):

intermediatecollision = 0;              // assume no collision

if (xd < xc && xc < (xd + dx))          // the corner lies inside D along the x-axis

    if (yd < yc && yc < (yd + dy))      // the corner lies inside D along the y-axis

        intermediatecollision = 1;

Figure 2 - Two AABB's, C and D, must overlap along both axes to intersect.

Note that further tests are necessary – the code segment above only tests whether the bottom left corner of box C is inside box D. Similar segments would be needed to test all four corners; only then can a final decision be made. Figure 3 shows two AABB’s which overlap along the X-direction but not the Y-direction – hence they are disjoint.

Figure 3 - Two AABB's which overlap along only one axis do not intersect.
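In fact, the complete test can be written more compactly as one interval-overlap check per axis; a minimal C++ sketch (hypothetical names, with cx and cy the side lengths of box C as above):

// Boxes C and D intersect iff their extents overlap along both
// axes. This single check replaces the per-corner tests above.
bool aabbIntersect(double xc, double yc, double cx, double cy,
                   double xd, double yd, double dx, double dy)
{
    bool overlapX = (xc < xd + dx) && (xd < xc + cx);
    bool overlapY = (yc < yd + dy) && (yd < yc + cy);
    return overlapX && overlapY;
}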

2.1.3 Oriented Bounding Boxes (OBB’s)

These bounding volumes are similar to AABB’s except that they are not aligned to any particular axis. Hence they need to be represented differently from AABB’s. To represent an OBB we need a 3D point for the centre, a set of box axes and three ‘radii’, or half-lengths, one in each direction of the box axes.

Figure 4 - Representing an OBB.
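A minimal C++ sketch of this representation (type names hypothetical):

#include <array>

struct Vec3 { double x, y, z; };

// An oriented bounding box: a centre, three orthonormal axis
// directions, and a half-length ("radius") along each axis.
struct OBB
{
    Vec3 centre;                  // 3D position of the box centre
    std::array<Vec3, 3> axes;     // unit vectors along the box sides
    std::array<double, 3> radii;  // half-lengths along each axis
};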

The calculation to determine intersecting OBB’s is more complicated than either of the previous two calculations and is explained in detail in Chapter 3, since we chose this bounding volume for our collision detection algorithm.

2.2 Collision Detection Algorithms and Packages

The following sections briefly describe five collision detection packages considered for use in this project. Although none of the packages is used in its entirety, it is worth mentioning the key features of each.

2.2.1 RAPID

RAPID (Robust and Accurate Polygon Interference Detection) is used by S. Gottschalk et al. in the Department of Computer Science at the University of North Carolina as part of a paper entitled “OBBTree: A Hierarchical Structure for Rapid Interference Detection” [3].

The RAPID application programming interface (API) accepts ‘polygon soups’ – it places little restriction on the way polygons are grouped (topological structure) [4]. Typical topological structures are closed objects or meshes. Topological structures have a number of characteristics and may or may not have cracks, holes, self-intersections, and non-generic (e.g. coplanar and collinear) configurations depending on the type of structure [5]. RAPID’s collision detection procedure must be called explicitly with two objects. It then returns a list of polygon pairs where each pair contains an intersecting polygon from each object. If the list is empty, the objects are disjoint.

RAPID makes use of oriented bounding boxes.

RAPID is recommended for environments which have a moderate number of objects and in which the application developer is willing to explicitly call the collision detection procedure. Since this algorithm is implemented in this project, it is examined in more detail in Chapter 3.

2.2.2 PQP

The Proximity Query Package (PQP) is similar to RAPID as far as the API goes, but in addition to a collision detection procedure it also provides a minimum distance procedure (which computes the minimum distance between two objects) and a tolerance verification procedure (which determines whether two objects are closer together or further apart than a tolerance distance) [6]. Again, no specific topological requirements are enforced except that polygons must be triangles.

2.2.3 I-Collide

I-Collide, the predecessor of V-Collide, uses an interactive and exact collision detection algorithm for convex polyhedra [7]. I-Collide makes use of coherence (the property of an environment to change very little between consecutive time steps). The API is almost identical to that of V-Collide, so the client application must tell I-Collide where the objects (polygons) are in world space. I-Collide maintains a list of potential contact pairs which is updated whenever the objects change [8].

2.2.4 V-Collide

V-Collide is an ‘n-body’ processor. This means that once a client application has given V-Collide the positions of the objects (polygons) in world space, it uses a fast sweep-and-prune operation to determine which objects are potentially in contact. The sweep-and-prune method projects each bounding box onto the x, y and z axes, forming one-dimensional intervals along these axes. The intervals are added to lists and the lists are sorted to determine overlapping regions. Two bounding boxes can only be intersecting if their intervals overlap along all three axes. Making use of coherence, the lists are updated at each iteration rather than recalculated from scratch in each frame. For each potential contact, V-Collide calls RAPID to determine whether there actually is an intersection [9]. Hence V-Collide is suited to environments with large numbers of objects where large numbers of collisions are expected [10].
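A minimal sketch of the core of sweep-and-prune along a single axis (hypothetical names; a real implementation keeps the sorted lists between frames to exploit coherence rather than re-sorting):

#include <algorithm>
#include <utility>
#include <vector>

// One projected interval [lo, hi] along an axis for one object.
struct Interval { double lo, hi; int objectId; };

// Report all pairs whose intervals overlap along this axis.
// Pairs must overlap along all three axes to be a potential contact.
std::vector<std::pair<int, int>> overlappingPairs(std::vector<Interval> iv)
{
    std::sort(iv.begin(), iv.end(),
              [](const Interval& a, const Interval& b) { return a.lo < b.lo; });

    std::vector<std::pair<int, int>> pairs;
    for (size_t i = 0; i < iv.size(); ++i)
        // After sorting by lo, interval j overlaps i iff j starts
        // before i ends.
        for (size_t j = i + 1; j < iv.size() && iv[j].lo <= iv[i].hi; ++j)
            pairs.emplace_back(iv[i].objectId, iv[j].objectId);
    return pairs;
}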


2.2.5 SOLID

The Software Library for Interference Detection has a very different API from that of the packages considered so far [11]. Objects need to be represented by primitive shapes (box, cone, cylinder, sphere) or complexes of polytopes (line segments, convex polygons, convex polyhedra). A single shape can be used to instantiate multiple objects. In other words, the objects are created by SOLID, rather than passing in the vertex lists of existing objects as in the packages above. Motion is then specified by translations, rotations and nonuniform scalings of the local coordinate system of each object. Deformations can be achieved by user-defined vertex arrays.

Callback functions are used to define collision response, or handling. Frame coherence is exploited by maintaining a set of pairs of proximate objects via a sweep-and-prune of the axis-aligned bounding boxes (the bounding volume used in SOLID). Further, separating axes for these pairs can be cached [12].

2.2.6 Choosing the most suitable package

For the purposes of collision detection in this project, RAPID was chosen. There are a number of reasons for this choice.

  1. RAPID’s collision detection procedure is the simplest. The collision detection required in this project is simply to determine whether or not two objects are colliding – no further information (such as which specific polygons intersect) is required.

  2. RAPID makes use of OBB’s. We considered OBB’s the most suitable bounding volume for this environment, since most of the objects in the environment can be enclosed more tightly by a single box than by a single sphere. OBB’s were chosen over AABB’s for accuracy and tightness when objects have undergone arbitrary rotations (which occur frequently in the environment).

  3. RAPID is most suited to environments with a moderate number of objects – and the environment we designed is one of these environments.

2.3 Physical Simulation

Computer animation is becoming increasingly important. In the area of mechanics, animation is being used to visualise environments which behave like real systems, and applications exist which allow such modelling. Since this project is a pilot for a complete physical modelling environment, it is useful to consider existing physical modelling applications in order to determine what features are typically implemented. Two such packages are AERO and Isaac.

We use the features of AERO and Isaac to introduce some terms and concepts specific to physical simulation. We also compare them to this project in a later chapter.

2.3.1 AERO

The Animation Editor for Realistic Object motion (AERO) package is an animation system that visualises complex systems that obey the laws of physics. In AERO “a virtual world is simulated in which all defined bodies move according to the operative physical laws” [13, page 1].

AERO operates in three modes – interactive mode, in which scenes are produced, calculated and displayed in real time; precomputed mode, in which complex scenes are calculated before they are displayed; and rendering mode, which produces a sequence of scene files for an external ray tracing program. These scene files are then rendered and the simulation is viewed as a movie.

Objects in AERO need to be created in a scene editor. Various fundamental objects can be created (and combined) – spheres, cylinders, cuboids, planes and fixed points. Objects may be connected by various types of connection – rigid connections, rods, springs, dampers and joints. The user may also specify forces such as torque and acceleration.

AERO is confined to the simulation of rigid bodies, and provides rudimentary air friction. To prevent objects moving through one another, either the impulse of a collision is applied or touching forces are applied. These forces are input into the motion equations just like gravity and friction.

Objects in AERO are specified by a vector from the origin to their centre of mass, and quaternions are used to specify rotations. Each object has a mass, a physical size and material characteristics. Conservation of linear and angular momentum forms the basis of AERO’s equations of motion.

2.3.1.1 Collision detection in AERO

Collisions in AERO are detected by checking all objects in the environment for overlapping regions. All intersecting pairs of objects are placed in a list. AERO implements ten routines to handle the pairwise combinations of the four primitive object types.

Furthermore, Keller et al. specify a collision step size, $t_{col}$, which is the maximum time interval between two collision detection invocations. Since timely detection of collisions is crucial for processing collisions and physical contact, $t_{col}$ must be specified carefully. Obviously, the smaller $t_{col}$ is, the more computationally expensive the environment will be. Conversely, too large a value of $t_{col}$ has negative consequences, such as objects passing through one another (or penetrating too deeply) when the interval of their overlap is smaller than $t_{col}$; this clearly happens more frequently for larger values of $t_{col}$.

2.3.1.2 Collision handling in AERO

Once a collision point p has been determined, two additional vectors are calculated – the collision normal n and the collision velocity v. If v = 0, the point p is a contact point; if v < 0, the objects are separating; and a collision occurs if v > 0.

AERO provides a collision value ε which determines the material behaviour of the colliding objects. If ε = 1, the collision is elastic. For 0 < ε < 1 a partially elastic collision occurs, in which the incident velocities $v_{1i}$ and $v_{2i}$ are distributed to the final velocities $v_{1f}$ and $v_{2f}$. For ε = 0 a non-elastic collision occurs. Collisions can also be dealt with by inserting a stiff spring at the collision point p. Friction is also taken into account in collisions.

2.3.1.3 Gravity and Air Friction in AERO

In order to simulate gravity, a constant acceleration (by default g = 9.81 m/s², though the user may change this in order to simulate gravity on the surface of another planet) is applied everywhere in the negative y-direction. The user may also ‘turn off’ gravity for each object.

Air friction is simulated by applying a force opposing the direction of motion of an object; the magnitude of this force increases with the square of the object’s velocity.

2.3.1.4 AERO and this Project

(The reader may find this section more useful after reading the entire dissertation, but logically it fits in here.)

When comparing AERO and VRPhysicsEnvironment we see the following points:

  1. Only wire frame models may be viewed in real time in AERO. This project allows real-time visualisation of environments, including shading and texture mapping.

  2. Both applications make use of rigid bodies and the physics associated with them. VRPhysicsEnvironment does so because rigid body physics is simpler than the physics of deformable bodies.

  3. Material properties affect simulations in AERO – they have not been implemented in this project. This is because VRPhysicsEnvironment does not implement surface forces such as friction – hence all surfaces in VRPhysicsEnvironment are treated the same even if their appearances are different.

  4. Complex objects need to be made from the four primitives provided in AERO. This project allows the user to input any physical representation – no matter how complex – into the system.

  5. This project does not account for a number of features that AERO does – angular momentum and acceleration, friction, non-elastic collisions and connections.

  6. Collisions in AERO are handled using impulse forces, while collisions in this project are handled via velocity changes.

2.3.2 Isaac

Isaac is still in its infancy, but its architecture has been designed and an initial implementation realised. Isaac is intended to provide simulation support for virtual environments: it is a distributed simulation server that integrates multibody dynamics, geometry and control [14].

Isaac is split into five components:

  1. a simulation core that contains numerical methods and is able to robustly handle constraint changes. Collisions are examples of constraint changes, and they change the underlying equations in this core.

  2. a dynamics module that formulates the motion equations governing the objects and for interfacing with geometry to handle collision and contact dynamics.

  3. a geometry module which is responsible for collision detection and contact analysis. This module also contains a geometric database that manages the global geometric information in a virtual environment to support such operations as proximity queries.

  4. a control module that supports high level specification of motion control as well as scenario and behavioural control (for example, co-ordinating multiple agents).

  5. a task management module that allocates resources, synchronises computations and manages interprocess communication across a set of Isaac server processes.

2.3.2.1 Isaac’s Architecture

Each of the dynamics, geometry and control modules interacts with the simulation core in terms of constraints. The basic motion equations and kinematic constraints are formulated by dynamics and handed to the simulation core. During simulation, the dynamics, control and geometry modules modify this initial set of equations by adding or removing equations as events occurring in the environment warrant.

2.3.2.1.1 The Simulation Core

Handling changes occurring in the environment is crucial to the design of any simulation system. Such changes are signalled in Isaac by events which are handled by changing the set of equations governing the object’s behaviour. For example, two initially disjoint objects may be governed by two independent sets of equations. If the objects come into contact (and do not simply bounce off one another) their equation sets are coupled by a new equation (representing a new kinematic constraint). When they separate, the equation set would again be modified.

The simulation core of Isaac has two major goals: to support efficient constraint changes and to support modularity and a “constraint programming” type of module interaction.

In order to support efficient constraint changes, a variety of equation solving methods are present in the simulation core. These include differential-algebraic equation solvers like MEXX [15] which allow the motions to be integrated over time and allow the simulation core to advance the simulation.

Isaac explicitly uses “constraint programming”. The set of equations which the core solves are viewed as constraints which other modules (dynamics, control and geometry) can manipulate via a simple and well defined constraint programming interface. When events occur, the constraints may be modified, added or removed by the other modules.

An event manager lies within the simulation core. Various Isaac modules can define events by specifying how each event is to be detected and how each is to be resolved. For example, an event could be triggered by a function value passing through zero or by a collision. Events may be resolved by formulating a set of equations to handle a collision or adding equations corresponding to constraint changes.

2.3.2.1.2 Dynamics

This module of Isaac formulates a set of motion equations and provides them to the simulation core. Kinematic constraints (such as physical joints) also need to have equations which this module is responsible for formulating.

When contact constraints (or temporary constraints) are present, the dynamics module interacts with the geometry module to formulate appropriate inequalities and equations. When two objects are in contact, two sets of contact constraint inequalities are used: one for the dynamic constraints and one for the geometric constraints.

For example, consider a block sliding down an inclined table. Suppose (for simplicity) that only one corner of the block is in contact with the table. Then the contact between the point of the block and the face of the table is modelled using

  1. an inequality that constrains the point to be on or above the plane of the face of the table,

  2. a force condition that says that contact remains as long as a force is exerted by the block on the table, and

  3. conditions dictating that contact only holds if the point is within the geometric bounds of the table top.

There are only two ways in which the contact may break. First, a force may pull the block upwards so that it breaks contact by lifting off the face – this is a dynamics event: it occurs because the force applied by the block on the table cannot be maintained. Secondly, the block could slide off the table – this is a geometric event, corresponding to a violation of the third contact condition above.

This example shows why Isaac distinguishes between dynamic and geometric events. Deciding whether or not the force condition is met is a difficult task for the geometry module – hence it is left to dynamics, where this task is straightforward.

2.3.2.1.3 Geometry

Isaac includes geometric support for a virtual environment in its geometry module. This module includes:

  1. the representation of the geometry of the environment,

  2. the determination of mass properties of solid, movable objects,

  3. fast collision detection, and

  4. fast contact analysis.

The geometric view makes a distinction between movable and static objects – static objects do not need to have their masses calculated or have motion equations formulated for them. Movable objects in Isaac are represented with planar polyhedra. Static objects are modelled by surfaces.

All objects in Isaac are rigid, and hence all the inertia matrices of the objects are fixed and precomputed.

Objects in Isaac are enclosed in a convex hull (a bounding volume) and only once objects are sufficiently close to one another is contact analysis performed.

2.3.2.1.4 Control

Isaac makes a distinction between motion control (in which typically joint torques, forces and accelerations or constraints on such properties are specified) and a more high level scenario control. Scenario control includes the co-ordinating, directing and choreographing of the activities of multiple simulated entities.

2.4 Other Animation Software

Most animation packages include support for animating objects. In AERO (and in this project) the objects are all rigid bodies; that is, their shape is unaltered at any time during the simulation. Conversely, deformable bodies may change their shape during the simulation. For example, when a rubber ball bounces off a surface, it deforms (it is squashed) at the moment of impact and later ‘unsquashes’ as it moves away from the surface. Objects may also be animated bodies. If we wished to model a human being, we would need a hierarchy of small objects connected in different ways in order to represent a human body. A hand, for instance, can be simulated by connecting digits and a thumb to a palm by connections that allow only certain movements – the digits should not be able to touch the back of the palm. The digits and thumb may then further be constructed of smaller elements and joints. Furthermore, the idea of muscles is added to the joints, so that by ‘tensing’ a muscle, the joint moves in a certain way.

Here dynamics and kinematics take on a slightly different meaning from what we have used until now. Dynamics now refers to the forces that are needed by muscles to move parts of the body, and kinematics is now used to determine the sequence of movements that need to be performed in order for the body to reach a certain configuration.

2.4.1 JACK

JACK is a commercial product developed at the University of Pennsylvania. The system is designed to help computer aided design (CAD) users see whether a human would fit comfortably into an environment, what field of vision he would have, and whether he would be able to reach, and be strong enough to use, the controls present [16]. Each JACK (the name of the human model) has 39 body segments with 38 joints; furthermore, the hands consist of 33 segments and 30 joints. To test whether or not JACK has enough strength to pull a lever, the system uses dynamics to calculate what force JACK can exert on the lever; to have JACK walk to the lever, kinematics would be used.

The use of animated bodies is a feature which will greatly enhance the reality and use of physical modelling systems. This feature is not implemented in this project, but the framework developed would support such an addition.

2.5 CoRgi

CoRgi is the Rhodes University Virtual Reality repository; most of the work done in Virtual Reality there is developed in CoRgi. Since many of the applications use a sink (to display the environment) and one or more sources (input devices), classes have been created that encapsulate the use of these devices. For instance, VRSink is used to render the output to a monitor, while VRStereoSink outputs the environment in stereo to a head mounted display.

CoRgi, then, is the basis for this project – we do not need to worry about displaying the environment; we simply create a VRSink and it does the rendering for us. Thus we are free to work on the more central principles of VRPhysicsEnvironment – namely physical modelling.

2.5.1 CoRgi Hierarchy

Figure 9 (page 49) shows an extract of the CoRgi hierarchy. CoRgi is divided into three major categories – video, audio and virtual reality. There are other groups in CoRgi, but obviously this project falls under virtual reality.

The VR section itself is divided into a host of class files and a few applications. The classes divide into categories such as Devices, Actors, Environments, Entities and Components.

At the top of the tree is Component. Derived from this is VRComponent, from which VREntity is derived; this is done to allow polymorphism. VRPhysicsEntity derives from VREntity, and all the physics objects that we create derive from VRPhysicsEntity.
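A minimal C++ sketch of this part of the hierarchy (the class names are from the text; the bodies are hypothetical and elided):

// The inheritance chain used by the physics objects. Virtual
// methods such as ThreadRoutine are overridden down the chain.
class Component          { public: virtual void ThreadRoutine() {} virtual ~Component() {} };
class VRComponent       : public Component   {};
class VREntity          : public VRComponent {};
class VRPhysicsEntity   : public VREntity    {};  // base class for all physics objects

// Concrete physics objects then derive from VRPhysicsEntity:
class VRPhysicsBall     : public VRPhysicsEntity {};
class VRPhysicsWall     : public VRPhysicsEntity {};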

We create two applications for this project – vrphysicsapp and vrttapp. vrphysicsapp is used to display all of the physics objects we create – the user places the objects and then watches them interact. vrttapp is the virtual table tennis application and is used to introduce user controlled objects into the environment.

2.6 Summary

So far we have justified why we use oriented bounding boxes rather than any other bounding volume. We have also surveyed some current collision detection packages and compared them in order to show why we use RAPID for collision detection in this project, examined two other physical simulation packages, highlighting their key features, and introduced CoRgi. In the next chapter we deal with one of the central themes of this project – collision detection – and how we used current research to aid the design and implementation of the VRPhysicsEnvironment.


3. Collision Detection

In order for the environment being simulated to appear real to users, solid objects should not pass through each other. The aim of a physical modelling system is to present an environment which behaves as the real system being modelled would. Hence collision detection is a vital component of any such system.

In this chapter we show the development of our collision detection algorithm, first in theory and then in practice – in code. We show how collision detection was done before and how our algorithm differs.

3.1 A First Attempt

As a first attempt, we may check every polygon of one object against every other polygon in the environment in order to determine which polygons (and hence objects) are in contact. However, objects in virtual environments are usually composed of thousands or even hundreds of thousands of polygons. This first algorithm is simply too computationally expensive to implement.

In order to reduce the computational expense of collision detection, a number of simplifications can be made. Use is made of bounding volumes (discussed in detail in section 2.1) to reduce the number of comparisons – instead of comparing polygon to polygon, we compare bounding volume to bounding volume. This is commonly referred to as a proximity test, since polygons of bounded objects can only possibly be intersecting if their bounding volumes are intersecting. If we require the actual polygons which are intersecting, we need only test the regions bounded by intersecting bounding volumes.

But even testing to see which bounding volumes are intersecting poses an interesting problem. In this project we use oriented bounding boxes to bound the polygons of an object – in fact, we simplify the problem even further by bounding each object with only one bounding box. This has been done to increase speed at the cost of some accuracy – round objects are not well bounded by boxes while rectangular objects (obviously) are.

3.2 Box or Sphere?

Figure 5 - 2D representation of a round object and a rectangular object enclosed by the opposite bounding volume.

We now show that a bounding box fits a circular object better than a bounding circle fits a rectangular object. Figure 5 shows in 2D how a box enclosed by a circle and a circle enclosed by a box would look. If the circle on the left has radius r, it has area $\pi r^2$. Since the radius of the circle is half the length of a side of the square around it, the area of the square is $(2r)^2 = 4r^2$. We then give the square on the right the same area as the circle on the left, viz. $\pi r^2$, so its side is $\sqrt{\pi}\,r$. Hence the radius of the circle enclosing the square would be $\frac{\sqrt{2}}{2}\sqrt{\pi}\,r = \sqrt{\pi/2}\,r$ (using Pythagoras). The area of this circle would then be $\pi (\sqrt{\pi/2}\,r)^2 = \frac{\pi^2}{2} r^2 \approx 4.93 r^2$. Since this is larger than the $4r^2$ of the square on the left, and extrapolating to 3D, we conclude that a box more tightly encloses a sphere than a sphere encloses a box. This simple derivation was the motivation for changing CoRgi’s bounding volume from a sphere to a box.
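Equivalently, comparing the wasted (empty) area in each case – both bounding volumes enclose content of area $\pi r^2$:

$$A_{square} - A_{content} = 4r^2 - \pi r^2 \approx 0.86\,r^2, \qquad A_{circle} - A_{content} = \frac{\pi^2}{2} r^2 - \pi r^2 \approx 1.79\,r^2$$

so the bounding circle wastes roughly twice as much area as the bounding square.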

3.3 Fitting a Box to an Object

In order to enclose each object in the environment with a box, various variables are updated while reading in the .off file holding the positions of all the faces and vertices of the object’s representation. Maximum and minimum values along the X, Y and Z axes are stored and the box is then constructed from these values – the centre of the box lies halfway between the maximum and minimum along each axis. The box radii are then simply the distances from the centre to the maximum (or minimum) along each axis.

In order for this method to work, we assert that the object is more or less ‘aligned’ to the X, Y and Z axes. For instance, a cylinder is ‘more aligned’ to the X, Y and Z axes if its length runs parallel to any one of these axes than if its length is at 45 degrees to an axis.
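A minimal sketch of this fitting step in C++ (hypothetical names; assumes a non-empty vertex list has already been read from the .off file):

#include <algorithm>
#include <vector>

struct Vec3 { double x, y, z; };

// Fit a box to a vertex list by tracking the extremes along each
// axis: the centre is the midpoint between minimum and maximum,
// and each radius is half the extent along that axis.
void fitBox(const std::vector<Vec3>& verts, Vec3& centre, Vec3& radii)
{
    Vec3 mn = verts.front(), mx = verts.front();
    for (const Vec3& v : verts) {
        mn.x = std::min(mn.x, v.x); mx.x = std::max(mx.x, v.x);
        mn.y = std::min(mn.y, v.y); mx.y = std::max(mx.y, v.y);
        mn.z = std::min(mn.z, v.z); mx.z = std::max(mx.z, v.z);
    }
    centre = { (mn.x + mx.x) / 2, (mn.y + mx.y) / 2, (mn.z + mx.z) / 2 };
    radii  = { (mx.x - mn.x) / 2, (mx.y - mn.y) / 2, (mx.z - mn.z) / 2 };
}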

3.4 The Old CoRgi

Before the modifications we made to CoRgi, all objects had bounding spheres. To test for a collision, measure the distance between the centres, a, and the sum of the radii, b. If a > b, the objects are disjoint (not intersecting). The advantage of spheres is that this test applies irrespective of the orientation of the object, since a sphere does not change when rotated. However, as shown above, a bounding sphere is less accurate than a bounding box, since collisions would be detected earlier than they should be. This arises from the fact that the bounding sphere, on average, has a larger volume than the bounding box when bounding the same object.

3.5 Collision Detection in the New CoRgi

In order to detect collisions between two objects, we could test every face of the first object against every face of the second object. A more subtle test is to test for a plane of separation. If we can find a plane which lies between the two objects and does not touch either, we have found a separating plane and can conclude that the objects are disjoint (not intersecting). Conversely, if no such plane exists, the objects must be intersecting. Testing for separating planes is how collision detection is done in the new CoRgi.

The implementation of this test is shown in the next section – the code is adapted from RAPID (see section 2.2.1). However, in order to understand the implementation, the theory behind it needs to be expounded first.

3.5.1 The Theory

Gottschalk et al. [3], the creators of RAPID, start their search for a separating plane by considering a trivial test for disjointness: project both boxes onto an arbitrary plane in space – each box then forms an interval on this plane. This is done by extending a line from the ‘leftmost’ and ‘rightmost’ extremes of the box onto the plane such that the line is at right angles to the plane. If the intervals of the boxes do not overlap, then the plane is a separating plane and the boxes are disjoint. If the intervals do overlap, the boxes may still be disjoint – some other plane may separate them – so more tests are required. Clearly there could exist a large number of arbitrary separating planes; we therefore need to choose which planes to test carefully.

Gottschalk et al. then distinguish a finite set of planes which need to be tested: if any one of these special planes is a separating plane, then we know the boxes are disjoint. They show that there are 15 such special planes, since two disjoint boxes can always be separated by a plane which is parallel to a face of either box, or parallel to an edge from each box. The candidate separating axes are the directions orthogonal to these planes. Since each box has three unique face directions and three unique edge directions, there are 15 potential separating axes – three for the faces of one box, three for the faces of the other, and nine for the pairwise cross products of the edge directions from both boxes.

In order to perform the test, the centres of the boxes are projected onto an axis, as well as the radii of their intervals – this is why we chose to represent the bounding boxes in the manner we did (see section 2.1.3). If the distance between the box centres as projected onto the axis is greater than the sum of the projected radii of the boxes, then we have found a separating axis and the boxes are disjoint.

Figure 6 - L is a separating axis for A and B since the projected intervals are disjoint.

In 2D, a (3D) plane is represented by a line, or axis, in the same way that a cube is represented by a rectangle. Figure 6 shows how the projections are done and how the calculation is set up in 2D. We have two boxes, A and B, with B placed relative to A by rotation R and translation T. The radii of A and B are $a_i$ and $b_i$ with i = 1, 2, 3. The axes of boxes A and B are denoted by $A_i$ and $B_i$, again for i = 1, 2, 3. If box A’s axes are used as a basis, then the $B_i$ vectors are the same as the three columns of R.

We consider an example: box A is a cube with a corner at the origin and box axes $A_1 = (1,0,0)$, $A_2 = (0,1,0)$ and $A_3 = (0,0,1)$. Then we rotate a box B, of the same dimensions, through 45 degrees about the z-axis. The box axes of this second box would be $B_1 = (\frac{\sqrt{2}}{2}, \frac{\sqrt{2}}{2}, 0)$, $B_2 = (-\frac{\sqrt{2}}{2}, \frac{\sqrt{2}}{2}, 0)$ and $B_3 = (0,0,1)$. The rotation matrix corresponding to this rotation is

$$R = \begin{pmatrix} \frac{\sqrt{2}}{2} & -\frac{\sqrt{2}}{2} & 0 \\ \frac{\sqrt{2}}{2} & \frac{\sqrt{2}}{2} & 0 \\ 0 & 0 & 1 \end{pmatrix}$$

It can be seen that the columns of this matrix correspond to the box axes of B.

The centres of each box project onto the midpoints of their intervals. Using an axis parallel to a unit vector L, the radius of box A’s interval, $r_A$, is given by

$$r_A = \sum_{i=1}^{3} |a_i A_i \cdot L|$$

A similar expression is obtained for $r_B$. Since the placement of the axis is immaterial, we can choose it to pass through the centre of box A. Then the distance between the two centres as projected onto the axis is given by $|T \cdot L|$.

We can then come to an inequality to test for disjointness of the two boxes: boxes A and B are disjoint iff

$$|T \cdot L| > \sum_{i=1}^{3} |a_i A_i \cdot L| + \sum_{i=1}^{3} |b_i B_i \cdot L|$$

This can further be simplified by making L a box axis or even a cross product of box axes. For example, with $L = A_1 \times B_2$, the second term of the first summation becomes

$$|a_2 A_2 \cdot (A_1 \times B_2)| = |a_2 B_2 \cdot (A_2 \times A_1)| = a_2 |B_2 \cdot A_3| = a_2 |R_{32}|$$

The last step is possible because the columns of the rotation matrix R are also the box axes of B. After all the terms are simplified in a similar manner, the inequality becomes

$$|T_3 R_{22} - T_2 R_{32}| > a_2 |R_{32}| + a_3 |R_{22}| + b_1 |R_{13}| + b_3 |R_{11}|$$

This inequality has so few terms because the choice of L causes some terms to vanish via the cross products. Note that as soon as a separating axis is found, the boxes are known to be disjoint and no further tests are required.
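To make this concrete, consider the simplest candidate axis, $L = A_1$ (a face axis of box A). Then $|T \cdot L| = |T_1|$, $r_A = a_1$ and $r_B = b_1|R_{11}| + b_2|R_{12}| + b_3|R_{13}|$. A minimal C++ sketch of this single-axis test (hypothetical names; everything is expressed in box A’s basis, with the columns of R being the axes of B, as in the text):

#include <cmath>

// Test the single candidate axis L = A1 (a face axis of box A).
// T is the centre-to-centre vector, a and b are the box radii.
bool separatedOnA1(const double T[3], const double a[3],
                   const double b[3], const double R[3][3])
{
    double rA = a[0];
    double rB = b[0] * std::fabs(R[0][0])
              + b[1] * std::fabs(R[0][1])
              + b[2] * std::fabs(R[0][2]);
    return std::fabs(T[0]) > rA + rB;  // true: disjoint on this axis
}

The full disjointness test simply performs 15 such comparisons, one per candidate axis, returning as soon as any axis separates the boxes.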

3.5.2 The Practice

We obtained the source code that Gottschalk et al. implemented and found the method that determines whether two OBB’s are disjoint. It is called obb_disjoint and requires the following arguments:

  1. B[3][3], the rotation matrix from A to B,

  2. T[3], the vector from A’s centre to B’s centre,

  3. a[3], the radii of box A and

  4. b[3], the radii of box B.

All of this information is already in the system, since each object has a bounding box stored in the box radii format. Calculating T is easy since we know the position of each bounding box. All that remains is to calculate the rotation matrix R. This is not difficult, since CoRgi makes use of quaternions to represent orientation. In order to calculate the rotation from A to B, the quaternion / operator is used: R is given simply by Qa/Qb, where Qa and Qb are the orientations of A and B respectively (Qa/Qb gives a quaternion which we can easily convert to a matrix).
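A sketch of that final conversion step, assuming a unit quaternion (w, x, y, z); this is the standard quaternion-to-rotation-matrix formula, not CoRgi’s actual code:

// Convert a unit quaternion q = (w, x, y, z) to a 3x3 rotation
// matrix, row-major. Standard formula for unit quaternions.
void quatToMatrix(double w, double x, double y, double z, double R[3][3])
{
    R[0][0] = 1 - 2*(y*y + z*z); R[0][1] = 2*(x*y - w*z);     R[0][2] = 2*(x*z + w*y);
    R[1][0] = 2*(x*y + w*z);     R[1][1] = 1 - 2*(x*x + z*z); R[1][2] = 2*(y*z - w*x);
    R[2][0] = 2*(x*z - w*y);     R[2][1] = 2*(y*z + w*x);     R[2][2] = 1 - 2*(x*x + y*y);
}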

Therefore we have all the information required by the implemented function. We then add the function to the VREnvironment class and call it to test every bounding box against every other bounding box in the system (this is the GetAllCollision method). The pseudo-code for this method is as follows:


  1.  current = GetTheFirstObject
  2.  do
  3.      currentBox = GetBoundingBox (current)
  4.      currentPos = GetPositionOfBox (currentBox)
  5.      next = GetNextThingAfter (current)
  6.      do
  7.          nextBox = GetBoundingBox (next)
  8.          nextPos = GetPositionOfBox (nextBox)
  9.          T = vector from nextPos to currentPos
  10.         R = rotation matrix from currentBox to nextBox
  11.         test for a collision using obb_disjoint (R, T, currentBoxRadii, nextBoxRadii)
  12.         if (there was a collision)
  13.             DealWithCollision (current, next)
  14.             break
  15.         next = GetNextThingAfter (next)
  16.     until next is the last object
  17.     current = GetNextThingAfter (current)
  18. until current is the last object


The C++ code for this algorithm is presented in appendix 13.1.

3.6 Summary

In this chapter we show why we use bounding boxes rather than bounding spheres. We show how we fit an oriented bounding box to an object. We discuss how we implement some of RAPID’s code to test for separating axes and hence test whether or not two objects are disjoint. If no separating axis is found, then the objects are intersecting (or colliding). Before we discuss collision handling, two major concepts need to be expounded: how motion is achieved in VR and what the objects ‘look like’ inside – their design and implementation. These are the subjects of the next two chapters.


4. Motion

Motion in Virtual Reality is achieved by small changes in an object’s position over time. If the frames are rendered fast enough (in real time), the objects in the frames appear to move. This chapter deals with the algorithms used to move objects in the environment.

4.1 The Flow of Events in the Environment


Draw Scene → Detect Collisions → Calculate New Positions → (back to Draw Scene)

Figure 7 - Diagrammatic view of the flow of events in CoRgi.

Again, the drawing of the scene in each frame of the animation is not the focus of this project; however, this process has a large impact on the physical modelling system. Figure 7 shows the flow of events which occur in the system. After drawing the scene, we perform collision detection and handling. Next, forces, accelerations and velocities are used to calculate new positions for each object in the environment. Simultaneously we make adjustments to forces, accelerations and velocities; these new (adjusted) values will be used in the next iteration. The scene is then drawn again and the cycle repeats.

4.2 The Thread Routine

In order for us to plot each object in the correct position, we must know the current time as well as the object’s velocity, acceleration and the net force acting on the object. These quantities are used in basic Newtonian mechanics to calculate the position of the object at a particular time. These calculations then form the heart of each object.

To move an object, we update the object’s attributes at the start of each frame, and these values are used to render the object. We must know the current time as well as the time interval that has passed since the routine was last called. We then use this time interval to move the object to the correct position in the new frame by updating its position attribute.

The calculations have four stages. Firstly, we set the resultant force on the object to zero. Secondly, collision detection takes place, and thus any other objects are allowed to exert forces on the object or change the object’s velocity. We add the forces applied to the sum of the previous forces, keeping a running total – this is the net force acting on the object. Thirdly, once all objects that will apply a force to the object have done so (via collision detection and handling), we add gravity to this resultant force (dynamics). We can then proceed to the fourth step – kinematics – calculating acceleration, velocity and position changes from the resultant force.

4.2.1 The Algorithm

All Components have a ThreadRoutine: VRPhysicsEntities are derived from VREntity, which derives from VRComponent, which in turn derives from Component. This method is called at each iteration of the system.

At each iteration, which corresponds to every frame, we need to know how much time has passed since the last frame was drawn. A timer sits in each object and this timer is zeroed after reading it – the reading is the time difference since it is always reset. Once we have the time interval, we pass it to Dynamics and then to Kinematics where it is used as we show below – time is a crucial variable in most of the calculations performed. The results of the calculations in Dynamics then affect those in Kinematics. Once we have run Kinematics, we leave the object ready to be rendered with all its attributes correctly updated according to our rules of motion in Dynamics and Kinematics. All that then remains is to zero the force on the object so that the next Dynamics call starts ‘afresh’.

The pseudo-code for each VRPhysicsEntity::ThreadRoutine is as follows:


  1. timepassed = GetTimeFromTimer

  2. Reset Timer

  3. Dynamics (timepassed)

  4. Kinematics (timepassed)

  5. ZeroMyForce


The C++ for this algorithm is presented in appendix 13.2.
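A minimal C++ sketch of the routine (hypothetical member names; the actual code is in appendix 13.2):

// Run once per frame for every physics entity: measure the time
// since the last frame, run the motion calculations, then clear
// the accumulated force ready for the next round of collision
// handling.
class VRPhysicsEntity
{
public:
    void ThreadRoutine()
    {
        double timepassed = ReadAndResetTimer();  // interval since last frame
        Dynamics(timepassed);     // add gravity to the net force
        Kinematics(timepassed);   // update acceleration, velocity, position
        ZeroMyForce();            // start the next iteration afresh
    }
private:
    double ReadAndResetTimer();   // reads the timer, then zeroes it
    void Dynamics(double t);
    void Kinematics(double t);
    void ZeroMyForce();
};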

4.3 Dynamics

All VRPhysicsEntities have the same Dynamics and Kinematics interface. Objects that do not move (like walls) have empty bodies for both Dynamics and Kinematics since these methods deal with the object’s motion.

Dynamics is concerned with the forces acting on an object. It simply adds gravity to the resultant force acting on the object. Since all collisions in this system are handled by changes in velocity and not (more correctly, but more computationally intensively) by changes in force, the only other force that needs to be accounted for is gravity – and this only for objects that have mass (we do not make any massless objects, but provision exists for objects to ‘hover’: set their mass to zero and they do not fall).

We state in the previous section that every object has its force zeroed after each iteration. The reason for this can now be seen. Consider the scenario in which we do not zero the net force for an object after each iteration, and assume that no other objects act on it. The net force on the object begins as zero. On the first iteration, we add g, making the net force g. On the next iteration we again add g – the net force is now 2g. Every iteration adds a further g. Clearly we must zero the force after each iteration.

1.The Algorithm

Dynamics is always run before Kinematics. Since the resultant force calculated in Dynamics is used in Kinematics, we can only zero the resultant force after Kinematics. Nor can we zero it before Dynamics, since forces may act on the object before Dynamics is reached. An example of this is the ball inside the anti-gravity pad – the ball 'receives' an upward force from the pad (since collisions are detected before the ThreadRoutine is run), then Dynamics is called and then Kinematics. So by zeroing the force after Kinematics, we in effect zero the force before collisions are detected.

Below is the pseudo-code for the Dynamics method:


  1. if I have Mass then

  2. gravity = G * DownVector

  3. AddToMyNetForce (gravity)


Appendix 13.3 shows the C++ code for this method.

4.Kinematics

Kinematics uses fundamental Newtonian equations of motion to translate net force to change in acceleration, acceleration to change in velocity and velocity to change in position.

We start with the net force acting on an object, f, and the object's mass, m. From this we calculate the new acceleration, anew = f / m. Then, using the old value of acceleration, a, we calculate the change in velocity as vdiff = a * t, where t is the time interval. Similarly, using the old velocity value, v, we obtain the change in position as pdiff = v * t. In this way we update the acceleration, velocity and position of each object.

1.The Algorithm

Since Kinematics is a function inside the object’s class, it has access to all its attributes. We then apply the calculations above to these values in order to calculate the new values for these attributes. The VRSink that renders this object will then retrieve the (new) attributes (such as position) when drawing the object.

If we pass the time interval, t, to Kinematics, its pseudo-code is as follows:


  1. p = my current position

  2. v = my current velocity

  3. a = my current acceleration

  4. m = my current mass

  5. f = current force acting on me

  6. pdiff = v * t

  7. vdiff = a * t

  8. if m > 0

  9. {

  10. anew = (f / m)

  11. }

  12. else

  13. {

  14. anew = 0

  15. }

  16. SetMyPosition to (p + pdiff)

  17. SetMyVelocity to (v + vdiff)

  18. SetMyAcceleration to (a + anew)


The C++ for this algorithm is in appendix 13.4.

5.Wrapping Up the Motion Code

The applications themselves run the loop that controls the simulation. It appears as follows:


while (1)

{ RunComponents(); }


Since all entities are derived from VRComponent (see Figure 9) and this in turn from Component, all objects in the environment have a ThreadRoutine. RunComponents simply calls the ThreadRoutine of each object in the environment. Also derived from VRComponent are VRSink and VREnvironment. Hence their ThreadRoutines are also called by the RunComponents loop.


In the applications, the environment is set up by creating a sink and linking an environment to it. This environment then contains all the objects that are created after it. So when RunComponents loops, the sink's ThreadRoutine runs first (the scene is rendered); then the VRPhysicsEnvironment::ThreadRoutine runs, collisions are checked for and handled, and forces between objects (if any) are applied. Then each object's ThreadRoutine is called: its Dynamics and Kinematics are run and its force zeroed. The cycle then repeats and the sink runs again.

The VRSink::ThreadRoutine is responsible for drawing every object in the scene. It uses OpenGL to do this. It retrieves the position, orientation and scale of each object as well as the object’s visual representation. It then renders each object according to these attributes.

For the physical modelling system, the environment used is VRPhysicsEnvironment, which is derived from VREnvironment. The ThreadRoutine for the VRPhysicsEnvironment contains only one line of code:

GetAllCollisions();

GetAllCollisions is the method in VREnvironment which checks for collisions between every object. Any collisions detected are then handled by calling DealWithCollision, a method that is virtual and must be overridden for the system to work. The DealWithCollision method is covered in detail in chapter Collision Handling.
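
As an illustration of this override pattern only – the class layout and exact signatures here are assumptions, the real declarations live in CoRgi – a minimal sketch might be:

class VREnvironment
{
public:
   virtual ~VREnvironment () {}

   // called by GetAllCollisions for every colliding pair;
   // must be overridden for the system to work
   virtual void DealWithCollision (int objectID1, int objectID2) = 0;
};

class VRPhysicsEnvironment : public VREnvironment
{
public:
   void DealWithCollision (int objectID1, int objectID2)
   {
      // exchange momentum and force between the two objects
      // (detailed in chapter Collision Handling)
   }
};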

6.Summary

We show how we achieve motion in a virtual environment by shifting each moving object fractionally from frame to frame. We discuss how we calculate the new position the object must move to by considering dynamics (the forces acting on the object) and kinematics (the acceleration and velocity of the object). We show the code we use to perform these calculations. In the next chapter, we present an overview of the objects in the environment and how they differ from and relate to one another.


6.VRPhysicsEntities – A Taxonomy

As we state before, one of the aims of this project is to create a framework of objects which can be used for physical modelling. The design of this framework is crucial for ease of use and extensibility. A number of concepts are used in its design. This chapter deals with concepts such as object orientation and the design of virtual objects, and looks at each object implemented. We document the objects that we design (and why we choose these specific objects) and introduce the applications they are used in.

1.Object Orientation

An obvious choice for the framework, object orientation provides features such as inheritance and data hiding. These concepts are so widely used and well understood that they are not dealt with explicitly in this dissertation. However, object orientation forms the basis of the design of the framework.

2.Virtual Objects

Users of a particular virtual environment should readily be able to identify and work with objects in that environment. These objects should model real objects and thus behave like them. Here object orientation comes into great use since we can bundle together the actions an object is capable of (methods) and its properties (such as visual representation).

These virtual objects should adhere to the following guidelines [17, 18]:

  1. The objects should have affordances. These are elements of the object that explain its operation to the user. For example a virtual hand has fingers which suggest grasping or pointing actions.

  2. Mappings must exist between the user’s actions and the effects of these actions. Any input action the user performs needs a corresponding output action from the system.

  3. The environment should provide suitable feedback. The user should never doubt whether an action has been performed or not. Suitable feedback should naturally follow from well designed mappings [19]. For example, when a finger is closed on a set of input gloves, the mapping of the glove closes that finger of the virtual hand, letting the user know that their action has been noted by the system. Thus, by a suitable mapping, feedback is automatically achieved.

  4. Finally, constraints on each object need to be implemented. For example, solid objects should not pass through each other.

These guidelines have been used to design the objects in the physical modelling environment.

3.The Object Hierarchy

The framework of classes (and hence objects) that we create needs careful design consideration. Understanding this framework will simplify the explanation of collision handling.

In order to have a physical modelling environment, we need an environment containing objects that interact with one another. We identify several objects to implement initially, with the intention of extending upon these later.

We need a stationary object that is solid and immovable – a wall. The next most obvious object is a moving object – a ball. The ball must bounce off the walls and not penetrate them. It must also be able to bounce off other balls and fall under gravity.

Modelling the collisions between ball and wall should strictly be done at a force level – when the ball collides, an impulse pushes it backwards and it bounces. However, working with collisions at a force level is computationally too intensive, so we handle these collisions by a change of velocity – when the ball collides with the wall, we reflect the ball's velocity in the plane of the wall (the collisions are elastic). The calculations involved in this method of collision handling are far simpler and the collision is still modelled accurately (the user perceives the collisions to be correct).

We then want an object that behaves similarly to the wall but that interacts more with the ball – we create a conveyor belt. When the ball bounces on the conveyor, its velocity is reflected (just like the wall) but the conveyor also gives the ball an extra push sideways.

We also want an object that affects the ball in some bizarre way – we create an anti-gravity pad. When the ball is within the pad's gravity well, a force equal to gravity, but directed upwards, is applied to the ball. The ball then falls upwards.

These objects represent a cross-section of different objects to be modelled – any other objects would simply be modifications or combinations of the above objects.

From these four objects we identify two main ways of interacting – an exchange of force and an exchange of momentum (or velocity). We can thus create a framework of objects that sets up specific ways of exchanging these quantities during a collision – we move toward a generalised way of handling collisions.

We show the design philosophy of each object in the following figure:

Figure 8 - The design philosophy of VRPhysicsEntities.

We design each object to have pairs of methods to change the attributes of the object - Get and Set operations (methods like GetNormal and SetNormal). Fundamental to this system are the Get and Give pair for force and momentum. These form the basis for collision handling and are dealt with in detail in chapter Collision Handling.

The figure above shows how in VRPhysicsEnvironment all attributes ‘flowing into’ the object are denoted by GiveAttribute while attributes ‘flowing out’ are denoted by GetAttribute methods.

By changing the values these Give and Get methods transfer we can specify the behaviour of an object (they are virtual in the base class and must be overridden). For instance a ball will return its momentum for GetMomentum while the wall returns nothing.
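
A minimal sketch of this philosophy – the Vector type, member layout and exact signatures are assumptions, not the CoRgi declarations – might look as follows:

struct Vector { float x, y, z; };

class VRPhysicsEntity
{
public:
   virtual ~VRPhysicsEntity () {}
   // 'flowing out': what this object hands over in a collision
   virtual void GetMomentum (float &m, Vector &v) = 0;
   // 'flowing in': how this object responds to what it receives
   virtual void GiveMomentum (float m, Vector v) = 0;
};

class VRPhysicsBall : public VRPhysicsEntity
{
   float mass;
   Vector velocity;
public:
   void GetMomentum (float &m, Vector &v) { m = mass; v = velocity; }  // a ball hands over its real momentum
   void GiveMomentum (float, Vector) { /* the collision calculations of section GiveMomentum */ }
};

class VRPhysicsWall : public VRPhysicsEntity
{
public:
   void GetMomentum (float &m, Vector &v) { m = -1.0f; v = Vector (); }  // 'infinite' mass, zero velocity
   void GiveMomentum (float, Vector) { }                                 // a wall does not respond
};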

Figure 9 shows the inheritance of VRPhysicsEntity – the base class for all other physics objects. This diagram is only a small subsection of the CoRgi system.



Figure 9 - Extract of the CoRgi object hierarchy tree.


Figure 10 - The VRPhysicsEntity class diagram.

4.VRPhysicsEntity – the Base Class

All the objects have a common thread – they all have attributes such as position, orientation, net force, velocity and so on. So we design a common interface – a common way of interacting.

We isolate some of the common attributes and methods needed for each object – these are presented in the class diagram of VRPhysicsEntity shown in Figure 10. The diagram has been simplified to show only those attributes and methods which are of particular interest to this project. Methods such as constructors and destructors are not shown here.

1.The Attributes

The following attributes are considered important. Mass and velocity are crucial for momentum and the collision calculations (see chapter Collision Handling). Force is important (both the net force acting on the object and the amount of force one object can apply to another), again for the collision handling function. Each object has its own gravity variable, G. The default for this value is 9.8 m/s² but it can be changed. This allows objects to behave differently even within the same environment.

Each object then has a physics type and a collision type. The physics type (of the set { ball, wall, conveyor, agpad, none } ) is used to determine how each object behaves – walls behave differently from balls and need different information. The collision type is used to determine how to handle collisions between objects. We add this feature to the design, but do not implement it. Two collision types exist – round and square. Round collisions take into account the shape of the object when handling collisions while square collisions do not.
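
As an illustration, the two type attributes might be plain enumerations. The enumerator spellings O_agpad and SQUARE appear in the physworld.def extract shown in a later chapter; the rest are assumptions:

enum PhysicsType { O_ball, O_wall, O_conveyor, O_agpad, O_none };   // how the object behaves
enum CollisionType { ROUND, SQUARE };                               // how its collisions are handled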

Also important to objects such as walls is the normal attribute. This attribute is used to reflect colliding objects' velocities off the front face of the object. If we know the normal to the face, we can calculate the orientation of the face.

2.Creating Objects

When a VRPhysicsEntity is created, it adds a pointer to itself to a list of VRPhysicsEntities that is globally accessible. This list is used by the collision detection function to run through all the objects.

To create a ball, for instance, we make a variable of the type VRPhysicsBall. The constructor then creates the object in the environment. A .off file is passed to the constructor of each object – these files contain the vertex, face and normal information needed to render the object.
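
The registration step might be sketched as follows – the list name, the use of std::list and the constructor signature are all assumptions, not the CoRgi originals:

#include <list>

class VRPhysicsEntity;
std::list<VRPhysicsEntity *> AllPhysicsEntities;   // the globally accessible list

class VRPhysicsEntity
{
public:
   VRPhysicsEntity (const char *offFile)
   {
      // load the vertex, face and normal information from the .off file here...
      AllPhysicsEntities.push_back (this);         // ...then register for collision detection
   }
};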

5.VRPhysicsWall

A stationary object found in most virtual reality applications is a wall. The VRPhysicsWall is modelled to have an infinite mass; since infinity cannot be represented numerically, we give it a mass of –1. The wall does not move. Its normal is set to the normal of its front face and is used by colliding objects.

6.VRPhysicsBall

We want an object that is mobile and that can bounce off other objects. We choose a virtual ball for this purpose. This physics entity has a particularly interesting feature in its design – its GiveMomentum method (see the GiveMomentum section in chapter Collision Handling). This method does all the collision handling calculations and knows how to bounce the ball off other balls as well as off walls, where its velocity is reflected in the plane of the wall.

7.VRPhysicsConveyor

Collisions in VRPhysicsEnvironment are handled by changing the velocity of the colliding objects rather than by applying impulse forces (this is discussed in more detail in chapter Collision Handling). We then want a stationary object that not only reflects the colliding object (as the wall does with the ball) but also adds a sideways component of velocity to it. Hence we design the conveyor.

8.VRPhysicsAGPad

All objects that enter the ‘gravity well’ of the VRPhysicsAGPad (anti-gravity pad) experience negative gravity. This effect is achieved by applying a force of –2g to the object within the gravity well. When the gravitational force of g is added to it, the net result is –g. The AGPad is stationary.

9.VRPhysicsEnvironment

All the objects we discuss up to now have no direct knowledge of other objects – they are independent of one another (until they collide, that is). However, they all occupy a region of virtual space that we call an environment.

The environment is chosen as the ‘control centre’ of the simulation. The environment is responsible for checking for collisions that occur within it (as well as dealing with them). It must therefore know the current attributes of all the objects inside it.

1.Inside the VRPhysicsEnvironment

We place the collision detection function in VREnvironment – higher up the hierarchy than VRPhysicsEnvironment. This allows other applications to make use of the collision detection function. However, we leave the function DealWithCollision as a virtual function so that it can be overridden by users for different purposes.

The VRPhysicsEnvironment calls the collision detection routine at every iteration (at the beginning of each frame). It also holds the DealWithCollision method, which performs collision handling when required. In the GetAllCollisions method of VREnvironment (which VRPhysicsEnvironment inherits), each time a collision is detected between two objects the DealWithCollision method is called. The objectIDs of both objects involved in the collision are passed as parameters, and DealWithCollision then performs the necessary adjustments to the two objects (such as changes of velocity or force).

10. The Applications

We create two applications to test and view the objects that we create. The first, vrphysicsapp, displays all the objects that the user cannot control – wall, ball, conveyor and anti-gravity pad. Once the user has placed the objects in their initial positions, the simulation is run and the objects can be watched as they interact. The results chapter shows some of the output of this application.

The second application is used to view and test all of the objects that appear in vrphysicsapp with the addition of two user controlled objects. This application – vrttapp – is the subject of the chapter Virtual Table Tennis.

11. Summary

We show the paradigm we design the framework under – namely object orientation. We present some guidelines used in the design of the virtual objects. We give the inheritance diagram and show how VRPhysicsEnvironment fits into CoRgi. We show the base class, name all the objects created and explain why we chose to implement them. We also mention the two applications that we create to run the simulations. The following chapter deals with collision handling – what each object does when involved in a collision.


7.Collision Handling

Collision detection detects when collisions occur; it does nothing about them. That is the task of collision handling. In this chapter we show how we implement a generalised collision handling algorithm and discuss it in detail, looking at the code used. We also explore the methods of the objects which allow the generalisation of the collision handling algorithm.

1.Generalisation

When two objects collide, they will interact in some way. Generally, if there are n objects, there are n(n–1)/2 possible pairwise interactions1. So adding one object to any environment presents a large problem for describing interactions.

This makes extensibility extremely slow and arduous. For each object we add to the system, we must add at least one, but generally more than one collision handling method.

Considering VRPhysicsEnvironment, at present only a few collisions are possible – ball with another ball, ball with wall, ball with conveyor and ball with AGPad. Already with only four objects, we need four different collision handling methods – one for each collision mentioned above (this is fewer than the full set of possible pairings since some of the objects we consider cannot collide, such as two walls).

Bearing this in mind, we propose a different approach – a generalised collision handling algorithm. We use specific general object interactions. We choose one or more attributes that are going to change in a collision and determine how to couple the objects in such a way as to obtain these changes. The two we use at the moment are Force and Momentum.

When two objects collide, we model the collision by a change in the momentum of each object. Strict physics would calculate the forces involved in such a collision, but this would then involve complex mathematics and integration. So we choose to modify the momentum (or more accurately the velocity) of each object to model the collisions.

We use two laws of physics to calculate the change in velocity of the colliding objects – conservation of kinetic energy and conservation of linear momentum. The change in velocity of one object depends on the mass and velocity of the other (hence we use the term momentum, which is velocity multiplied by mass). So we define object interactions in terms of Get momentum from one object and Give it to the other and vice-versa.

Although we do not handle collisions at a force level, we also add the ability for each object to change the force acting on the other object.

This means that the collision handling algorithm does not need to know what types of objects are colliding – it calls the same functions for any collision. When new objects are added, we do not need to modify the collision handling code – we specify each object's response in the object class itself.

2.The Algorithm

Figure 8 shows the Get/Give philosophy we create the objects under. We now show how we use this philosophy in our calculations.

Each object has GetMomentum and GetPhysForce methods. These methods return the momentum and force of the object respectively. Each object then also has a GiveMomentum and a GivePhysForce method. These methods are used to ‘give’ momentum and force to the objects. (The names Get and Give PhysForce are used to distinguish from the method GetForce which is a GetAttribute function of VREnvironment – see section 6.5.)

The collision handling algorithm must retrieve all the relevant information from each object – it does the swapping of force and momentum, while the objects themselves perform the calculations. This frees the collision handling algorithm from a lot of clutter and allows us to make it general.

We pass the objectIDs of the two colliding objects to DealWithCollision. The algorithm for collision handling is as follows (see appendix 13.5 for the C++):


  1. object1_position = GetPosition (object1)

  2. object2_position = GetPosition (object2)

  3. direction = object2_position – object1_position

  4. object1_momentum = GetMomentum (object1)

  5. object2_momentum = GetMomentum (object2)

  6. object1_force_to_apply = GetPhysForce (object1)

  7. object2_force_to_apply = GetPhysForce (object2)

  8. object2->GivePhysForce (object1) // apply object1_force_to_apply to object2

  9. object1->GivePhysForce (object2) //apply object2_force_to_apply to object1

  10. if (object1 is stationary)

  11. reflect object2 velocity in plane of object1

  12. else

  13. if (object2 and object1 are both round collision types)

  14. object2->GiveMomentum (object1_momentum, direction) // give object2 object1_momentum

  15. else

  16. object2->GiveMomentum (object1_momentum)

  17. if (object2 is stationary)

  18. reflect object1 velocity in plane of object2

  19. else

  20. if (object1 and object2 are both round collision types)

  21. object1->GiveMomentum (object2_momentum, direction) // give object1 object2_momentum

  22. else

  23. object1->GiveMomentum (object2_momentum)


Sometimes no force is exchanged. For example, when the ball collides with the wall we model the collision using only change in velocity (the ball’s velocity is reflected off the plane of the wall). So GetPhysForce from the wall will return a zero vector. Likewise GetPhysForce from the ball will return a zero vector. However, GetPhysForce from the AGPad will return –2g. The Get and Give Momentum and PhysForce for each object are the subject of the next sections.

3.GetMomentum

GetMomentum is used to obtain the velocity and mass of each object. Each object returns a value for mass and a value for velocity when this method is called. Some objects simply return these values as they are. Others, such as the wall, return constant numbers. This allows designers freedom to dictate how any object will interact at this level.

This method is virtual in the base class – VRPhysicsEntity – and each object must override this method. This method allows each object to ‘give’ momentum in a specific and unique way. The method returns the mass and the velocity of the object by requiring parameters to be passed by reference. If no momentum is to be transferred from the object being called, the object simply returns the zero vector for its velocity.

1.VRPhysicsAGPad and VRPhysicsWall

The VRPhysicsAGPad and VRPhysicsWall return the same values for mass and velocity: -1 and the zero vector respectively.

2.VRPhysicsBall

The VRPhysicsBall returns its current mass and velocity.

3.VRPhysicsConveyor

The calculations performed for receiving momentum involve a velocity and a mass (see section 6.4). We reflect the ball’s velocity off the surface of the conveyor and also return a velocity and mass to be given to the ball – the calculations then add this momentum in automatically. The following pseudo-code shows how the VRPhysicsConveyor adds a velocity component to objects colliding with it (see appendix 13.6 for the C++):


  1. v = unit vector in direction of my length

  2. m = 10.0


Both v and m are passed by reference and therefore are returned to the calling function – DealWithCollision.
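
As a self-contained sketch (the Vector type, the free-function form and the beltDirection parameter are our assumptions), the conveyor's version might look like this:

#include <cmath>

struct Vector { float x, y, z; };

// m and v are passed by reference, exactly as DealWithCollision expects
void ConveyorGetMomentum (const Vector &beltDirection, float &m, Vector &v)
{
   float len = std::sqrt (beltDirection.x * beltDirection.x +
                          beltDirection.y * beltDirection.y +
                          beltDirection.z * beltDirection.z);
   v.x = beltDirection.x / len;   // unit vector in the direction of the belt's length
   v.y = beltDirection.y / len;
   v.z = beltDirection.z / len;
   m = 10.0f;                     // the empirical mass used in the momentum exchange
}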

4.GiveMomentum

GiveMomentum is used to calculate the change in velocity of the object when it is involved in a collision – this method does all the collision calculations for its object. Again by putting the calculations in each object, designers can specify exactly how an object is to behave. Some objects do nothing (such as the wall) while others stick closely to the calculations we show below. This method is also virtual in the base class. Each object overrides this method to determine how to respond when ‘given’ momentum.

We consider the general case – two moving objects, 1 and 2, with masses m1 and m2 respectively, travelling with initial velocities v1i and v2i respectively. They then collide. The law of the conservation of kinetic energy states that

½m1v1i² + ½m2v2i² = ½m1v1f² + ½m2v2f² …(1)

where v1f and v2f are the velocities of the objects after the collision. Coupled with this we use the law of the conservation of linear momentum:

m1v1i + m2v2i = m1v1f + m2v2f …(2)

We rewrite equation 1 as

m1(v1i² – v1f²) = m2(v2f² – v2i²) …(3)

and equation 2 as

m1(v1i – v1f) = m2(v2f – v2i) …(4)

We then divide equation 3 by equation 4, giving v1i + v1f = v2i + v2f, and after solving for v1f we obtain

v1f = ((m1 – m2) / (m1 + m2)) v1i + ((2m2) / (m1 + m2)) v2i

We now know by how much each object needs to modify its velocity – it is dependent on the mass and velocity of the other object. We need only calculate v1f, since we use this formula from each object's reference frame – we simply swap the subscripts 1 and 2 (swap reference frames) and the value obtained for v1f is then actually v2f in the original reference frame.

We must then also consider the case of a moving object colliding with a wall or some other stationary object – since we model the wall to have an infinite mass, our calculations start to go awry. So we make an exception – if object 1 is stationary (and by that we mean it is a surface to bounce off, as opposed to merely a non-moving object), we keep the magnitude of the velocity of object 2 constant and simply change its direction by reflecting the velocity off the surface of the wall.
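
The reflection itself can be written compactly: if n is the unit normal of the surface, the reflected velocity is v' = v – 2(v·n)n, which preserves the speed and mirrors the direction. The following self-contained sketch is ours, not the CoRgi code:

struct Vector { float x, y, z; };

float Dot (const Vector &a, const Vector &b)
{
   return a.x * b.x + a.y * b.y + a.z * b.z;
}

// reflect velocity v in the plane with unit normal n: v' = v - 2(v.n)n
// the speed is preserved; only the direction is mirrored
Vector Reflect (const Vector &v, const Vector &n)
{
   float d = 2.0f * Dot (v, n);
   Vector r = { v.x - d * n.x, v.y - d * n.y, v.z - d * n.z };
   return r;
}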

1.VRPhysicsBall

Since the ball is the only moving object we implement, its GiveMomentum function must know how to respond both to the static objects it collides with and to collisions with other moving objects – it must implement all the calculations we mention above. Other objects, like the wall, do not need all these calculations. VRPhysicsBall has 'collision intelligence' for both stationary and moving objects.

DealWithCollision passes the momentum (m2 and v2i) of object 2 to object 1. Knowing these, we can then perform the following pseudo-code (appendix 13.7 holds the C++):


  1. m1 = GetMyMass

  2. v1i = GetMyVelocity

  3. if I hit a wall

  4. v1f = Reflection of v1i in plane of wall

  5. else // I hit something else so

  6. if m2 is not infinite

  7. v1f = ((m1 – m2) / (m1 + m2)) * v1i + ((2 * m2) / (m1 + m2)) * v2i

  8. if I have hit a surface

  9. v1f = Reflection of v1f in plane of surface

  10. SetMyVelocity (v1f)

2.The Rest

The rest of the VRPhysicsEntities have empty GiveMomentum methods. This means that they do not respond when ‘given’ momentum.

5.GetPhysForce

GetPhysForce is used to obtain a force from one object in order to apply it to another.

This is a similar method to GetMomentum except that it returns a force vector. Again, most objects simply return a zero vector. The VRPhysicsAGPad returns a vector of magnitude 2g directed parallel to the y-axis (upwards).

We distinguish this from GetForce – a method in VREnvironment – since GetForce returns the force acting on the object, while GetPhysForce returns the force we must apply to another object from this one.
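
For example, the anti-gravity pad's version might be sketched as follows (the free-function form and Vector type are assumptions; the real method lives in the VRPhysicsAGPad class):

struct Vector { float x, y, z; };

// returns a force of magnitude 2g straight up the y-axis; once gravity
// (g downwards) is added by Dynamics, the net effect on the object is g upwards
void AGPadGetPhysForce (float G, Vector &f)
{
   f.x = 0.0f;
   f.y = 2.0f * G;
   f.z = 0.0f;
}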

6.GivePhysForce

None of the objects responds to a force (the GivePhysForce method is empty) except for VRPhysicsBall. The pseudo-code looks like this (f is passed as an argument):


if f is not 0

{

AddToMyNetForce (f)

}


The effect of this code is that if the force is not a zero vector, it is added to the object’s current force.

7.Summary

We show how and why we implement a generalised collision handling algorithm. We look closely at the code for the collision handling function and then examine the Give and Get Momentum and PhysForce methods of each object. We compare the differences in these methods for each object and show how they give each object unique interactions without changing the collision handling algorithm. In the next chapter, we deal with two more objects which differ greatly from the objects seen so far – VRTTBatPhysActor and VRTTHand, objects used in virtual table tennis.


8.Virtual Table Tennis

Once the objects are created in code, an application must be developed to create an instance of the environment and to place all the objects in it. The simulation must then be run, allowing the user to see the interactions between the objects. For this purpose, we introduce vrphysicsapp. Although this application is interesting, the user does little more than watch the objects as they move and interact.

None of the objects we create can be controlled by the user. We seek to add another aspect to physical modelling that is not usually implemented – allowing a user to control an object. We hence create a new application – virtual table tennis – in vrttapp.

This chapter introduces two objects that the user can control – VRTTBatPhysActor and VRTTHand. We show how we obtain information from the user via polhemus trackers and how the user controlled objects interact with the other objects.

1.The Scenario

For virtual table tennis we need a table tennis table, a table tennis bat and a ball. Because we do not want to be bogged down creating multiple users in a first attempt, we choose to 'fold' the table up and allow the user to play table tennis against themselves. Adding multiple users is a logical and straightforward task which is not attempted here for the sake of simplicity. Then, just in case the user loses the ball, we add a hand to the environment. When the user clenches their fist, the hand closes and the ball magically appears in the hand. The user then opens their hand to release the ball.

Physically, then, we require the user to don various pieces of equipment. The user wears a head-mounted display (HMD). This device tracks the movements of the user's head in order to change the viewing direction, as well as displaying the environment stereoscopically to the user.

Secondly, the user needs to hold a polhemus tracker in one hand – this represents the bat. In the HMD the user will see a table tennis bat. Moving this hand will move the bat.

Next the user needs to hold a further polhemus tracker in the other hand that is connected to a glove device. This is used as the hand – the user sees a hand in the HMD. When the user moves this tracker, the hand moves and when the user moves their fingers the same movements are displayed in the HMD.

Figure 11 - The table tennis application.

Figure 11 shows the table tennis application and how the table has been ‘folded’ to allow a single user to play table tennis. The hand is not visible in this screenshot. Also visible in this shot is the bounding box of the bat and the ball.

2.VRVelocityPolhemusInputDevice

In order to track the position of the user’s hands (one for the VRTTHand and one for the VRTTBat) we need a tracker that will return a position with x, y and z co-ordinates. Once we have this, we simply draw the corresponding object at that position.

For this purpose, Mike Rorke creates VRPolhemusInputDevice, a class which allows CoRgi programmers to determine the position of a polhemus tracker. This class encapsulates all the initialisation of the actual device and other housekeeping functions. The device is physically connected to a device server. When the device changes state or receives data, the device server bundles this information into a packet which is sent to the simulation server – for our purposes, our vrttapp application.

To use the device class, an actor must be created. This actor has a visual representation in the virtual environment and is linked to the device via a constructor. When a position change occurs, the position of the actor is updated and is drawn in the new position in the next frame.

This means we can keep track of where the bat is. But how does it interact with other objects? To solve this problem, we choose to treat the bat as a type of ball (since it is a moving object) – in other words, its interactions with other objects are similar to those of VRPhysicsBall. However, to do this we need more than just the position of the bat – we need to know its velocity.

Since we know the position of the bat and the current time at any stage in the simulation, we can derive the velocity of the bat. We keep a variable that holds the last position of the bat, pold. The timer inside the bat works on time intervals and holds this interval as tdiff – hence we can calculate the velocity, v, at the current position p from v = (p – pold) / tdiff.

Because the polhemus tracker has such a fine resolution (within 1mm) we set an empirical threshold distance. If the distance moved (p – pold) is smaller than this threshold value, we simply update the position and orientation of the bat so that the virtual bat moves as expected. However, we require the distance moved to be over the threshold in order to warrant a recalculation of velocity. When below the threshold, the velocity of the bat remains the same – when over the threshold, the new velocity is calculated.

A simple modification to the existing VRPolhemusInputActor is necessary – we inherit VRVelocityPolhemusInputActor from it. The following pseudo-code shows how we track the velocity and position of the bat:

  1. tdiff = GetTimeFromTimer

  2. Reset Timer

  3. p = GetPositionFromDevice

  4. orientation = GetOrientationFromDevice

  5. distance = p – pold

  6. if (distance > threshold_distance)

  7. v = distance / tdiff

  8. pold = p

  9. SendVelocityToActor (v)

  10. SendPositionAndOrientationToActor (p, orientation)


Appendix 13.8 contains the C++ code for this algorithm.

Now that we know how the user is able to change the position and velocity of the bat, we can look at how the bat interacts with other objects in the environment.

3.VRTTBatPhysActor

The device we track has to have some virtual representation so that it can be rendered in the environment. For this purpose we create an actor and link this actor to the device we wish to track. The device updates the position and orientation attributes of the actor which is then rendered in each frame accordingly.

Not only does the actor have to have a representation, but we need to be able to interact with this object as with all other objects. Hence we need to create a physics object (for the object interactions to be consistent) as before which also has the functionality of an actor.

The VRTTBatPhysActor is derived from VRInputActor and also from VRPhysicsEntity. We add one function, SetInputVelocity, to allow the device to which this VRTTBatPhysActor is linked to update its velocity. This function also makes use of an empirical value – this time to scale the velocity reading received from the device into a velocity that is consistent with the objects in the environment. The pseudo-code is as follows (new_v is passed to this function from the device):


new_v = new_v * scale_factor

SetMyVelocityTo (new_v)


We also find that a similar scaling of the change in position from the device is required. We multiply the position co-ordinate we receive from the device by an empirical value by overriding the SetInputCoordinates function of the VRTTBatPhysActor.

All the empirical values used to scale are necessary since the measurements in the virtual environment differ from the measurements of the polhemus tracker. For instance, moving the polhemus tracker 1cm in real life may correspond to a change of 15 virtual units. We experiment until the movements and velocities viewed are as expected.

The bat can interact with other objects in the environment simply because the VRTTBatPhysActor is a VRPhysicsEntity, and has the usual Get and Give Force and Momentum functions. GetPhysForce returns a zero vector; GivePhysForce and GiveMomentum have empty bodies. GetMomentum returns the mass of the bat as well as its velocity, which is being actively changed as the user swings the bat around.

4.VRPhysicsBall

The only object that the VRTTBatPhysActor can interact meaningfully with is the ball – all other objects that collide with the VRTTBatPhysActor have no effect on it. A small section of code in the VRPhysicsBall's GiveMomentum method shows clearly a subtle problem that the system has. The problem arises because the system is discrete – time is not continuous, and events happen at discrete time intervals. Its bearing on the whole VRPhysicsEnvironment is discussed later, but we discuss it here because this example of the problem is the most pronounced.

Figure 12 shows the problem diagrammatically in a series of frames. In the top frame, the ball and bat are headed for a collision. The collision is correctly detected in the middle frame and the ball’s direction is reversed. Now in the bottom frame the ball has moved to the correct position (it moves independently of the bat) but the bat has moved too far and the ball penetrates the bat.

Figure 12 - Three frames of a bat-and-ball collision showing penetration of the ball into the bat.

What should happen is that a whole series of small collisions should be detected. This would ensure that the ball never ends up inside the bat. However, since time is not continuous in the system a whole series of events is left out.

We devise a simple method of overcoming this problem. We update the velocity of the ball as usual, but then also move the ball a small distance in the direction of its new velocity, hoping to move it out of reach of the bat and prevent penetration. We note that this method reduces the number of penetrations that occur, but does not prevent them totally. Large movements can still cause penetrations, since the ball is not moved far enough away from the bat. However, increasing this distance too much makes the ball appear to jump – disappearing and then reappearing a short distance away – which is not visually appealing.

The pseudo-code for this solution is as follows (see appendix 13.9 for C++):


  1. if I have collided with the bat

  2. v = GetMyVelocity

  3. scale_factor = GetTheScaleFactor

  4. distance = scale_factor * v

  5. MoveMeByDistance (distance)

5.VRTTHand

To assist the user in their quest to play the perfect game of table tennis, we provide another user-controlled object – the VRTTHand. This object is used to retrieve the ball if it disappears out of reach. It is simply an extension of the VRHandActor class – VRTTHand contains a pointer to the table tennis ball.

We now provide the user with two gestures: clenched fist and open hand. When the user clenches their fist, the ball’s position is updated to place the ball at the hand. In this way the user never has to go and fetch the ball – if they lose it, they simply clench their fist and the ball appears at the hand.

When clenched, the ball follows the motion of the hand. When the user wants to serve, they simply open their hand and the ball is freed from the hand and behaves as before.

The hand is also connected to a (different) polhemus device in order to update its position as the user moves. Also connected to this actor is a glove device. Both devices are connected to the actor in the same way that the polhemus tracker is connected to the bat's actor. The glove device updates the appearance of the hand and can also trigger events according to certain gestures.

6.Summary

We show a different application we create in order to test some of the objects already created as well as allow us to introduce two new objects which are user controlled – VRTTBatPhysActor and VRTTHand.

We also introduce a problem inherent in the system – that of discrete time events. We discuss this problem and others in the next chapter, as well as solutions to these problems and also ideas for extending the project in future work.


9.Results - The Systems In Action

The previous chapters discuss the design and implementation of VRPhysicsEnvironment in detail. Although not quite the same as being immersed in the environment, we wish to give the reader a small taste of the system. This chapter shows some screenshot series to illustrate the working systems. We then discuss briefly the good points of the system and how they show VRPhysicsEnvironment to be a good physical simulator.

1.VRPhysicsApp in the Flesh

The following series of screenshots shows vrphysicsapp working. We place three balls, two walls, an anti-gravity pad and a conveyor in the environment and run the simulation. Shown from left to right in the first frame are: conveyor, anti-gravity pad, three balls and two walls (the walls are borrowed from the table tennis application). (Note that the current simulation time in seconds appears in the lower left corner of each frame.)

All three balls are travelling downwards – the blue and green vertically and the red slightly to the right.

The red and blue ball collide. The green ball goes into the anti-gravity well of the anti-gravity pad.

The blue ball collides with the ‘floor’ (a wall facing upwards). The green ball slows down, stops and starts to move upwards.

The blue ball (still moving upwards) collides with the wall and begins moving left. The green ball is still moving upwards and comes out of the anti-gravity well.

The red ball bounces off the floor and collides with the blue ball. The blue ball starts going upwards again while the red ball starts going downwards.

The red ball bounces off the edge of the floor.

The red ball is moving slightly left as it goes upwards – it enters the anti-gravity well and collides with the green ball sending it up and left.

After a few seconds, the green ball comes into the field of vision again. The red ball has been going in and out of the anti-gravity well and the blue ball has been bouncing off the floor.

The green ball collides with the conveyor and is given a sideways (to the right) velocity as it begins moving upwards again.


2.VRTTApp in the Flesh

Again we show a series of screenshots, this time of a user playing table tennis. We create the table, the bat, the ball and the hand. The user then plays table tennis. (Again the simulation time in seconds appears in the bottom left corner of each frame.) The sequence of events is: the user hits the ball, the ball bounces off the table and out of the right hand side of the frame, the hand then retrieves the ball and the user hits the ball towards themselves.

The bat hits the ball (middle frame below). The ball moves down and right and bounces off the table.

The user uses the hand to retrieve the ball (third frame below).

The user releases the ball and moves the bat towards the ball.

The user hits the ball towards the ‘camera’.

(Videos of simulations similar to these are in appendix 14 – the CD – subdirectory movies.)

3.Where VRPhysicsEnvironment succeeds

We identify three key areas where VRPhysicsEnvironment works well – the three areas by which the success of any physical simulator can be judged.

1.Visually appealing

The objects in VRPhysicsEnvironment are visually appealing. The everyday objects (such as the hand and the table tennis table) are easily identifiable, and the less common objects are clearly presented. The objects are smoothly shaded, which enhances their three-dimensional appearance.

2.Real Time

There are two areas that need to be fast in physical simulators: overall object movement and response to user input.

As far as overall object movement goes, VRPhysicsEnvironment responds remarkably well. The objects move smoothly and do not jerk – collisions and collision handling are seemingly instantaneous, with no delays at all. The objects respond to one another immediately. For instance, when the ball enters the anti-gravity field, the user immediately notices that a change in its trajectory occurs.

The system responds immediately to user input. When the user moves the hand or the bat, the corresponding movement is immediately output in the environment. When the bat strikes the ball the ball responds instantaneously with no delay for the collision detection and handling.

3.Physically correct

The objects are seen to fall under gravity (the moving objects – the balls) and to accelerate as they fall. None of the objects pass through other solid objects (with the exception of a few penetrations of the bat and hand into other objects). The collision handling is seen to produce the expected output – the balls bounce in the correct direction after colliding with the walls. Other effects behave as expected (such as the anti-gravity effect and the sideways push of the conveyor).


4.Summary

We present a 'paper immersion' into VRPhysicsEnvironment and discuss how the three properties of a physical simulator that we set out to produce have in fact been accomplished – visual appeal, real-time running and physical correctness. The following chapter shows some of the problems in VRPhysicsEnvironment.


10.Problems and Extensibility

Looking at VRPhysicsEnvironment as a whole, a number of subtle problems begin to affect the system. Although the problems themselves are difficult to see, their effects are quite noticeable. For instance, at times the ball seems to penetrate the wall and jiggle around for a few moments before moving away from the wall again. This chapter expounds on some of the known problems and suggests some plans of action to solve them. We also discuss extending the project in future work.

1.Discrete Time Intervals

In our discussion about virtual table tennis we point out that discrete time events can cause strange effects. We discuss this topic in more detail here.

1.The Problem

The algorithms we use make assumptions – for instance, velocity is calculated as distance over time – but this only works well when the time interval used is small. Figure 7 (page 38) shows the flow of events in CoRgi – unfortunately, on most platforms (even SGIs) the time taken to render the scene is large in comparison to the time taken for the rest of the loop. Hence the time intervals between successive calculations are too large.

This problem explains why the ball appears to penetrate the wall. The figure below shows this graphically. The ball begins at position 1. It is moving down and right and follows the arrow. At the next iteration, because of the time interval that has passed, the ball ends up at position 3. However, as can be seen from the diagram, the collision should have been detected at position 2.


Figure 13 - The ball penetrates into the wall.

2.The Solutions

We suggest two possible solutions to this problem. One is geometric, the other is more analytical.

1.The Geometric Approach

This approach is relatively simple, and although based in a geometric paradigm, involves analytic calculations as well.

A bounding box is constructed that surrounds the ball in its first and final positions, i.e. positions 1 and 3 (see Figure 14). Collision detection is done as usual, and if a collision is detected within this newly constructed box, further calculations are required to resolve where the collision took place, as well as to handle the collision.


Figure 14 - The geometric approach - a bounding box is constructed around the ball in position 1 and 3.

This approach has the disadvantage that the bounding box surrounding the object cannot be easily created. In fact, in order to do so we would need to rotate the object until it lies parallel to the x, y or z axes, create the box and rotate the box back again. However, once the box is constructed, collision detection is easy.
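
For the axis-aligned case (after the rotations just described), constructing the enclosing box reduces to taking component-wise minima and maxima of the two boxes' corners. A sketch, with our own AABB type:

#include <algorithm>

struct AABB { float min[3], max[3]; };

// enclose the ball's boxes at positions 1 and 3 in one axis-aligned box
AABB SweptBox (const AABB &start, const AABB &end)
{
   AABB box;
   for (int i = 0; i < 3; i++)
   {
      box.min[i] = std::min (start.min[i], end.min[i]);
      box.max[i] = std::max (start.max[i], end.max[i]);
   }
   return box;
}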

2.The Analytic Approach

Another solution involves changing the flow of events in CoRgi. The flow of events in CoRgi presently is shown in Figure 7 (page 38). However, we could modify this flow to include a check to see if collisions could have occurred during the rendering time. Stated otherwise, we move our calculations forward at smaller time intervals than our rendering. Then even if collisions are ‘missed’ between renderings, our calculations still detect and handle them (we catch the collision at position 2 of Figure 13).

The figure below shows diagrammatically what the new flow of events would look like.

[Figure 15 is a flowchart with the nodes Draw Scene, Calculate new positions, Scale + Restrict movements, Calculate time fraction and Detect Collisions. From Calculate new positions, the branch 'at least 1 large movement or collision' leads through Scale + Restrict movements and Calculate time fraction to Detect Collisions; the branch 'no large movements or time period over' returns to Draw Scene.]

Figure 15 - An improved flow of events.

Here we see that after rendering (drawing the scene), we calculate the new positions of the objects as usual, measuring T, the time interval that has passed since the last rendering. We then determine whether any movements (the distances from initial to final positions) exceed an empirical threshold value – if they do, we reverse the movement slightly (scale and restrict movements). A scaling of the distance corresponds to a proportional reversal of calculation time, so we calculate the fraction f of T that we have allowed to pass. We then do collision detection as usual. If no movements are too large, we advance calculation time to T and start again. Otherwise, we repeat the calculations, advancing calculation time by f each iteration. When, advancing by f, we reach T (we advance by f at each iteration and so will eventually catch up to T), we render anyway, assuming that any collisions still detected are correct.

This approach is computationally expensive but is more rigorous than the geometric approach.
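
A simplified sketch of such a sub-stepping loop – using a fixed fraction f of the render interval T rather than the adaptive scale-and-restrict scheme, and with hypothetical hooks in place of the CoRgi calls – might be:

#include <algorithm>

void CalculateNewPositions (float dt);   // hypothetical hooks - the names
void DetectAndHandleCollisions ();       // are ours, not CoRgi's
void DrawScene ();

void SimulateFrame (float T, float f)    // T: render interval, f: sub-step fraction
{
   float t = 0.0f;
   while (t < T)
   {
      float step = std::min (f * T, T - t);   // never step past the render time
      CalculateNewPositions (step);           // kinematics over the small sub-interval
      DetectAndHandleCollisions ();           // catch collisions a full step would miss
      t += step;
   }
   DrawScene ();
}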

2.Gaining Energy

Another problem with the system is that objects appear to gain energy. When a ball is made to bounce off a floor, we expect it to descend, bounce and rise to its starting height again (since the collisions are elastic – inelastic collisions would make the ball bounce lower than its starting height). However, the ball is observed to bounce slightly higher every time.

We attribute this error to floating point precision as well as the errors that come into the calculations because we handle collisions with velocity rather than with forces.

3.Problems With User Controlled Objects

As we mention in chapter Virtual Table Tennis, the objects that are controlled by the user often pass through other objects. Again, since we are using discrete time intervals, we can explain this phenomenon. In terms of the analytic approach (see The Analytic Approach, above), the user often moves the object so far in a single frame that its movement is too large. The analytic approach to solving the discrete time interval problem will therefore also solve this problem.

4.Extensibility

One of the main concerns of this project is to design a framework of objects which can be extended with as little difficulty as possible. The areas which need addressing are the creation of new objects, the ability to tailor new objects to behave in unique ways, the motion of the new objects and how new objects interact with existing objects (collision handling).

This section sets forth two main types of extensibility – modifications to the existing system and additions (features which have not been seen at all in the project). We present some modifications we deem reasonable as well as one specific addition.

1.Modifications

A number of modifications to the existing system can be made. They fall into two main categories: adding new objects and improving collision detection.

1.Adding New Objects

In order to add a new object, we need to consider its motion (dynamics and kinematics) and its behaviour with respect to other objects (collision handling).

It is only when we add new objects that the advantages of a generic collision handling algorithm become clear.

Consider the scenario in which we do not have a generic collision handling algorithm and two objects exist – a wall and a conveyor. We now wish to add the ball to the environment. We would have to write a function to handle the collision between the ball and the wall, and another function for the ball colliding with the conveyor. The ball/wall function would simply reflect the velocity of the ball in the plane of the wall. The ball/conveyor function would do the same, but also add a sideways velocity to the ball.

So when adding one object we have not only to create the object, but also n collision handling functions (where n is the number of objects already present in the system).

However, since we have a generic collision handling algorithm, we do not need to add any other collision handling functions – we simply define the behaviour of the new object by defining the actions to be taken for the giving and getting of force and velocity.

For example, we could easily add a helium balloon to the system. We give it the same Give and Get Momentum functions as the ball (we wish it to bounce off walls in a similar manner to the ball and to hand over its mass and velocity when exchanging momentum). We give it an empty GetPhysForce method (it applies no force in a collision) and a GivePhysForce method that simply adds the received force onto its current force.

For its dynamics we do not add gravity in the usual downward fashion, but scale and reverse the gravity 'felt' by the balloon in order to give it the appearance of floating upwards. The kinematics of the balloon are the same as those of the ball.

No other modifications to any code need be made whatsoever. Hence adding a new object is confined to modifying code within that object and nothing else – making the job of adding new objects relatively simple.
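
The balloon's Dynamics might be sketched as follows; the scale factor and the +y-up convention are our assumptions:

struct Vector { float x, y, z; };
const float G = 9.8f;

// instead of adding gravity downwards, add a scaled, reversed gravity
// so that the balloon appears to float upwards
void BalloonDynamics (Vector &netForce)
{
   const float scale_factor = 0.5f;   // empirical: how strongly the balloon rises
   netForce.y += scale_factor * G;    // up the y-axis; gravity acts down the y-axis
}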

2.Improving Collision Detection

At present the collision detection is a good basis for future work. The first obvious improvement is to create a hierarchy of bounding boxes to approximate the shape of objects in the environment. This would yield more boxes to check for collisions but would more accurately approximate the objects.

Secondly a sweep-and-prune algorithm could be implemented over and above the existing algorithm. This would involve changing the existing algorithm to check for collisions between boxes that are close to one another instead of checking all the boxes against each other.

We have already alluded to collisions that take into account not only velocities but also the shape of the objects (see section The Algorithm – the discussion on the Round_Coll and Square_Coll variables). This involves a more rigorous solution for the end velocities than at present, but is a reasonable extension to VRPhysicsEnvironment.

Another avenue for improving the collision detection is to allow different bounding volumes to be used. Spherical objects are enclosed more tightly by bounding spheres and rectangular objects by bounding boxes. Hence an algorithm that performs collision detection between same and different types of bounding volumes could add accuracy to the system.

2.A GUI Addition

At present when creating an application that uses VRPhysicsEnvironment, the position of the objects is specified by a .def file containing all the object names and various fields. This is done to prevent having to recompile code when adding more existing objects into the environment or modifying objects at the start of a simulation. As an example, here is an extract from physworld.def, the .def file used for vrphysicsapp:


  1. Object1-Name:object_files/agpad

  2. Object1-Pos:12.0 -25.0 20.0

  3. Object1-Normal:0.0, 1.0, 0.0

  4. Object1-Type:O_agpad

  5. Object1-COLLType:SQUARE


This is tedious when the user wishes to have a large number of objects in the environment for the simulation. So a useful addition to VRPhysicsEnvironment would be a toolkit with a graphical user interface (GUI) to allow the user to create, position and modify objects in the environment prior to or even during a simulation.

Furthermore, additions could be made to the simulation itself. For instance, all collisions are at present perfectly elastic – taking elasticity into account is a viable extension to VRPhysicsEnvironment. Friction and air resistance are further examples of physical properties that could be added to the environment.
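For instance, a coefficient of restitution e (1 for a perfectly elastic collision, 0 for a perfectly inelastic one) could be worked into the momentum exchange. A one-dimensional sketch, using the same masses and velocities as the VRPhysicsBall::GiveMomentum listing in the appendices:

	// Final velocity of object 1 after a head-on collision with restitution e.
	// With e = 1 this reduces to the perfectly elastic formulae used in
	// VRPhysicsBall::GiveMomentum; with e = 0 both objects share one velocity.
	double FinalVelocity (double m1, double v1, double m2, double v2, double e)
	{
	  return (m1 * v1 + m2 * v2 + m2 * e * (v2 - v1)) / (m1 + m2);
	}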

A further area in which extensions could be made is allowing the objects to deform. For instance, when a ball collides with a wall it could squash and then ‘unsquash’ as it leaves the wall.
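A crude sketch of one way to fake this, assuming the Get/SetScale calls used elsewhere in the system and that Scale3D supports the arithmetic shown; the squash factor 0.7 and relaxation rate 0.1 are illustrative values:

	// On impact: squash the ball. (A full implementation would scale along the
	// collision normal only; here the scale is uniform for simplicity.)
	Scale3D rest, squashed;
	GetScale (me, rest);
	squashed = rest * 0.7;            // 0.7 is an illustrative squash factor
	SetScale (me, squashed);

	// On each following frame: relax back towards the rest scale.
	squashed = squashed + (rest - squashed) * 0.1; // 0.1 is an illustrative rate
	SetScale (me, squashed);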

2.Summary

We deal with three major problems that are present in VRPhysicsEnvironment, ranging from the discrete time interval problem, to objects gaining energy, to problems with user controlled objects. We also suggest two approaches to solving the discrete time interval problem – one geometric, involving a new bounding box, and the other analytic, involving a complicated change to the flow of events in CoRgi.

We present several modifications that could enhance and improve VRPhysicsEnvironment – from the easy addition of objects to ways of improving the collision detection algorithm. We also propose that the environment be embedded in a graphical user interface toolkit, which would allow the user more freedom in the system and make creating simulations easier.

At this point it is necessary to point out why such modifications and additions have not been made. The largest factor prohibiting their implementation is time. Most of the additions have been thought through, but we have not had enough time to implement them. A smaller factor, but one still present, is that some modifications are simply additions that do not add anything to the underlying principles of VRPhysicsEnvironment. For example, adding a balloon object is relatively simple but does not change VRPhysicsEnvironment much at all.

Having foreseen that time would be an issue in implementing some of these additions and improvements, we believe we have designed a system that makes such additions fairly easy – we strive to keep the design as extensible as possible.

The next chapter summarises the project as a whole.


13. Conclusion

We show how physical modelling is a field that is beginning to be explored for diverse uses. We also note that such modelling, when done in virtual reality to immerse the user, is computationally expensive. Some systems are graphically rich but not correct in their modelling, while others are exact but not visually appealing.

We show how we set out to create an environment, VRPhysicsEnvironment, that is at once graphically rich and physically correct. We begin with a framework of virtual objects. True to object-oriented design principles, we create a base class, VRPhysicsEntity, defining a number of specific methods that each object then inherits.

These include the methods that dictate an object’s movement in the environment. Dynamics is the method that resolves the forces acting on the object. Kinematics then extrapolates, from frame to frame, the change in position from the object’s velocity, the change in velocity from its acceleration, and the change in acceleration from the net force acting on it.

Object interactions are defined in terms of Get and Give methods. These methods are used to ‘exchange’ values such as force and momentum. This model of interactions is chosen because it lends itself to a generic collision handling algorithm.

The design of VRPhysicsEnvironment is forward looking – it keeps in mind that extending such a system is not usually easy. It therefore focuses on the part of the system that is most difficult to extend – collision handling – and seeks to make this algorithm as general as possible, so that it becomes independent of extensions to the system. We present the design and implementation of a generic collision handling algorithm and show how the objects use it.

Closely related to collision handling is, of course, collision detection. We therefore survey current research in this area and select an existing package and algorithm to detect collisions in our environment.

We show how we use oriented bounding boxes (and why we choose these instead of bounding spheres) to detect collisions using the idea of separating axes. We expound the theoretical principles as well as how they were implemented.
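To recap the central test: if $L$ is a candidate separating axis, $T$ the vector between the centres of boxes $A$ and $B$, and $a_i$, $b_i$ the half-dimensions of the boxes along their axes $A_i$, $B_i$, then $L$ separates $A$ and $B$ when

$$|T \cdot L| > \sum_{i=1}^{3} |a_i (A_i \cdot L)| + \sum_{i=1}^{3} |b_i (B_i \cdot L)|$$

If none of the fifteen candidate axes separates the boxes, they overlap.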

This framework means little without an application. For this purpose we create two applications – vrphysicsapp and vrttapp. vrphysicsapp shows the objects we created in motion – the user positions them via a .def file and watches the simulation run to completion. In vrttapp, however, the user controls two additional objects we add to the system – a hand and a bat – in order to play virtual table tennis.

Having seen the environment in operation, we are able to determine some of the major problems with the system. The most fundamental of these is the difficulty of modelling continuous events in discrete time intervals – especially when these intervals are large. We then suggest some approaches to solving this problem – from a change in the flow of events in CoRgi to more rigorous calculation of the events that occur within a time interval.

We also show how to extend the system – from adding objects, to improving collision detection, to adding a user interface that gives the user more control over the simulation and greater ease in placing objects beforehand.

1.Future Work

We discuss in detail in chapter 9 the additions and modifications we foresee VRPhysicsEnvironment undergoing in the future – from adding a graphical user interface for positioning the objects, to improving collision detection and adding more physical variables to the simulation.

Overall we believe that VRPhysicsEnvironment is a good basis for future work in the area of physical modelling – its design is simple and will enable future researchers to concentrate more on physical modelling than on programming.


14.List of Figures

Figure 1 - Two spheres, A and B, can easily be tested for intersection. 15

Figure 2 - Two AABB's, C and D, must overlap along both axes to intersect. 16

Figure 3 - Two AABB's which overlap along only one axis do not intersect. 16

Figure 4 - Representing an OBB. 17

Figure 5 - 2D representation of a round object and a rectangular object enclosed by the opposite bounding volume. 31

Figure 6 - L is a separating axis for A and B since the projected intervals are disjoint. 34

Figure 7 - Diagrammatic view of the flow of events in CoRgi. 38

Figure 8 - The design philosophy of VRPhysicsEntities. 47

Figure 9 - Extract of the CoRgi object hierarchy tree. 49

Figure 10 - The VRPhysicsEntity class diagram. 49

Figure 11 - The table tennis application. 62

Figure 12 - Three frames of a bat-and-ball collision showing penetration of the ball into the bat. 66

Figure 13 - The ball penetrates into the wall. 79

Figure 14 - The geometric approach - a bounding box is constructed around the ball in positions 1 and 3. 80

Figure 15 - An improved flow of events. 81






16.References

[1] Internet.Com Webopedia online encyclopaedia. http://webopedia.internet.com/Multimedia/Virtual_Reality/virtual_reality.html

[2] Eric Larsen, Stefan Gottschalk, Ming C. Lin and Dinesh Manocha. Fast Proximity Queries with Swept Sphere Volumes. Technical Report TR99-018, Department of Computer Science, UNC Chapel Hill, page 3.

[3] S. Gottschalk, M.C. Lin and D. Manocha. OBBTree: A Hierarchical Structure for Rapid Interference Detection. In Computer Graphics (SIGGRAPH ‘96 Proceedings), August 1996.

[4] Stefan Gottschalk. RAPID home page. http://www.cs.unc.edu/~geom/OBB/OBBT.html (Version 2.01 link). 1997.

[5] Stefan Gottschalk. RAPID home page. http://www.cs.unc.edu/~geom/SSV/rapid.html. 1997.

[6] Stefan Gottschalk. RAPID home page. http://www.cs.unc.edu/~geom/SSV/features.html. 1997.

[7] Jonathan D. Cohen, Ming C. Lin, Dinesh Manocha and Madhav K. Ponamgi. I-Collide: An Interactive and Exact Collision Detection System for Large-Scale Environments. Department of Computer Science, UNC Chapel Hill.

[8] Stefan Gottschalk. RAPID home page. http://www.cs.unc.edu/~geom/collide_packages.html (under I-Collide heading). 1997.

[9] Thomas C. Hudson, Ming C. Lin, Jonathan Cohen, Stefan Gottschalk and Dinesh Manocha. V-Collide: Accelerated Collision Detection for VRML. Department of Computer Science, UNC Chapel Hill.

[10] Stefan Gottschalk. RAPID home page. http://www.cs.unc.edu/~geom/collision_code.html (under V-Collide heading). 1997.

[11] G. van den Bergen. SOLID home page. http://www.win.tue.nl/cs/tt/gino/solid/. 1999.

[12] G. van den Bergen. Efficient Collision Detection of Complex Deformable Models using AABB Trees. Journal of Graphics Tools, 2(4):1-13, 1997.

[13] Hartmut Keller, Horst Stolz, Andreas Ziegler, Thomas Braunl. Virtual Mechanics – Simulation and Animation of Rigid Body Systems. Computer Vision Group, University of Stuttgart, 1996.

[14] James Cremer, George Vanecek. Isaac: Building Simulations for Virtual Environments. Computer Science Department, University of Iowa, 1997.

[15] C. Lubich, U. Nowak, U. Pole and C. Engstler. MEXX – Numerical Software for the Integration of Constrained Mechanical Systems. Technical Report SC-92-12, Konrad-Zuse-Zentrum für Informationstechnik, Berlin, 1992.

[16] Official Publication of the National Aeronautics and Space Administration, Vol. 16, No. 9, September 1992.

[17] Michael Rorke, Shaun Bangay, Peter Wentworth. Virtual Reality Interaction Techniques. Computer Science Department, Rhodes University, 1998.

[18] D. Norman. The Design of Everyday Things. Doubleday, New York, 1990.

[19] Doug A. Bowman and Larry F. Hodges. User Interface Constraints for Immersive Virtual Environment Applications. Technical Report TR95-26, Graphics, Visualisation and Usability Centre, Georgia Institute of Technology, 1995.


18.Appendices

1.GetAllCollision Method (section 3.5.2)

	current = GetTheFirstObject();
	while (current != INVALIDOBJECTID)
	{
	  currbox = GetBoundingBox (current); // bounding box of the first object
	  currvals = getObjectAttributes (current);
	  currpos = currvals->Position;
	  curr_at = GetBoxPosition (current); // position of its bounding box
	  currmov = current->GetIsMovable();
	  next = GetNextThingAfter (current);
	  while (next != INVALIDOBJECTID)
	  {
	    nextmov = next->GetIsMovable();
	    if (!(nextmov == 0 && currmov == 0)) // skip pairs of immovable objects
	    {
	      nextbox = GetBoundingBox (next);
	      nextvals = getObjectAttributes (next);
	      next_at = GetBoxPosition (next);
	      diff = next_at - curr_at; // vector between the two boxes
	      Rotq = next.orientation / curr.orientation; // relative orientation
	      Rotq.normalise();
	      Rotq.RotationMatrix (R); // as a rotation matrix
	      currbox->GetRadii (aradii); // half-dimensions of the two boxes
	      nextbox->GetRadii (bradii);
	      GetT (diff, T);
	      colldetected = obb_disjoint (R, T, aradii, bradii);
	      if (colldetected == 0) // no separating axis exists: a collision
	      {
	        if (next != oldnext) // avoid handling the same pair twice
	        {
	          result = 1;
	          oldnext = next;
	          DealWithCollision (current, next);
	          break;
	        }
	      }
	    }
	    next = GetNextThingAfter (next);
	  }
	  current = GetNextThingAfter (current);
	}

2. ThreadRoutine (section 4.2.1)

	tdiff = lasttime.interval (); // read the timer
	lasttime.mark (); // reset the timer
	Dynamics (tdiff);
	Kinematics (tdiff);
	SetAbsoluteForce (me, 0);

3. Dynamics (section 4.3.1)

	if (m > 0.0) // check if object has mass
	{
	  gravf = G * Vector3D (0.0, -1.0, 0.0);
	  SetForce (me, gravf); // add gravf to me
	}

4. Kinematics (section 4.4.1)

	GetPosition (me, p); // get my initial position
	GetVelocity (me, v); // get my initial velocity
	GetAcceleration (me, a); // get my initial acceleration
	GetMass (me, m); // get my mass
	GetForce1 (me, f); // get the force acting on me
	newp = (v * tscale); // calculate how much I must move
	newv = a * tscale; // calculate my new velocity
	if (m > 0.0) // if I have mass
	{
	  newa = ((1.0 / m) * f) - a; // calculate my new acceleration
	}
	else // I don’t have mass
	{
	  newa = Vector3D (0.0, 0.0, 0.0); // zero my acceleration
	}
	SetPosition (me, newp); // set my new position
	SetVelocity (me, newv); // set my new velocity
	SetAcceleration (me, newa); // set my new acceleration

5. DealWithCollision (section 6.2)

	void DealWithCollision (objectID first, objectID second)
	{
	  entity1 = FindPhysicsEntity (first); // get a pointer to 1
	  entity2 = FindPhysicsEntity (second); // get a pointer to 2
	  type1 = entity1->GetPhysType(); // get type of 1
	  type2 = entity2->GetPhysType(); // get type of 2
	  GetPosition (first, ent1pos); // get position of 1
	  GetPosition (second, ent2pos); // get position of 2
	  diff = ent2pos - ent1pos; // get vector from 1 to 2
	  diff = diff.normalize(); // normalise this vector
	  entity1->GetMomentum (m1, v1); // get momentum of 1
	  entity2->GetMomentum (m2, v2); // get momentum of 2
	  entity1->GetPhysForce (f1); // get force to apply from 1
	  entity2->GetPhysForce (f2); // get force to apply from 2
	  entity1->GivePhysForce (f2); // apply 2’s force to 1
	  entity2->GivePhysForce (f1); // apply 1’s force to 2
	  if (entity1->stationary()) // object 1 is stationary
	  {
	    n = entity1->GetNormal(); // get normal of 1
	    entity2->GiveMomentum (type1, m1, v1, n); // reflect 2’s v in plane of 1
	  }
	  else
	  {
	    if (entity1->GetCollType() == ROUNDCOLL &&
	        entity2->GetCollType() == ROUNDCOLL)
	    { // handle round-round collision
	      entity2->GiveMomentum (type1, m1, v1, diff); // swap momentum
	    }
	    else // other collisions
	      entity2->GiveMomentum (type1, m1, v1); // swap momentum
	  }
	  if (entity2->stationary()) // object 2 is stationary
	  {
	    n = entity2->GetNormal(); // normal of surface of 2
	    entity1->GiveMomentum (type2, m2, v2, n); // reflect 1’s v in plane of 2
	  }
	  else
	  {
	    if (entity1->GetCollType() == ROUNDCOLL &&
	        entity2->GetCollType() == ROUNDCOLL)
	    { // handle round-round collision
	      entity1->GiveMomentum (type2, m2, v2, diff); // swap momentum
	    }
	    else // other collisions
	      entity1->GiveMomentum (type2, m2, v2); // swap momentum
	  }
	}

6. VRPhysicsConveyor::GetMomentum (section 6.3.3)

	v = Vector3D (1.0, 0.0, 0.0); // vector along positive x-axis
	GetOrientation (me, q); // put my orientation in q
	v = q * v; // v now points along my length – returned to caller
	m = 10.0; // a number for my mass – returned to caller

7. VRPhysicsBall::GiveMomentum (section 6.4.1)

	GetMass (me, m); // get my mass
	GetVelocity (me, currv); // get my current velocity
	v = currv; // store it
	if (t == O_wall) // if I collided with a wall
	{
	  v.ReflectVectorInPlane (n); // reflect my velocity in wall’s plane
	}
	else
	{
	  if (m2 != -1) // not an infinite mass
	  {
	    factor1 = (m - m2) / (m + m2);
	    factor2 = (2 * m2) / (m + m2);
	    v = factor1 * currv;
	    v2 = factor2 * v2;
	    v = v + v2; // final velocity I need to be at after collision
	    if (n != zerovector) // if I hit a conveyor
	      v.ReflectVectorInPlane (n); // reflect my velocity
	  }
	}
	v = v - currv; // work out difference
	SetVelocity (me, v); // add the difference
	if (t == O_ttbat) // if I hit a bat
	{
	  Scale3D mesc;
	  GetScale (me, mesc);
	  Scale3D sc (0.0025, 0.0025, 0.0025); // empirical value
	  sc = sc * mesc;
	  Vector3D newp = v * sc; // scale offset
	  SetPosition (me, newp); // set my position off a little
	}

8. VRVelocityPolhemusInputActor::HandleData (section 7.2)

	tdiff = lasttime.interval (); // read time interval passed
	if (tdiff == 0.0) tdiff = 0.00000001; // prevent div by 0
	lasttime.mark (); // reset timer
	vc = VRInputCoordinates (somedata[1]); // get info from device server
	diff = vc->Position - lastpos; // distance moved
	if (diff.length() > 0.01) // if greater than threshold value
	{
	  diff = diff / tdiff; // divide by time to get velocity
	  lastpos = vc->Position; // reset last position
	  Parent->SetInputVelocity (diff); // send velocity to parent
	}
	Parent->SetInputCoordinates (vc); // send position and orientation to parent

9. VRPhysicsBall::GiveMomentum (section 7.4)

	if (t == O_ttbat) // if the ball has collided with the bat
	{
	  GetScale (me, mesc); // get my scale
	  Scale3D sc (0.0025, 0.0025, 0.0025); // empirical scale factor
	  sc = sc * mesc; // scale factor by my scale
	  newp = v * sc; // distance to be moved
	  SetPosition (me, newp); // move that distance
	}




19.Disc Appendix

The CD accompanying this dissertation contains the entire CoRgi repository. To install and run it, copy the entire /pilot directory onto a hard disk. Type buildconf from the pilot directory on the hard disk and follow the rest of the instructions.

Once the code has compiled and linked, change to the pilot/src/apps directory and type make vrphysicsapp. The reader will gain most benefit from running vrphysicsapp, since no input is required. The physworld.def file can be modified to place different objects in different places.

The /movies directory of the disc contains two movie files – physics and tablet. These are similar simulations to the ones used in chapter 8.



1. Polytopes are 4-dimensional shapes – polygon is to polyhedron what polyhedron is to polytope.

2. Orientation is the term used to describe the rotation of an object about its centre.

3. This example is done in two dimensions (2D) for simplicity, but can easily be extended to 3D.

1. Once again the example is discussed in 2D but can easily be extended to 3D.

1. This is a set of object-axes – that is, a set of axes particular to the box. When the box undergoes the transformations rotate and translate, the box axes do too. Hence they are independent of the orientation of the enclosed object.

1. Convexity is a topological feature that constrains the shape of an object – for an object to be convex, a line between any two vertices of the object may not intersect the surface of the object.

1. World space is the object’s (or polygon’s) position relative to the world co-ordinates as opposed to the object’s co-ordinates.

1. The makers of AERO used POVRAY.

2. A cuboid is a right parallelepiped whose length, height and width may differ but all surfaces have right angles at their corners.

3. A rigid body is an object that never changes its shape.

4. Objects that touch apply force over large areas, as opposed to impulse forces, which are approximated to a contact point.

5. A quaternion is a compact way to represent an arbitrary rotation about an arbitrary axis.

1. Sphere with sphere, sphere with cuboid, sphere with cylinder and so forth.

1. An inertia matrix is a way of describing the distribution of mass within an object.

1. Values that are in bold type are vectors.

1. Note that the operation denoted is a vector cross product, not a scalar multiply.

1. For example, if we have three objects, A, B and C, then the interactions are AA, BB, CC, AB, AC and BC – a total of six.

1. This method is different to GetPhysForce (see section GetPhysForce).
