Sensing the Physical World

Abstract

Capturing three-dimensional data of the surface topography of real objects (3D scanning) is the basis of modern automated 3D modelling methods, and of the computer vision methods that allow autonomous agents to understand and act within real-world environments. Single-sensor scanning systems produce only 2.5D data and require relative motion between the sensor and the scene to obtain full 3D data.
This research investigates the simultaneous use of multiple, inexpensive 2.5D Kinect(TM) sensors to capture 3D topographic data of a static scene. In particular, sensor calibration and data registration are investigated to determine appropriate methods for overcoming known sources of error in stereo vision. Previously reported interference effects between multiple structured-light depth sensors are also investigated and found to be mitigated in overlapping point clouds. The outcome of this research is a system design, including algorithms, for a multi-Kinect 3D scanning system. The system was validated by scanning mannequin heads, successfully producing non-occluded point cloud models of the face. These results are significant because they demonstrate a new capacity to capture accurate colour and topographic data for novel applications such as 3D facial recognition.
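
The abstract refers to calibrating multiple sensors and registering their overlapping point clouds. As an illustrative aside only (not the project's published algorithm), the sketch below shows one standard way to estimate the rigid transform between two Kinect views from already-matched 3D points, using the SVD-based Kabsch solution; the point data, noise level, and function name are hypothetical.

import numpy as np

def estimate_rigid_transform(src, dst):
    """Return R (3x3) and t (3,) such that R @ src[i] + t approximates dst[i]."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)

    # Centre both point sets on their centroids.
    src_mean = src.mean(axis=0)
    dst_mean = dst.mean(axis=0)
    src_c = src - src_mean
    dst_c = dst - dst_mean

    # Cross-covariance matrix; its SVD yields the least-squares rotation.
    H = src_c.T @ dst_c
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T

    # Guard against a reflection (determinant of -1).
    if np.linalg.det(R) < 0:
        Vt[-1, :] *= -1
        R = Vt.T @ U.T

    t = dst_mean - R @ src_mean
    return R, t

if __name__ == "__main__":
    # Hypothetical matched points seen by two sensors viewing the same target.
    pts_a = np.random.rand(100, 3)
    true_R = np.array([[0.0, -1.0, 0.0],
                       [1.0,  0.0, 0.0],
                       [0.0,  0.0, 1.0]])
    true_t = np.array([0.5, -0.2, 1.0])
    pts_b = pts_a @ true_R.T + true_t + 0.001 * np.random.randn(100, 3)

    R, t = estimate_rigid_transform(pts_a, pts_b)
    aligned = pts_a @ R.T + t
    print("RMS alignment error:", np.sqrt(((aligned - pts_b) ** 2).sum(axis=1).mean()))

In a multi-sensor setup, a transform of this kind would typically be estimated per sensor pair from a shared calibration target, then refined with an iterative method such as ICP on the overlapping regions of the point clouds.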

Technical Reports

[1] Cameron Starkey. Spatial sound for representing the location of virtual objects. Honours Project Report, GIVE group, School of Information Technology, Deakin University, Australia, October 2015.

[2] Philip Kolar. Single target tracking in range-only directional sensor networks. Honours Project Report, GIVE group, School of Information Technology, Deakin University, Australia, October 2014.