Electronic Thesis and Dissertation Repository

Thesis Format

Integrated Article

Degree

Doctor of Philosophy

Program

Biomedical Engineering

Supervisor

Carson, Jeffrey

2nd Supervisor

Diop, Mamadou

Abstract

Fringe Projection Profilometry (FPP) is a popular method for non-contact optical surface measurements, including motion tracking. The technique derives 3D surface maps from phase maps estimated from the distortions of fringe patterns projected onto the surface of an object. Estimation of phase maps is commonly performed with spatial phase retrieval algorithms that use a series of complex data processing stages. Because simple research-oriented software tools are scarce, researchers must have advanced data analysis skills to process FPP data. Chapter 2 describes a comprehensive FPP software tool called PhaseWare™ that allows novice to experienced users to perform pre-processing of fringe patterns, phase retrieval, phase unwrapping, and post-processing. Fringe patterns must be acquired sequentially to sample the surface densely enough to estimate surface profiles accurately, but sequential fringe acquisition performs poorly if the object moves between fringe projections. To overcome this limitation, we developed a novel method of FPP called multispectral fringe projection profilometry (MFPP), in which multiple fringe patterns are composited into a multispectral illumination pattern and a single multispectral camera captures the composite in one frame. Chapter 3 introduces this new technique and shows how it can be used to perform 3D profilometry at video frame rates. Although the first attempt at MFPP significantly improved acquisition speed, it did not fully satisfy the condition for temporal phase retrieval, which requires at least three phase-shifted fringe patterns to characterize a surface. To overcome this limitation, Chapter 4 introduces an enhanced version of MFPP that uses a specially designed multispectral illuminator to simultaneously project four π/2 phase-shifted fringe patterns onto an object. Combined with spectrally matched multispectral imaging, the refined MFPP method provided complete data for temporal phase retrieval from a single camera exposure, thereby maintaining the high sampling speed needed for profilometry of moving objects. In conclusion, MFPP overcomes the limitations of sequential sampling imposed by FPP with temporal phase extraction without sacrificing data quality or accuracy of the reconstructed surface profiles. Since MFPP has no moving parts and is based on MEMS technology, it is amenable to miniaturization for use in mobile devices and may be useful for space-constrained applications such as robotic surgery.
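To make the temporal phase retrieval condition concrete, the following minimal sketch (Python/NumPy, not the PhaseWare™ implementation; function names and the unwrapping choice are illustrative assumptions) shows how four π/2 phase-shifted fringe images yield a wrapped phase map via the standard four-step formula, followed by a simple unwrapping step:

    import numpy as np

    def four_step_wrapped_phase(I1, I2, I3, I4):
        """Wrapped phase from four fringe images shifted by 0, pi/2, pi, 3*pi/2.

        Standard four-step temporal formula, applied pixel-wise to
        intensity images of equal shape: phi = atan2(I4 - I2, I1 - I3).
        """
        return np.arctan2(I4 - I2, I1 - I3)

    def unwrap_phase_rows(phi_wrapped):
        """Naive 1D unwrapping along each row; practical pipelines use
        more robust 2D unwrapping (e.g. quality-guided algorithms)
        before converting phase to surface height."""
        return np.unwrap(phi_wrapped, axis=1)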

Fringe Projection Profilometry (FPP) is a popular method for non-contact optical surface measurements such as motion tracking. The technique derives 3D surface maps from phase maps estimated from the distortions of fringe patterns projected onto the surface of the object. To estimate surface profiles accurately, sequential acquisition of fringe patterns is required; however, sequential fringe projection and acquisition perform poorly if the object is in motion during the projection. To overcome this limitation, we developed a novel method of FPP named multispectral fringe projection profilometry (MFPP). The proposed method uses a multispectral filter array (MFA) to generate multiple fringe patterns from a single illumination, producing a composite multispectral pattern that is captured with a single multispectral camera. A single camera acquisition therefore provides multiple fringe patterns, which directly increases imaging speed by a factor equal to the number of fringe patterns included in the composite pattern. Chapter 3 introduces this new technique and shows how it can be used to perform 3D profilometry at video frame rates. The first attempt at MFPP improved acquisition speed by a factor of eight by providing eight different fringe patterns in four different directions, which permitted the system to detect more morphological detail. However, the phase retrieval algorithm used in this method was based on a spatial phase stepping process with several limitations, including high sensitivity to the quality of the fringe patterns and, being a global process, spreading the effect of noisy pixels across the entire result. To overcome these limitations, Chapter 4 introduces an enhanced version of MFPP that uses a specially designed multispectral illuminator to simultaneously project four π/2 phase-shifted fringe patterns onto an object. Combined with a spectrally matched multispectral camera, the refined MFPP method provides the data needed for temporal phase retrieval from a single camera exposure. It thus delivers high accuracy and pixel-wise measurements (thanks to temporal phase stepping algorithms) while maintaining a high sampling rate for profilometry of moving objects. In conclusion, MFPP overcomes the limitations of sequential sampling imposed by FPP with temporal phase extraction without sacrificing data quality or accuracy of the reconstructed surface profiles. Since MFPP has no moving parts and is based on MEMS technology, it is amenable to miniaturization for use in mobile devices and may be useful for space-constrained applications such as robotic surgery.
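As a complementary illustration, the sketch below (again Python/NumPy) shows how a single multispectral exposure could be split into its spectral channels and passed to the same four-step formula; the channel-to-phase-shift mapping here is an assumption made for illustration only, and the actual band assignment and calibration are described in Chapter 4:

    import numpy as np

    def phase_from_multispectral_frame(frame):
        """Single-exposure wrapped phase for an MFPP-style measurement.

        Assumes `frame` has shape (H, W, 4) and that the four spectral
        channels carry fringe patterns shifted by 0, pi/2, pi, and
        3*pi/2 in that order (an assumption for this sketch only).
        """
        I1, I2, I3, I4 = (frame[..., k].astype(float) for k in range(4))
        # Four-step temporal formula applied pixel-wise to one exposure.
        return np.arctan2(I4 - I2, I1 - I3)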

Summary for Lay Audience

Objects in our surroundings can be characterized as three-dimensional (3D) using the concepts of width, height, and depth. It is essential to quantify these dimensions for any system that attempts to represent our world in a realistic manner. Conventional cameras capture objects in two dimensions and generally do not sense depth. In contrast, a 3D camera illuminates the object with a beam of light shaped into a known pattern and then infers the depth of the object from the changes in the light pattern. Generally, 3D cameras need to capture multiple camera snapshots to get enough data for a single 3D image, so they are not suitable for measuring fast-moving objects. During my Ph.D. research, I developed a new type of pattern generator that gives 3D cameras the ability to capture enough data for a 3D image from a single camera snapshot. The pattern generator uses a small filter to project multiple overlapping yet distinctive color patterns onto the object. The 3D camera incorporates a multispectral sensor that separates each distinctive color pattern. With this new technology, I was able to modify a 3D camera and measure fast-moving objects without motion blur. With further development, this new technology could be miniaturized for use in hand-held devices such as smartphones, giving users a way to quickly measure the world around them in 3D.
