Doctor of Philosophy
Optical flow computation is one of the oldest and most active research fields in computer vision and image processing. Its applications include motion estimation, video compression, object detection and tracking, image dominant-plane extraction, movement detection, robot navigation, visual odometry, traffic analysis, and vehicle tracking. Optical flow methods calculate the motion between two image frames. In 2D images, optical flow specifies how far each pixel moves between adjacent frames; in 3D images, it specifies how much each voxel moves between adjacent volumes in the dataset. Since 1980, several algorithms have successfully estimated 2D and 3D optical flow. Notably, scene flow and range flow are special cases of 3D optical flow. Scene flow is the 3D optical flow of pixels on a moving surface; it computes 3D motion from the disparity and disparity-gradient maps of a stereo sequence together with the 2D optical flow of the left and right images. Range flow is similar to scene flow, but is calculated from depth-map sequences or range datasets. Because the algorithms that compute scene flow and range flow clearly overlap, we propose new insights that can help range flow algorithms advance to the next stage: we enhance them to handle large displacements using a hierarchical framework with a warping technique, and we apply robust statistical formulations to generate robust, dense flow that overcomes motion discontinuities and reduces outliers.

Overall, this thesis focuses on the estimation of 2D optical flow and 3D range flow using several algorithms. In addition, we studied depth data obtained from different sensors and cameras. These cameras provided RGB-D data that allowed us to compute 3D range flow in two ways: using depth data only, or by combining intensity with depth data to improve the flow.
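The 2D and 3D definitions above both rest on the standard brightness-constancy assumption. A sketch of the resulting constraint equations, in conventional notation (not the thesis's specific formulation):

```latex
% Brightness constancy: a point keeps its intensity as it moves
I(x + u,\; y + v,\; t + 1) = I(x,\; y,\; t)

% First-order Taylor expansion gives the 2D optical flow constraint
I_x u + I_y v + I_t = 0

% For 3D data, an analogous constraint holds per voxel,
% with a third component w along the z-axis
I_x u + I_y v + I_z w + I_t = 0
```

Each constraint is a single equation in two (or three) unknowns, which is why local methods such as LK aggregate it over a window and global methods such as HS add a smoothness term.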
We implemented the well-known local (Lucas-Kanade, LK) and global (Horn-Schunck, HS) algorithms and recast them in the proposed framework to estimate 2D optical flow and 3D range flow. Furthermore, the combined local-global (CLG) algorithm proposed by Bruhn et al. [4,5], as well as the method of Brox et al., were implemented to estimate 2D optical flow and 3D range flow. We tested and evaluated these approaches both qualitatively and quantitatively under two different motions (translation and divergence) using several real datasets acquired with a Kinect V2, a ZED camera, and an iPhone X (front and rear cameras). We found that the CLG and Brox methods gave the best results on our Kinect V2, ZED, and iPhone X front-camera sequences.
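To illustrate the local least-squares idea behind the LK-style estimators discussed above, here is a minimal sketch that recovers a single (u, v) translation between two frames by solving the over-determined flow constraint in a least-squares sense. This is an illustrative toy (one window covering the whole image), not the thesis implementation:

```python
# Minimal Lucas-Kanade-style least-squares flow sketch (illustrative only,
# not the thesis implementation): estimates one global (u, v) translation.
import numpy as np

def lucas_kanade(I1, I2):
    """Solve [Ix Iy] [u v]^T = -It in a least-squares sense over the frame."""
    I1 = I1.astype(np.float64)
    I2 = I2.astype(np.float64)
    # Central-difference spatial gradients and temporal difference.
    Ix = (np.roll(I1, -1, axis=1) - np.roll(I1, 1, axis=1)) / 2.0
    Iy = (np.roll(I1, -1, axis=0) - np.roll(I1, 1, axis=0)) / 2.0
    It = I2 - I1
    # Trim one-pixel borders where np.roll wraps around.
    Ix, Iy, It = Ix[1:-1, 1:-1], Iy[1:-1, 1:-1], It[1:-1, 1:-1]
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v
```

Applied to a smooth synthetic blob shifted one pixel to the right, the estimator returns u close to 1 and v close to 0; real LK implementations aggregate the same constraint over small local windows to obtain a dense field.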
Summary for Lay Audience
Optical flow can be defined as the estimation of motion in a set of image sequences. It can be computed from 2D data, i.e., regular images captured by typical cameras, to estimate 2D optical flow. 2D optical flow gives a motion (u, v) in the image, which describes how far the object moves in the x-direction (u) and the y-direction (v). Using 3D data, however, we can estimate 3D optical flow (u, v, w). The first two components (u, v) describe the object's motion in the x- and y-directions, respectively, and the third component (w) describes the object's motion in the z-direction. This study shows how to estimate a special kind of 3D flow: the motion of the surface of a moving object. This type of flow is scene flow or range flow. This particular flow needs 2D data in addition to depth information from the scene or image. Depth information gives the distance of the object from the camera, measured in units such as millimeters, and can be extracted using special kinds of cameras and sensors. In this thesis, we estimated 2D optical flow and 3D range flow using six different classic approaches: Least Squares (LS), Total Least Squares (TLS), Global Regularization (Global), Indirect Global Regularization (Global in), Combined Local and Global (CLG), and finally the Brox method. Our research extended some classic algorithms derived from 2D optical flow to produce 3D range flow. We also added some principles and parameters to better estimate both 2D and 3D flow. We tested our approaches with three different data types: intensity images only, depth data only, and depth data combined with intensity. We also generated new data sequences using different cameras and sensors, including the Kinect V2, ZED, and iPhone X. We evaluated our approaches using various datasets and the generated data, which we analyzed in both quantitative and qualitative terms.
Noorwali, Seereen, "Range Flow: New Algorithm Design and Quantitative and Qualitative Analysis" (2020). Electronic Thesis and Dissertation Repository. 6991.