Electronic Thesis and Dissertation Repository

Thesis Format



Master of Engineering Science


Electrical and Computer Engineering

Collaborative Specialization

Planetary Science and Exploration


McIsaac, Kenneth


Rover navigation on planetary surfaces currently uses a method called "blind drive," which requires a navigation goal as input from operators on Earth and uses camera images to autonomously detect obstacles. Images are affected by lighting conditions, lose accuracy at long distances, and are unusable in the dark; these factors limit the autonomous capabilities of rovers. Improving a rover's ability to autonomously detect obstacles would expand the capabilities of future missions, for example by enabling exploration of permanently shadowed regions and by allowing faster driving speeds and longer traverses. This thesis demonstrates how Lidar point clouds can be used to autonomously and efficiently segment planetary terrain, identifying obstacles for safe rover navigation. Two Lidar datasets representing planetary environments with rock obstacles and sandy terrain were used to train a neural network to perform semantic segmentation. The network is based on the RandLA-Net architecture, which performs semantic segmentation on point clouds efficiently by using a random sampling algorithm that does not modify the point cloud structure. Methods for handling the class imbalance of the datasets were explored to enable the model to learn the minority class and to optimize its performance. The model achieved a recall of 94.46% and a precision of 84.93% with a processing time of 0.6238 seconds per point cloud on an Intel Xeon E5-2665 CPU, indicating that it is possible to use Lidar point clouds to perform semantic segmentation on-board planetary rovers with similar compute capabilities.
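The abstract mentions handling class imbalance and reports precision and recall for the obstacle class, but does not reproduce the exact formulation here. As an illustrative sketch only, one common approach is to weight each class's loss contribution by the inverse of its frequency, so that the rare obstacle class is not drowned out by the dominant terrain class; the reported metrics are then the standard per-class precision and recall. The function names and the specific weighting scheme below are assumptions for illustration, not the thesis's actual implementation:

```python
from collections import Counter

def inverse_frequency_weights(labels, num_classes):
    """Per-class loss weights inversely proportional to class frequency.

    Rare classes (e.g. rock obstacles in mostly-sand scenes) receive
    larger weights, so a weighted loss penalizes missing them more.
    """
    counts = Counter(labels)
    total = len(labels)
    weights = []
    for c in range(num_classes):
        freq = counts.get(c, 0) / total
        weights.append(1.0 / (freq + 1e-6))  # epsilon guards empty classes
    # Normalize so the weights average to 1 across classes.
    s = sum(weights)
    return [w / s * num_classes for w in weights]

def precision_recall(pred, truth, positive=1):
    """Precision and recall for the positive (obstacle) class."""
    tp = sum(p == positive and t == positive for p, t in zip(pred, truth))
    fp = sum(p == positive and t != positive for p, t in zip(pred, truth))
    fn = sum(p != positive and t == positive for p, t in zip(pred, truth))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```

With a 90/10 terrain/rock split, `inverse_frequency_weights` assigns the rock class roughly nine times the terrain weight, which is the intended effect of any inverse-frequency scheme.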

Summary for Lay Audience

Planetary rovers are designed to explore the solid surfaces of planets such as Mars and of other planetary bodies such as the Moon. Because of the distance between Earth and these bodies, communicating with a rover can take a long time, so the rover cannot be driven directly by operators on Earth. Instead, operators give the rover a location to drive to, and the rover must detect and avoid harmful obstacles, such as rocks, cracks, or cliffs, on its own. Obstacles are currently located using camera images: the rover finds the obstacles in the images and avoids them while it drives. Camera images can be affected by lighting conditions, are not very accurate at long distances, and do not work in the dark. This research proposes using a Light Detection and Ranging (Lidar) sensor rather than a camera to find obstacles. Lidar sensors create point clouds by measuring the distance to their surroundings with laser beams, so they are not affected by lighting conditions, do not require a light source, and are more accurate at long distances than cameras. One reason Lidars have not been used on rovers in the past is that finding obstacles within the point clouds they produce requires a lot of computing power; however, new techniques have reduced the computing power needed, making it more realistic to use a Lidar on a rover to find obstacles. Datasets containing rocks and sandy ground, representing planetary surfaces, were used to train machine learning models to find the rock obstacles in these scenes. Overall, the machine learning model correctly separated the rock obstacles from the ground, which suggests that Lidars could be used on future rover missions to improve a rover's ability to detect obstacles.

Included in

Robotics Commons