Electronic Thesis and Dissertation Repository

Thesis Format

Integrated Article

Degree

Doctor of Philosophy

Program

Electrical and Computer Engineering

Collaborative Specialization

Planetary Science and Exploration

Supervisor

McIsaac, Ken

2nd Supervisor

Osinski, Gordon

Co-Supervisor

Abstract

The rise in the number of robotic missions to space is paving the way for the use of artificial intelligence and machine learning in the autonomy and augmentation of rover operations. On the one hand, more rovers mean more images, and more images mean more data bandwidth required for downlinking as well as more mental bandwidth for analyzing the images. On the other hand, lightweight, low-powered microrover platforms are being developed to meet the demand for planetary exploration. Because of their mass and power constraints, these microrover platforms will not carry typical navigational instruments such as a stereocamera or a laser rangefinder, relying instead on a single, monocular camera.

The first project in this thesis explores novelty detection, where the goal is to find 'new' and 'interesting' features so that, instead of downlinking a whole set of images, the algorithm can simply flag any image containing novel features and prioritize it for downlink. This form of data triage allows the science team to redirect its attention to objects that could be of high science value. This project introduces a combination of a Convolutional Neural Network (CNN) and a K-means algorithm as a tool for novelty detection. By leveraging the powerful feature extraction capabilities of a CNN, typical images can be tightly clustered into the number of expected entities within the rover's environment. An image's novelty score is then defined as the distance between its extracted feature vector and the closest cluster centroid; a novel image will therefore have a significantly greater distance to the cluster centroids than a typical image. The algorithm was trained on images obtained from the Canadian Space Agency's Analogue Terrain Facility and was shown to be effective in capturing the majority of the novel images within the dataset.
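A minimal sketch of this CNN-plus-K-means scoring scheme is shown below. The thesis does not specify the network, cluster count, or flagging threshold in this abstract, so the pretrained ResNet-18 backbone, k = 5, and percentile-based threshold here are illustrative assumptions, not the author's exact configuration.

```python
# Sketch: novelty scoring via CNN features + K-means centroid distance.
# ResNet-18, k=5, and the percentile threshold are assumed for illustration.
import numpy as np
import torch
import torchvision.models as models
from sklearn.cluster import KMeans

# A pretrained CNN with its classification head removed serves as a
# fixed feature extractor.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

@torch.no_grad()
def extract_features(batch: torch.Tensor) -> np.ndarray:
    """Map a batch of preprocessed images (N, 3, H, W) to feature vectors."""
    return backbone(batch).numpy()

def fit_clusters(typical_features: np.ndarray, k: int = 5) -> KMeans:
    """Cluster features of 'typical' images; each centroid stands in for
    one expected entity in the rover's environment."""
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit(typical_features)

def novelty_scores(features: np.ndarray, kmeans: KMeans) -> np.ndarray:
    """Novelty score = distance from each feature vector to its nearest centroid."""
    return kmeans.transform(features).min(axis=1)

def flag_novel(scores: np.ndarray, threshold: float) -> np.ndarray:
    """Flag images whose score exceeds a threshold calibrated on typical
    data, e.g. the 99th percentile of training scores."""
    return scores > threshold
```

In this setup, only images whose flag is raised would be prioritized for downlink, which is the data-triage behaviour described above.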

The second project in this thesis aims to augment microrover platforms that lack instruments for distance measurement. In particular, it explores monocular depth estimation, where the goal is to estimate a depth map from a single monocular image. This problem is inherently difficult: recovering depth from a 2D image is mathematically ill-posed, a difficulty compounded by the fact that the lunar environment is a dull, colourless landscape. To solve this problem, a dataset of images and their corresponding ground truth depth maps was collected at Mission Control Space Services' Indoor Analogue Terrain. An autoencoder was then trained to take in an image and output an estimated depth map. The results of this project show that the model is not reliable at gauging the distances of slopes and objects near the horizon. However, the generated depth maps are reliable in the short to mid range, where distances are most relevant for remote rover operations.
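The sketch below illustrates the image-in, depth-map-out setup described above. The thesis's exact architecture, loss function, and training schedule are not given in this abstract; the small convolutional encoder-decoder and L1 regression loss here are assumptions chosen to keep the example self-contained.

```python
# Sketch: a convolutional encoder-decoder for monocular depth estimation.
# Architecture and L1 loss are illustrative assumptions, not the thesis's
# exact model.
import torch
import torch.nn as nn

class DepthAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: compress the RGB image into a spatial feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: upsample back to input resolution with one depth channel.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

model = DepthAutoencoder()
criterion = nn.L1Loss()  # assumed pixel-wise regression loss
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(image: torch.Tensor, depth_gt: torch.Tensor) -> float:
    """One optimization step on an (image, ground-truth depth) pair."""
    optimizer.zero_grad()
    loss = criterion(model(image), depth_gt)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Training pairs each analogue-terrain image with its ground truth depth map, so the network learns to regress per-pixel distances directly from monocular input.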

Summary for Lay Audience

Over the last few years, more robots have been sent into space and to other planetary bodies. As most recently demonstrated by the success of the Mars 2020 mission, autonomy is becoming more prevalent in these robotic platforms. For example, during the Sky Crane landing maneuver, the spacecraft used computer vision techniques to autonomously select a landing site that minimized risk. In addition, the Perseverance rover has autonomous rock-targeting capabilities that enable it to perform more science experiments with its instruments.

This research follows this thread of artificial intelligence and machine learning in planetary exploration applications. More specifically, the first project describes an algorithm that uses the imagery rovers take to find 'new' regions of interest in their environment. Finding these 'new' features allows operators to redirect their focus to targets that could potentially deliver more interesting scientific results. The algorithm developed in this research can reliably pick out images of new rocks in the rover's environment, allowing the rover to triage its communications and prioritize sending potentially interesting pictures.

The second project in this research focuses on microrover platforms (lightweight, low-powered rovers) that cannot measure how far away objects are. In past missions, stereocameras have served this purpose, aiding hazard navigation and science targeting. It might seem perplexing that navigational equipment could be omitted during the design and development of these rovers, but some microrover platforms are so space- and power-limited that scientific payloads are prioritized instead. Nevertheless, these rovers will almost always carry a single camera so that operators can at least see the rover's environment. This project leverages the images taken by that camera to estimate a depth map using deep learning algorithms. Unsurprisingly, the model is unreliable at estimating the distances of far objects sitting on a slope, a task even humans find difficult without enough context. Nonetheless, this project has shown that the estimated depth maps are an acceptable substitute in the absence of stereocameras and laser rangefinders.

Creative Commons License

This work is licensed under a Creative Commons Attribution-Noncommercial 4.0 License.
