Thesis Format
Monograph
Degree
Doctor of Philosophy
Program
Electrical and Computer Engineering
Supervisor
Grolinger, Katarina
Abstract
Unmanned Aerial Vehicles (UAVs) are instrumental in various tasks, including package delivery, disaster response, and surveillance. These varied applications highlight the need for advanced navigation techniques, with Deep Reinforcement Learning (DRL) being a key approach to enhancing UAV autonomy. The challenges of applying DRL to UAV navigation span three key areas: understanding how DRL is applied to UAV navigation, building navigation frameworks that accommodate the requirements of autonomous UAVs, and designing adaptive DRL algorithms that handle the high-dimensional inputs and temporal dependencies inherent in navigation tasks.
In response to these challenges, this thesis investigates DRL for autonomous UAV navigation in complex 3D environments. The investigation emphasizes understanding algorithmic properties and navigation tasks in order to leverage DRL methodologies for UAV navigation. DRL algorithms for autonomous UAV navigation are surveyed and classified: the review covers over fifty Reinforcement Learning (RL) algorithms, their traits and relations, and their classification by application environment and UAV navigation task. Moreover, a process for selecting an appropriate DRL algorithm based on the navigation environment and algorithmic needs is presented.
Next, the thesis presents VizNav, a modular RL-based navigation framework that addresses current challenges in RL-based autonomous UAV navigation by leveraging an off-policy RL algorithm and employing Prioritized Experience Replay (PER) to improve navigation results and algorithm convergence. Additionally, VizNav uses Depth Map Images (DMI) to provide the agent with a more accurate and comprehensive depth perspective, further enhancing navigation. Experimental results show improved navigation with Twin Delayed Deep Deterministic Policy Gradient (TD3) supported by PER and DMI, while the framework remains adaptable to different algorithms and environments.
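To illustrate the prioritized-replay idea that VizNav relies on, the following is a minimal sketch of a proportional PER buffer. It is not the thesis implementation; the class name `PrioritizedReplayBuffer` and the `alpha`/`beta` parameter names are assumptions used only for this example.

```python
import numpy as np

class PrioritizedReplayBuffer:
    """Minimal proportional prioritized experience replay (illustrative sketch only)."""

    def __init__(self, capacity, alpha=0.6):
        self.capacity = capacity
        self.alpha = alpha              # how strongly priorities skew sampling
        self.storage = []               # (state, action, reward, next_state, done) tuples
        self.priorities = np.zeros(capacity, dtype=np.float32)
        self.pos = 0

    def add(self, transition):
        # New transitions get the current maximum priority so they are sampled at least once.
        max_prio = self.priorities.max() if self.storage else 1.0
        if len(self.storage) < self.capacity:
            self.storage.append(transition)
        else:
            self.storage[self.pos] = transition
        self.priorities[self.pos] = max_prio
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size, beta=0.4):
        prios = self.priorities[:len(self.storage)]
        probs = prios ** self.alpha
        probs /= probs.sum()
        idxs = np.random.choice(len(self.storage), batch_size, p=probs)
        # Importance-sampling weights correct the bias from non-uniform sampling.
        weights = (len(self.storage) * probs[idxs]) ** (-beta)
        weights /= weights.max()
        batch = [self.storage[i] for i in idxs]
        return batch, idxs, weights

    def update_priorities(self, idxs, td_errors, eps=1e-6):
        # Priority is proportional to the magnitude of the TD error.
        self.priorities[idxs] = np.abs(td_errors) + eps
```

In an off-policy agent such as TD3, the buffer's `update_priorities` call would typically follow each critic update, so that transitions with large TD errors are revisited more often.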
Finally, this thesis proposes Agile Deep Q-Network (AG-DQN), a novel DRL algorithm that manages high-dimensional inputs and temporal dependencies by employing a dynamic multi-glimpse strategy and advanced temporal processing to selectively and dynamically extract salient features for improved decision-making. AG-DQN outperforms state-of-the-art methods such as the Deep Recurrent Q-Network (DRQN) and the Deep Attention Recurrent Q-Network (DARQN) on complex UAV navigation tasks while using only 32% of the total image pixels (the environment state). Overall, the thesis contributes to the development of fully autonomous UAVs capable of navigating varied scenarios, paving the way for their broader application.
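As a toy illustration of the glimpse idea, the sketch below crops a few small patches from a frame so that only a fraction of the pixels is processed. This is not the AG-DQN design: in AG-DQN the glimpse locations are selected dynamically during learning, whereas here the function name `extract_glimpses`, the fixed centers, and the patch size are assumptions made purely for the example.

```python
import numpy as np

def extract_glimpses(image, centers, glimpse_size=21):
    """Crop square patches ("glimpses") around given (row, col) centers.

    Toy illustration of processing only part of an observation; glimpse
    locations here are fixed rather than learned.
    """
    half = glimpse_size // 2
    padded = np.pad(image, ((half, half), (half, half)), mode="edge")
    glimpses = []
    for r, c in centers:
        r, c = r + half, c + half        # shift indices to account for padding
        glimpses.append(padded[r - half:r + half + 1, c - half:c + half + 1])
    return np.stack(glimpses)

# Example: three 21x21 glimpses of an 84x84 frame cover roughly 19% of its pixels.
frame = np.random.rand(84, 84).astype(np.float32)
patches = extract_glimpses(frame, centers=[(20, 20), (42, 42), (60, 70)])
print(patches.shape, f"{patches.size / frame.size:.0%}")  # (3, 21, 21) 19%
```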
Summary for Lay Audience
Drones are increasingly being used in our everyday life for various tasks like delivering packages, helping in emergencies, and monitoring situations. But for drones to perform these tasks effectively, they need advanced techniques to navigate or find their way around obstacles.
This study uses a powerful tool known as Deep Reinforcement Learning (DRL), a type of artificial intelligence, to improve how drones navigate. The challenges in this area involve understanding how DRL can be used in drone navigation, creating a framework that suits drone navigation needs, and finding a DRL method that can handle complex inputs and changes over time, which are common in drone navigation tasks.
To tackle these challenges, this study involves understanding and classifying over fifty DRL techniques that could be applied to drone navigation. Furthermore, it introduces a step-by-step process to choose the most suitable learning method based on the specific navigation environment and requirements.
The study introduces VizNav, a framework using a specific type of DRL that can help drones navigate more effectively and adapt to different situations. VizNav uses special images that provide a better depth perspective, improving drone navigation.
Lastly, the study proposes a new DRL method called Agile Deep Q-Network (AG-DQN), which can handle complex inputs and changes over time. It does this by using a smart strategy that extracts the important parts of what the drone sees without having to process the whole image.
In summary, this thesis explores advanced machine learning techniques that make drones more independent and better at navigating complex environments. This could broaden the potential uses of drones, making them more reliable and versatile.
Recommended Citation
AlMahamid, Fadi, "Deep Reinforcement Learning for Autonomous Unmanned Aerial Vehicle Navigation" (2023). Electronic Thesis and Dissertation Repository. 9437.
https://ir.lib.uwo.ca/etd/9437
Creative Commons License
This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 4.0 License.
Included in
Computer and Systems Architecture Commons, Navigation, Guidance, Control, and Dynamics Commons