Thesis Format
Monograph
Degree
Master of Engineering Science
Program
Electrical and Computer Engineering
Supervisor
Naish, Michael D.
Abstract
Upper-limb prostheses are typically driven exclusively by biological signals, mainly electromyography (EMG), with electrodes placed on the residual part of the amputated limb. In this approach, amputees must control each arm joint sequentially, in a proportional manner. Research has shown that sequential control of prostheses usually imposes a cognitive burden on amputees, contributing to high abandonment rates. This thesis presents a control system for upper-limb prostheses that leverages a computer vision module capable of simultaneously predicting the objects in a scene, their segmentation masks, and a ranked list of optimal grasping locations. The proposed system shares control with the amputee, allowing them to play only a supervisory role, and offloads most of the work required to configure the wrist to the computer vision module. The overall system is evaluated in a pick-up, transport, and drop-off experiment in realistic, cluttered environments. Results show that the proposed system enables the subject to successfully complete 95% of the trials, and confirm the benefit of having the user in the control loop.
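To give a concrete picture of the vision module's role, the following minimal Python sketch illustrates the output contract described above: one prediction pass yields, per object, a class label, a segmentation mask, and a ranked list of grasp candidates. All names here (`Grasp`, `SceneObject`, `predict`) are illustrative stand-ins, not the thesis's actual code.

```python
from dataclasses import dataclass, field
from typing import List, Optional

import numpy as np


@dataclass
class Grasp:
    """One grasp candidate: image position, wrist rotation, hand aperture, score."""
    x: float
    y: float
    angle: float   # wrist rotation needed to align with the grasp
    width: float   # hand opening required for the object
    score: float   # confidence used for ranking


@dataclass
class SceneObject:
    label: str                  # predicted object class
    mask: np.ndarray            # boolean segmentation mask, shape (H, W)
    grasps: List[Grasp] = field(default_factory=list)

    def ranked_grasps(self) -> List[Grasp]:
        # Best candidate first, mirroring the ranked list the module returns
        return sorted(self.grasps, key=lambda g: g.score, reverse=True)


def predict(frame: np.ndarray) -> List[SceneObject]:
    """Stand-in for one forward pass of the multi-task network, which would
    return detections, masks, and ranked grasps for the whole scene at once."""
    raise NotImplementedError("backed by a trained multi-task vision model")
```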
Summary for Lay Audience
Losing a limb is often a devastating event that prevents amputees from leading normal, independent lives. Typically, prosthetic hands are controlled by electric signals measured by sensors placed on the surface of the residual limb. Each signal measured from the body is used to drive an individual motor on the artificial limb. The closer the amputation is to the shoulder, the larger the number of joints that must be sequentially controlled. Research has shown that this unnatural mode of control often leads to a mental burden, one of the top reasons for the high rate of prosthesis abandonment among amputees. Consequently, many researchers have studied the feasibility of using computer vision and artificial intelligence to aid in the control of hand and arm prostheses. This thesis builds upon the ideas covered in the literature to develop a system that aids in the control of upper-limb prosthetics. Using a head-mounted camera that captures video of the environment, the proposed approach analyzes the scene to detect the individual objects in it, their outlines, and the best way to pick them up. The amputee uses eye trackers attached to the headset to select the object they want to interact with. The selected object and its corresponding pick-up points are displayed to the amputee via augmented reality glasses. Once the system output is confirmed, the information is communicated to the arm, allowing the wrist to automatically orient itself and the hand to configure itself to pick up the object. Instead of controlling each arm joint in turn, this approach only requires the amputee to select the object, confirm the program output, and close their hand to complete the grasping task. An experiment is conducted in which a participant is asked to pick up, transport, and drop off objects from a table to a basket. The environments are set up to validate the performance of the system in cluttered scenes with many objects. Results show that the proposed control system enables the participant to successfully complete 95% of the trials, confirming the benefit of combining computer vision, artificial intelligence, and shared control between the amputee and the computer.
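To make the interaction flow concrete, here is a minimal Python sketch of one shared-control cycle, reusing the `SceneObject` type from the sketch above. The `vision`, `headset`, and `arm` interfaces, and every method name on them, are hypothetical stand-ins rather than the thesis's actual API; the sequence of steps follows the summary above.

```python
from typing import Iterable, Optional, Tuple


def select_by_gaze(objects: Iterable, fixation: Tuple[float, float]):
    """Return the detected object whose mask contains the gaze point
    (a hypothetical selection rule; the thesis may use a different one)."""
    x, y = fixation
    for obj in objects:
        if obj.mask[int(y), int(x)]:
            return obj
    return None


def shared_control_cycle(vision, headset, arm) -> None:
    """One pass through the interaction described above: perceive, let the
    user select and confirm, then configure the wrist and hand."""
    frame = headset.camera_frame()               # head-mounted camera image
    objects = vision.predict(frame)              # objects, outlines, grasps
    target = select_by_gaze(objects, headset.fixation())
    if target is None:
        return                                   # nothing under the gaze point
    grasp = target.ranked_grasps()[0]            # top-ranked pick-up point
    headset.overlay(target.mask, grasp)          # shown on the AR glasses
    if headset.await_confirmation():             # amputee supervises, confirms
        arm.orient_wrist(grasp.angle)            # wrist orients automatically
        arm.preshape_hand(grasp.width)           # hand opens for the object
        # the amputee then voluntarily closes the hand to complete the grasp
```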
Recommended Citation
Kamel, Mena S.A., "Visual Cues For Semi-autonomous Control Of Transradial Prosthetics" (2021). Electronic Thesis and Dissertation Repository. 8042.
https://ir.lib.uwo.ca/etd/8042
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 License.