Master of Engineering Science
Mechanical and Materials Engineering
Naish, Michael D.
Whether someone is born with a missing limb or an amputation occurs later in life, living with this disability can be extremely challenging. The robotic prosthetic devices available today are capable of giving users more functionality, but the methods available to control these prostheses restrict their use to simple actions and are part of the reason why users often reject prosthetic technologies. Combining multiple myography modalities is a promising approach to address these control limitations; however, only two myography modalities have been rigorously tested so far, and while the results have shown improvements, they have not been robust enough for out-of-lab use. In this work, a novel multi-modal device was created that allows data to be collected from three myography modalities. Force myography (FMG), surface electromyography (sEMG), and inertial measurement unit (IMU) sensors were integrated into a wearable armband and used to collect signal data while subjects performed gestures important for the activities of daily living. An established machine learning algorithm was used to decipher the signals and predict the user's intended gesture, which could in turn be used to control a prosthetic device. Using all three modalities provided statistically significant improvements over most other modality combinations, yielding the most accurate and consistent classification results. This work provides justification for using three sensing modalities, and future work is suggested to explore this modality combination for deciphering more complex actions and tasks with more sophisticated pattern recognition algorithms.
Summary for Lay Audience
Living with a lost limb can be extremely challenging as the activities of daily living become more difficult. Robotic prosthetic devices have been developed to help amputees with these activities and improve their quality of life. The available robotic prosthetic devices are capable of giving the user more functionality, but the methods available to control these prostheses restrict their use to simple actions. Furthermore, the limitations of the available control methods often lead users to reject their prostheses, because unreliable control causes frustration. Thus, it is very important to develop a method of prosthesis control that is reliable, simple, and intuitive to use.
The goal of this project is to create a system that can provide natural, reliable, and intuitive control of modern prosthetics for people with amputations at the forearm. This research aims to improve the ability of a prosthetic arm to distinguish between several complex gestures for improved control.
For this work, a device was designed that detects muscle information from the electrical activity, physical changes, and motion of the arm. The device allows muscle information to be collected from the three sensor types while a participant performs hand and wrist gestures in arm positions important for activities of daily living. After the data were collected, they were analyzed using pattern recognition methods to determine whether using three sensor types is beneficial for finding patterns in the data associated with muscle activity.
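The thesis does not specify the classifier in this summary, but the pipeline it describes (features from each sensor type fused into one vector, then classified into a gesture) can be sketched as follows. This is an illustrative toy example with synthetic data and a simple nearest-centroid classifier; the feature dimensions, gesture names, and classifier choice are assumptions for demonstration, not the method used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature dimensions per modality (illustrative only).
N_FMG, N_SEMG, N_IMU = 8, 8, 6
GESTURES = ["rest", "fist", "wrist_flex"]

def make_sample(gesture_id):
    """Simulate one windowed feature vector per modality for a gesture."""
    base = gesture_id + 1.0  # synthetic class separation
    fmg = rng.normal(base, 0.1, N_FMG)    # pressure-based features
    semg = rng.normal(base, 0.1, N_SEMG)  # electrical-activity features
    imu = rng.normal(base, 0.1, N_IMU)    # motion/orientation features
    # Feature-level fusion: concatenate all modalities into one vector.
    return np.concatenate([fmg, semg, imu])

# Build a small training set and compute one centroid per gesture.
centroids = {g: np.mean([make_sample(i) for _ in range(20)], axis=0)
             for i, g in enumerate(GESTURES)}

def classify(x):
    """Nearest-centroid prediction of the intended gesture."""
    return min(centroids, key=lambda g: np.linalg.norm(x - centroids[g]))

print(classify(make_sample(1)))  # classifies a new "fist"-like sample
```

In a real system, each modality's raw signal stream would be windowed and reduced to features (e.g., time-domain statistics) before fusion, and a trained classifier would replace the centroid rule; the concatenation step above is the essence of the feature-level fusion the summary describes.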
Gharibo, Jason S., "Data and Sensor Fusion Using FMG, sEMG and IMU Sensors for Upper Limb Prosthesis Control" (2021). Electronic Thesis and Dissertation Repository. 8100.