Electronic Thesis and Dissertation Repository

Thesis Format

Monograph

Degree

Doctor of Philosophy

Program

Biomedical Engineering

Supervisor

Trejos, Ana Luisa

Abstract

Musculoskeletal disorders are the leading cause of disability worldwide, and wearable mechatronic rehabilitation devices have been proposed for their treatment. However, before widespread adoption is possible, improvements in user control and system adaptability are required. User intention should be detected intuitively, and user-induced changes in system dynamics should be unobtrusively identified and corrected. Development efforts often focus on model-dependent nonlinear control theory, which is challenging to implement for wearable devices.

One alternative is to incorporate bioelectrical signal-based machine learning into the system, allowing simpler controller designs to be augmented with supplemental brain (electroencephalography/EEG) and muscle (electromyography/EMG) information. To better extract user intention, sensor fusion techniques have been proposed to combine EEG and EMG; however, further development is required to extend the capabilities of EEG–EMG fusion beyond basic motion classification. To this end, the goals of this thesis were to investigate expanded methods of EEG–EMG fusion and to develop a novel control system based on the incorporation of EEG–EMG fusion classifiers.

A dataset of EEG and EMG signals was collected during dynamic elbow flexion–extension motions and used to develop EEG–EMG fusion models that classify task weight as well as motion intention. A variety of fusion methods were investigated, including Weighted Average decision-level fusion (83.01 ± 6.04% accuracy) and Convolutional Neural Network-based input-level fusion (81.57 ± 7.11% accuracy), demonstrating that EEG–EMG fusion can classify more indirect tasks.
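As a rough illustration of the decision-level approach named above (a minimal sketch, not the thesis implementation: the probability vectors, the fusion weights, and the three task-weight classes are all assumptions), weighted-average fusion of two classifier outputs can be expressed in a few lines of Python:

    import numpy as np

    def weighted_average_fusion(p_eeg, p_emg, w_eeg=0.4, w_emg=0.6):
        """Decision-level fusion: combine the per-class probability
        vectors of separate EEG and EMG classifiers with a weighted
        average, then pick the most likely class.

        p_eeg, p_emg : 1-D arrays of class probabilities (same length).
        w_eeg, w_emg : hypothetical fusion weights; in practice these
                       would be tuned, e.g. on validation accuracy.
        """
        fused = w_eeg * np.asarray(p_eeg) + w_emg * np.asarray(p_emg)
        fused = fused / fused.sum()   # renormalize to a distribution
        return int(np.argmax(fused))  # index of the predicted class

    # Example with three assumed task-weight classes (light/medium/heavy):
    # EEG favours class 0, EMG favours class 1; the fusion resolves to 1.
    print(weighted_average_fusion([0.5, 0.3, 0.2], [0.2, 0.6, 0.2]))

Input-level fusion (the CNN-based variant reported above) would instead concatenate the EEG and EMG features before a single model, rather than combining two models' outputs.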

A novel control system, referred to as a Task Weight Selective Controller (TWSC), was implemented using a Gain Scheduling-based approach, dictated by external load estimations from an EEG–EMG fusion classifier. To improve system stability, classifier prediction debouncing was also proposed to reduce misclassifications through filtering. Performance of the TWSC was evaluated using an upper-limb brace simulator developed for this work. Due to simulator limitations, no significant difference in error was observed between the TWSC and PID control. However, the results did demonstrate the feasibility of prediction debouncing, showing that it provided smoother device motion. Continued development of the TWSC and EEG–EMG fusion techniques will ultimately result in wearable devices that adapt to changing loads more effectively, improving the user experience during operation.
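To make the two mechanisms concrete, the sketch below pairs a simple consecutive-count debounce filter with a gain-scheduled PID lookup. This is an illustration under stated assumptions only: the hold count, the three task-weight classes, and the PID gain values are all hypothetical, and the thesis's actual debounce rule and gain schedule may differ.

    from collections import deque  # stdlib only; deque unused here but common for windowed filters

    class PredictionDebouncer:
        """Filter classifier outputs: only switch to a new label after
        it has been predicted `hold` times in a row, suppressing
        isolated misclassifications (one possible debounce rule)."""

        def __init__(self, hold=3, initial=0):
            self.hold = hold          # consecutive predictions required
            self.current = initial    # label currently in effect
            self.candidate = initial  # label being considered
            self.count = 0

        def update(self, label):
            if label == self.current:
                self.count = 0                 # spurious run interrupted
            elif label == self.candidate:
                self.count += 1
                if self.count >= self.hold:    # stable new label: switch
                    self.current = label
                    self.count = 0
            else:
                self.candidate = label         # start counting a new label
                self.count = 1
            return self.current

    # Hypothetical gain schedule: one PID gain set (Kp, Ki, Kd) per
    # task-weight class, selected by the debounced classifier output.
    GAIN_TABLE = {0: (2.0, 0.1, 0.05),   # light load
                  1: (3.5, 0.2, 0.08),   # medium load
                  2: (5.0, 0.3, 0.12)}   # heavy load

    deb = PredictionDebouncer(hold=3)
    for raw in [0, 0, 2, 0, 1, 1, 1, 1]:   # raw classifier stream
        cls = deb.update(raw)               # the lone '2' is filtered out
        kp, ki, kd = GAIN_TABLE[cls]
        print(f"raw={raw} debounced={cls} gains=({kp}, {ki}, {kd})")

The design intent is that the controller itself stays a simple gain-scheduled PID; the machine-learning layer only decides which gain set is active, and the debouncer prevents a single misclassification from causing an abrupt gain change.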

Summary for Lay Audience

Worldwide, many people suffer from medical conditions or injuries that limit their ability to move. This lowers their quality of life, as activities of daily living become a challenge. To help patients regain their ability to move, researchers have developed wearable robotic braces that use built-in motors to help someone move when their muscles are too weak. While early results are promising, these devices require improvements in the way they are controlled before they can be widely adopted. Operating the wearable robot should feel like natural movement, and the device should work reliably regardless of the task being performed.

To meet these objectives, researchers have developed systems that use brain or muscle activity to determine when and how a person wants to move. Using sensors placed on the skin, electrical signals generated by the brain (electroencephalography/EEG) and occurring inside muscles (electromyography/EMG) can be measured to determine that a person is thinking about moving (EEG) and is trying to move (EMG). Typically, wearable robotic devices use only one signal type; however, recent work has shown that combining EEG and EMG (EEG–EMG fusion) can improve accuracy for simple tasks. Further research is required to develop new techniques for integrating EEG–EMG fusion into device control.

Therefore, the goals of this thesis were to investigate methods of using EEG–EMG fusion to determine when and how a person is trying to move, and to develop techniques for using this information to control a wearable robotic brace. To accomplish this, EEG and EMG signals were recorded during elbow motions, and machine learning was used to train and evaluate various EEG–EMG fusion models that detect the weight held during movement. A control system was developed that can modify its calibration settings based on the output of these models, providing the ability to intelligently adapt to weight variations. This work demonstrated that EEG–EMG fusion can successfully detect movement information, and it developed a method to utilize this information for adaptable device control. These results begin to address the limitations preventing widespread use of wearable robotic devices, moving towards a future where they routinely help people improve their quality of life.

Creative Commons License

This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 4.0 License.
