Author

Ahmed Zahran

Abstract

This study investigates the use of an Electromyography (EMG)-based device to recognize five eye gestures and classify them to enable hands-free interaction with different applications. The eye gestures proposed in this work include Long Blinks, Rapid Blinks, Wink Right, Wink Left, and Squints (frowns). The MUSE headband, originally a Brain-Computer Interface (BCI) that measures Electroencephalography (EEG) signals, is the device used in this study to record the EMG signals from behind the earlobes via two smart rubber sensors and from the forehead via two additional electrodes. The signals are treated as EMG once they involve physical muscular activity, which other studies regard as artifacts in the EEG brain signals. The experiment was conducted on 15 participants (12 males and 3 females) recruited randomly, as no specific groups were targeted, and each session was videotaped for re-evaluation. The experiment starts with a calibration phase in which each gesture is recorded three times per participant, guided by a custom voice-narration program that unifies the test conditions and time intervals across all subjects. In this study, a dynamic sliding window with segmented packets is designed to process and analyze the data faster, as well as to provide more flexibility in classifying the gestures regardless of how their duration varies from one user to another. Additionally, a thresholding algorithm is used to extract the features of all the gestures. The Rapid Blinks and the Squints achieved high F1 scores of 80.77% and 85.71% with the trained thresholds, and 87.18% and 82.12% with the default (manually adjusted) thresholds. The accuracies of the Long Blinks, Rapid Blinks, and Wink Left were relatively higher with the manually adjusted thresholds, while the Squints and the Wink Right were better with the trained thresholds. Further improvements were proposed, and some were tested, especially after reviewing the participants' actions in the video recordings, in order to enhance the classifier. Most of the common irregularities encountered are discussed within this study to pave the way for similar future studies to tackle them before conducting their experiments. Several applications require minimal physical or manual interaction, and this study was originally part of a project at the HCI Lab, University of Stuttgart, to enable hands-free switching between RGB, thermal, and depth cameras integrated into or embedded in an Augmented Reality device designed for firefighters to increase their visual capabilities in the field.
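The pipeline outlined above, segmenting the signal into packets, growing a sliding window over active packets, and matching the accumulated window against per-gesture thresholds, can be illustrated with a short sketch. The Python snippet below is a minimal illustration and not the thesis implementation; the packet size, window cap, feature choice (mean absolute value), and all threshold values are hypothetical placeholders chosen only to show the idea.

"""
Minimal sketch (not the thesis implementation) of dynamic sliding-window
segmentation with threshold-based gesture detection. Packet size, window
cap, and threshold values are hypothetical placeholders.
"""

import numpy as np

PACKET_SIZE = 32          # samples per segmented packet (assumed)
MAX_WINDOW_PACKETS = 16   # window grows up to this many packets (assumed)

# Hypothetical per-gesture amplitude thresholds (either manually adjusted
# defaults or values learned during the calibration phase).
THRESHOLDS = {
    "long_blink": 350.0,
    "rapid_blink": 250.0,
    "wink_left": 200.0,
    "wink_right": 200.0,
    "squint": 300.0,
}

def packetize(signal, packet_size=PACKET_SIZE):
    """Split a 1-D EMG channel into fixed-size packets."""
    n = len(signal) // packet_size
    return np.reshape(signal[: n * packet_size], (n, packet_size))

def feature(window):
    """Simple amplitude feature over the current window (mean absolute value)."""
    return float(np.mean(np.abs(window)))

def detect(signal, thresholds=THRESHOLDS):
    """
    Slide a dynamically growing window over the packetized signal.
    The window expands while activity stays above the lowest threshold,
    so gestures of different durations are captured; when activity drops
    (or the window cap is reached), the accumulated window is matched
    against the per-gesture thresholds and the highest exceeded one wins.
    """
    packets = packetize(np.asarray(signal, dtype=float))
    window, detections = [], []
    floor = min(thresholds.values())

    for i, packet in enumerate(packets):
        if feature(packet) >= floor and len(window) < MAX_WINDOW_PACKETS:
            window.append(packet)          # keep growing the window
            continue
        if window:                         # activity ended: classify the window
            value = feature(np.concatenate(window))
            passed = {g: t for g, t in thresholds.items() if value >= t}
            if passed:
                gesture = max(passed, key=passed.get)
                detections.append((i * PACKET_SIZE, gesture, value))
            window = []
    return detections

if __name__ == "__main__":
    # Synthetic example: quiet baseline with one burst of muscular activity.
    rng = np.random.default_rng(0)
    sig = rng.normal(0, 20, 4096)
    sig[1000:1400] += 400 * np.hanning(400)   # simulated blink burst
    print(detect(sig))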

Department

Robotics, Control & Smart Systems Program

Degree Name

MS in Robotics, Control and Smart Systems

Graduation Date

2-1-2018

Submission Date

September 2017

First Advisor

Moustafa, Mohamed

Committee Member 1

Rafea, Ahmed

Committee Member 2

Eldawlatly, Seif

Extent

94 p.

Document Type

Master's Thesis

Rights

The author retains all rights with regard to copyright. The author certifies that written permission from the owner(s) of third-party copyrighted matter included in the thesis, dissertation, paper, or record of study has been obtained. The author further certifies that IRB approval has been obtained for this thesis, or that IRB approval is not necessary for this thesis. Insofar as this thesis, dissertation, paper, or record of study is an educational record as defined in the Family Educational Rights and Privacy Act (FERPA) (20 USC 1232g), the author has granted consent to disclosure of it to anyone who requests a copy.

Institutional Review Board (IRB) Approval

Approval has been obtained for this item

Comments

Acknowledgment to the HCI Lab, VIS Institute, University of Stuttgart, namely Jakob Karlous, Mariam Hassib, and Yomna Abdel Rahman, for their guidance, support, and for providing the necessary facilities and equipment to complete this project.
