International Journal of Nanomedicine, Volume 6

Human facial neural activities and gesture recognition for machine-interfacing applications


Published: 16 December 2011 · Volume 2011:6 · Pages 3461–3472


Review: single anonymous peer review

Peer reviewer comments: 2

M Hamedi1, Sh-Hussain Salleh2, TS Tan2, K Ismail2, J Ali3, C Dee-Uam4, C Pavaganun4, PP Yupapin5
1Faculty of Biomedical and Health Science Engineering, Department of Biomedical Instrumentation and Signal Processing, University of Technology Malaysia, Skudai, 2Centre for Biomedical Engineering Transportation Research Alliance, 3Institute of Advanced Photonics Science, Nanotechnology Research Alliance, University of Technology Malaysia (UTM), Johor Bahru, Malaysia; 4College of Innovative Management, Valaya Alongkorn Rajabhat University, Pathum Thani, 5Nanoscale Science and Engineering Research Alliance (N'SERA), Advanced Research Center for Photonics, Faculty of Science, King Mongkut's Institute of Technology Ladkrabang, Bangkok, Thailand

Abstract: The authors present a new method of recognizing different human facial gestures from the neural activity and movement of the facial muscles, for use in machine-interfacing applications. Human–machine interface (HMI) technology uses human neural activity as an input controller for a machine. Recently, much work has been done on facial electromyography (EMG)-based HMIs, but these systems have used limited and fixed numbers of facial gestures. In this work, a multipurpose interface is proposed that can support 2–11 control commands and can be applied to various HMI systems. The significance of this work lies in identifying the most accurately recognized facial gestures for any application requiring up to eleven control commands. EMG signals for eleven facial gestures were recorded from ten volunteers. The detected EMGs were passed through a band-pass filter, and root mean square (RMS) features were extracted. Combinations of different numbers of gestures were formed from the eleven facial gestures, and all combinations were trained and classified by a fuzzy c-means classifier. The combination with the highest recognition accuracy in each group size was then chosen. An average accuracy of >90% for the chosen combinations demonstrates their suitability as command controllers.
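The pipeline the abstract describes (windowed RMS feature extraction followed by fuzzy c-means clustering) can be sketched as below. This is a minimal numpy illustration, not the authors' implementation: the window length, cluster count, fuzzifier value, and the synthetic two-gesture data are all assumptions, and a real system would first band-pass filter the raw EMG (e.g. 20–450 Hz) before computing features.

```python
import numpy as np

def rms_features(emg, win=64):
    """Sliding-window root-mean-square features from one raw EMG channel.
    The window length (64 samples) is illustrative, not from the paper."""
    n = len(emg) // win
    segs = np.asarray(emg[:n * win], dtype=float).reshape(n, win)
    return np.sqrt((segs ** 2).mean(axis=1))

def fuzzy_cmeans(X, c, m=2.0, iters=100, tol=1e-5, seed=0):
    """Minimal fuzzy c-means: returns cluster centres and the c x N
    membership matrix U, where column k sums to 1 over the c clusters."""
    rng = np.random.default_rng(seed)
    N = X.shape[0]
    U = rng.random((c, N))
    U /= U.sum(axis=0)                       # normalize initial memberships
    p = 2.0 / (m - 1.0)
    for _ in range(iters):
        Um = U ** m
        centres = (Um @ X) / Um.sum(axis=1, keepdims=True)
        # distance of every sample to every centre, shape (c, N)
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2).T
        d = np.fmax(d, 1e-12)                # avoid division by zero
        U_new = d ** (-p) / (d ** (-p)).sum(axis=0)
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centres, U

# Example: cluster synthetic "low" vs "high" RMS feature vectors into
# two gesture classes (stand-ins for two recorded facial gestures).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.1, 0.02, (20, 2)),   # gesture A features
               rng.normal(1.0, 0.02, (20, 2))])  # gesture B features
centres, U = fuzzy_cmeans(X, c=2)
labels = U.argmax(axis=0)                        # crisp assignment per window
```

In practice the feature matrix `X` would hold one RMS vector per analysis window across the recorded EMG channels, and the number of clusters `c` would match the number of gestures in the candidate combination being evaluated.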

Keywords: neural system, neural activity, electromyography, machine learning, muscle activity

Creative Commons License © 2011 The Author(s). This work is published and licensed by Dove Medical Press Limited. The full terms of this license incorporate the Creative Commons Attribution - Non Commercial (unported, v3.0) License. By accessing the work you hereby accept the Terms. Non-commercial uses of the work are permitted without any further permission from Dove Medical Press Limited, provided the work is properly attributed. For permission for commercial use of this work, please see paragraphs 4.2 and 5 of our Terms.