Vol. 16 No. 2 (2020)

Published: December 31, 2020

Pages: 39-48

Original Article

Emotion Recognition Based on Mining Sub-Graphs of Facial Components

Abstract

Facial emotion recognition has many real-world applications in daily life, such as human-robot interaction, e-learning, healthcare, and customer service. The task is not easy, owing to the difficulty of determining an effective feature set that can accurately recognize the emotion conveyed by a facial expression. This paper exploits graph mining techniques to address the facial emotion recognition problem. After the positions of facial landmarks in the face region are determined, twelve different graphs are constructed from four facial components to serve as the source for a sub-graph mining stage based on the gSpan algorithm. For each group, a discriminative set of sub-graphs is selected and fed to a Deep Belief Network (DBN) for classification. The results obtained from the different groups are then fused using a Naïve Bayes classifier to make the final decision regarding the emotion class. Tests were performed on the Surrey Audio-Visual Expressed Emotion (SAVEE) database, and the system achieved the desired accuracy (100%) when the decisions of the facial groups were fused. This result outperforms state-of-the-art results on the same database.
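The first stage of the pipeline described above turns detected facial landmarks into per-component graphs that a frequent sub-graph miner such as gSpan can consume. The abstract does not specify the landmark layout or the edge construction rule, so the sketch below is only illustrative: it assumes the common 68-point landmark convention (as used by dlib) and connects landmarks within a distance threshold. The component index ranges, the threshold, and the grid of toy points are all assumptions, not the paper's method.

```python
import math

# Assumed 68-point landmark layout (dlib convention); the paper's exact
# components and graph-construction rule are not given in the abstract.
COMPONENTS = {
    "eyebrows": range(17, 27),
    "nose": range(27, 36),
    "eyes": range(36, 48),
    "mouth": range(48, 68),
}

def build_component_graph(landmarks, indices, max_dist):
    """Return (nodes, edges) for one facial component.

    Nodes are landmark indices; an edge connects two landmarks whose
    Euclidean distance is below max_dist. Graphs like these, one per
    component, could serve as the input database for a frequent
    sub-graph miner such as gSpan.
    """
    idx = list(indices)
    edges = []
    for i, a in enumerate(idx):
        for b in idx[i + 1:]:
            (xa, ya), (xb, yb) = landmarks[a], landmarks[b]
            if math.hypot(xa - xb, ya - yb) < max_dist:
                edges.append((a, b))
    return idx, edges

# Toy example: 68 points on a regular grid stand in for detected landmarks.
pts = [(i % 10 * 10.0, i // 10 * 10.0) for i in range(68)]
nodes, edges = build_component_graph(pts, COMPONENTS["mouth"], max_dist=15.0)
```

In a full system, one such graph set per facial component would be mined for frequent sub-graphs, the discriminative ones selected as features for the DBN, and the per-component decisions fused by the Naïve Bayes classifier.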
