Exploring the Impact of Convolutional Neural Networks on Facial Emotion Detection and Recognition

Authors

  • Rexcharles Enyinna Donatus, Africa Centre of Excellence on Technology Enhanced Learning (ACETEL), National Open University of Nigeria, Nigeria; Department of Aerospace Engineering, Air Force Institute of Technology, Nigeria
  • Ifeyinwa Happiness Donatus, Department of Computer Science, Kaduna State University, Nigeria
  • Ubadike Osichinaka Chiedu, Department of Aerospace Engineering, Air Force Institute of Technology, Nigeria

DOI:

https://doi.org/10.70112/ajes-2024.13.1.4241

Keywords:

Facial Emotion Recognition (FER), Deep Learning Algorithms, Convolutional Neural Networks (CNNs), Emotional Artificial Intelligence (EAI), Human-Centered Computing

Abstract

Emotional analytics is a fascinating blend of psychology and technology, and facial expression analysis is one of the primary methods for recognizing emotions. Facial emotion detection has advanced significantly, utilizing deep learning algorithms to identify common emotions. In recent years, substantial progress has been made in automatic facial emotion recognition (FER). This technology has been applied across various industries to enhance interactions between humans and machines, particularly in human-centered computing and the emerging field of emotional artificial intelligence (EAI). Researchers focus on improving systems’ capabilities to recognize and interpret human facial expressions and behaviors in diverse contexts. The impact of convolutional neural networks (CNNs) on this field has been profound, as these networks have undergone significant development, leading to diverse architectures designed to address increasingly complex challenges. This article explores the latest advancements in automated emotion recognition using computational intelligence, emphasizing how contemporary deep learning models contribute to the field. It provides a review of recent developments in CNN architectures for FER over the past decade, demonstrating how deep learning-based methods and specialized databases combine to achieve highly accurate outcomes.
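To make the abstract's premise concrete, the sketch below illustrates the basic computational pattern a CNN-based FER pipeline follows: convolve a face image with learned filters, apply a non-linearity, pool, and map the resulting features to a probability distribution over emotion classes via softmax. This is a purely illustrative forward pass with untrained random weights over an assumed FER-2013-style 48x48 grayscale input and seven-class emotion set; it is not the architecture of any specific model reviewed in the article.

```python
import numpy as np

# Seven-class label set commonly used in FER benchmarks (an assumption here).
EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

def conv2d(image, kernel):
    """Valid 2-D convolution of a single-channel image with one filter."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def forward(image, kernels, dense_w, dense_b):
    """Conv layer -> ReLU -> global average pooling -> dense -> softmax."""
    feats = np.array([np.maximum(conv2d(image, k), 0).mean() for k in kernels])
    logits = feats @ dense_w + dense_b
    exp = np.exp(logits - logits.max())          # numerically stable softmax
    return exp / exp.sum()

rng = np.random.default_rng(0)
image = rng.random((48, 48))                     # stand-in for a 48x48 face crop
kernels = rng.standard_normal((8, 3, 3))         # 8 untrained 3x3 filters
dense_w = rng.standard_normal((8, len(EMOTIONS)))
dense_b = np.zeros(len(EMOTIONS))

probs = forward(image, kernels, dense_w, dense_b)
print(EMOTIONS[int(np.argmax(probs))])
```

In a real FER system the filters and dense weights are learned by backpropagation on a labeled expression database, and the single conv layer is replaced by a deep stack of convolution and pooling layers; the data flow, however, is the one shown here.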

Published

24-04-2024

How to Cite

Donatus, R. E., Donatus, I. H., & Chiedu, U. O. (2024). Exploring the Impact of Convolutional Neural Networks on Facial Emotion Detection and Recognition. Asian Journal of Electrical Sciences, 13(1), 35–45. https://doi.org/10.70112/ajes-2024.13.1.4241