Keywords: computer science; gaming and entertainment; perception; decoding methods; multimedia; artificial intelligence; computer vision; psychology; telecommunications; visual arts; art; neuroscience
DOI: 10.1016/j.entcom.2024.100696
Abstract
Emotion recognition (ER) from real-world facial photographs and videos has long been an active topic in affective computing. ER in the wild remains difficult, however, because of the noise introduced by head pose, facial deformation, and lighting fluctuation. This research employs a machine learning model for sustainable artificial intelligence (AI) in entertainment computing to perform emotion decoding for movie-picture categorisation. The proposed model takes video data as input and produces an emotion label for every video sample. First, a face detection and selection process identifies the most consequential face regions in the video data. Facial expressions gathered from films serve as input images, which are preprocessed for noise reduction and normalisation. Each image is then segmented with a fuzzy K-means equalisation clustering model so that its facial expressions can be analysed, and a convolutional adversarial U-net graph neural network classifies the segmented images for emotion decoding. Experimental analysis on several movie-based emotion datasets measures accuracy, precision, recall, F1 score, RMSE, and AUC. With strong classification accuracy, the proposed deep learning model shows promise in identifying emotional shifts in gamers, achieving 97% accuracy, 96% precision, 92% recall, 85% F1 score, 79% RMSE, and 86% AUC.
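The segmentation step is described only at a high level. As an illustration, a minimal fuzzy c-means sketch (the standard fuzzy variant of K-means; the paper's exact "equalisation" variant is not specified, and the toy pixel data below is purely hypothetical) applied to flattened pixel intensities might look like:

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy c-means clustering sketch.

    X: (n_samples, n_features) array, e.g. flattened pixel intensities.
    c: number of clusters; m: fuzzifier (m > 1, larger = softer memberships).
    Returns cluster centers (c, n_features) and memberships U (n_samples, c).
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)               # rows sum to 1 (fuzzy memberships)
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]   # weighted cluster means
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.fmax(d, 1e-10)                       # guard against division by zero
        inv = d ** (-2.0 / (m - 1.0))               # standard membership update
        U = inv / inv.sum(axis=1, keepdims=True)
    return centers, U

# Toy example: separate 1-D pixel intensities into "dark" vs "bright" regions.
pixels = np.array([[0.05], [0.10], [0.12], [0.90], [0.95], [0.88]])
centers, U = fuzzy_c_means(pixels, c=2)
labels = U.argmax(axis=1)                           # hard labels from fuzzy memberships
```

In an image pipeline, `labels` reshaped to the image grid would yield the segmented facial regions passed on to the classifier.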