Rakibul Alam Nahin, Md. Tahmidul Islam, Abrar Kabir, Sadiya Afrin, Imtiaz Ahmed Chowdhury, Rafeed Rahman, Md. Golam Rabiul Alam
Identifiers
DOI:10.1109/ccwc57344.2023.10099220
Abstract
In this rapidly changing world, machine learning has been creating a huge impact on daily life, from smart cities to self-driving cars. One such area is the brain-computer interface (BCI), where brain signals are used to identify people's emotions during various events in their lives. In this research paper, we propose a multi-channel emotion recognition method based on the electroencephalogram (EEG), using a fusion of a graph convolutional network (GCN) and a 1D convolutional neural network (CNN), which classifies emotions more accurately than various existing approaches. Convolutional models excel at extracting features and hidden properties, while graph convolutional networks are well suited to connected data represented as nodes and edges, with an embedded neural network trained over the graph. Graph convolutional layers capture intrinsic properties within the graph and are trained on top of CNN layers for a deeper level of feature classification, which can lead to better classification results. We used EEG signals collected from the DREAMER and GAMEEMO datasets, applied data extraction and feature extraction processes to obtain important features, and passed them to our model to detect emotions in four categories (boring, calm, horror, excitement), achieving an accuracy of up to 98% and an average of 97.6% across the experiments tested. Our research also shows that accuracy improves as the amount of data grows.
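As a rough illustration of the fusion described in the abstract, the sketch below combines a per-channel 1D CNN (temporal feature extraction) with simple graph convolution layers over the electrode graph (mixing information across EEG channels treated as nodes), followed by a classifier for the four emotion categories. This is not the authors' implementation; the layer sizes, the 14-channel setup, and the fully connected adjacency matrix are illustrative assumptions.

```python
# Minimal sketch (assumed architecture, not the paper's code) of a GCN + 1D-CNN fusion
# for multi-channel EEG emotion recognition.
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """Plain graph convolution: H' = ReLU(A_norm @ H @ W), A_norm a normalized adjacency."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj_norm):
        # x: (batch, nodes, in_dim); adj_norm: (nodes, nodes)
        return torch.relu(self.linear(adj_norm @ x))

class CnnGcnEmotionNet(nn.Module):
    def __init__(self, n_channels=14, n_classes=4):
        super().__init__()
        # 1D CNN over time, applied to each EEG channel independently
        self.cnn = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        # Graph convolutions over the electrode graph (nodes = channels)
        self.gcn1 = GCNLayer(32, 32)
        self.gcn2 = GCNLayer(32, 16)
        self.classifier = nn.Linear(n_channels * 16, n_classes)

    def forward(self, x, adj_norm):
        # x: (batch, n_channels, n_samples)
        b, c, t = x.shape
        feats = self.cnn(x.reshape(b * c, 1, t)).reshape(b, c, -1)  # per-channel CNN features
        h = self.gcn2(self.gcn1(feats, adj_norm), adj_norm)         # mix features across electrodes
        return self.classifier(h.reshape(b, -1))                    # logits for 4 emotion classes

# Toy usage with a fully connected electrode graph (illustrative only)
n_ch = 14
adj = torch.ones(n_ch, n_ch)
deg_inv_sqrt = adj.sum(1).pow(-0.5)
adj_norm = deg_inv_sqrt.unsqueeze(1) * adj * deg_inv_sqrt.unsqueeze(0)
model = CnnGcnEmotionNet()
logits = model(torch.randn(8, n_ch, 128), adj_norm)  # shape: (8, 4)
```

In this sketch, the CNN first summarizes each channel's time series into a feature vector, and the GCN layers then propagate those features along the electrode graph, mirroring the abstract's description of graph convolutional layers trained on top of CNN layers.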