Topics
Interpretability
Sentiment analysis
Computer science
Artificial intelligence
Benchmark (surveying)
Deep learning
Key (lock)
Machine learning
Artificial neural network
Data science
Intelligence analysis
Geodesy
Computer security
Geography
Authors
Arwa Diwali, Kawther Saeedi, Kia Dashtipour, Mandar Gogate, Erik Cambria, Amir Hussain
Source
Journal: IEEE Transactions on Affective Computing [Institute of Electrical and Electronics Engineers]
Date: 2023-07-17
Volume/Issue: 15 (3): 837-846
Citations: 11
Identifier
DOI: 10.1109/TAFFC.2023.3296373
Abstract
Sentiment analysis can be used to derive knowledge connected to emotions and opinions from textual data generated by people. As computing power has grown and benchmark datasets have become more widely available, deep learning models based on deep neural networks have emerged as the dominant approach to sentiment analysis. While these models offer significant advantages, their lack of interpretability poses a major challenge: the rationale behind their reasoning and prediction processes is difficult to comprehend, which complicates explaining the models' behavior. Moreover, only limited research has been carried out on developing deep learning models that describe their internal functionality and behavior. In this timely study, we carry out a first-of-its-kind overview of key sentiment analysis techniques and the eXplainable artificial intelligence (XAI) methodologies currently in use. Furthermore, we provide a comprehensive review of sentiment analysis explainability.
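To make the explainability challenge concrete, the sketch below applies occlusion (leave-one-word-out) attribution, one family of model-agnostic XAI techniques of the kind such surveys cover, to a toy sentiment classifier. It is illustrative only and not the authors' method: the data, model choice, and function names are hypothetical assumptions, and a scikit-learn linear pipeline stands in for a deep neural model to keep the example self-contained.

```python
# Minimal, hypothetical sketch: occlusion-based attribution for a toy
# sentiment classifier. A linear pipeline stands in for a deep model
# purely so the example runs without extra dependencies.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data (hypothetical): 1 = positive sentiment, 0 = negative.
texts = [
    "great movie with a wonderful cast",
    "an awful plot and terrible acting",
    "loved every minute of this film",
    "boring, dull and a waste of time",
]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

def occlusion_explanation(text):
    # Attribute the positive-class probability to each word by deleting
    # that word and measuring how much the model's output changes.
    words = text.split()
    base = model.predict_proba([text])[0][1]
    scores = []
    for i in range(len(words)):
        occluded = " ".join(words[:i] + words[i + 1:])
        scores.append((words[i], base - model.predict_proba([occluded])[0][1]))
    return base, scores

base, scores = occlusion_explanation("a wonderful film but a boring plot")
print(f"P(positive) = {base:.2f}")
for word, contribution in sorted(scores, key=lambda s: -abs(s[1])):
    print(f"{word:>10}: {contribution:+.3f}")
```

Words whose removal most reduces the positive-class probability receive the largest positive attributions, yielding a human-readable rationale for a single prediction, which is precisely the kind of post-hoc explanation that black-box deep sentiment models otherwise lack.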