Interpretability
Suicidal ideation
Machine learning
Artificial intelligence
Computer science
Ideation
Support vector machine
Social media
Attempted suicide
Process (computing)
Recall
Deep learning
Psychology
Suicide prevention
Poison control
Cognitive psychology
World Wide Web
Medicine
Cognitive science
Environmental health
Operating system
Authors
Rafiqul Islam,Md. Kowsar Hossain Sakib,Shanjita Akter Prome,Xianzhi Wang,Anwaar Ulhaq,Cesar Sanín,David Asirvatham
Identifier
DOI:10.1109/besc59560.2023.10386773
Abstract
Suicide is one of the leading causes of death globally, and analysis of social media posts shows that some users express suicidal ideation. To save more lives, it is crucial to understand the behavior of people at risk of suicide. However, identifying and explaining suicidal thoughts remains a significant challenge in psychiatry, and analyzing suicidal behavior is a complex process involving many variables that depend on the individual's characteristics and the type of data. Although traditional methods have been used to identify clinical factors for suicide ideation detection (SID), these models often lack interpretability. The primary aim of this research is therefore to apply several deep learning (DL) and machine learning (ML) techniques, namely BERT, LSTM, BiLSTM, random forest (RF), support vector machine (SVM), Gaussian naive Bayes, logistic regression (LR), and k-nearest neighbors, combined with interpretability methods such as LIME and SHAP, to provide insight into the importance of different features and make the SID process more transparent. Experiments were conducted on a publicly available dataset of 24,101 posts labelled as either suicidal or non-suicidal. The implemented method yields significant performance improvements, and a comparison of all performance measures shows that the LSTM model is particularly effective at processing and classifying textual data, achieving higher accuracy, precision, recall, and AUC scores than the other models tested.