Keywords
Interpretability
Artificial intelligence
Deep learning
Workflow
Context (archaeology)
Computer science
Machine learning
Data science
Perceptron
Artificial neural network
Clinical decision support system
Decision support system
Database
Biology
Paleontology
Authors
José Pereira Amorim,Pedro Henriques Abreu,Alberto Fernández,Mauricio Reyes,Inês Domingues,M.H. Abreu
Source
Journal: IEEE Reviews in Biomedical Engineering
Publisher: Institute of Electrical and Electronics Engineers
Date: 2023-01-01
Volume: 16, Pages: 192-207
Citations: 9
Identifiers
DOI: 10.1109/rbme.2021.3131358
Abstract
Healthcare agents, particularly in the oncology field, are currently collecting vast amounts of diverse patient data. In this context, some decision-support systems, mostly based on deep learning techniques, have already been approved for clinical purposes. Despite all the efforts to introduce artificial intelligence methods into the workflow of clinicians, their lack of interpretability (that is, understanding how the methods make decisions) still inhibits their dissemination in clinical practice. The aim of this article is to present an easy guide for oncologists explaining how these methods make decisions and illustrating the strategies available to explain them. Theoretical concepts were illustrated with oncological examples, and a literature review of research works was performed on PubMed, covering January 2014 to September 2020, using "deep learning techniques," "interpretability," and "oncology" as keywords. Overall, more than 60% of the works relate to breast, skin, or brain cancers, and the majority focus on explaining the importance of tumor characteristics (e.g., dimension, shape) in the predictions. The most used computational methods are multilayer perceptrons and convolutional neural networks. Nevertheless, despite being successfully applied in different cancer scenarios, endowing deep learning techniques with interpretability while maintaining their performance continues to be one of the greatest challenges of artificial intelligence.
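The review's central topic, explaining which input characteristics drive a deep model's prediction, can be made concrete with a small example. Below is a minimal sketch, not taken from the paper, of one widely used explanation strategy for convolutional neural networks: a gradient-based saliency map that scores how strongly each input pixel influences the predicted class. The TinyCNN architecture and the random input tensor are hypothetical stand-ins for a real oncology model and scan.

```python
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """Hypothetical stand-in for a diagnostic CNN (not the paper's model)."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.classifier = nn.Linear(8 * 4 * 4, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = TinyCNN().eval()

# Random tensor as a stand-in for a grayscale scan; requires_grad lets us
# ask how the class score changes with each pixel.
image = torch.rand(1, 1, 64, 64, requires_grad=True)

logits = model(image)
predicted = logits.argmax(dim=1).item()

# Backpropagate the predicted-class score down to the input pixels.
logits[0, predicted].backward()

# |d(score)/d(pixel)|: large values mark the regions the prediction is most
# sensitive to, e.g. a lesion's dimension or shape in a real scan.
saliency = image.grad.abs().squeeze()
print(saliency.shape)  # torch.Size([64, 64])
```

In practice, such a saliency map is overlaid on the input image so a clinician can check whether the model attends to the tumor region rather than to irrelevant background; this is one instance of the explanation strategies the review surveys.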