Artificial intelligence
Context (archaeology)
Deep learning
Classification
Computer science
Data science
Field (mathematics)
Genomics
Machine learning
Cognitive science
Biology
Genome
Psychology
Paleontology
Biochemistry
Mathematics
Gene
Pure mathematics
Authors
Gherman Novakovsky, Nick Dexter, Maxwell W. Libbrecht, Wyeth W. Wasserman, Sara Mostafavi
Identifier
DOI: 10.1038/s41576-022-00532-2
Abstract
Artificial intelligence (AI) models based on deep learning now represent the state of the art for making functional predictions in genomics research. However, the underlying basis on which predictive models make such predictions is often unknown. For genomics researchers, this missing explanatory information would frequently be of greater value than the predictions themselves, as it can enable new insights into genetic processes. We review progress in the emerging area of explainable AI (xAI), a field with the potential to empower life science researchers to gain mechanistic insights into complex deep learning models. We discuss and categorize approaches for model interpretation, including an intuitive understanding of how each approach works and their underlying assumptions and limitations in the context of typical high-throughput biological datasets.
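To give a concrete sense of the kind of model-interpretation approach the abstract refers to, below is a minimal, illustrative sketch (not taken from the paper) of gradient-based input attribution, a commonly used xAI technique for sequence-based genomic models. The toy architecture, sequence length, and all names are hypothetical placeholders; a real analysis would apply this to a trained model and real genomic inputs.

```python
# Illustrative sketch only: gradient-based attribution ("saliency") for a toy
# convolutional model over a one-hot-encoded DNA sequence. The model and data
# here are placeholders, not the architecture or method used in the paper.
import torch
import torch.nn as nn

class ToySeqModel(nn.Module):
    def __init__(self, n_filters=16):
        super().__init__()
        self.conv = nn.Conv1d(4, n_filters, kernel_size=8, padding=4)
        self.pool = nn.AdaptiveMaxPool1d(1)
        self.fc = nn.Linear(n_filters, 1)

    def forward(self, x):            # x: (batch, 4, seq_len), one-hot DNA
        h = torch.relu(self.conv(x))
        h = self.pool(h).squeeze(-1)
        return self.fc(h)            # one scalar prediction per sequence

model = ToySeqModel()
model.eval()

# Random one-hot sequence standing in for a real genomic input.
seq = torch.zeros(1, 4, 100)
seq[0, torch.randint(0, 4, (100,)), torch.arange(100)] = 1.0
seq.requires_grad_(True)

# Attribution: gradient of the prediction with respect to each input base,
# combined with the input itself (gradient * input).
pred = model(seq).sum()
pred.backward()
attribution = (seq.grad * seq).sum(dim=1)

print(attribution.shape)  # torch.Size([1, 100]): one importance score per base
```

High-magnitude scores highlight sequence positions the model relies on for its prediction; interpreting such scores in terms of regulatory mechanism is exactly the step the review examines, along with the assumptions and limitations of each attribution approach.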