Keywords
Interpretability
Feature selection
Selection (genetic algorithm)
Computer science
Artificial neural network
Artificial intelligence
Identification (biology)
Randomness
False discovery rate
Machine learning
Feature (linguistics)
Biology
Genetics
Mathematics
Gene
Plant
Statistics
Philosophy
Linguistics
Authors
Peyman Hosseinzadeh Kassani, Fred Lu, Yann Le Guen, Michaël E. Belloy, Zihuai He
Identifier
DOI:10.1038/s42256-022-00525-0
Abstract
Deep neural networks (DNNs) have been successfully utilized in many scientific problems for their high prediction accuracy, but their application to genetic studies remains challenging due to their poor interpretability. Here we consider the problem of scalable, robust variable selection in DNNs for the identification of putative causal genetic variants in genome sequencing studies. We identified a pronounced randomness in feature selection in DNNs due to their stochastic nature, which may hinder interpretability and give rise to misleading results. We propose an interpretable neural network model, stabilized using ensembling, with controlled variable selection for genetic studies. The merits of the proposed method include: flexible modelling of the nonlinear effect of genetic variants to improve statistical power; multiple knockoffs in the input layer to rigorously control the false discovery rate; hierarchical layers to substantially reduce the number of weight parameters and activations, and improve computational efficiency; and stabilized feature selection to reduce the randomness in identified signals. We evaluate the proposed method in extensive simulation studies and apply it to the analysis of Alzheimer's disease genetics. We show that the proposed method, when compared with conventional linear and nonlinear methods, can lead to substantially more discoveries.
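The two key ingredients described in the abstract, knockoff copies in the input layer for false-discovery-rate control and ensembling to stabilize feature importances, can be illustrated with a minimal sketch. This is not the authors' architecture: it uses independent Gaussian features (so an independent redraw is a valid model-X knockoff), least-squares coefficients as a stand-in for DNN feature importances, and bootstrap averaging as the ensembling step; the knockoff+ threshold that controls the FDR at level `q` is standard.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples, 50 independent standard-normal features,
# of which the first 5 carry a true signal.
n, p, k = 200, 50, 5
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:k] = 2.0
y = X @ beta + rng.standard_normal(n)

# Because the features here are independent N(0, 1), an independent
# redraw is a valid model-X knockoff copy (real genetic data would
# require knockoffs that match the feature dependence structure).
X_ko = rng.standard_normal((n, p))
XX = np.hstack([X, X_ko])

# Ensemble the importances over bootstrap resamples to damp the
# run-to-run randomness the paper highlights (least-squares
# coefficients stand in for DNN-derived importances).
B = 20
imp = np.zeros(2 * p)
for _ in range(B):
    idx = rng.integers(0, n, n)
    coef, *_ = np.linalg.lstsq(XX[idx], y[idx], rcond=None)
    imp += np.abs(coef) / B

# Knockoff statistic: original importance minus knockoff importance.
W = imp[:p] - imp[p:]

# Knockoff+ threshold: smallest t with (1 + #{W <= -t}) / #{W >= t} <= q.
q = 0.2
ts = np.sort(np.abs(W[W != 0]))
tau = next((t for t in ts
            if (1 + np.sum(W <= -t)) / max(np.sum(W >= t), 1) <= q),
           np.inf)
selected = np.flatnonzero(W >= tau)
print("selected features:", selected)
```

True signals get large positive `W` (the original feature is far more important than its knockoff), while nulls are symmetric around zero, so counting large negative `W` values estimates the number of false positives above the threshold; that is what makes the selection FDR-controlled rather than ad hoc.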