Interpretability
Deep learning
Computer science
Artificial intelligence
Artificial neural network
Convolutional neural network
Recurrent neural network
Machine learning
Graph
Theoretical computer science
Authors
Chen Liu,Haider Raza,Saugat Bhattacharyya
Source
Journal: Elsevier eBooks
[Elsevier]
Date: 2023-01-01
Pages: 205-242
Identifier
DOI:10.1016/b978-0-323-85955-4.00010-7
Abstract
This chapter addresses deep learning methods applied to neural signal processing. We begin with basic neural network architectures such as convolutional neural networks (CNNs), recurrent neural networks (RNNs), and hybrid networks, and also introduce the attention mechanism, which has had a breakthrough effect on machine learning tasks. We then discuss the emerging subfield of graph neural networks (GNNs), which has attracted wide interest among researchers because graph-based models are expressive at learning structural and attribute information simultaneously, and because many real-world data are naturally organized as graphs or can purposely be organized that way. For neural signals, GNNs are especially appropriate for the analysis of brain connectomes. We discuss the main types of GNNs according to their information-aggregation approaches, namely convolutional, attention-based, and message-passing flavors. Applications of GNNs to neural data are still at an early stage, but several attempts have been made and, as we exemplify, have paved the way forward. Despite its effectiveness compared with traditional machine learning methods, deep learning suffers from limited interpretability and data greediness. Because the data fed into the models are represented through hidden layers, what each layer means remains obscure; meanwhile, large quantities of (especially labeled) data are needed to train a successful model, which is rarely available for domain-specific neural data. In the future, efforts are expected to design deep learning, and particularly graph-based deep learning, methods to improve current neuroscientific and engineering research.
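To make the "information aggregation" idea concrete, the following is a minimal, hypothetical sketch of the convolutional flavor mentioned in the abstract: each node's new representation is the mean of its own and its neighbors' feature vectors. The function name, the toy graph, and the feature values are illustrative assumptions, not taken from the chapter.

```python
def mean_aggregate(adj, feats):
    """Convolutional-flavor GNN aggregation (illustrative sketch):
    each node's new feature vector is the mean of its own and its
    neighbors' feature vectors.

    adj   -- adjacency lists, e.g. adj[0] = [1, 2] means node 0
             is connected to nodes 1 and 2
    feats -- list of per-node feature vectors (lists of floats)
    """
    out = []
    dim = len(feats[0])
    for node, neighbors in enumerate(adj):
        group = [node] + list(neighbors)  # include a self-loop
        out.append([sum(feats[v][k] for v in group) / len(group)
                    for k in range(dim)])
    return out

# Toy graph (a stand-in for a tiny connectome): node 0 linked to 1 and 2.
adj = [[1, 2], [0], [0]]
feats = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
agg = mean_aggregate(adj, feats)
# Node 1 averages its own features with node 0's: [0.5, 0.5]
```

The attention-based flavor would replace the uniform `1 / len(group)` weights with learned, neighbor-specific weights, and the message-passing flavor would replace the plain sum with an arbitrary learned message function; the overall neighborhood-aggregation structure stays the same.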