Computer science
Decoding methods
Transfer learning
Brain-computer interface
Deep learning
Artificial intelligence
Pooling
Scalability
Machine learning
Electroencephalography (EEG)
Database
Psychology
Telecommunications
Psychiatry
Authors
Xiang Wei, A. Aldo Faisal
Identifier
DOI:10.1109/ner52421.2023.10123713
Abstract
Deep learning is the state of the art in BCI decoding. However, it is data-hungry, and training decoders requires pooling data from multiple sources. EEG data from heterogeneous sources decrease decoding performance due to negative transfer [1]. Transfer learning for EEG decoding has recently been suggested as a remedy [2], [3] and has become the subject of recent BCI competitions (e.g. BEETL [4]), but combining data from many subjects raises two complications. First, privacy is not protected, as highly personal brain data must be shared (and copied across increasingly tight information-governance boundaries). Second, BCI data are collected from different sources and often with different BCI tasks, which has been thought to limit their reusability. Here, we demonstrate a federated deep transfer learning technique, the Multi-dataset Federated Separate-Common-Separate Network (MF-SCSN), based on our previous work on SCSN [1], which integrates privacy-preserving properties into deep transfer learning to exploit data sets with different tasks. The framework trains a BCI decoder using source data sets from different imagery tasks (e.g. some data sets with hands and feet, others with a single hand and tongue). By introducing privacy-preserving transfer learning techniques, we thus unlock the reusability and scalability of existing BCI data sets. We evaluated our federated transfer learning method on the NeurIPS 2021 BEETL competition BCI task. The proposed architecture outperformed the baseline decoder by 3%. Moreover, compared with the baseline and other transfer learning algorithms, our method protects the privacy of brain data held at different data centres.
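The separate-common-separate idea described in the abstract can be illustrated with a minimal simulation. This is not the authors' MF-SCSN implementation: the linear layers, the hidden width `HIDDEN`, the synthetic data, and the FedAvg-style averaging are all illustrative assumptions. Only two properties follow the text: each data centre keeps its raw data and dataset-specific "separate" layers local (accommodating different channel counts and imagery label sets), and only the shared "common" module's weights are exchanged and aggregated.

```python
import numpy as np

rng = np.random.default_rng(0)
HIDDEN = 8  # width of the shared "common" module (illustrative choice)

class Centre:
    """One data centre: raw data and the 'separate' layers never leave it."""

    def __init__(self, n_channels, n_classes, n_trials=64):
        # Dataset-specific separate layers map differing channel counts and
        # label sets to/from the shared hidden representation.
        self.w_in = rng.normal(scale=0.1, size=(n_channels, HIDDEN))
        self.w_out = rng.normal(scale=0.1, size=(HIDDEN, n_classes))
        # Private synthetic features/labels standing in for EEG trials.
        self.X = rng.normal(size=(n_trials, n_channels))
        self.Y = np.eye(n_classes)[rng.integers(n_classes, size=n_trials)]

    def loss(self, w_common):
        p = (self.X @ self.w_in) @ w_common @ self.w_out
        return float(np.mean((p - self.Y) ** 2))

    def local_step(self, w_common, lr=0.05):
        """One MSE gradient step; only updated common weights are returned."""
        h = self.X @ self.w_in          # separate input layer
        z = h @ w_common                # shared common layer
        p = z @ self.w_out              # separate output layer
        dp = 2.0 * (p - self.Y) / len(self.X)
        dw_out = z.T @ dp
        dz = dp @ self.w_out.T
        dw_common = h.T @ dz
        dh = dz @ w_common.T
        dw_in = self.X.T @ dh
        self.w_out -= lr * dw_out       # separate layers updated locally
        self.w_in -= lr * dw_in
        return w_common - lr * dw_common

# Heterogeneous centres: different channel counts and imagery label sets.
centres = [Centre(22, 4), Centre(32, 2), Centre(64, 3)]
w_common = rng.normal(scale=0.1, size=(HIDDEN, HIDDEN))

before = sum(c.loss(w_common) for c in centres)
for _ in range(50):
    # Each centre trains on its private data; the server only sees and
    # averages the common-module weights (FedAvg-style aggregation).
    w_common = np.mean([c.local_step(w_common) for c in centres], axis=0)
after = sum(c.loss(w_common) for c in centres)
```

Because only the `HIDDEN x HIDDEN` common weights cross centre boundaries, no raw brain data or subject-specific layer leaves its data centre, which is the privacy property the abstract emphasises.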