Electroencephalography (EEG)
Decoding methods
Sensitivity (control systems)
Computer science
Language model
Speech recognition
Artificial intelligence
Psychology
Neuroscience
Algorithm
Electronic engineering
Engineering
Authors
Sijie Ling, A. St. J. Murphy, Alona Fyshe
Abstract
The brain’s ability to perform complex computations at varying timescales is crucial, ranging from understanding single words to grasping the overarching narrative of a story. Recently, multi-timescale long short-term memory (MT-LSTM) models (Mahto et al. 2020; Jain et al. 2020) have been introduced, which use temporally-tuned parameters to induce sensitivity to different timescales of language processing (i.e., related to near/distant words). However, the relationship between such temporally-tuned information processing in MT-LSTMs and the brain’s language processing has not been explored using high temporal resolution recording modalities, such as electroencephalography (EEG). To bridge this gap, we used an EEG dataset recorded while participants listened to Chapter 1 of “Alice in Wonderland” and trained ridge regression models to predict the temporally-tuned MT-LSTM embeddings from EEG responses. Our analysis reveals that EEG signals can be used to predict MT-LSTM embeddings across various timescales. For longer timescales, our models produced accurate predictions within an extended time window of ±2 s around word onset, while for shorter timescales, significant predictions were confined to a narrow window ranging from −180 ms to 790 ms. Intriguingly, we observed that short timescale information is not only processed in the vicinity of word onset but also at distant time points. These observations underscore the parallels and discrepancies between computational models and the neural mechanisms of the brain. As word embeddings are increasingly used as in silico models of semantic representation in the brain, a more explicit consideration of timescale-dependent processing enables more targeted explorations of language processing in humans and machines.
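As a rough illustration of the decoding setup described in the abstract, the sketch below maps windowed EEG features to per-word embedding targets with cross-validated ridge regression and scores predictions by correlation. It is not the authors' code: the data shapes, window sizes, regularization grid, and variable names are assumptions chosen only to make the example self-contained.

```python
# A minimal sketch, assuming synthetic data in place of the real EEG epochs
# and MT-LSTM embeddings. All shapes and names are illustrative assumptions.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)

n_words, n_channels, n_window_samples = 500, 60, 100   # assumed dimensions
embedding_dim = 64                                      # assumed embedding size

# EEG epochs: one window of (channels x samples) per word, flattened into a feature vector.
eeg_epochs = rng.standard_normal((n_words, n_channels * n_window_samples))
# Target embeddings for the same words (e.g. one timescale-tuned MT-LSTM layer).
embeddings = rng.standard_normal((n_words, embedding_dim))

# Cross-validated ridge regression from EEG features to embedding dimensions.
scores = []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(eeg_epochs):
    model = RidgeCV(alphas=np.logspace(-2, 6, 9))
    model.fit(eeg_epochs[train_idx], embeddings[train_idx])
    pred = model.predict(eeg_epochs[test_idx])
    # Per-dimension Pearson correlation between predicted and true embeddings.
    r = [np.corrcoef(pred[:, d], embeddings[test_idx, d])[0, 1]
         for d in range(embedding_dim)]
    scores.append(np.mean(r))

print(f"mean prediction correlation: {np.mean(scores):.3f}")
```

In an analysis like the one described, such a regression would be fit separately for each MT-LSTM timescale and for EEG windows at different lags relative to word onset, with significance assessed against a suitable null (e.g. permuted word order).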