Perception
Psychology
Functional magnetic resonance imaging
Multisensor integration
Speech perception
Functional integration
Neuroscience
Cognitive psychology
Computer science
Mathematics
Mathematical analysis
Integral equation
Authors
Soibam Shyamchand Singh, Abhishek Mukherjee, P. Raghunathan, Dipanjan Ray, Arpan Banerjee
Identifier
DOI: 10.1093/cercor/bhae323
Abstract
Speech perception requires the binding of spatiotemporally disjoint auditory–visual cues. The corresponding brain network-level information processing can be characterized by two complementary mechanisms: functional segregation, which refers to the localization of processing in either isolated or distributed modules across the brain, and integration, which pertains to cooperation among relevant functional modules. Here, we demonstrate using functional magnetic resonance imaging recordings that subjective perceptual experiences of multisensory speech stimuli, real and illusory, are represented in differential states of segregation–integration. We controlled the inter-subject variability of illusory/cross-modal perception parametrically, by introducing temporal lags in the incongruent auditory–visual articulations of speech sounds within the McGurk paradigm. The states of segregation–integration balance were captured using two alternative computational approaches. First, the module responsible for cross-modal binding of sensory signals, defined as the perceptual binding network (PBN), was identified using standardized parametric statistical approaches, and the temporal correlations of its nodes with all other brain areas were computed. With increasing illusory perception, the majority of the nodes of the PBN showed decreased cooperation with the rest of the brain, reflecting states of high segregation but reduced global integration. Second, the altered patterns of segregation–integration were cross-validated using graph-theoretic measures.
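The first approach described above, computing temporal correlations between nodes of a binding network and the rest of the brain, can be illustrated with a minimal sketch. This is not the authors' pipeline; the region counts, the PBN membership, and the segregation/integration proxies below are all illustrative assumptions on synthetic data.

```python
import numpy as np

# Synthetic stand-in for region-wise fMRI time series
# (20 regions x 200 time points); real data would come from parcellated BOLD signals.
rng = np.random.default_rng(0)
n_regions, n_timepoints = 20, 200
ts = rng.standard_normal((n_regions, n_timepoints))

# Functional connectivity: Pearson correlation between every pair of regions.
fc = np.corrcoef(ts)

# Assume (hypothetically) the first 5 regions form the perceptual binding network.
pbn = np.arange(5)
rest = np.arange(5, n_regions)

# Integration proxy: mean |correlation| of PBN nodes with the rest of the brain.
integration = np.abs(fc[np.ix_(pbn, rest)]).mean()

# Segregation proxy: mean |correlation| within the PBN (upper triangle only).
within = fc[np.ix_(pbn, pbn)]
segregation = np.abs(within[np.triu_indices(len(pbn), k=1)]).mean()

print(f"integration proxy: {integration:.3f}, segregation proxy: {segregation:.3f}")
```

In the study's framing, increasing illusory perception would correspond to the integration proxy dropping while within-module coupling stays comparatively high; graph-theoretic measures (e.g. modularity or participation coefficient) provide the cross-validation mentioned in the second approach.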