Computer science
Speech recognition
Phrase
Musical form
Artificial intelligence
Segmentation
Musical theatre
Art
Visual arts
Authors
Xiangbin Teng, Pauline Larrouy-Maestri, David Poeppel
Identifier
DOI:10.1523/jneurosci.1331-23.2024
Abstract
Music, like spoken language, is often characterized by hierarchically organized structure. Previous experiments have shown neural tracking of notes and beats, but little work touches on the more abstract question: how does the brain establish high-level musical structures in real time? We presented Bach chorales to participants (20 females and 9 males) undergoing electroencephalogram (EEG) recording to investigate how the brain tracks musical phrases. We removed the main temporal cues to phrasal structures, so that listeners could rely only on harmonic information to parse a continuous musical stream. Phrasal structures were disrupted by locally or globally reversing the harmonic progression, so that our observations on the original music could be controlled and compared. We first replicated the findings on neural tracking of musical notes and beats, substantiating the positive correlation between musical training and neural tracking. Critically, we discovered a neural signature in the frequency range around 0.1 Hz (modulations of EEG power) that reliably tracks musical phrasal structure. Next, we developed an approach to quantify the phrasal phase precession of the EEG power, revealing that phrase tracking is indeed an operation of active segmentation involving predictive processes. We demonstrate that the brain establishes complex musical structures online over long timescales (>5 seconds) and actively segments continuous music streams in a manner comparable to language processing. These two neural signatures, phrase tracking and phrasal phase precession, provide new conceptual and technical tools to study the processes underpinning high-level structure building using non-invasive recording techniques.

Significance Statement

Many music types are characterized by complex, hierarchical structures that evolve over time, requiring listeners to construct high-level musical structures, anticipate future content, and track notes and beats. There exists little evidence of how the brain performs online structural-level musical segmentation and prediction. This study reveals an ultralow-frequency neural component that modulates beat tracking and reliably correlates with parsing musical phrases. We further identified a phenomenon called "phrasal phase precession," indicating that listeners use the ongoing listening experience to build structural predictions and track phrase boundaries. This study provides new conceptual and technical tools for studying the operation underlying structure building in various abstract musical features, using non-invasive recording techniques such as EEG or MEG.
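The abstract describes two measurable signals: modulations of EEG power around 0.1 Hz that track phrasal structure, and a phase relationship between that slow modulation and phrase boundaries ("phrasal phase precession"). The sketch below is a minimal, hypothetical illustration of how such a slow power modulation and its instantaneous phase could be extracted from a single EEG channel; it is not the authors' pipeline, and the sampling rate, filter bands, and phrase-boundary times are placeholder assumptions.

```python
"""Illustrative sketch only: extract a ~0.1 Hz modulation of EEG power and
its instantaneous phase. All parameters are assumptions for demonstration."""
import numpy as np
from scipy import signal

fs = 100.0                                  # assumed EEG sampling rate (Hz)
eeg = np.random.randn(int(fs * 300))        # placeholder: 5 min of one channel

# 1) Band-limited power: Hilbert envelope of a broadband-filtered signal
sos_band = signal.butter(4, [1.0, 30.0], btype="bandpass", fs=fs, output="sos")
band = signal.sosfiltfilt(sos_band, eeg)
power = np.abs(signal.hilbert(band)) ** 2

# 2) Ultralow-frequency modulation: bandpass the power time course around 0.1 Hz
sos_slow = signal.butter(2, [0.05, 0.2], btype="bandpass", fs=fs, output="sos")
slow_power = signal.sosfiltfilt(sos_slow, power - power.mean())

# 3) Instantaneous phase of the slow power modulation; comparing this phase
#    at phrase boundaries across phrases is one way such a precession-like
#    effect could be examined.
phase = np.angle(signal.hilbert(slow_power))

# Example: read out the phase at hypothetical phrase-boundary times (seconds)
boundary_times = np.array([8.0, 16.5, 24.0])
boundary_phase = phase[(boundary_times * fs).astype(int)]
print(boundary_phase)
```

Filtering is done with second-order sections (sosfiltfilt) because a 0.05–0.2 Hz passband at a 100 Hz sampling rate is numerically fragile in transfer-function form.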