Sadness
Psychology
Modality (human–computer interaction)
Musical theatre
Cognitive psychology
Active listening
Contrast (vision)
Stimulus modality
Happiness
Music and emotion
Communication
Sensory system
Music education
Music
Social psychology
Visual arts
Artificial intelligence
Art
Human–computer interaction
Anger
Computer science
Pedagogy
Authors
Xin Zhou, Ying Choon Wu, Yingcan Zheng, Zilun Xiao, Maoping Zheng
Identifier
DOI:10.1177/03057356211042078
Abstract
Previous studies on the multisensory integration (MSI) of musical emotions have yielded inconsistent results; differences in the musical materials used and in participants' levels of musical expertise may account for this inconsistency. This study aims to explore the neural mechanism underlying the audio-visual integration of musical emotions, and to infer the reasons for the inconsistent results of previous studies, by investigating how the type of musical emotion and musical training experience influence that mechanism. This fMRI study used a block-design experiment. Music excerpts expressing fear, happiness, and sadness were presented under audio-only (AO) and audio-visual (AV) modality conditions. Participants were divided into two groups: musicians with many years of musical training, and non-musicians with no musical expertise. After listening to or watching each excerpt, they assessed the type and intensity of the musical emotion it expressed. Brain regions related to the MSI of emotional information and to the default mode network (DMN) were sensitive to changes in both sensory modality and emotion type. In the AV assessment stage, the non-musician group showed greater activation across a larger, bilaterally distributed set of brain regions, whereas the musician group showed activation in fewer regions, lateralized to the right hemisphere.