Computer science
Autoencoder
Artificial intelligence
Speech recognition
Chord (peer-to-peer)
Natural language processing
Deep learning
Distributed computing
Identifier
DOI:10.1109/tmm.2023.3276177
Abstract
Emotion is one of the most crucial attributes of music. However, due to the scarcity of emotional music datasets, emotion-conditioned symbolic music generation using deep learning techniques has not been investigated in depth. In particular, no study explores conditional music generation under the guidance of emotion, and few studies adopt time-varying emotional conditions. To address these issues, first, we endow three public lead sheet datasets with fine-grained emotions by automatically computing valence labels from the chord progressions. Second, we propose a novel and effective encoder-decoder architecture named EmoMusicTV to explore the impact of emotional conditions on multiple music generation tasks and to capture the rich variability of musical sequences. EmoMusicTV is a transformer-based variational autoencoder (VAE) that contains a hierarchical latent variable structure to model holistic properties of music segments and short-term variations within bars. The piece-level and bar-level emotional labels are embedded in their corresponding latent spaces to guide music generation. Third, we pretrain EmoMusicTV with the lead sheet continuation task to further improve its performance on conditional melody or harmony generation. Experimental results demonstrate that EmoMusicTV outperforms previous methods on three tasks, i.e., melody harmonization, melody generation given harmony, and lead sheet generation. Ablation studies verify the significant roles of the emotional conditions and the hierarchical latent variable structure in conditional music generation. Human listening tests show that the lead sheets generated by EmoMusicTV are closer to the ground truth (GT) than those of previous methods and perform only slightly worse than the GT in conveying emotional polarity.
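The abstract mentions that valence labels are computed automatically from chord progressions to annotate the lead sheet datasets. Below is a minimal sketch of one way such labeling could work, assuming a simple chord-quality-to-valence mapping and a "root:quality" chord symbol format; both the mapping values and the chord representation are illustrative assumptions, not the procedure used in the paper.

```python
# Illustrative sketch only: derive coarse bar-level and piece-level valence
# labels from a chord progression by mapping chord quality to a valence
# score and averaging. The mapping and chord format are assumptions,
# not the EmoMusicTV paper's actual labeling scheme.
from statistics import mean

# Assumed mapping: major-like qualities lean positive, minor/diminished lean negative.
QUALITY_VALENCE = {
    "maj": 1.0, "maj7": 0.8, "7": 0.3,
    "min": -0.8, "min7": -0.5, "dim": -1.0,
}

def chord_valence(chord: str) -> float:
    """Valence score for a chord symbol such as 'C:maj' or 'A:min7'."""
    _, _, quality = chord.partition(":")
    return QUALITY_VALENCE.get(quality, 0.0)

def bar_valence(bar_chords: list[str]) -> float:
    """Bar-level valence: average over the chords within one bar."""
    return mean(chord_valence(c) for c in bar_chords) if bar_chords else 0.0

def piece_valence(bars: list[list[str]]) -> float:
    """Piece-level valence: average of the bar-level scores."""
    return mean(bar_valence(b) for b in bars) if bars else 0.0

if __name__ == "__main__":
    progression = [["C:maj", "G:maj"], ["A:min", "F:maj"], ["D:min7", "G:7"]]
    print(piece_valence(progression))              # coarse piece-level label
    print([bar_valence(b) for b in progression])   # time-varying bar-level labels
```

In this sketch the bar-level averages play the role of time-varying emotional conditions and the piece-level average plays the role of the holistic condition, mirroring the two granularities of emotional labels the abstract describes.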