Keywords
Semantics (computer science), Natural language processing, Computer science, Artificial intelligence, Linguistics, Lexical semantics, Psychology, Lexical item, Programming language, Philosophy
Authors
Yang Yang, Luan Li, Simon De Deyne, Bing Li, Jing Wang, Qing Cai
Abstract
To explain how the human brain represents and organizes meaning, many theoretical and computational language models have been proposed over the years, varying in their underlying computational principles and in the language samples on which they are built. However, how well they capture the neural encoding of lexical semantics remains elusive. We used representational similarity analysis (RSA) to evaluate the extent to which three models of different types explained neural responses elicited by word stimuli: an External corpus-based word2vec model, an Internal free word association model, and a Hybrid ConceptNet model. Semantic networks were constructed using word relations computed in the three models, and experimental stimuli were selected through a community detection procedure. The similarity patterns between language models and neural responses were compared at the community, exemplar, and word node levels to probe the potential hierarchical semantic structure. We found that semantic relations computed with the Internal model provided the closest approximation to the patterns of neural activation, whereas the External model did not capture neural responses as well. Compared with the exemplar and node levels, community-level RSA demonstrated the broadest involvement of brain regions, engaging areas critical for semantic processing, including the angular gyrus, the superior frontal gyrus, and a large portion of the anterior temporal lobe. The findings highlight the multidimensional semantic organization of the brain, which is better captured by Internal models sensitive to multiple modalities, such as word association, than by External models trained on text corpora.
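As a rough illustration of the stimulus-selection step described above, the sketch below builds a small semantic network from pairwise word relations and partitions it with a community detection algorithm. The word list, the relation weights, and the choice of greedy modularity maximization are illustrative assumptions, not the authors' exact pipeline.

import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical pairwise word-relation strengths (e.g., association or
# similarity scores from one of the three language models).
relations = {
    ("dog", "cat"): 0.8, ("dog", "bone"): 0.6, ("cat", "mouse"): 0.7,
    ("car", "road"): 0.9, ("car", "wheel"): 0.8, ("road", "wheel"): 0.5,
}

# Build a weighted, undirected semantic network from the relations.
G = nx.Graph()
for (w1, w2), weight in relations.items():
    G.add_edge(w1, w2, weight=weight)

# Partition the network into communities; each community would supply
# candidate exemplars and word nodes for the experiment.
communities = greedy_modularity_communities(G, weight="weight")
for i, comm in enumerate(communities):
    print(f"community {i}: {sorted(comm)}")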
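The core comparison in the abstract is RSA: correlating the dissimilarity structure of a model's word representations with that of the neural responses. The following is a minimal sketch of that idea with simulated data; the array shapes, correlation-distance RDMs, and Spearman rank correlation are common RSA conventions assumed here, not details taken from the paper.

import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_words = 20

# Hypothetical model embeddings (e.g., word2vec vectors) and neural
# activation patterns (e.g., voxel responses per word stimulus).
model_vectors = rng.normal(size=(n_words, 300))
neural_patterns = rng.normal(size=(n_words, 500))

# pdist with correlation distance yields the condensed upper triangle
# of each representational dissimilarity matrix (RDM) directly.
model_rdm = pdist(model_vectors, metric="correlation")
neural_rdm = pdist(neural_patterns, metric="correlation")

# The RSA score: rank correlation between the two dissimilarity structures.
rho, p = spearmanr(model_rdm, neural_rdm)
print(f"model-brain RSA: rho={rho:.3f}, p={p:.3g}")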