Polysemy
Context (archaeology)
Computer science
Word (group theory)
Variance (accounting)
Sentence
Natural language processing
Representation (politics)
Consistency (knowledge bases)
Artificial intelligence
Focus (optics)
Linguistics
Paleontology
Philosophy
Physics
Accounting
Optics
Politics
Political science
Law
Business
Biology
Source
Journal: IEEE/ACM Transactions on Audio, Speech, and Language Processing
Publisher: Institute of Electrical and Electronics Engineers
Date: 2024-01-01
Volume/Pages: 32: 639-650
Identifier
DOI: 10.1109/taslp.2023.3337643
Abstract
Contextualized word embeddings in language models have brought substantial advances to NLP. Intuitively, sentential information is integrated into the representation of words, which helps model polysemy. However, context sensitivity also leads to variance in representations, which may break semantic consistency for synonyms. Previous works that investigate contextual sensitivity focus on token-level representations, while we take a deeper dive into representations at the fine-grained sense level. In particular, we quantify how much the contextualized embeddings of each word sense vary across contexts in typical pre-trained models. The results show that contextualized embeddings can be highly consistent across contexts, even for two different words with the same sense. In addition, part of speech, the number of word senses, and sentence length influence the variance of sense representations. Interestingly, we find that word representations are position-biased: the first words in different contexts tend to be more similar. We analyze this phenomenon and propose a prompt-augmentation method to alleviate such bias in distance-based word sense disambiguation settings. Finally, we investigate the influence of sense-level pre-training on the performance of different downstream tasks. The results show that such external tasks can improve sense- and syntax-related tasks, while not necessarily benefiting general language understanding tasks.
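The abstract does not describe the authors' implementation in detail; the following is a minimal sketch, assuming a generic BERT-style encoder from Hugging Face Transformers, of the kind of measurement it refers to: comparing a target word's contextualized embedding across different contexts in which it carries the same sense. The model name, helper function, and example sentences are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch (not the authors' code): measure how similar a word's
# contextualized embedding is across two contexts with the same sense.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "bert-base-uncased"  # assumption: any BERT-style encoder
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)
model.eval()

def word_embedding(sentence: str, word: str) -> torch.Tensor:
    """Mean-pool the subword vectors of the first occurrence of `word`."""
    enc = tokenizer(sentence, return_tensors="pt", return_offsets_mapping=True)
    offsets = enc.pop("offset_mapping")[0]          # character spans per token
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]  # (seq_len, hidden_dim)
    start = sentence.lower().index(word.lower())
    end = start + len(word)
    # keep subword positions whose character span overlaps the target word
    idx = [i for i, (s, e) in enumerate(offsets.tolist())
           if s < end and e > start and e > s]
    return hidden[idx].mean(dim=0)

# Two contexts in which "bank" has the same (financial) sense.
contexts = [
    "She deposited the money in the bank on Friday.",
    "The bank approved her loan application.",
]
vecs = [word_embedding(c, "bank") for c in contexts]
cos = torch.nn.functional.cosine_similarity(vecs[0], vecs[1], dim=0)
print(f"cosine similarity across contexts: {cos.item():.3f}")
```

Repeating this over many sentence pairs and word senses, and aggregating the similarities, would give a variance-style statistic of the kind the abstract refers to.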