Keywords
Computer science, Natural language processing, Artificial intelligence, Word, Context, Vector space, Similarity, Distributional semantics, Semantics, Matrix, Representation, Analogy, word2vec, Semantic similarity, Embedding, Mathematics, Linguistics, Geometry
Authors
Jeffrey Pennington,Richard Socher,Christopher Manning
Source
Venue: Empirical Methods in Natural Language Processing (EMNLP)
Date: 2014-01-01
Citations: 24174
Abstract
Recent methods for learning vector space representations of words have succeeded in capturing fine-grained semantic and syntactic regularities using vector arithmetic, but the origin of these regularities has remained opaque. We analyze and make explicit the model properties needed for such regularities to emerge in word vectors. The result is a new global log-bilinear regression model that combines the advantages of the two major model families in the literature: global matrix factorization and local context window methods. Our model efficiently leverages statistical information by training only on the nonzero elements in a word-word co-occurrence matrix, rather than on the entire sparse matrix or on individual context windows in a large corpus. The model produces a vector space with meaningful substructure, as evidenced by its performance of 75% on a recent word analogy task. It also outperforms related models on similarity tasks and named entity recognition.
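The vector-arithmetic regularity the abstract refers to (e.g. "man is to king as woman is to ?") can be illustrated with a minimal sketch. The vectors below are hand-made toy values chosen purely for illustration, not learned GloVe embeddings; real embeddings are trained from corpus-wide co-occurrence counts.

```python
import math

# Toy, hand-made 3-d word vectors (hypothetical values for illustration only;
# real GloVe vectors are learned from a word-word co-occurrence matrix).
vectors = {
    "king":  (0.8, 0.9, 0.1),
    "queen": (0.8, 0.1, 0.9),
    "man":   (0.1, 0.9, 0.1),
    "woman": (0.1, 0.1, 0.9),
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def analogy(a, b, c):
    """Answer 'a is to b as c is to ?' via the vector offset b - a + c."""
    target = tuple(vb - va + vc for va, vb, vc in
                   zip(vectors[a], vectors[b], vectors[c]))
    # Exclude the query words, return the nearest remaining word.
    candidates = {w: v for w, v in vectors.items() if w not in (a, b, c)}
    return max(candidates, key=lambda w: cosine(candidates[w], target))

print(analogy("man", "king", "woman"))  # → queen (with these toy vectors)
```

With these toy vectors the offset king - man + woman lands exactly on queen; the analogy benchmark mentioned in the abstract scores how often the nearest neighbor of such an offset is the expected word.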