Computer science
Semantic similarity
Semantic computing
Word2Vec
Natural language processing
Natural language understanding
Embedding
Artificial intelligence
Word embedding
Semantic compression
Context (archaeology)
Information retrieval
Semantic search
Ranking (information retrieval)
Frame (networking)
Natural language
Semantic technology
Semantic Web
Paleontology
Biology
Telecommunications
Identifier
DOI:10.1016/j.csl.2018.12.008
Abstract
Natural language understanding (NLU) is a core technology for implementing natural interfaces and has received much attention in recent years. While learning embedding models has yielded fruitful results in several NLP subfields, most notably Word2Vec, embedding correspondence remains relatively unexplored, especially in the context of NLU, a task that typically extracts structured semantic knowledge from text. An NLU embedding model can facilitate analyzing and understanding relationships between unstructured texts and their corresponding structured semantic knowledge, which is essential for both researchers and practitioners of NLU. Toward this end, we propose a framework that learns to embed the semantic correspondence between a text and its extracted semantic knowledge, called a semantic frame. One key contributed technique is semantic frame reconstruction, used to derive a one-to-one mapping between embedded vectors and their corresponding semantic frames. Embedding into semantically meaningful vectors and computing their distances in vector space provides a simple but effective way to measure semantic similarity. With the proposed framework, we demonstrate three key areas where the embedding model can be effective: visualization, distance-based semantic search, and similarity-based intent classification and re-ranking.