Authors
Shan Zhao, Minghao Hu, Zhiping Cai, Fang Liu
Source
Journal: IEEE Transactions on Neural Networks and Learning Systems [Institute of Electrical and Electronics Engineers]
Date: 2021-08-25
Volume/Issue: 34 (3): 1122-1131
Citations: 30
Identifier
DOI: 10.1109/tnnls.2021.3104971
Abstract
Joint extraction of entities and their relations benefits from the close interaction between named entities and their relation information. Therefore, how to effectively model such cross-modal interactions is critical for the final performance. Previous works have used simple methods, such as label-feature concatenation, to perform coarse-grained semantic fusion among cross-modal instances, but fail to capture fine-grained correlations over token and label spaces, resulting in insufficient interactions. In this article, we propose a dynamic cross-modal attention network (CMAN) for joint entity and relation extraction. The network is carefully constructed by stacking multiple attention units in depth to dynamically model dense interactions over token-label spaces, in which two basic attention units and a novel two-phase prediction are proposed to explicitly capture fine-grained correlations across different modalities (e.g., token-to-token and label-to-token). Experimental results on the CoNLL04 dataset show that our model obtains state-of-the-art results by achieving 91.72% F1 on entity recognition and 73.46% F1 on relation classification. On the ADE and DREC datasets, our model surpasses existing approaches by more than 2.1% and 2.54% F1 on relation classification. Extensive analyses further confirm the effectiveness of our approach.
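The core mechanism the abstract describes — an attention unit that lets each token attend over label representations to capture fine-grained token-label correlations — can be sketched as follows. This is a minimal NumPy illustration of scaled dot-product cross-attention, not the paper's actual CMAN implementation; the function name, shapes, and dimensions are illustrative assumptions.

```python
import numpy as np

def label_to_token_attention(tokens, labels):
    """One cross-modal attention unit (sketch): token queries attend
    over label keys/values, producing label-aware token representations.

    tokens: (T, d) array of token representations (queries)
    labels: (L, d) array of label embeddings (keys and values)
    returns: (T, d) array, each row a weighted mix of label embeddings
    """
    d = tokens.shape[-1]
    # (T, L) fine-grained token-label affinity scores, scaled by sqrt(d)
    scores = tokens @ labels.T / np.sqrt(d)
    # softmax over the label axis (shifted for numerical stability)
    scores = scores - scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    weights = weights / weights.sum(axis=-1, keepdims=True)
    # aggregate label information per token
    return weights @ labels

# Toy example: 4 tokens, 3 candidate entity labels, hidden size 8.
rng = np.random.default_rng(0)
tokens = rng.standard_normal((4, 8))
labels = rng.standard_normal((3, 8))
out = label_to_token_attention(tokens, labels)
print(out.shape)  # (4, 8)
```

In the paper's design, units like this are stacked in depth so that token-to-token and label-to-token interactions are modeled jointly rather than fused once by concatenation.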