Computer science
Span (engineering)
Joint (building)
Modal verb
Context (archaeology)
Semantics (computer science)
Relation (database)
Mode
Process (computing)
Focus (optics)
Relation extraction
Artificial intelligence
Information extraction
Natural language processing
Pattern recognition (psychology)
Machine learning
Data mining
Architectural engineering
Social science
Biology
Programming language
Polymer chemistry
Chemistry
Operating system
Paleontology
Civil engineering
Sociology
Engineering
Physics
Optics
Authors
Qian Wan, Luona Wei, Shan Zhao, Jie Liu
Identifier
DOI:10.1016/j.knosys.2022.110228
Abstract
Joint extraction of entities and their relations depends not only on entity semantics but also, to a large extent, on contextual information and entity types. Therefore, an effective joint modelling method that handles information from different modalities can lead to superior performance in joint entity and relation extraction. Previous span-based models tended to focus on the internal semantics of a span but failed to effectively capture the interactions between the span and other modal information (such as tokens or labels). In this study, a Span-based Multi-Modal Attention Network (SMAN) is proposed for joint entity and relation extraction. The network introduces a cloze mechanism to extract contextual and span-position information simultaneously, and jointly models spans and labels in the relation extraction stage. To capture the fine-grained associations between different modalities, a Modal-Enhanced Attention (MEA) module with two modes is designed and adopted in the modelling process. Experimental results show that the proposed model consistently outperforms the state of the art for both entity recognition and relation extraction on the SciERC and ADE datasets, and beats competing approaches by more than 1.42% F1 for relation extraction on the CoNLL04 dataset. Extensive additional experiments further verify the effectiveness of the proposed model.
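To make the cross-modal idea in the abstract concrete, the following is a minimal sketch, assuming PyTorch, of how a span representation might attend over token-level context and over label embeddings and fuse the two views. The class name ModalEnhancedAttentionSketch and all tensor shapes are illustrative assumptions for exposition only, not the authors' released implementation of SMAN or its MEA module.

import torch
import torch.nn as nn


class ModalEnhancedAttentionSketch(nn.Module):
    """Illustrative cross-attention from span vectors to another modality (tokens or labels)."""

    def __init__(self, hidden_dim: int):
        super().__init__()
        self.query = nn.Linear(hidden_dim, hidden_dim)
        self.key = nn.Linear(hidden_dim, hidden_dim)
        self.value = nn.Linear(hidden_dim, hidden_dim)
        self.scale = hidden_dim ** 0.5

    def forward(self, spans: torch.Tensor, modal: torch.Tensor) -> torch.Tensor:
        # spans: (batch, num_spans, hidden); modal: (batch, modal_len, hidden)
        q = self.query(spans)                      # queries come from span representations
        k, v = self.key(modal), self.value(modal)  # keys/values come from the other modality
        attn = torch.softmax(q @ k.transpose(-2, -1) / self.scale, dim=-1)
        return attn @ v                            # span-aligned view of that modality


if __name__ == "__main__":
    batch, num_spans, seq_len, num_labels, hidden = 2, 4, 16, 5, 64
    spans = torch.randn(batch, num_spans, hidden)                       # pooled span vectors
    tokens = torch.randn(batch, seq_len, hidden)                        # contextual token states
    labels = torch.randn(1, num_labels, hidden).expand(batch, -1, -1)   # label embeddings

    span_token = ModalEnhancedAttentionSketch(hidden)   # hypothetical "token mode"
    span_label = ModalEnhancedAttentionSketch(hidden)   # hypothetical "label mode"
    fused = spans + span_token(spans, tokens) + span_label(spans, labels)
    print(fused.shape)  # torch.Size([2, 4, 64])

The two module instances stand in for the two attention modes described in the abstract (span-to-token and span-to-label); the additive fusion is one simple design choice among several plausible ones.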