Pattern
Modality (human–computer interaction)
Computer science
Consistency (knowledge bases)
Ambiguity
Process (computing)
Artificial intelligence
Machine learning
Social science
Sociology
Programming language
Operating system
Authors
Gang Wang,Jingling Ma,Gang Chen
Identifier
DOI:10.1016/j.dss.2022.113913
Abstract
Financial statement fraud committed by listed companies directly jeopardizes the reliability of the financial reporting process. Leveraging multimodal information for financial statement fraud detection (FSFD) has recently attracted great interest in academic research and industrial applications. Unfortunately, the predictive ability of multimodal information in FSFD remains largely underexplored, particularly the fusion ambiguity embedded in and among multiple modalities. In this study, we propose a novel attention-based multimodal deep learning method, named RCMA, for accurate FSFD. RCMA synthesizes a fine-grained attention mechanism comprising three innovative attention modules: ratio-aware attention, chapter-aware attention, and modality-aware attention. The first two modules unlock the predictive power of the financial modality and the textual modality for FSFD, respectively, while the proposed modality-aware attention enables better coordination between the two modalities. Furthermore, to ensure effective learning on the attention-based multimodal embedding, we design a novel loss function named Focal and Consistency Loss (FCL). It accounts for class imbalance and modality consistency simultaneously, specializing the optimization for FSFD. Experimental results on a real-world dataset show that RCMA outperforms state-of-the-art benchmarks on the FSFD task. Moreover, interpretation analysis visualizes the attention weights of different ratio groups, chapters, and modalities learned by RCMA, and illustrates how these interpretations can inform stakeholders' decision processes in FSFD.
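The abstract does not give the exact form of FCL, but it states that the loss combines a class-imbalance term with a modality-consistency term. A minimal sketch of that idea, assuming a standard focal loss for the imbalance term, a mean-squared-error penalty between the financial and textual embeddings for the consistency term, and hypothetical weights `alpha`, `gamma`, and `lam` (none of these choices are specified in the abstract):

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Focal loss: down-weights easy, well-classified examples.
    p: predicted fraud probability in (0, 1); y: binary label (1 = fraud)."""
    pt = np.where(y == 1, p, 1.0 - p)          # probability of the true class
    w = np.where(y == 1, alpha, 1.0 - alpha)   # class-balancing weight
    return -w * (1.0 - pt) ** gamma * np.log(pt + 1e-8)

def consistency_loss(z_fin, z_txt):
    """Penalize divergence between the financial and textual embeddings."""
    return np.mean((z_fin - z_txt) ** 2, axis=-1)

def fcl(p, y, z_fin, z_txt, lam=0.1):
    """Hypothetical Focal-and-Consistency Loss: focal term + lam * consistency."""
    return np.mean(focal_loss(p, y) + lam * consistency_loss(z_fin, z_txt))
```

Because the focal term scales each example by (1 - pt)^gamma, confident correct predictions contribute little, which is what lets the rare fraud class dominate the gradient; the consistency term pulls the two modality embeddings toward agreement before fusion.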