Keywords
Adversarial system
Computer science
Database transaction
Robustness (evolution)
Deep learning
Artificial intelligence
Machine learning
Transaction data
Transaction cost
Embedding
Data modeling
Database
Finance
Business
Gene
Biochemistry
Chemistry
Authors
Ivan Fursov,Matvey Morozov,Nina Kaploukhaya,Elizaveta Kovtun,Rodrigo Rivera-Castro,Gleb Gusev,Dmitry Babaev,Ivan Kireev,Alexey Zaytsev,Evgeny Burnaev
Identifier
DOI:10.1145/3447548.3467145
Abstract
Machine learning models using transaction records as inputs are popular among financial institutions. The most efficient models use deep-learning architectures similar to those in the NLP community, posing a challenge due to their tremendous number of parameters and limited robustness. In particular, deep-learning models are vulnerable to adversarial attacks: a small change in the input harms the model's output. In this work, we examine adversarial attacks on transaction records data and defenses from these attacks. The transaction records data have a different structure than the canonical NLP or time-series data, as neighboring records are less connected than words in sentences, and each record consists of both a discrete merchant code and a continuous transaction amount. We consider a black-box attack scenario, where the attacker does not know the true decision model, and pay special attention to adding transaction tokens to the end of a sequence. These limitations provide a more realistic scenario, previously unexplored in the NLP world. The proposed adversarial attacks and the respective defenses demonstrate remarkable performance using relevant datasets from the financial industry. Our results show that a couple of generated transactions are sufficient to fool a deep-learning model. Further, we improve model robustness via adversarial training or separate adversarial examples detection. This work shows that embedding protection from adversarial attacks improves model robustness, allowing a wider adoption of deep models for transaction records in banking and finance.
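The attack setting described in the abstract (black-box queries only, adversarial transactions appended to the end of the client's history) can be sketched with a toy model. This is a minimal illustration, not the paper's method: the `score` function, the merchant names, and the greedy candidate search are all hypothetical stand-ins, and a real attack would query a deep sequence model rather than a hand-written rule.

```python
# Hypothetical stand-in for a bank's decision model over a sequence of
# (merchant_code, amount) transaction records. In the black-box setting
# the attacker can only observe its outputs, not its parameters.
def score(sequence):
    # Toy rule: flag clients whose spending at "risky" merchants dominates.
    risky = {"casino", "pawnshop"}
    total = sum(amount for _, amount in sequence)
    risky_amount = sum(amount for merchant, amount in sequence if merchant in risky)
    return risky_amount / total if total else 0.0

def predict(sequence, threshold=0.5):
    return int(score(sequence) >= threshold)  # 1 = client flagged as risky

def greedy_append_attack(sequence, vocab, max_added=5):
    """Black-box append attack: add up to `max_added` generated transactions,
    each time picking the candidate that moves the score furthest toward the
    benign class, using only model queries (no gradients)."""
    seq = list(sequence)
    for _ in range(max_added):
        if predict(seq) == 0:  # attacker's target: the benign label
            break
        best = min(vocab, key=lambda token: score(seq + [token]))
        seq.append(best)
    return seq

# A history dominated by risky spending is flagged by the toy model;
# appending a couple of benign-looking transactions flips the decision.
history = [("casino", 900.0), ("grocery", 100.0)]
vocab = [("grocery", 500.0), ("pharmacy", 300.0), ("casino", 50.0)]
adversarial = greedy_append_attack(history, vocab)
```

In this toy instance two appended grocery transactions suffice to flip the label, mirroring the abstract's finding that a couple of generated transactions can fool the model; a defense such as adversarial training would retrain the model on such perturbed sequences with the original label.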