Computer science
Artificial intelligence
Robustness (evolution)
Adversarial system
Anomaly detection
Deep learning
Machine learning
Malware
Semantics (computer science)
Sequence learning
Computer security
Biochemistry
Gene
Chemistry
Programming language
Authors
Dongyang Zhan, Kai Tan, Lin Ye, Xiangzhan Yu, Hongli Zhang, Zheng He
Identifier
DOI:10.1109/tc.2023.3292001
Abstract
Sequential deep learning models (e.g., RNN and LSTM) can learn the sequence features of software behaviors, such as API or syscall sequences. However, recent studies have shown that these deep learning-based approaches are vulnerable to adversarial samples. Attackers can use adversarial samples to change the sequential characteristics of behavior sequences and mislead malware classifiers. In this paper, an adversarially robust anomaly detection method based on the analysis of behavior units is proposed to overcome this problem. We extract related behaviors that together carry out a behavioral intention as a behavior unit, which captures the representative semantic information of local behaviors and can be used to improve the robustness of behavior analysis. By learning the overall semantics of each behavior unit and the contextual relationships among behavior units with a multilevel deep learning model, our approach can mitigate perturbation attacks that target both local and large-scale behaviors. In addition, our approach can be applied to both low-level and high-level behavior logs (e.g., API and syscall logs). The experimental results show that our approach outperforms all the compared methods, indicating better robustness against obfuscation attacks.
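To make the "multilevel" idea in the abstract concrete, below is a minimal, hypothetical sketch of a hierarchical sequence classifier: a lower-level encoder summarizes each behavior unit (a short run of API/syscall IDs), and an upper-level encoder models the contextual relationships among the resulting unit embeddings. The use of LSTMs, the layer sizes, the class names, and the way calls are grouped into units are all assumptions for illustration, not the architecture described in the paper.

```python
# Hypothetical two-level (unit-level + context-level) behavior classifier sketch.
import torch
import torch.nn as nn

class HierarchicalBehaviorClassifier(nn.Module):
    def __init__(self, vocab_size=512, embed_dim=64, unit_dim=128, seq_dim=128, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)                      # API/syscall ID -> vector
        self.unit_encoder = nn.LSTM(embed_dim, unit_dim, batch_first=True)    # encodes calls within a unit
        self.context_encoder = nn.LSTM(unit_dim, seq_dim, batch_first=True)   # encodes relations across units
        self.classifier = nn.Linear(seq_dim, num_classes)                     # benign vs. malicious (assumed)

    def forward(self, units):
        # units: LongTensor of shape (batch, num_units, calls_per_unit)
        b, u, c = units.shape
        x = self.embed(units.view(b * u, c))              # (b*u, calls, embed_dim)
        _, (h_unit, _) = self.unit_encoder(x)             # final hidden state per unit: (1, b*u, unit_dim)
        unit_vecs = h_unit.squeeze(0).view(b, u, -1)      # (b, num_units, unit_dim)
        _, (h_seq, _) = self.context_encoder(unit_vecs)   # summary over the unit sequence: (1, b, seq_dim)
        return self.classifier(h_seq.squeeze(0))          # (b, num_classes)

# Toy usage: 4 traces, each segmented into 6 behavior units of 10 calls.
model = HierarchicalBehaviorClassifier()
fake_units = torch.randint(0, 512, (4, 6, 10))
logits = model(fake_units)
print(logits.shape)  # torch.Size([4, 2])
```

The intuition this sketch illustrates is that a perturbation inserted inside one unit mostly affects that unit's embedding, while the upper-level encoder still sees the overall sequence of unit semantics, which is the robustness argument the abstract makes.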