Computer science
Interpretability
Exploit
Artificial intelligence
Machine learning
Reliability (semiconductor)
Classifier (UML)
Traffic classification
Domain (mathematics)
Network packet
Data mining
Computer security
Power (physics)
Physics
Mathematics
Quantum mechanics
Pure mathematics
Authors
Alfredo Nascita,Antonio Montieri,Giuseppe Aceto,Domenico Ciuonzo,Valerio Persico,Antonio Pescapé
Source
Journal: IEEE Transactions on Network and Service Management
[Institute of Electrical and Electronics Engineers]
Date: 2023-06-01
Volume/Issue: 20 (2): 1267-1289
Cited by: 8
Identifier
DOI:10.1109/tnsm.2023.3246794
Abstract
The promise of Deep Learning (DL) in solving hard problems such as network Traffic Classification (TC) is being held back by the severe lack of transparency and explainability of this kind of approach. To cope with this strongly felt issue, the field of eXplainable Artificial Intelligence (XAI) has recently been founded and is providing effective techniques and approaches. Accordingly, in this work we investigate interpretability via XAI-based techniques to understand and improve the behavior of state-of-the-art multimodal and multitask DL traffic classifiers. Using a publicly available security-related dataset (ISCX VPN-nonVPN), we explore and exploit XAI techniques to characterize the considered classifiers by providing global interpretations (rather than sample-based ones), and define a novel classifier, DISTILLER-EVOLVED, optimized along three objectives: performance, reliability, and feasibility. The proposed methodology proves highly appealing, allowing the architecture to be greatly simplified so as to obtain faster training and shorter classification time, since fewer packets must be collected. This comes at the cost of a negligible (or even positive) impact on classification performance, while enabling the interplay between inputs, model complexity, performance, and reliability to be understood and controlled.
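The global (rather than sample-based) interpretations mentioned in the abstract can be obtained by aggregating per-sample attribution scores over an entire dataset and collapsing them into a per-input-part importance ranking, e.g., per packet position of a flow. The snippet below is a minimal sketch of this idea, assuming a PyTorch classifier that takes the payload bytes of the first N packets of each flow as input; Captum's Integrated Gradients is used here only as a stand-in attribution method, and neither the paper's actual XAI techniques nor the DISTILLER architecture are reproduced.

```python
# Minimal sketch: global per-packet importance for a traffic classifier.
# Assumptions (hypothetical, not from the paper): the model is a PyTorch
# module whose input has shape (batch, n_packets, n_bytes), holding the
# normalized payload bytes of the first n_packets packets of each flow.
import torch
from captum.attr import IntegratedGradients


def global_packet_importance(model, loader, device="cpu"):
    """Average absolute attributions over a dataset, per packet position."""
    model = model.eval().to(device)
    ig = IntegratedGradients(model)
    totals, count = None, 0
    for inputs, _ in loader:  # labels unused: we explain the predicted class
        inputs = inputs.to(device)
        with torch.no_grad():
            preds = model(inputs).argmax(dim=1)
        # Attributions have the same shape as the input: (batch, n_packets, n_bytes)
        attr = ig.attribute(inputs, target=preds)
        per_packet = attr.abs().sum(dim=2)  # collapse the byte dimension
        batch_sum = per_packet.sum(dim=0)
        totals = batch_sum if totals is None else totals + batch_sum
        count += inputs.size(0)
    return (totals / count).cpu()  # global importance score per packet position
```

In the spirit of the paper's feasibility objective, a ranking like this could indicate that the last packet positions carry negligible global importance, suggesting that the classifier can be fed fewer packets and thus reach a decision earlier.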