Insider threat
Insider
Computer science
Computer security
Process (computing)
Trustworthiness
Internet of Things
Internet privacy
Political science
Law
Operating system
Authors
Mohammad Amiri-Zarandi,Hadis Karimipour,Rozita Dara
Identifier
DOI:10.1016/j.iot.2023.100965
Abstract
An insider threat is a malicious action launched by authorized personnel inside an organization. Because insider actions may leave only a small digital footprint in the system, they are considered a major cybersecurity challenge across applications. With the rapid growth of the Internet of Things (IoT) and this technology's extensive attack surface, many concerns have been raised about potential insider threats in IoT environments. Several studies have proposed Machine Learning (ML)-based insider threat detection solutions, but these focus on model performance while neglecting the trustworthiness of the models. Trustworthy Learning is a recent trend in ML that seeks to ensure that the data collection and data analysis procedures in ML techniques follow ethical practices and are trustworthy to human users, which fosters the acceptance and successful adoption of ML-based solutions. This study proposes an improved trustworthy insider threat detection method that satisfies two trustworthy-learning requirements: privacy and explainability. The proposed solution protects the privacy of the data it uses and can explain why certain behaviors are flagged as threats. It also leverages data collaboration between different data owners to increase the volume of data used in training and enhance the performance of the ML model. Experimental results show that the proposed solution outperforms learning models trained by individual data holders.
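The abstract describes data collaboration in which multiple data owners jointly improve a model without pooling their raw records. The paper does not specify the protocol, but one common privacy-preserving mechanism for this is federated averaging, sketched below as an assumption: each owner trains a simple logistic-regression detector locally on synthetic behavior features (all data here is illustrative), and a coordinator averages only the model weights.

```python
# Minimal sketch, assuming a federated-averaging style of data collaboration;
# the datasets, model, and hyperparameters are illustrative, not the paper's.
import numpy as np

rng = np.random.default_rng(0)

def local_train(weights, X, y, lr=0.1, epochs=20):
    """One owner's local logistic-regression update via gradient descent."""
    w = weights.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))      # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)      # gradient step on log-loss
    return w

# Two hypothetical data owners, each with a private synthetic dataset.
owners = []
for _ in range(2):
    X = rng.normal(size=(100, 3))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)  # synthetic threat labels
    owners.append((X, y))

# Federated rounds: only model weights leave each owner, never raw data.
global_w = np.zeros(3)
for _ in range(10):
    updates = [local_train(global_w, X, y) for X, y in owners]
    global_w = np.mean(updates, axis=0)       # coordinator averages updates

print(global_w.round(2))
```

Because the coordinator only ever sees weight vectors, each owner's behavioral records stay local, while the averaged model benefits from both datasets, matching the abstract's claim that collaboration enlarges the effective training set.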