Computer science
Tree (set theory)
Quality (concept)
Computer security
Data science
Mathematics
Epistemology
Mathematical analysis
Philosophy
Authors
Olga Gadyatskaya, Dalia Papuc
Source
Journal: Communications in Computer and Information Science
Date: 2023-01-01
Pages: 245-260
Citations: 1
Identifier
DOI: 10.1007/978-981-99-7969-1_18
Abstract
Attack trees are a popular method to represent cyberattack scenarios. It is often challenging for organizations to design attack trees for relevant systems and scenarios, as this requires advanced security expertise and the engagement of many stakeholders. In recent years, many studies in the academic literature have proposed methods for automating attack tree creation from system models or from libraries of attack patterns. However, these approaches are not yet mature enough to be of practical use in organizations. The advent of large language models (LLMs) opens new opportunities for helping organizations design attack trees. We can envisage that organizations would be able to speed up attack tree design and benefit from LLMs like ChatGPT if they could rely on the quality of the produced models. In this study, we investigate the feasibility of using ChatGPT to synthesize attack trees for specific scenarios. We propose a method to make ChatGPT output attack tree-like models, we propose an approach to evaluate the quality of the synthesized attack trees, and we evaluate both in two case studies. Our results show that LLMs like ChatGPT can indeed be valuable companions for designing attack trees. Yet, as expected, ChatGPT often fails to capture the meaning of the refinement operators, and the human analyst engaging with ChatGPT still needs to monitor the quality of the results.
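To illustrate the refinement operators the abstract refers to, the following is a minimal, hypothetical sketch in Python of an attack tree with AND/OR refinements and a simple structural check of the kind a quality evaluation might include. The class names, fields, and example scenario are assumptions for illustration only; they are not the representation or evaluation method used in the paper.

```python
# Minimal illustrative attack tree with AND/OR refinement operators.
# All names here are hypothetical, not taken from the paper.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class AttackNode:
    """A node in an attack tree.

    refinement:
      - "OR":  the goal is achieved if ANY child goal is achieved
      - "AND": the goal is achieved only if ALL child goals are achieved
      - None:  a leaf (basic attack step) with no refinement
    """
    goal: str
    refinement: Optional[str] = None
    children: List["AttackNode"] = field(default_factory=list)


def check_refinements(node: AttackNode, errors: Optional[List[str]] = None) -> List[str]:
    """Collect simple structural issues: an inner node without an AND/OR
    refinement, or a leaf that carries one."""
    if errors is None:
        errors = []
    if node.children and node.refinement not in ("AND", "OR"):
        errors.append(f"inner node '{node.goal}' lacks an AND/OR refinement")
    if not node.children and node.refinement is not None:
        errors.append(f"leaf '{node.goal}' should not carry a refinement")
    for child in node.children:
        check_refinements(child, errors)
    return errors


if __name__ == "__main__":
    # Hypothetical scenario: gaining access to a user account.
    tree = AttackNode(
        goal="Obtain user account access",
        refinement="OR",
        children=[
            AttackNode(
                goal="Steal credentials",
                refinement="AND",
                children=[
                    AttackNode("Phish the user"),
                    AttackNode("Bypass two-factor authentication"),
                ],
            ),
            AttackNode("Exploit session management flaw"),
        ],
    )
    print(check_refinements(tree))  # -> [] when the structure is consistent
```

A check like this only catches structural inconsistencies; judging whether an AND node semantically requires all of its children, which the abstract notes ChatGPT often gets wrong, still requires a human analyst.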