Computer science
Vulnerability (computing)
Linear regression
Regression
Regression analysis
Supply chain
Machine learning
Artificial intelligence
Risk analysis (engineering)
Econometrics
Computer security
Business
Statistics
Economics
Mathematics
Marketing
Authors
Jian Chen, Yuan Gao, Jinyong Shan, Kai Peng, Chen Wang, Hongbo Jiang
Identifier
DOI: 10.1109/tii.2022.3175958
Abstract
Demand forecasting (DF) plays an essential role in supply chain management, as it provides an estimate of the goods that customers are expected to purchase in the foreseeable future. While machine learning techniques are widely used for building DF models, they also become more susceptible to data poisoning attacks. In this article, we study the vulnerability of linear regression DF models to targeted poisoning attacks, in which the attacker controls the behavior of the forecasting model on a specific target sample without compromising the overall forecasting performance. We devise a gradient-optimization framework for targeted regression poisoning in white-box settings, and further design a regression value manipulation strategy for targeted poisoning in black-box settings. We also discuss possible countermeasures to defend against our attacks. Extensive experiments are conducted on two real-world datasets with four linear regression models. The results demonstrate that our attacks are highly effective and can achieve a large prediction deviation while controlling less than 1% of the training samples.
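To make the white-box attack idea concrete, the sketch below illustrates gradient-based targeted poisoning against a closed-form ridge regression model on hypothetical synthetic data. It is a minimal illustration only: the synthetic dataset, the finite-difference gradient, and all names (x_target, y_adv, fit_ridge, the learning rate, etc.) are assumptions for exposition and do not reproduce the paper's actual framework, datasets, or results.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic training data (not the paper's datasets).
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

x_target = rng.normal(size=d)        # the single sample the attacker targets
y_adv = x_target @ w_true + 5.0      # prediction the attacker wants to induce
lam = 1e-2                           # ridge regularization strength

def fit_ridge(X, y):
    """Closed-form ridge regression: w = (X^T X + lam*I)^{-1} X^T y."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def attacker_loss(X_poison, y_poison):
    """Squared gap between the poisoned model's target prediction and y_adv."""
    w = fit_ridge(np.vstack([X, X_poison]), np.concatenate([y, y_poison]))
    return (x_target @ w - y_adv) ** 2

# One poison point, i.e. under 1% of the 200 training samples.
X_poison = X[:1].copy()
y_poison = np.array([y_adv])         # poison label pushed toward the attacker's goal

# Gradient descent on the poison features, with a finite-difference gradient as a
# simple stand-in for an analytic/bilevel gradient-optimization scheme.
eps, lr = 1e-5, 0.5
for step in range(300):
    grad = np.zeros_like(X_poison)
    for i in range(X_poison.shape[1]):
        X_plus = X_poison.copy();  X_plus[0, i] += eps
        X_minus = X_poison.copy(); X_minus[0, i] -= eps
        grad[0, i] = (attacker_loss(X_plus, y_poison) -
                      attacker_loss(X_minus, y_poison)) / (2 * eps)
    X_poison -= lr * grad

w_clean = fit_ridge(X, y)
w_poisoned = fit_ridge(np.vstack([X, X_poison]), np.concatenate([y, y_poison]))
print("clean prediction on target:   ", x_target @ w_clean)
print("poisoned prediction on target:", x_target @ w_poisoned)
print("attacker's desired value:     ", y_adv)
print("MSE on clean data (clean/poisoned model):",
      np.mean((X @ w_clean - y) ** 2), np.mean((X @ w_poisoned - y) ** 2))
```

The final two print lines mirror the attack's stated goal: the prediction on the target sample should move toward y_adv while the error on the clean training data stays nearly unchanged, so the poisoning remains inconspicuous.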