Keywords
Reinforcement learning, Markov decision process, Computer science, Artificial intelligence, Inventory control, Mathematical optimization, Algorithms, Markov chain, Time horizon, Software, Compromise, Markov process, Engineering, Machine learning, Operations research, Mathematics, Statistics, Social science, Sociology, Programming languages
Authors
Francesco Stranieri, Fabio Stella, Chaaben Kouki
Identifier
DOI: 10.1080/00207543.2024.2311180
Abstract
This study conducts a comprehensive analysis of deep reinforcement learning (DRL) algorithms applied to supply chain inventory management (SCIM), which consists of a sequential decision-making problem focussed on determining the optimal quantity of products to produce and ship across multiple capacitated local warehouses over a specific time horizon. In detail, we formulate the problem as a Markov decision process for a divergent two-echelon inventory control system characterised by stochastic and seasonal demand, also presenting a balanced allocation rule designed to prevent backorders in the first echelon. Through numerical experiments, we evaluate the performance of state-of-the-art DRL algorithms and static inventory policies in terms of both cost minimisation and training time while varying the number of local warehouses and product types and the length of replenishment lead times. Our results reveal that the Proximal Policy Optimization algorithm consistently outperforms other algorithms across all experiments, proving to be a robust method for tackling the SCIM problem. Furthermore, the (s, Q)-policy stands as a solid alternative, offering a compromise in performance and computational efficiency. Lastly, this study presents an open-source software library that provides a customisable simulation environment for addressing the SCIM problem, utilising a wide range of DRL algorithms and static inventory policies.
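For illustration, the sketch below shows how a static (s, Q)-policy of the kind benchmarked in the paper operates on a single capacitated warehouse facing stochastic, seasonal demand with a fixed replenishment lead time: whenever the inventory position (on-hand plus in-transit stock) falls below the reorder point s, a fixed quantity Q is ordered. All names and parameter values are hypothetical and are not taken from the authors' open-source library; this is a minimal sketch of the general mechanism only.

```python
# Minimal illustrative sketch of an (s, Q) reorder policy on one capacitated
# warehouse with stochastic, seasonal demand and a fixed replenishment lead time.
# All parameters are hypothetical; this is NOT the authors' simulation library.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical problem parameters
T = 52                         # length of the time horizon (periods)
capacity = 200                 # warehouse capacity
lead_time = 2                  # replenishment lead time (periods)
s, Q = 60, 80                  # reorder point and fixed order quantity
holding_cost, backorder_cost = 1.0, 5.0

inventory = 100.0              # on-hand inventory (negative = backorders)
pipeline = [0.0] * lead_time   # orders in transit, indexed by periods until arrival
total_cost = 0.0

for t in range(T):
    # Stochastic demand with a seasonal (sinusoidal) mean
    mean_demand = 20 + 10 * np.sin(2 * np.pi * t / T)
    demand = rng.poisson(mean_demand)

    # Receive the order placed lead_time periods ago, respecting capacity
    inventory = min(inventory + pipeline.pop(0), capacity)

    # Serve demand; unmet demand is backordered (negative inventory)
    inventory -= demand

    # (s, Q)-policy: order Q units whenever the inventory position drops below s
    inventory_position = inventory + sum(pipeline)
    pipeline.append(Q if inventory_position < s else 0.0)

    # Accumulate holding and backorder costs for this period
    total_cost += holding_cost * max(inventory, 0) + backorder_cost * max(-inventory, 0)

print(f"Total cost over {T} periods: {total_cost:.1f}")
```

In the paper's experiments, such static policies are compared against DRL agents (e.g. Proximal Policy Optimization) trained on the full divergent two-echelon environment; the sketch above covers only the single-warehouse ordering logic.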