Reinforcement learning
Computer science
Bin packing problem
Markov decision process
Packing problem
Process (computing)
Mathematical optimization
Artificial intelligence
Usability
Bin
Key (lock)
Algorithm
Machine learning
Markov process
Mathematics
Statistics
Operating system
Computer security
World Wide Web
Authors
Hang Zhao, Chenyang Zhu, Xin Xu, Hui Huang, Kai Xu
Identifiers
DOI:10.1007/s11432-021-3348-6
Abstract
We tackle the online 3D bin packing problem (3D-BPP), a challenging yet practically useful variant of the classical bin packing problem. In this setting, items are delivered to the agent without the full sequence being known in advance. The agent must pack each item into the target bin stably, in its arrival order, and no further adjustment is permitted. Online 3D-BPP can be naturally formulated as a Markov decision process (MDP). We adopt deep reinforcement learning, in particular the on-policy actor-critic framework, to solve this MDP with a constrained action space. To learn a practically feasible packing policy, we propose three critical designs. First, we propose an online analysis of packing stability based on a novel stacking tree. It attains high analysis accuracy while reducing the computational complexity from O(N²) to O(N log N), making it especially suited for reinforcement learning training. Second, we propose decoupled packing policy learning for the different dimensions of placement, which enables high-resolution spatial discretization and hence high packing precision. Third, we introduce a reward function that directs the robot to place items in a far-to-near order, thereby simplifying collision avoidance in the motion planning of the robotic arm. Furthermore, we provide a comprehensive discussion of several key implementation issues. Extensive evaluation demonstrates that our learned policy significantly outperforms state-of-the-art methods and is practically usable in real-world applications.
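To make the MDP framing concrete, below is a minimal, illustrative Python sketch, not the authors' implementation: the state is the bin's discretized height map plus the incoming item's size, the action is a front-left-bottom cell drawn from a feasibility-constrained mask, and the reward is simply the packed volume. All names (`BinEnv`, `GRID`, `BIN_HEIGHT`) and the greedy placement rule that stands in for the learned actor-critic policy are assumptions for illustration; the paper's stacking-tree stability analysis, decoupled per-dimension policy, and far-to-near reward are omitted.

```python
import numpy as np

GRID = 10          # hypothetical discretization of the bin footprint
BIN_HEIGHT = 10    # hypothetical bin height in the same discrete units


class BinEnv:
    """Toy online 3D bin-packing environment, used only to illustrate the MDP view."""

    def __init__(self, seed=0):
        self.rng = np.random.default_rng(seed)
        self.reset()

    def reset(self):
        # State: the bin's height map plus the size of the incoming item.
        self.height_map = np.zeros((GRID, GRID), dtype=np.int64)
        self.item = self._next_item()
        return self._state()

    def _next_item(self):
        # Items arrive one at a time; the future sequence is unknown (the online setting).
        return self.rng.integers(1, 4, size=3)  # (dx, dy, dz)

    def _state(self):
        return self.height_map.copy(), self.item.copy()

    def feasible_mask(self):
        # Constrained action space: front-left-bottom cells (x, y) where the item
        # stays inside the bin. (The paper additionally checks stacking stability.)
        dx, dy, dz = self.item
        mask = np.zeros((GRID, GRID), dtype=bool)
        for x in range(GRID - dx + 1):
            for y in range(GRID - dy + 1):
                base = self.height_map[x:x + dx, y:y + dy].max()
                mask[x, y] = base + dz <= BIN_HEIGHT
        return mask

    def step(self, x, y):
        # Place the current item at (x, y) without changing its arrival order.
        dx, dy, dz = self.item
        base = self.height_map[x:x + dx, y:y + dy].max()
        self.height_map[x:x + dx, y:y + dy] = base + dz
        reward = float(dx * dy * dz)           # toy reward: packed volume
        self.item = self._next_item()
        done = not self.feasible_mask().any()  # episode ends when nothing fits
        return self._state(), reward, done


if __name__ == "__main__":
    # Greedy stand-in for the learned actor-critic policy:
    # always pick the feasible cell whose supporting surface is lowest.
    env = BinEnv()
    total, done = 0.0, False
    while not done:
        mask = env.feasible_mask()
        xs, ys = np.nonzero(mask)
        dx, dy, _ = env.item
        bases = [env.height_map[x:x + dx, y:y + dy].max() for x, y in zip(xs, ys)]
        best = int(np.argmin(bases))
        _, reward, done = env.step(int(xs[best]), int(ys[best]))
        total += reward
    print("total packed volume:", total)
```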