Keywords
Reinforcement learning
Hindsight
Leverage (statistics)
Robot
Computer science
Artificial intelligence
Imitation
Task (project management)
Robotic arm
Robot learning
Robot end effector
Human–computer interaction
Simulation
Machine learning
Engineering
Mobile robot
Psychology
Social psychology
Systems engineering
Cognitive psychology
Authors
Jingchen Li, Haobin Shi, Kao-Shing Hwang
Source
Journal: IEEE Transactions on Automation Science and Engineering
[Institute of Electrical and Electronics Engineers]
Date: 2023-10-16
Volume/Issue: 21 (4): 6217-6228
Citations: 2
Identifier
DOI:10.1109/tase.2023.3323307
Abstract
Leveraging reinforcement learning for high-precision decision-making in robot arm assembly scenes is a desired goal in the industrial community. However, tasks like Flexible Flat Cable (FFC) assembly, which require highly trained workers, pose significant challenges due to sparse rewards and limited learning conditions. In this work, we propose a goal-conditioned self-imitation reinforcement learning method for FFC assembly that does not rely on a specific end-effector, in which both perception and behavior planning are learned through reinforcement learning. We analyze the challenges faced by the robot arm in high-precision assembly scenarios and balance the breadth and depth of exploration during training. Our end-to-end model consists of hindsight and self-imitation modules, allowing the robot arm to leverage futile exploration and optimize successful trajectories. Our method requires no rule-based or manual rewards: through experience relabeling, the robot arm quickly finds feasible solutions while avoiding unnecessary exploration. We train the FFC assembly policy in a simulation environment and transfer it to the real scenario using domain adaptation. We explore various combinations of hindsight and self-imitation learning and discuss the results comprehensively. Experimental findings demonstrate that our model achieves fast and accurate flexible flat cable assembly, surpassing other reinforcement-learning-based methods.
Note to Practitioners: The motivation of this article stems from the need to develop an efficient and accurate FFC assembly policy for the 3C (Computer, Communication, and Consumer Electronics) industry, promoting the development of intelligent manufacturing. Traditional control methods cannot complete such a high-precision task with a robot arm because the connectors are difficult to model, and existing reinforcement learning methods cannot converge within restricted epochs because of the difficult goals or trajectories.
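The "experience relabeling" the abstract refers to is commonly implemented in the style of hindsight experience replay (HER): a failed episode is replayed with its goal replaced by a state the arm actually reached, converting futile exploration into positive training signal. The abstract does not give the paper's exact scheme, so the sketch below is a generic HER-style "future" relabeling, with hypothetical transition tuples `(state, action, goal, next_state)`:

```python
import random

def relabel_hindsight(trajectory, k=4):
    """HER-style 'future' relabeling (generic sketch; the paper's exact
    scheme may differ). Each transition is (state, action, goal, next_state).
    For every step, up to k states achieved later in the same episode are
    substituted as goals, so a failed assembly attempt still yields
    transitions with nonzero sparse reward."""
    relabeled = []
    for t, (state, action, goal, next_state) in enumerate(trajectory):
        # keep the original transition; sparse reward: 1 if the goal was hit
        relabeled.append((state, action, goal, next_state,
                          1.0 if next_state == goal else 0.0))
        # sample substitute goals from states achieved at step t or later
        future = trajectory[t:]
        for _ in range(min(k, len(future))):
            _, _, _, achieved = random.choice(future)
            relabeled.append((state, action, achieved, next_state,
                              1.0 if next_state == achieved else 0.0))
    return relabeled
```

Because the substitute goal was, by construction, reached within the episode, at least some relabeled transitions carry reward 1 even when the original goal was never achieved.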
To quickly learn a high-quality assembly policy for the robot arm and accelerate convergence, we combine goal-conditioned reinforcement learning with a self-imitation mechanism, balancing the depth and breadth of exploration. The proposed method takes visual information and six-dimensional force as the state and obtains satisfactory assembly policies. We build a simulation scene with the PyBullet platform and pre-train the robot arm in it; the pre-trained policies can then be reused in real scenarios with fine-tuning.
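The self-imitation side of the method, "optimizing successful trajectories", is typically realized as in self-imitation learning: the policy imitates its own past actions only where the observed return exceeded the current value estimate, i.e., only on transitions that turned out better than expected. This is a hypothetical stand-in for the paper's self-imitation module, not its actual loss:

```python
import numpy as np

def sil_loss(returns, values, log_probs):
    """Self-imitation objective (SIL-style sketch): clip the advantage at
    zero so only better-than-expected transitions contribute, then weight
    the imitation (negative log-likelihood) term by that clipped advantage
    and regress the value function toward the observed returns."""
    advantage = np.maximum(returns - values, 0.0)  # keep only positive gaps
    policy_loss = -(log_probs * advantage).mean()  # imitate good past actions
    value_loss = 0.5 * (advantage ** 2).mean()     # clipped value regression
    return policy_loss + value_loss
```

Transitions whose return falls below the value estimate get zero weight, which is what keeps self-imitation from reinforcing the unsuccessful explorations the hindsight module already recycles.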