Grasping
Initialization
Computer science
Artificial intelligence
Robotics
Reinforcement learning
Machine learning
Supervised learning
Divergence (linguistics)
Workspace
Affordance
Artificial neural network
Human–computer interaction
Linguistics
Philosophy
Programming language
Authors
Yanxu Hou,Jun Li,Zihan Fang,Xuechao Zhang
Identifier
DOI:10.1109/icnsc48988.2020.9238061
Abstract
Generally, self-supervised learning of robotic grasping uses a model-free reinforcement learning method, e.g., a Deep Q-network (DQN). A DQN uses a high-dimensional Q-network to infer dense pixel-wise probability maps of affordances for grasping actions. Unfortunately, this usually leads to a time-consuming training process. Inspired by the role of initialization in optimization algorithms, we propose an initialization method for accelerating self-supervised learning of robotic grasping. It pre-trains the Q-network by supervised learning of affordance maps before the robotic grasp training. With the pre-trained Q-network, a robot can be trained through self-supervised trial-and-error in a purposeful manner, avoiding meaningless grasps in empty regions. The Q-network is pre-trained by supervised learning on a small dataset with coarse-grained labels. We test the proposed method with Mean Square Error, Smooth L1, and Kullback-Leibler Divergence (KLD) as loss functions in the pre-training phase. The results indicate that the KLD loss function predicts affordances accurately, with less noise in the empty regions. Moreover, our method accelerates the self-supervised learning significantly in the early stage and shows little sensitivity to the sparsity of objects in the workspace.
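The abstract compares loss functions for pre-training the affordance Q-network, with KLD performing best at suppressing noise in empty regions. The paper's exact formulation is not given here, so the following is a minimal sketch of a pixel-wise KL-divergence loss between a normalized coarse-grained label map and a softmax over predicted affordance logits; the function name and the spatial normalization are assumptions for illustration.

```python
import numpy as np

def kld_affordance_loss(pred_logits, target_map, eps=1e-8):
    """Hypothetical sketch of a KLD pre-training loss.

    pred_logits: raw pixel-wise affordance scores from the Q-network.
    target_map: coarse-grained (e.g., binary) label map of graspable regions.
    Both maps are treated as distributions over all pixels.
    """
    # Softmax over all pixels of the predicted map (numerically stabilized).
    z = pred_logits - pred_logits.max()
    p = np.exp(z) / np.exp(z).sum()
    # Normalize the label map into a target distribution.
    q = target_map / (target_map.sum() + eps)
    # KL(q || p): large when probability mass is predicted in empty regions.
    return float(np.sum(q * (np.log(q + eps) - np.log(p + eps))))

# Toy example: a 2x2 workspace with one graspable pixel at (0, 0).
logits = np.array([[5.0, 0.0], [0.0, 0.0]])
label = np.array([[1.0, 0.0], [0.0, 0.0]])
loss = kld_affordance_loss(logits, label)
```

A prediction that concentrates mass on the labeled pixel yields a low loss, while mass leaking into empty regions raises it, which matches the abstract's observation that KLD suppresses noise there.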