Artificial intelligence
Computer science
Task (project management)
Motion (physics)
Action (physics)
Computer vision
Image (mathematics)
Engineering
Quantum mechanics
Physics
Systems engineering
Authors
Danny Driess, Jung-Su Ha, Marc Toussaint
Identifiers
DOI: 10.15607/rss.2020.xvi.003
Abstract
In this paper, we propose a deep convolutional recurrent neural network that predicts action sequences for task and motion planning (TAMP) from an initial scene image. Typical TAMP problems are formalized by combining reasoning on a symbolic, discrete level (e.g. first-order logic) with continuous motion planning such as nonlinear trajectory optimization. Due to the great combinatorial complexity of possible discrete action sequences, a large number of optimization/motion planning problems have to be solved to find a solution, which limits the scalability of these approaches. To circumvent this combinatorial complexity, we develop a neural network which, based on an initial image of the scene, directly predicts promising discrete action sequences such that ideally only one motion planning problem has to be solved to find a solution to the overall TAMP problem. A key aspect is that our method generalizes to scenes with many and varying numbers of objects, although being trained on only two objects at a time. This is possible by encoding the objects of the scene in images as input to the neural network, instead of a fixed feature vector. Results show runtime improvements of several magnitudes. Video: https://youtu.be/i8yyEbbvoEk
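To make the described architecture concrete, the following is a minimal PyTorch sketch of a convolutional recurrent predictor that takes per-object scene images (so the object count can vary) and outputs logits over a discrete action vocabulary for each step. The class name, layer sizes, pooling choice, and the simple step-wise decoding are illustrative assumptions, not the authors' exact network.

```python
import torch
import torch.nn as nn


class ActionSequencePredictor(nn.Module):
    """Sketch only: shared CNN encoder per object image + LSTM decoder that
    emits one discrete action symbol per step. Names and sizes are assumptions."""

    def __init__(self, num_actions: int, hidden_dim: int = 128):
        super().__init__()
        # Shared convolutional encoder applied to each object-centric image.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
            nn.Flatten(),
            nn.Linear(32 * 4 * 4, hidden_dim), nn.ReLU(),
        )
        # Recurrent decoder over the pooled scene encoding.
        self.decoder = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
        self.action_head = nn.Linear(hidden_dim, num_actions)

    def forward(self, object_images: torch.Tensor, seq_len: int) -> torch.Tensor:
        # object_images: (batch, num_objects, 3, H, W); num_objects may vary per scene.
        b, n = object_images.shape[:2]
        feats = self.encoder(object_images.flatten(0, 1)).view(b, n, -1)
        # Max-pool over objects: invariant to object order and count.
        scene = feats.max(dim=1).values
        # Feed the scene encoding at every decoding step (an assumed simplification).
        steps = scene.unsqueeze(1).repeat(1, seq_len, 1)
        out, _ = self.decoder(steps)
        return self.action_head(out)  # (batch, seq_len, num_actions) logits


# Usage: predict a 5-step action sequence for a scene with 4 objects.
model = ActionSequencePredictor(num_actions=10)
logits = model(torch.randn(1, 4, 3, 64, 64), seq_len=5)
print(logits.shape)  # torch.Size([1, 5, 10])
```

The object-wise encoder plus order-invariant pooling is one plausible way to realize the abstract's point that objects are encoded as images rather than a fixed feature vector, which is what lets the predictor handle scenes with more objects than seen during training.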