Artificial intelligence
Computer vision
Computer science
Perception
Haptic technology
Observer (physics)
Robotics
Object (grammar)
Robot
Pose
Psychology
Neuroscience
Physics
Quantum mechanics
Authors
Sudharshan Suresh, Haozhi Qi, Tingfan Wu, Taosha Fan, Luis A. Pineda, Mike Lambeta, Jitendra Malik, Mrinal Kalakrishnan, Roberto Calandra, Michael Kaess, Joseph D. Ortiz, Mustafa Mukadam
Source
Journal: Science Robotics
Publisher: American Association for the Advancement of Science
Date: 2024-11-13
Volume/Issue: 9 (96)
Citations: 7
Identifiers
DOI: 10.1126/scirobotics.adl0628
Abstract
To achieve human-level dexterity, robots must infer spatial awareness from multimodal sensing to reason over contact interactions. During in-hand manipulation of novel objects, such spatial awareness involves estimating the object's pose and shape. The status quo for in-hand perception primarily uses vision and is restricted to tracking a priori known objects. Moreover, visual occlusion of objects in hand is imminent during manipulation, preventing current systems from pushing beyond tasks without occlusion. We combined vision and touch sensing on a multifingered hand to estimate an object's pose and shape during in-hand manipulation. Our method, NeuralFeels, encodes object geometry by learning a neural field online and jointly tracks it by optimizing a pose graph problem. We studied multimodal in-hand perception in simulation and the real world, interacting with different objects via a proprioception-driven policy. Our experiments showed final reconstruction F-scores of 81% and average pose drifts of 4.7 millimeters, which was further reduced to 2.3 millimeters with known object models. In addition, we observed that, under heavy visual occlusion, we could achieve improvements in tracking of up to 94% compared with vision-only methods. Our results suggest that touch, at the very least, refines and, at the very best, disambiguates visual estimates during in-hand manipulation.
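The abstract names two coupled components: learning a neural field of object geometry online and tracking the object by optimizing a pose graph. As a rough illustration only, the PyTorch sketch below fits a small MLP signed-distance field to synthetic samples of a sphere (standing in for fused visual-tactile depth) and then, with the field frozen, optimizes a single rigid pose so new surface measurements land on the field's zero level set. Everything here (the MLP architecture, the synthetic sphere, the single-pose objective in place of the paper's full pose graph, and all hyperparameters) is an assumption for illustration, not the authors' NeuralFeels implementation.

```python
import torch
import torch.nn as nn

class NeuralSDF(nn.Module):
    """Tiny MLP mapping 3D points to signed distance (a stand-in neural field)."""
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)

def sphere_sdf(p, radius=0.05):
    """Ground-truth SDF of a 5 cm sphere, standing in for fused depth/touch data."""
    return p.norm(dim=-1) - radius

def axis_angle_to_matrix(w):
    """Rodrigues' formula, written differentiably so the pose can be optimized."""
    theta = torch.sqrt(w.pow(2).sum() + 1e-12)
    k = w / theta
    z = torch.zeros((), dtype=w.dtype)
    K = torch.stack([
        torch.stack([z, -k[2], k[1]]),
        torch.stack([k[2], z, -k[0]]),
        torch.stack([-k[1], k[0], z]),
    ])
    return torch.eye(3) + torch.sin(theta) * K + (1 - torch.cos(theta)) * (K @ K)

# Mapping: fit the field to SDF samples ("learning a neural field online").
field = NeuralSDF()
map_opt = torch.optim.Adam(field.parameters(), lr=1e-3)
pts = (torch.rand(5000, 3) - 0.5) * 0.2               # samples in a 20 cm cube
target = sphere_sdf(pts)
for _ in range(1000):
    map_opt.zero_grad()
    loss = (field(pts) - target).pow(2).mean()
    loss.backward()
    map_opt.step()

# Tracking: freeze the field and solve for the rigid pose that explains new
# surface measurements -- one unary factor of the paper's full pose graph.
surf = torch.randn(500, 3)
surf = 0.05 * surf / surf.norm(dim=-1, keepdim=True)  # points on the sphere
t_true = torch.tensor([0.01, -0.005, 0.02])
meas = surf + t_true                                  # object translated in the world

w = torch.zeros(3, requires_grad=True)                # axis-angle rotation
t = torch.zeros(3, requires_grad=True)                # translation
pose_opt = torch.optim.Adam([w, t], lr=1e-2)
for _ in range(500):
    pose_opt.zero_grad()
    R = axis_angle_to_matrix(w)
    obj_pts = (meas - t) @ R                          # world -> object frame
    loss = field(obj_pts).pow(2).mean()               # measurements lie on the zero set
    loss.backward()
    pose_opt.step()

print("estimated translation:", t.detach().numpy(), "true:", t_true.numpy())
```

In the real system, the measurements come from segmented RGB-D and tactile depth on a multifingered hand and mapping and tracking alternate online; the sketch only conveys how a learned SDF can serve both as the shape representation and as the measurement model for pose optimization.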