Robotics
Computer science
Teleoperation
Human-computer interaction
Task (project management)
Transformer
Artificial intelligence
Simulation
Engineering
Systems engineering
Electrical engineering
Voltage
Authors
Henry Clever,Ankur Handa,Hammad Mazhar,Kevin Kit Parker,Omer Shapira,Qian Wan,Yashraj Narang,Iretiayo Akinola,Maya Çakmak,Dieter Fox
Source
Journal: Cornell University - arXiv
Date: 2021-01-01
Citations: 7
Identifier
DOI:10.48550/arxiv.2112.05129
Abstract
Sharing autonomy between robots and human operators could facilitate data collection of robotic task demonstrations to continuously improve learned models. Yet, the means to communicate intent and reason about the future are disparate between humans and robots. We present Assistive Tele-op, a virtual reality (VR) system for collecting robot task demonstrations that displays an autonomous trajectory forecast to communicate the robot's intent. As the robot moves, the user can switch between autonomous and manual control when desired. This allows users to collect task demonstrations with both a high success rate and with greater ease than manual teleoperation systems. Our system is powered by transformers, which can provide a window of potential states and actions far into the future -- with almost no added computation time. A key insight is that human intent can be injected at any location within the transformer sequence if the user decides that the model-predicted actions are inappropriate. At every time step, the user can (1) do nothing and allow autonomous operation to continue while observing the robot's future plan sequence, or (2) take over and momentarily prescribe a different set of actions to nudge the model back on track. We host the videos and other supplementary material at https://sites.google.com/view/assistive-teleop.
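The abstract's key mechanism is that user actions can be injected anywhere into the transformer's autoregressive sequence, so subsequent forecasts condition on the human's correction. The following minimal Python sketch illustrates that control loop under stated assumptions: the function and variable names are hypothetical, and the transformer is replaced by a trivial stand-in that repeats the last action, since the paper's actual model and API are not given here.

```python
def forecast(history, horizon):
    """Stand-in for the transformer policy: predict the next `horizon`
    actions from the sequence of past actions. A real model would be
    autoregressive over (state, action) tokens; this placeholder just
    repeats the most recent action."""
    last = history[-1] if history else 0.0
    return [last] * horizon

def shared_autonomy_step(history, user_action=None, horizon=5):
    """One control step. The forecast is what the VR display would show
    as the robot's intended plan. If the user intervenes, their action
    is appended to the sequence (intent injection), so every later
    forecast conditions on it; otherwise the plan's first action runs."""
    plan = forecast(history, horizon)
    action = user_action if user_action is not None else plan[0]
    history.append(action)
    return action, plan

# Simulated session: autonomous step, user nudge, then autonomy again.
history = [0.0]
a1, _ = shared_autonomy_step(history)                    # model drives
a2, _ = shared_autonomy_step(history, user_action=1.0)   # user takes over
a3, plan = shared_autonomy_step(history)                 # model now tracks 1.0
```

Because the override lives in the same sequence the model conditions on, a single momentary correction "nudges the model back on track" for the whole remaining rollout, rather than applying only to one time step.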