Keywords
Computer science, Embodied cognition, Plan (archaeology), Object (grammar), Human-computer interaction, Expression (computer science), Benchmark (surveying), Action (physics), Robot, Artificial intelligence, State (computer science), Interface (matter), Multimedia, Programming language, Physics, Archaeology, Geodesy, Quantum mechanics, History, Geography, Bubble, Maximum bubble pressure method, Parallel computing
Authors
Yanyuan Qiao, Yuankai Qi, Yu Zheng, Jing Liu, Qi Wu
Source
Journal: Cornell University - arXiv
Date: 2023-01-01
Citations: 1
Identifier
DOI:10.48550/arxiv.2308.10141
Abstract
Many Vision-and-Language Navigation (VLN) tasks have been proposed in recent years, from room-based to object-based and from indoor to outdoor. The REVERIE (Remote Embodied Referring Expression) task is interesting since it only provides high-level instructions to the agent, which are closer to human commands in practice. Nevertheless, this poses more challenges than other VLN tasks, since it requires agents to infer a navigation plan from only a short instruction. Large Language Models (LLMs) show great potential in robot action planning when given proper prompts, but this strategy has not been explored under the REVERIE setting. It raises several new challenges. For example, the LLM should be environment-aware, so that the navigation plan can be adjusted based on the current visual observation. Moreover, the LLM-planned actions should be adaptable to the much larger and more complex REVERIE environments. This paper proposes a March-in-Chat (MiC) model that can talk to the LLM on the fly and plan dynamically based on a newly proposed Room-and-Object Aware Scene Perceiver (ROASP). Our MiC model outperforms the previous state-of-the-art by large margins on the SPL and RGSPL metrics of the REVERIE benchmark.
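The re-planning loop sketched in the abstract (a scene perceiver summarizes the current observation, the summary is folded into a prompt, and the LLM is queried on the fly for the next sub-plan) can be illustrated as follows. This is a minimal sketch under stated assumptions: the function names, the observation format, and the stub `query_llm` planner are all hypothetical illustrations, not the authors' actual MiC/ROASP implementation.

```python
def roasp_summary(observation):
    """Stand-in for a Room-and-Object Aware Scene Perceiver:
    turn a raw observation into a room-and-objects description."""
    objects = ", ".join(observation["objects"])
    return f"You are in a {observation['room']}; you can see {objects}."

def query_llm(prompt):
    """Stub planner; a real system would call a language model here.
    It keys off the scene summary so the plan adapts to the environment."""
    if "in a kitchen" in prompt:
        return "walk to the kitchen counter"
    return "explore the nearest doorway"

def march_in_chat(instruction, observations):
    """Re-plan at every step from the latest scene summary,
    mimicking the on-the-fly dialogue with the LLM."""
    plan = []
    for obs in observations:
        prompt = (f"Instruction: {instruction}\n"
                  f"Scene: {roasp_summary(obs)}\n"
                  f"Next action?")
        plan.append(query_llm(prompt))
    return plan

steps = march_in_chat(
    "Bring me the mug from the kitchen",
    [{"room": "hallway", "objects": ["door", "lamp"]},
     {"room": "kitchen", "objects": ["counter", "mug"]}],
)
print(steps)  # ['explore the nearest doorway', 'walk to the kitchen counter']
```

The key design point this illustrates is environment awareness: the same high-level instruction yields different actions once the scene summary changes, which is what distinguishes dynamic prompting from a one-shot plan generated before navigation starts.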