Artificial intelligence
Computer science
Robot
Robustness (evolution)
Artificial neural network
Deep neural network
Task (project management)
On the fly
Robotics
Human-computer interaction
Machine learning
Computer vision
Engineering
Biochemistry
Chemistry
Systems engineering
Gene
Operating system
Authors
Makram Chahine,Ramin Hasani,Patrick Kao,Aaron Ray,Ryan Shubert,Mathias Lechner,Alexander Amini,Daniela Rus
Source
Journal: Science Robotics
[American Association for the Advancement of Science (AAAS)]
Date: 2023-04-19
Volume/Issue: 8 (77)
Citations: 16
Identifier
DOI: 10.1126/scirobotics.adc8892
Abstract
Autonomous robots can learn to perform visual navigation tasks from offline human demonstrations and generalize well to online and unseen scenarios within the same environment they have been trained on. It is challenging for these agents to take a step further and robustly generalize to new environments with drastic scenery changes that they have never encountered. Here, we present a method to create robust flight navigation agents that successfully perform vision-based fly-to-target tasks beyond their training environment under drastic distribution shifts. To this end, we designed an imitation learning framework using liquid neural networks, a brain-inspired class of continuous-time neural models that are causal and adapt to changing conditions. We observed that liquid agents learn to distill the task they are given from visual inputs and drop irrelevant features. Thus, their learned navigation skills transferred to new environments. When compared with several other state-of-the-art deep agents, experiments showed that this level of robustness in decision-making is exclusive to liquid networks, both in their differential equation and closed-form representations.
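The abstract refers to liquid neural networks in both their differential-equation and closed-form representations. As an illustrative aside, below is a minimal sketch of a single liquid time-constant (LTC) cell update in the differential-equation form, assuming a tanh-activated synaptic nonlinearity and the fused semi-implicit Euler step described in the liquid network literature; the function and parameter names (ltc_step, W, U, b, tau, A, dt) are hypothetical and are not taken from the paper's code.

```python
# Sketch of one liquid time-constant (LTC) cell update, assuming the
# continuous-time dynamics  dx/dt = -(1/tau + f(x, I)) * x + f(x, I) * A
# and a fused semi-implicit Euler discretization. Illustrative only; not
# the authors' implementation.
import numpy as np

def ltc_step(x, I, W, U, b, tau, A, dt=0.05):
    """Advance hidden state x by one time step of length dt given input I."""
    f = np.tanh(W @ x + U @ I + b)          # state- and input-dependent gate
    # Semi-implicit (fused) Euler step: remains stable for stiff time constants.
    return (x + dt * f * A) / (1.0 + dt * (1.0 / tau + f))

# Toy usage: 8 hidden units driven by a 4-dimensional feature vector
# standing in for visual perception features.
rng = np.random.default_rng(0)
n_hidden, n_in = 8, 4
W = rng.normal(scale=0.1, size=(n_hidden, n_hidden))
U = rng.normal(scale=0.1, size=(n_hidden, n_in))
b = np.zeros(n_hidden)
tau = np.ones(n_hidden)                      # per-unit time constants
A = rng.normal(size=n_hidden)                # per-unit bias/attractor term

x = np.zeros(n_hidden)
for t in range(100):
    I = rng.normal(size=n_in)                # placeholder input stream
    x = ltc_step(x, I, W, U, b, tau, A)
```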