
COAL: Robust Contrastive Learning‐Based Visual Navigation Framework

Authors
Zengmao Wang, Jianhua Hu, Q. R. Tang, Wei Gao
Source
Journal: Journal of Field Robotics [Wiley]
Identifier
DOI: 10.1002/rob.22508
Abstract

Real-world robots face a wide variety of complex environments when performing navigation or exploration tasks, especially environments they have never seen before. Usually, robots must build local or global maps and then use path-planning algorithms to determine their routes. However, in some environments, such as a wild grassy path or the pavement on either side of a road, it is difficult for robots to plan routes from navigation maps. To address this, we propose a robust framework for robot navigation based on contrastive learning, called Contrastive Observation–Action in Latent (COAL) space. COAL uses two separate encoders to extract features from the observation space and the action space, respectively. At the training stage, COAL requires no data annotation, and a mask approach is employed to push features with significant differences away from each other in latent space. As in multimodal contrastive learning, we maximize bidirectional mutual information to align the features of observations and action sequences in latent space, which enhances the generalization of the model. At the deployment stage, the robot needs only the current image as its observation to complete exploration tasks: the most suitable action sequence is selected from sampled candidates to generate control signals. We evaluate the robustness of COAL in both simulated and real environments. Only 41 min of unlabeled training data is required for COAL to explore environments it has never seen before, even at night. Compared with state-of-the-art methods, COAL has the strongest robustness and generalization ability. More importantly, the robustness of COAL improves further when the training data are augmented with other open-source data sets, which indicates that our framework has great potential to extract deep features of observations and action sequences. Our code and trained models are available at https://github.com/wzm206/COAL.
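
The training objective described above resembles multimodal (CLIP-style) contrastive alignment. The PyTorch sketch below shows a symmetric InfoNCE loss over matched observation/action-sequence embeddings, plus the deployment-time step of picking the best-matching sampled action sequence. This is a minimal illustration, not the authors' implementation (see the linked repository for that): the function names, embedding dimension, and temperature are assumptions, and the paper's mask mechanism is omitted.

```python
# Minimal sketch of a symmetric (bidirectional) InfoNCE objective, assuming
# two encoders have already produced matched (observation, action-sequence)
# embedding pairs. Names and hyperparameters are illustrative only.
import torch
import torch.nn.functional as F

def bidirectional_infonce(obs_emb: torch.Tensor,
                          act_emb: torch.Tensor,
                          temperature: float = 0.07) -> torch.Tensor:
    """obs_emb, act_emb: (B, D) embeddings of B matched pairs."""
    obs_emb = F.normalize(obs_emb, dim=-1)
    act_emb = F.normalize(act_emb, dim=-1)
    # (B, B) cosine-similarity logits; diagonal entries are the true pairs.
    logits = obs_emb @ act_emb.t() / temperature
    labels = torch.arange(logits.size(0), device=logits.device)
    # Maximize mutual information in both directions, as in CLIP:
    # observation -> action sequence and action sequence -> observation.
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))

@torch.no_grad()
def select_action_sequence(obs_emb: torch.Tensor,
                           candidate_act_embs: torch.Tensor) -> int:
    """Deployment: return the index of the sampled action sequence whose
    embedding is closest (by cosine similarity) to the current observation."""
    sims = F.normalize(candidate_act_embs, dim=-1) @ F.normalize(obs_emb, dim=-1)
    return int(sims.argmax())

# Toy usage with random tensors standing in for real encoder outputs.
obs = torch.randn(32, 256)   # batch of observation embeddings
act = torch.randn(32, 256)   # matched action-sequence embeddings
loss = bidirectional_infonce(obs, act)
best = select_action_sequence(obs[0], act)
```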