Slip
Artificial intelligence
Computer science
Slip (aerodynamics)
Tactile sensor
Computer vision
Fuse (electrical)
Machine vision
Object detection
Modal
Robot vision
Robot
Pattern recognition (psychology)
Mobile robot
Engineering
Aerospace engineering
Chemistry
Polymer chemistry
Electrical engineering
Mechanical engineering
Authors
Gang Yan, Alexander Schmitz, Tito Pradhono Tomo, Sophon Somlor, Satoshi Funabashi, Shigeki Sugano
Identifier
DOI: 10.1109/icra46639.2022.9811589
Abstract
Detecting the onset or ongoing occurrence of slip, i.e. whether a grasped object is slipping or will slip from the gripper while being lifted, is crucial. Conventionally, this is regarded as a tactile sensing problem. Recently, however, multi-modal robotic learning has become popular and is expected to boost performance. In this paper we propose a novel CNN-TCN model that fuses tactile and visual information to detect the onset or ongoing occurrence of slip. In our experiments, two uSkin tactile sensors and one RealSense 435i camera are used. Data is collected by randomly grasping and lifting 35 daily objects, 1050 times in total. Furthermore, we compare our CNN-TCN model with the widely used CNN-LSTM model. Our proposed model achieves an 88.75% detection accuracy and outperforms the CNN-LSTM model combined with different pretrained vision networks.
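Since the abstract only names the architecture, the following is a minimal PyTorch sketch of how a CNN-TCN model could fuse per-frame visual and tactile inputs for slip detection. The layer sizes, the 96-dimensional tactile vector (two uSkin sensors with an assumed 16 tri-axial taxels each), and the 64x64 image resolution are illustrative assumptions, not the authors' configuration.

# Minimal CNN-TCN fusion sketch for slip detection (illustrative assumptions only).
import torch
import torch.nn as nn

class TemporalBlock(nn.Module):
    """One dilated causal 1-D convolution block of the TCN, with a residual connection."""
    def __init__(self, in_ch, out_ch, kernel_size=3, dilation=1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation  # left-only padding keeps the conv causal
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size, dilation=dilation)
        self.relu = nn.ReLU()
        self.down = nn.Conv1d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()

    def forward(self, x):                                # x: (B, C, T)
        out = nn.functional.pad(x, (self.pad, 0))        # pad only on the left (past)
        out = self.relu(self.conv(out))
        return out + self.down(x)                        # residual connection

class CnnTcnSlipDetector(nn.Module):
    """Encodes each frame's image and tactile reading, then models the sequence with a TCN."""
    def __init__(self, tactile_dim=2 * 16 * 3, feat_dim=64, num_classes=2):
        super().__init__()
        # Small frame-wise CNN for the camera image (a stand-in for a pretrained backbone).
        self.vision_cnn = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        # MLP encoder for the flattened tactile reading of both sensors.
        self.tactile_mlp = nn.Sequential(nn.Linear(tactile_dim, feat_dim), nn.ReLU())
        # TCN over the concatenated visual + tactile feature sequence.
        self.tcn = nn.Sequential(
            TemporalBlock(2 * feat_dim, 64, dilation=1),
            TemporalBlock(64, 64, dilation=2),
            TemporalBlock(64, 64, dilation=4),
        )
        self.classifier = nn.Linear(64, num_classes)     # slip vs. no slip

    def forward(self, images, tactile):
        # images: (B, T, 3, H, W); tactile: (B, T, tactile_dim)
        B, T = images.shape[:2]
        v = self.vision_cnn(images.flatten(0, 1)).view(B, T, -1)  # (B, T, feat_dim)
        t = self.tactile_mlp(tactile)                             # (B, T, feat_dim)
        seq = torch.cat([v, t], dim=-1).transpose(1, 2)           # (B, 2*feat_dim, T)
        h = self.tcn(seq)                                         # (B, 64, T)
        return self.classifier(h[:, :, -1])                       # predict at the last step

# Usage with random tensors: 4 sequences of 20 frames.
model = CnnTcnSlipDetector()
logits = model(torch.randn(4, 20, 3, 64, 64), torch.randn(4, 20, 96))
print(logits.shape)  # torch.Size([4, 2])

The TCN replaces the recurrent state of a CNN-LSTM with stacked dilated causal convolutions, so the receptive field over past frames grows with depth while training remains fully parallel over time steps.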