Visual Servoing
Computer Science
Artificial Intelligence
Computer Vision
Controller
Deep Learning
Control Scheme
Object
Robotics
Robot
RGB Color Model
Kinematics
Pose
Vision-Based Control
Authors
Abdulrahman Al-Shanoon, Haoxiang Lang
Identifier
DOI:10.1016/j.robot.2022.104041
Abstract
The critical challenge for robot–object interaction is to visually estimate the pose of the target object in 3D space and incorporate that estimate into a vision-based control scheme for manipulation applications. This paper proposes a novel, reliable framework that combines a deep ConvNet with visual servoing using a single RGB camera. We introduce a comprehensive system called Deep Visual Servoing (DVS) that integrates: (I) training of deep CNNs on synthetic data only while operating successfully in real-world scenarios, (II) continuous 3D object pose estimation as the sensing feedback in a 3D visual servoing control scheme, and (III) design, integration, and experimental validation of a visual servoing approach based on Lyapunov theory. The proposed deep-learning approach, kinematic modeling, and controller design are experimentally verified and discussed on a 6-DOF UR5 manipulator.
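The abstract describes a Lyapunov-based 3D (pose-based) visual servoing loop driven by a pose estimate. The paper's exact control law is not given here, but the classical pose-based visual servoing (PBVS) law, which is Lyapunov-stable and drives the pose error exponentially to zero, can be sketched as follows; the function names and the gain `lam` are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def rotation_to_axis_angle(R):
    """Convert a 3x3 rotation matrix to its axis-angle vector theta*u."""
    theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if np.isclose(theta, 0.0):
        return np.zeros(3)  # no rotation error
    u = np.array([R[2, 1] - R[1, 2],
                  R[0, 2] - R[2, 0],
                  R[1, 0] - R[0, 1]]) / (2.0 * np.sin(theta))
    return theta * u

def pbvs_velocity(t_err, R_err, lam=0.5):
    """Classical PBVS law: command a 6-D camera velocity (v, w) that
    decays the translation error t_err and rotation error R_err
    exponentially (Lyapunov function V = 0.5 * ||e||^2)."""
    v = -lam * t_err                            # linear velocity
    w = -lam * rotation_to_axis_angle(R_err)    # angular velocity
    return np.concatenate([v, w])
```

In a real system, the 6-D velocity command would then be mapped through the manipulator Jacobian to joint velocities of the UR5, with `t_err` and `R_err` refreshed each cycle from the CNN pose estimate.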