Robot
Human–robot interaction
Haptic technology
Computer science
Artificial intelligence
Task (project management)
Mobile robot
Interface (matter)
Computer vision
Control engineering
Simulation
Engineering
Systems engineering
Bubble
Maximum bubble pressure method
Parallel computing
Authors
Xiangjie Yan, Yongpeng Jiang, Chen Chen, Leiliang Gong, Mingqiao Ge, Tao Zhang, Xiang Li
Identifier
DOI:10.1109/tcst.2023.3301675
Abstract
There is invariably a tradeoff between safety and efficiency for collaborative robots (cobots) in human–robot collaborations (HRCs). Robots that interact minimally with humans can work with high speed and accuracy but cannot adapt to new tasks or respond to unforeseen changes, whereas robots that work closely with humans can adapt, but only by becoming passive to humans, meaning that their main tasks are suspended and efficiency is compromised. Accordingly, this article proposes a new complementary framework for HRC that balances the safety of humans and the efficiency of robots. In this framework, the robot carries out given tasks using a vision-based adaptive controller, and the human expert collaborates with the robot in the null space. Such a decoupling drives the robot to deal with existing issues in task space (e.g., uncalibrated camera, limited field of view (FOV)) and null space (e.g., joint limits) by itself, while allowing the expert to adjust the configuration of the robot body to respond to unforeseen changes (e.g., sudden invasion, change in environment) without affecting the robot's main task. In addition, the robot can simultaneously learn the expert's demonstration in task space and null space beforehand with dynamic movement primitives (DMPs). Therefore, an expert's knowledge and a robot's capability are explored and complement each other. Human demonstration and involvement are enabled via a mixed interaction interface, i.e., augmented reality (AR) and haptic devices. The stability of the closed-loop system is rigorously proved with Lyapunov methods. Experimental results in various scenarios are presented to illustrate the performance of the proposed method.
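The task/null-space decoupling the abstract describes rests on the standard resolved-rate formulation for a redundant manipulator, q̇ = J⁺ẋ + (I − J⁺J)ż: the first term tracks the primary task velocity, and the second projects any secondary command (here, the human expert's adjustment) into the null space of the Jacobian so it cannot disturb the task. The sketch below illustrates only this textbook projection, not the paper's adaptive controller; the function name, matrix shapes, and numbers are illustrative assumptions.

```python
import numpy as np

def nullspace_resolved_rates(J, x_dot, z_dot):
    """Joint velocities for a redundant arm: track the primary task
    velocity x_dot via the Jacobian pseudoinverse, and filter the
    secondary velocity z_dot (e.g., a human adjustment of the body
    configuration) through the null-space projector I - J^+ J."""
    J_pinv = np.linalg.pinv(J)            # J^+ : task-space mapping
    N = np.eye(J.shape[1]) - J_pinv @ J   # null-space projector of J
    return J_pinv @ x_dot + N @ z_dot

# Hypothetical 2-D task on a 4-DOF arm (two redundant DOFs):
J = np.array([[1.0, 0.0, 0.5, 0.0],
              [0.0, 1.0, 0.0, 0.5]])     # illustrative Jacobian
x_dot = np.array([0.2, -0.1])            # commanded task velocity
z_dot = np.array([1.0, -1.0, 0.5, 2.0])  # expert's joint-space input
q_dot = nullspace_resolved_rates(J, x_dot, z_dot)
```

Because z_dot enters only through the projector, J @ q_dot still equals x_dot exactly (for a full-row-rank J), which is what lets the expert reshape the arm without suspending the main task.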