Task (project management)
Computer science
Human-computer interaction
Trajectory
Kinematics
Robot
Motion capture
Adaptation (eye)
Task analysis
Motion (physics)
Artificial intelligence
Cartesian coordinate system
Simulation
Engineering
Systems engineering
Physics
Optics
Classical mechanics
Astronomy
Geometry
Mathematics
Authors
Zhiwei Liao, Marta Lorenzini, Mattia Leonori, Fei Zhao, Gedong Jiang, Arash Ajoudani
Source
Journal: IEEE Robotics and Automation Letters
Date: 2023-10-30
Volume/Issue: 9(1): 359-366
Citations: 3
Identifier
DOI: 10.1109/lra.2023.3328366
Abstract
This work presents an ergonomic and interactive human-robot collaboration (HRC) framework, through which new collaborative skills are extracted from a one-shot human demonstration and learned through Riemannian dynamic movement primitives (DMP). The proposed framework responds to human-robot interaction forces to adapt to the task requirements, while generating virtual "ergonomic forces" that guide the human toward more ergonomic postures, based on online monitoring of a kinematics-based index. The resulting motion is then integrated into the learned task trajectories. The framework is implemented on a mobile manipulator with a weighted whole-body Cartesian velocity controller, which meets the needs of large-scale HRC. To evaluate the proposed framework, a multi-subject experiment involving a human-robot co-carrying task is conducted. The performance of the ergo-interactive control in terms of task performance and ergonomics adaptation is verified under different experimental conditions. This is followed by a comparative statistical analysis. The experimental results show that the learned trajectory can be reproduced and generalized to several targets and adjusted online according to human preferences and ergonomics.
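The paper learns collaborative skills with Riemannian dynamic movement primitives, which extend the classical DMP formulation to non-Euclidean spaces such as orientations. As background for how a learned trajectory can be "reproduced and generalized to several targets," the following is a minimal sketch of a standard one-dimensional discrete DMP in Euclidean space (not the authors' Riemannian variant; all function names, gains, and basis-function choices here are illustrative assumptions):

```python
import numpy as np

def learn_dmp_forcing(y_demo, dt, alpha=25.0, beta=6.25, alpha_x=3.0, n_basis=20):
    """Fit DMP forcing-term weights from a 1-D demonstration (one-shot learning).

    Transformation system: tau^2 * ydd = alpha*(beta*(g - y) - tau*yd) + f(x)
    Canonical system:      tau * xd  = -alpha_x * x
    """
    T = len(y_demo)
    y0, g = y_demo[0], y_demo[-1]
    tau = (T - 1) * dt
    yd = np.gradient(y_demo, dt)
    ydd = np.gradient(yd, dt)
    # Phase variable decays from 1 to ~0 over the demonstration.
    t = np.arange(T) * dt
    x = np.exp(-alpha_x * t / tau)
    # Target forcing term implied by the demonstration.
    f_target = tau**2 * ydd - alpha * (beta * (g - y_demo) - tau * yd)
    # Gaussian basis functions spread over the phase.
    c = np.exp(-alpha_x * np.linspace(0.0, 1.0, n_basis))   # centers in phase space
    h = n_basis**1.5 / c                                    # widths
    psi = np.exp(-h[None, :] * (x[:, None] - c[None, :])**2)
    # Locally weighted regression; forcing is scaled by x*(g - y0).
    s = x * (g - y0)
    w = np.array([
        np.sum(s * psi[:, i] * f_target) / (np.sum(s**2 * psi[:, i]) + 1e-10)
        for i in range(n_basis)
    ])
    return w, (y0, g, tau, alpha, beta, alpha_x, c, h)

def rollout_dmp(w, params, dt, goal=None):
    """Integrate the DMP forward; passing a new goal generalizes the motion."""
    y0, g, tau, alpha, beta, alpha_x, c, h = params
    if goal is not None:
        g = goal
    y, yd, x = y0, 0.0, 1.0
    out = [y]
    for _ in range(int(tau / dt)):
        psi = np.exp(-h * (x - c)**2)
        f = (psi @ w) / (psi.sum() + 1e-10) * x * (g - y0)
        ydd = (alpha * (beta * (g - y) - tau * yd) + f) / tau**2
        yd += ydd * dt
        y += yd * dt
        x += (-alpha_x * x / tau) * dt
        out.append(y)
    return np.array(out)

# Learn from a minimum-jerk demonstration, then generalize to a new goal.
dt = 0.01
t = np.linspace(0.0, 1.0, 101)
y_demo = 10 * t**3 - 15 * t**4 + 6 * t**5   # smooth motion from 0 to 1
w, params = learn_dmp_forcing(y_demo, dt)
y_repro = rollout_dmp(w, params, dt)            # reproduces the demo
y_new = rollout_dmp(w, params, dt, goal=2.0)    # same shape, new target
```

Because the forcing term vanishes as the phase decays, the spring-damper part guarantees convergence to the goal, which is what makes online adjustment (e.g. from interaction or ergonomic forces, as in the paper) safe to superimpose on the learned trajectory.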