Computer Science
Human-Computer Interaction
Robotics
Artificial Intelligence
Task (Project Management)
Action (Physics)
Flexibility (Engineering)
Modular Design
Perception
Pose
Computer Vision
Engineering
Biology
Statistics
Quantum Mechanics
Operating Systems
Physics
Mathematics
Neuroscience
Systems Engineering
Authors
Xinyi Yu, Xin Zhang, Chengjun Xu, Linlin Ou
Source
Journal: Neurocomputing
[Elsevier]
Date: 2024-01-01
Volume: 563, Article 126827
Citations: 1
Identifiers
DOI: 10.1016/j.neucom.2023.126827
Abstract
This paper presents a human–robot interaction system (HRIS) that utilizes human perception and action recognition to enable the robot to understand human intentions and interact flexibly with humans. A monocular multi-person three-dimensional (3D) pose estimation method is first proposed to perceive multi-person two-dimensional (2D) and 3D poses in interaction scenarios. A 3D skeleton pose tracking approach is then adopted to maintain the identity of each person across consecutive frames and enhance interaction stability. Next, an action recognition model is developed that exploits the tracked pose features to recognize human intentions. An action-controlled interaction system is built with a modular approach to meet multiple task requirements and support flexible interaction. Within the system, a distance-based safety solution is designed to avoid collisions between humans and robots. Finally, experimental results are presented to demonstrate the feasibility and effectiveness of the proposed methods and system.
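The distance-based safety solution mentioned in the abstract can be illustrated with a small sketch: the minimum Euclidean distance between any tracked 3D joint and the robot is mapped to a velocity scaling factor, so the robot slows as a person approaches and stops inside a protective radius. The code below is a minimal sketch under assumed interfaces and thresholds; the names joints_per_person, robot_tcp, D_STOP, and D_SLOW and their values are hypothetical and not taken from the paper.

# Minimal sketch of a distance-based safety gate, assuming the system exposes
# per-person 3D joint positions (from the monocular pose estimator) and the
# robot tool-centre-point position in the same world frame.
# Thresholds and names are illustrative, not taken from the paper.
import numpy as np

D_STOP = 0.3   # metres: stop the robot below this human-robot distance (assumed value)
D_SLOW = 0.8   # metres: allow full speed above this distance (assumed value)

def min_human_robot_distance(joints_per_person, robot_tcp):
    """Smallest Euclidean distance between any tracked joint and the robot TCP.

    joints_per_person: list of (J, 3) arrays of 3D joint positions, one per person.
    robot_tcp: (3,) array with the robot tool-centre-point position.
    """
    dists = [np.linalg.norm(joints - robot_tcp, axis=1).min()
             for joints in joints_per_person]
    return min(dists) if dists else np.inf

def speed_scale(distance):
    """Map the minimum human-robot distance to a velocity scaling factor in [0, 1]."""
    if distance <= D_STOP:
        return 0.0                                   # too close: stop the robot
    if distance >= D_SLOW:
        return 1.0                                   # far enough: full speed
    return (distance - D_STOP) / (D_SLOW - D_STOP)   # linear ramp in between

# Usage example: two people near a robot whose TCP sits at the origin.
people = [np.random.rand(17, 3) + [0.5, 0.0, 0.0],
          np.random.rand(17, 3) + [1.5, 0.0, 0.0]]
d = min_human_robot_distance(people, np.zeros(3))
print(f"min distance {d:.2f} m -> speed scale {speed_scale(d):.2f}")

The linear ramp between the two thresholds is only one possible mapping; the paper's actual safety policy may differ, but any monotone distance-to-speed rule can be dropped into speed_scale without changing the rest of the sketch.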