Computer science
Coding
Artificial intelligence
Object (grammar)
Motion (physics)
Human-computer interaction
Robot
Set (abstract data type)
Graph
Modality (human-computer interaction)
Task (project management)
Machine learning
Computer vision
Theoretical computer science
Engineering
Gene
Biochemistry
Chemistry
Programming language
Systems engineering
Authors
Weilin Wan, Lei Yang, Lingjie Liu, Zhuoying Zhang, Ruixing Jia, Yi‐King Choi, Jia Pan, Christian Theobalt, Taku Komura, Wenping Wang
Source
Journal: IEEE Robotics and Automation Letters
Date: 2022-04-01
Volume/Issue: 7 (2): 4702-4709
Citations: 4
Identifier
DOI:10.1109/lra.2022.3151614
Abstract
Understanding human intentions during interactions has been a long-standing theme, with applications in human-robot interaction, virtual reality, and surveillance. In this study, we focus on full-body human interactions with large-sized daily objects and aim to predict the future states of objects and humans given a sequential observation of human-object interaction. As there is no dataset dedicated to full-body human interactions with large-sized daily objects, we collected a large-scale dataset containing thousands of interactions for training and evaluation purposes. We also observe that an object's intrinsic physical properties are useful for object motion prediction, and thus design a set of object dynamic descriptors to encode such intrinsic properties. We treat the object dynamic descriptors as a new modality and propose a graph neural network, HO-GCN, to fuse motion data and dynamic descriptors for the prediction task. We show that the proposed network, by consuming dynamic descriptors, achieves state-of-the-art prediction results and generalizes better to unseen objects. We also demonstrate that the predicted results are useful for human-robot collaboration.
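The abstract's core idea of treating object dynamic descriptors as an extra modality fused into a graph network can be illustrated with a minimal sketch. This is not the paper's actual HO-GCN architecture; it is a toy graph-convolution layer in which hypothetical per-node motion features are concatenated with an object-level descriptor before message passing. All node counts, feature dimensions, and the adjacency pattern below are illustrative assumptions.

```python
import numpy as np

def gcn_layer(A, X, W):
    """One graph-convolution layer: symmetrically normalized
    adjacency (with self-loops) times features times weights, ReLU."""
    A_hat = A + np.eye(A.shape[0])                       # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ X @ W, 0.0)

# Hypothetical toy graph: 3 human joint nodes + 1 object node, chain-like links
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

rng = np.random.default_rng(0)
motion = rng.normal(size=(4, 6))                  # per-node motion features
descr = np.tile(rng.normal(size=(1, 2)), (4, 1))  # object descriptor, broadcast to nodes

# Fuse the two modalities by feature concatenation before convolution
X = np.concatenate([motion, descr], axis=1)       # shape (4, 8)
W = rng.normal(size=(8, 4))
H = gcn_layer(A, X, W)
print(H.shape)  # (4, 4)
```

Concatenation is only one way to fuse modalities; the paper's network may use a different fusion scheme, and this sketch simply shows that descriptor features reach every node's update through the shared graph convolution.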