Keywords
Facial expression
Humanoid robot
Nonverbal communication
Human-robot interaction
Imitation
Artificial intelligence
Inverse kinematics
Emotional expression
Computer science
Robotics
Gesture
Psychology
Communication
Cognitive psychology
Biology
Ecology
Authors
Yuhang Hu, Boyuan Chen, Jiong Lin, Yunzhe Wang, Yingke Wang, Cameron Mehlman, Hod Lipson
Source
Journal: Science Robotics
[American Association for the Advancement of Science (AAAS)]
Date: 2024-03-27
Volume/Issue: 9 (88)
Citations: 4
Identifiers
DOI:10.1126/scirobotics.adi4724
Abstract
Large language models are enabling rapid progress in robotic verbal communication, but nonverbal communication is not keeping pace. Physical humanoid robots struggle to express and communicate using facial movement, relying primarily on voice. The challenge is twofold: First, the actuation of an expressively versatile robotic face is mechanically challenging. A second challenge is knowing what expression to generate so that the robot appears natural, timely, and genuine. Here, we propose that both barriers can be alleviated by training a robot to anticipate future facial expressions and execute them simultaneously with a human. Whereas delayed facial mimicry looks disingenuous, facial coexpression feels more genuine because it requires correct inference of the human’s emotional state for timely execution. We found that a robot can learn to predict a forthcoming smile about 839 milliseconds before the human smiles and, using a learned inverse kinematic facial self-model, coexpress the smile simultaneously with the human. We demonstrated this ability using a robot face comprising 26 degrees of freedom. We believe that the ability to coexpress simultaneous facial expressions could improve human-robot interaction.
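The pipeline the abstract describes — anticipate a forthcoming human expression, then convert the predicted expression into motor commands through a learned inverse facial self-model — can be sketched minimally. Everything in this sketch is an illustrative assumption (the landmark count, the linear-extrapolation predictor, the least-squares inverse model, and the synthetic forward model used for the demo); only the 26 degrees of freedom and the roughly 839 ms anticipation horizon come from the abstract, and the paper's actual models are learned neural networks, not these stand-ins.

```python
import numpy as np

N_LANDMARKS = 10   # number of 2-D facial landmarks (illustrative assumption)
N_MOTORS = 26      # facial degrees of freedom, as reported in the abstract

def predict_future_landmarks(history, horizon=0.839, dt=0.1):
    """Anticipate landmarks ~839 ms ahead by linear extrapolation of
    the most recent motion. `history` is (T, N_LANDMARKS*2); the paper
    uses a learned predictive model, this extrapolation is a stand-in."""
    velocity = (history[-1] - history[-2]) / dt
    return history[-1] + velocity * horizon

class InverseFacialSelfModel:
    """Toy inverse self-model: a least-squares map from target landmark
    positions to motor commands, fitted from random self-exploration
    ("motor babbling") data."""

    def fit(self, motor_cmds, observed_landmarks):
        # Solve landmarks -> motors in the least-squares sense.
        self.W, *_ = np.linalg.lstsq(observed_landmarks, motor_cmds,
                                     rcond=None)
        return self

    def command_for(self, target_landmarks):
        return target_landmarks @ self.W

# --- demo with a synthetic forward model: landmarks = motors @ A ---
rng = np.random.default_rng(0)
A = rng.normal(size=(N_MOTORS, N_LANDMARKS * 2))
babble_cmds = rng.normal(size=(200, N_MOTORS))      # random exploration
babble_lms = babble_cmds @ A                        # observed landmarks
model = InverseFacialSelfModel().fit(babble_cmds, babble_lms)

# Simulated human landmark trajectory (e.g., a smile onset) at 10 Hz;
# a uniform drift keeps the demo simple.
t = np.arange(5) * 0.1
history = np.outer(t, np.ones(N_LANDMARKS * 2))
target = predict_future_landmarks(history)
cmd = model.command_for(target)   # issued *before* the expression peaks
```

Passing `cmd` through the synthetic forward model (`cmd @ A`) reproduces `target`, which is the property an inverse self-model needs: commands chosen from a predicted expression should realize that expression when executed.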