Computer science
Animation
Rendering (computer graphics)
Computer graphics (images)
Interactive skeleton-driven simulation
Computer facial animation
Computer animation
Virtual reality
Mixed reality
Augmented reality
Human-computer interaction
Skeletal animation
Motion capture
Computer graphics
Computer vision
Artificial intelligence
Motion (physics)
Authors
Arjan Egges, George Papagiannakis, Nadia Magnenat-Thalmann
Identifier
DOI:10.1007/s00371-007-0113-z
Abstract
In this paper, we present a simple and robust mixed reality (MR) framework that allows for real-time interaction with virtual humans in mixed reality environments under consistent illumination. We examine three crucial parts of this system: interaction, animation and global illumination of virtual humans for an integrated and enhanced presence. The interaction system comprises a dialogue module interfaced with a speech recognition and synthesis system. Alongside speech output, the dialogue system generates face and body motions, which are in turn managed by the virtual human animation layer. Our fast animation engine can handle various types of motions, such as normal key-frame animations, or motions generated on-the-fly by adapting previously recorded clips; real-time idle motions are an example of the latter category. All these different motions are generated and blended on-line, resulting in flexible and realistic animation. Our robust rendering method operates in accordance with the animation layer and is based on a precomputed radiance transfer (PRT) illumination model extended for virtual humans, resulting in a realistic rendition of such interactive virtual characters in mixed reality environments. Finally, we present a scenario that illustrates the interplay and application of our methods, combined under a single framework for presence and interaction in MR.
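The abstract mentions that different motions (key-frame clips, adapted recordings, idle motions) are blended on-line. The paper does not give code, but the standard way to blend two skeletal poses is per-joint quaternion interpolation (slerp) with a blend weight. The sketch below is illustrative only; the joint names, pose representation, and `blend_poses` helper are assumptions, not the authors' actual engine API.

```python
import math

def slerp(q0, q1, t):
    """Spherical linear interpolation between two unit quaternions (w, x, y, z)."""
    dot = sum(a * b for a, b in zip(q0, q1))
    if dot < 0.0:  # take the shorter arc
        q1 = tuple(-c for c in q1)
        dot = -dot
    if dot > 0.9995:  # nearly parallel: linear interpolation, then renormalize
        q = tuple(a + t * (b - a) for a, b in zip(q0, q1))
        n = math.sqrt(sum(c * c for c in q))
        return tuple(c / n for c in q)
    theta = math.acos(dot)
    s0 = math.sin((1.0 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return tuple(s0 * a + s1 * b for a, b in zip(q0, q1))

def blend_poses(pose_a, pose_b, weight_b):
    """Blend two skeleton poses (joint name -> rotation quaternion), weight_b in [0, 1]."""
    return {joint: slerp(pose_a[joint], pose_b[joint], weight_b) for joint in pose_a}

# Example: mix an idle pose with a gesture pose at 25% gesture influence.
idle = {"shoulder": (1.0, 0.0, 0.0, 0.0)}            # identity rotation
gesture = {"shoulder": (0.7071, 0.7071, 0.0, 0.0)}   # ~90 degrees about x
blended = blend_poses(idle, gesture, 0.25)           # ~22.5 degrees about x
```

Running this blend per joint every frame, with weights that ramp up and down over time, yields the smooth on-line transitions between idle and gesture motions that the abstract describes.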