Keywords
Self-disclosure, Gesture, Prosody, Psychology, Modal verb, Intervention (counseling), Process (computing), Robot, Focus (optics), Cognitive psychology, Computer science, Human–computer interaction, Artificial intelligence, Social psychology, Speech recognition, Chemistry, Polymer chemistry, Physics, Optics, Psychiatry, Operating system
Authors
Sharifa Alghowinem,Sooyeon Jeong,Kika Arias,Rosalind W. Picard,Cynthia Breazeal,Hae Won Park
Identifier
DOI:10.1109/fg52635.2021.9666969
Abstract
Self-disclosure is an important part of the mental health treatment process. As interactive technologies become more widely available, many AI agents for mental health prompt their users to self-disclose as part of intervention activities. However, most existing work focuses on linguistic features to classify self-disclosure behavior and does not utilize other multi-modal behavioral cues. We present analyses of people's non-verbal cues (vocal acoustic features, head orientation, and body gestures/movements) exhibited during self-disclosure tasks, based on the human-robot interaction data collected in our previous work. Results from the classification experiments suggest that prosody, head pose, and body posture can each be used independently to detect self-disclosure behavior with high accuracy (up to 81%). Moreover, behavioral cues indicating positive emotions, high engagement, self-soothing, and positive attitudes were found to be positively correlated with self-disclosure. Insights from our work can help build a self-disclosure detection model usable in real time during multi-modal interactions between humans and AI agents.
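The abstract reports that each non-verbal modality (prosody, head pose, body posture) can independently drive a self-disclosure classifier. A minimal sketch of that per-modality setup is below; it is not the authors' pipeline — the feature vectors are synthetic placeholders, the modality names are taken from the abstract, and the classifier is a plain logistic regression trained by gradient descent, assumed here purely for illustration.

```python
import math
import random

random.seed(0)

def sigmoid(z):
    """Numerically stable logistic function."""
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    e = math.exp(z)
    return e / (1.0 + e)

def make_modality_data(n=200, dim=8, shift=1.5):
    """Synthetic stand-in features: label 1 = self-disclosure, 0 = not.
    The two classes are Gaussian clouds separated by `shift` per dimension."""
    X, y = [], []
    for label, mean in ((0, 0.0), (1, shift)):
        for _ in range(n):
            X.append([random.gauss(mean, 1.0) for _ in range(dim)])
            y.append(label)
    return X, y

def train_logreg(X, y, lr=0.1, steps=300):
    """Batch gradient descent on the logistic loss."""
    dim = len(X[0])
    w, b = [0.0] * dim, 0.0
    n = len(y)
    for _ in range(steps):
        gw, gb = [0.0] * dim, 0.0
        for xi, yi in zip(X, y):
            err = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) - yi
            for j in range(dim):
                gw[j] += err * xi[j]
            gb += err
        w = [wj - lr * gwj / n for wj, gwj in zip(w, gw)]
        b -= lr * gb / n
    return w, b

def accuracy(w, b, X, y):
    correct = 0
    for xi, yi in zip(X, y):
        z = sum(wj * xj for wj, xj in zip(w, xi)) + b
        correct += (z > 0) == (yi == 1)
    return correct / len(y)

# One independent classifier per non-verbal modality, as in the abstract.
results = {}
for modality in ["prosody", "head_pose", "body_posture"]:
    X, y = make_modality_data()
    w, b = train_logreg(X, y)
    results[modality] = accuracy(w, b, X, y)
print(results)
```

On these well-separated synthetic clouds each modality-specific classifier scores high training accuracy; the paper's reported figure (up to 81%) comes from real interaction data, where the classes overlap far more.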