Crowdsourcing
Computer science
Preference
Probabilistic logic
Ratio (proportion)
Voice activity detection
Speech recognition
Natural language processing
Artificial intelligence
Machine learning
Data science
Speech processing
World Wide Web
Statistics
Physics
Mathematics
Quantum mechanics
Authors
Rafael Zequeira Jiménez, Laura Fernández Gallardo, Sebastian Möller
Identifier
DOI: 10.1109/qomex.2017.7965678
Abstract
Crowdsourcing has established itself as a powerful tool to collect human input for data acquisition and labeling. Conventional laboratory experiments can now be administered to a wider and more diverse audience. This paper presents a study performed both in a laboratory and on a mobile-crowdsourcing platform, adopting a paired-comparison setup to obtain ratings of voice likability. We show the considerations taken to adequately adapt the laboratory-based test to the remote-labor approach. Once all pair-comparison answers were collected, preference choice matrices were built and the Bradley-Terry-Luce probabilistic choice model was applied to estimate a ratio scale of preferences, reflecting the voice likability scores. Our results show a strong correlation between the scores obtained by the two approaches, which indicates the validity of crowdsourcing for the acquisition of voice likability ratings. This is of great benefit when datasets need to be quickly and reliably labeled for speech applications relying on the detection or synthesis of speaker and voice characteristics.
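The abstract's core estimation step, fitting a Bradley-Terry-Luce model to a pairwise preference matrix to obtain ratio-scale scores, can be sketched as follows. This is a minimal illustration using the standard MM (minorization-maximization) iteration for BTL maximum likelihood, not the authors' actual implementation; the function name, the toy win matrix, and the iteration parameters are assumptions for demonstration.

```python
import numpy as np

def bradley_terry_scores(wins, n_iter=1000, tol=1e-10):
    """Estimate BTL preference scores from a pairwise win-count matrix.

    wins[i, j] = number of times stimulus i was preferred over j.
    Returns scores normalized to sum to 1 (a ratio scale).
    Uses the classical MM update: p_i <- W_i / sum_j n_ij / (p_i + p_j).
    """
    m = wins.shape[0]
    comparisons = wins + wins.T          # total comparisons per pair
    total_wins = wins.sum(axis=1)        # total wins per stimulus
    p = np.ones(m) / m                   # uniform initial scores
    for _ in range(n_iter):
        denom = np.zeros(m)
        for i in range(m):
            for j in range(m):
                if i != j and comparisons[i, j] > 0:
                    denom[i] += comparisons[i, j] / (p[i] + p[j])
        p_new = total_wins / denom
        p_new /= p_new.sum()             # renormalize to a ratio scale
        if np.max(np.abs(p_new - p)) < tol:
            p = p_new
            break
        p = p_new
    return p

# Hypothetical choice matrix for three voices: voice 0 is preferred most.
wins = np.array([[0, 3, 4],
                 [1, 0, 2],
                 [0, 2, 0]])
scores = bradley_terry_scores(wins)
```

Because the scores form a ratio scale, statements like "voice A is twice as likable as voice B" are meaningful, which is what allows the laboratory and crowdsourcing scores to be compared by correlation.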