Computer Science
Human-Computer Interaction
Adaptation (eye)
Complement (music)
Artificial Intelligence
Machine Learning
Black Box
Task (project management)
Human-Machine Systems
Visual Analytics
Learning Analytics
Interactive Learning
Visualization
Multimedia
Engineering
Gene
Optics
Physics
Phenotype
Biochemistry
Chemistry
Complementarity
Systems Engineering
Authors
Nadia Boukhelifa,Anastasia Bezerianos,Évelyne Lutton
Source
Journal: Human-Computer Interaction Series
Date: 2018-01-01
Pages: 341-360
Cited by: 23
Identifier
DOI: 10.1007/978-3-319-90403-0_17
Abstract
The evaluation of interactive machine learning systems remains a difficult task. These systems learn from and adapt to the human, but at the same time, the human receives feedback and adapts to the system. Getting a clear understanding of these subtle mechanisms of co-operation and co-adaptation is challenging. In this chapter, we report on our experience in designing and evaluating various interactive machine learning applications from different domains. We argue for coupling two types of validation: algorithm-centered analysis, to study the computational behaviour of the system; and human-centered evaluation, to observe the utility and effectiveness of the application for end-users. We use a visual analytics application for guided search, built using an interactive evolutionary approach, as an exemplar of our work. Our observation is that human-centered design and evaluation complement algorithmic analysis, and can play an important role in addressing the "black-box" effect of machine learning. Finally, we discuss research opportunities that require human-computer interaction methodologies, in order to support both the visible and hidden roles that humans play in interactive machine learning.
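The following is a minimal, illustrative sketch (not the authors' actual system) of the kind of interactive evolutionary loop the abstract alludes to: the algorithm proposes candidate solutions, a human scores them, and the population adapts to that feedback. The sketch also records the two kinds of measures the chapter argues should be coupled: algorithm-centered metrics (best score, population diversity) and human-centered metrics (how many ratings the user must provide). All names, parameters, and the simulated `user_feedback` function are assumptions made for illustration only.

```python
# Sketch of an interactive evolutionary algorithm with coupled evaluation:
# algorithm-centered metrics (best score, diversity) and a human-centered
# metric (number of ratings requested). Hypothetical parameters throughout.
import random
import statistics

GENOME_LEN = 8       # each candidate is a vector of 8 values in [0, 1]
POP_SIZE = 12
GENERATIONS = 10
MUTATION_STD = 0.1

def random_candidate():
    return [random.random() for _ in range(GENOME_LEN)]

def user_feedback(candidate):
    """Stand-in for an interactive human rating. Here it simply prefers
    values close to 0.5; in a real system this score would come from
    ratings given through the visual analytics interface."""
    return -sum(abs(x - 0.5) for x in candidate)

def mutate(candidate):
    return [min(1.0, max(0.0, x + random.gauss(0.0, MUTATION_STD)))
            for x in candidate]

def diversity(population):
    """Algorithm-centered metric: mean per-gene standard deviation."""
    return statistics.mean(statistics.pstdev(col) for col in zip(*population))

population = [random_candidate() for _ in range(POP_SIZE)]
ratings_requested = 0   # human-centered metric: interaction cost

for gen in range(GENERATIONS):
    scored = [(user_feedback(c), c) for c in population]
    ratings_requested += len(scored)
    scored.sort(key=lambda sc: sc[0], reverse=True)
    best_score = scored[0][0]
    # keep the top half, refill the population by mutating survivors
    survivors = [c for _, c in scored[: POP_SIZE // 2]]
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POP_SIZE - len(survivors))]
    print(f"gen {gen:2d}  best={best_score:6.3f}  "
          f"diversity={diversity(population):.3f}")

print(f"total ratings requested from the user: {ratings_requested}")
```

In this toy setup, convergence and diversity curves speak to the computational behaviour of the search, while the count of ratings requested stands in for the cost and effort borne by the end-user, mirroring the two complementary validation perspectives discussed in the chapter.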