Interface
Human-machine systems
Operationalization
Computer science
Relevance (law)
Identification (biology)
Human-computer interaction
Artificial intelligence
Metric (unit)
Knowledge management
Process management
Machine learning
Engineering
Philosophy
Operations management
Plant
Epistemology
Political science
Computer hardware
Law
Biology
Authors
J. Ernest Wilkins, D. A. Sparrow, Caitlan A. Fealing, Brian D. Vickers, Kristina A. Ferguson, Heather Wojton
Abstract
We propose and present a parallelized metric framework for evaluating human-machine teams that draws upon current knowledge of human-systems interfacing and integration but is rooted in team-centric concepts. Humans and machines working together as a team involves interactions that will only increase in complexity as machines become more intelligent, capable teammates. Assessing such teams will require explicit focus on not just the human-machine interfacing but the full spectrum of interactions between and among agents. As opposed to focusing on isolated qualities, capabilities, and performance contributions of individual team members, the proposed framework emphasizes the collective team as the fundamental unit of analysis and the interactions of the team as the key evaluation targets, with individual human and machine metrics still vital but secondary. With teammate interaction as the organizing diagnostic concept, the resulting framework arrives at a parallel assessment of the humans and machines, analyzing their individual capabilities less with respect to purely human or machine qualities and more through the prism of contributions to the team as a whole. This treatment reflects the increased machine capabilities and will allow for continued relevance as machines develop to exercise more authority and responsibility. This framework allows for identification of features specific to human-machine teaming that influence team performance and efficiency, and it provides a basis for operationalizing in specific scenarios. Potential applications of this research include test and evaluation of complex systems that rely on human-system interaction, including, though not limited to, autonomous vehicles, command and control systems, and pilot control systems.
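The abstract's organizing idea, team-level interaction metrics as the primary evaluation targets with individual human and machine metrics treated in parallel as secondary inputs, can be pictured as a simple data structure. The Python sketch below is a hypothetical illustration only: the class names, metric names, and the interaction_weight roll-up are assumptions for exposition, not the authors' operationalization.

from dataclasses import dataclass, field
from typing import Dict

@dataclass
class AgentMetrics:
    # Individual (human or machine) metrics; secondary inputs in this sketch.
    scores: Dict[str, float] = field(default_factory=dict)

@dataclass
class TeamEvaluation:
    # The team is the unit of analysis; interaction metrics are the primary targets.
    interaction_scores: Dict[str, float]
    humans: Dict[str, AgentMetrics]
    machines: Dict[str, AgentMetrics]

    def team_score(self, interaction_weight: float = 0.7) -> float:
        # Weighted roll-up: interaction metrics dominate, individual metrics are secondary.
        interaction = sum(self.interaction_scores.values()) / max(len(self.interaction_scores), 1)
        agents = list(self.humans.values()) + list(self.machines.values())
        individual_vals = [v for a in agents for v in a.scores.values()]
        individual = sum(individual_vals) / max(len(individual_vals), 1)
        return interaction_weight * interaction + (1 - interaction_weight) * individual

# Usage sketch with made-up scores: humans and machines are assessed in parallel,
# and the team-level interaction scores carry most of the weight.
team = TeamEvaluation(
    interaction_scores={"coordination": 0.80, "trust_calibration": 0.70},
    humans={"operator": AgentMetrics({"workload_margin": 0.65})},
    machines={"autonomy": AgentMetrics({"task_accuracy": 0.92})},
)
print(team.team_score())  # prints the weighted team score under these assumed numbers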