Turing
Automation
Computer science
Automotive engineering
Aeronautics
Engineering
Human-computer interaction
Programming language
Mechanical engineering
Authors
Ennio Cascetta, Armando Cartenì, Luigi Di Francesco
Identifier
DOI:10.1016/j.trc.2021.103499
Abstract
• If AVs drove like humans, they would reduce interaction problems with drivers and passengers.
• The ability of AVs to be indistinguishable from a human driver was tested through a Turing approach.
• A real on-the-road experiment with 550 university students was performed in Italy.
• In most cases the Artificial Intelligence (AI) was indistinguishable from the human driver.
• The Artificial Intelligence of the cruise control is less recognizable than that of the lane keeping.

Fully automated vehicles (AVs) are set to become a reality in the coming decades, and changes are to be expected in user perceptions and behavior. While AV acceptability has been widely studied, changes in human drivers' behavior and in passengers' reactions have received less attention. It is not yet possible to ascertain the risk of driver behavioral changes such as overreaction, and the corresponding safety problems, in mixed traffic with partially automated vehicles. Nor has the potential unease of car occupants accustomed to human control been properly investigated when they are exposed to automatic maneuvers. The conjecture proposed in this paper is that automation Level 2 vehicles do not induce potentially adverse effects in traditional vehicle drivers' behavior or in occupants' reactions, provided they are indistinguishable from human-driven vehicles. To this end, the paper proposes a Turing approach to test the "humanity" of automation Level 2 vehicles. The proposed test was applied to the results of an experimental campaign carried out in Italy: 546 car passengers were interviewed on board Level 2 cars in which they could not see the driver. They were asked whether a specific driving action (braking, accelerating, lane keeping) had been performed by the human driver or by the automatic on-board software under different traffic conditions (congestion and speed). Estimation results show that in most cases the interviewees were unable to distinguish the Artificial Intelligence (AI) from the human driver: their responses were statistically indistinguishable from random guessing at the 95% confidence level (proportion of correct responses statistically equal to 50%). However, in the case of moderate braking and of lane keeping at speeds above 100 km/h and in heavy traffic congestion, respondents distinguished AI control from the human driver above pure chance, with 62–69% correct response rates. These findings, if confirmed in other case studies, could significantly affect AV acceptability, and could also contribute to AV design as well as to long-debated ethical questions. AI driving software could be designed and tested for "humanity", as long as safety is guaranteed, and autonomous cars could be allowed to circulate as long as they cannot be distinguished from human-driven vehicles in recurrent driving conditions.
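The abstract's core statistical check is a comparison of the share of correct "human vs. AI" identifications against the 50% expected under pure guessing. The paper does not specify its exact estimation procedure here, so the sketch below uses an exact binomial test purely for illustration; the counts are hypothetical placeholders, not data from the study.

```python
# Minimal sketch (not the authors' code): test whether the observed proportion of
# correct human-vs-AI identifications differs from the 50% expected under chance.
from scipy.stats import binomtest

n_responses = 200   # hypothetical number of answers for one driving condition
n_correct = 104     # hypothetical number of correct identifications

result = binomtest(n_correct, n_responses, p=0.5, alternative="two-sided")
print(f"observed proportion of correct answers: {n_correct / n_responses:.2f}")
print(f"p-value against 50% chance level: {result.pvalue:.3f}")

# If the p-value exceeds 0.05, the responses cannot be distinguished from random
# guessing at the 95% confidence level, i.e. the AI "passes" the Turing-style test
# for that driving condition.
```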