Benchmarking dynamic neural-network models of the human speed-accuracy tradeoff

Authors
Ajay Subramanian, Elena Sizikova, Omkar Kumbhar, Najib J. Majaj, Denis G. Pelli
Source
Journal: Journal of Vision [Association for Research in Vision and Ophthalmology (ARVO)]
Volume/Issue: 22(14): 4359. Cited by: 2
Identifier
DOI: 10.1167/jov.22.14.4359
Abstract

People take a variable amount of time (0.1 to 10 s) to recognize an object and can trade speed for accuracy. Various time-constrained tasks demand a wide range of accuracy and latency. Previous work (Spoerer'20) has modeled only modest speed-accuracy tradeoffs (SATs), with a min-to-max range of merely 6% accuracy and 200 ms reaction time, a tiny fraction of the human range. Here, we collect and present a public human benchmark in which we use image perturbations to adjust task difficulty and increase the accuracy range to more than 50%. Furthermore, we show that dynamic neural networks are a promising model of the SAT and capture the behavior without needing recurrence. 142 online participants categorized CIFAR-10 images with controlled reaction time. Reaction time (RT) was defined as the elapsed time between stimulus presentation and a keypress response. We ran 5 blocks of 300 trials, each with a different reaction time from 200 to 1000 ms, and repeated the experiment with 4 different viewing conditions: color, grayscale, noise, and blur. Three networks were trained on CIFAR-10 image classification: MSDNet (Huang'17), SCAN (Zhang'19), and ConvRNN (Spoerer'20). Using FLOPs as an analogue for human reaction time, we tested these networks by forcing them to "respond" using different amounts of computation, across all viewing conditions. We compared the three networks and humans using two metrics: accuracy range (the difference between maximum and minimum accuracy as reaction time is varied) and the correlation between speed-accuracy tradeoff curves. MSDNet gives a better account than previous attempts without needing recurrence. When trained with noise, it shows high correlation (0.93) with the human SAT. However, humans are much more flexible, with a large 51% accuracy range, while the best network, MSDNet trained with noise, shows only 19%. Thus, our benchmark presents a challenging goal for future work that aims to model the SAT.
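The two comparison metrics from the abstract are straightforward to compute once each system's speed-accuracy tradeoff (SAT) curve is expressed as accuracy per time or compute budget. The sketch below uses made-up illustrative curves (the numbers are not taken from the paper's data) to show how the accuracy range and the curve-to-curve Pearson correlation would be calculated.

```python
import numpy as np

# Illustrative SAT curves: accuracy at each enforced budget.
# For humans, budgets are the 200-1000 ms reaction-time blocks;
# for a network, budgets are increasing amounts of computation (FLOPs).
# These values are hypothetical, chosen only to demonstrate the metrics.
human_sat = np.array([0.35, 0.55, 0.70, 0.80, 0.86])
model_sat = np.array([0.60, 0.68, 0.73, 0.77, 0.79])

# Metric 1: accuracy range = max accuracy minus min accuracy
# across the tested budgets.
human_range = human_sat.max() - human_sat.min()
model_range = model_sat.max() - model_sat.min()

# Metric 2: Pearson correlation between the two SAT curves,
# measuring whether accuracy rises with budget in the same way.
correlation = np.corrcoef(human_sat, model_sat)[0, 1]

print(f"human range: {human_range:.2f}")   # flexibility of the human SAT
print(f"model range: {model_range:.2f}")   # flexibility of the model SAT
print(f"correlation: {correlation:.3f}")   # similarity of curve shapes
```

Note that a model can track the shape of the human curve closely (high correlation) while still covering a much narrower accuracy range, which is exactly the gap the benchmark highlights.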
