Authors
Megan A. K. Peters, Thomas Thesen, Yoshiaki Ko, Brian Maniscalco, Chad Carlson, Matthew Davidson, Werner Doyle, Ruben Kuzniecky, Orrin Devinsky, Eric Halgren, Hakwan Lau
Abstract
Our perceptual experiences are accompanied by a subjective sense of certainty. These confidence judgements typically correlate meaningfully with the probability that the relevant decision is correct [1–6], bolstering prevailing opinion that both perceptual decisions and confidence optimally reflect the probability of having made a correct decision [6–13]. However, recent behavioural reports suggest that confidence computations overemphasize information supporting a decision, while selectively down-weighting evidence for other possible choices [14–19]. This view remains controversial, and supporting neurobiological evidence has been lacking. Here we use intracranial electrophysiological recordings in humans together with machine-learning techniques to demonstrate that perceptual decisions and confidence rely on spatiotemporally separable neural representations in a face/house discrimination task. We then use normative computational models to show that confidence relies excessively on evidence supporting a decision (for example, face evidence for a ‘face’ decision), even while decisions themselves reflect the optimal balance of all evidence (for example, both face and house evidence). Thus, confidence may not reflect a readout of the probability of being correct; instead, observers may sacrifice optimality in favour of self-consistency [20] in the face of limited neural and computational resources. Although seemingly suboptimal, this strategy may reflect the inference problem that perceptual systems are evolutionarily optimized to solve.

Peters et al. use intracranial recordings and machine-learning techniques to show that human subjects under-use decision-incongruent evidence in the brain when computing perceptual confidence.
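The core computational claim can be illustrated with a minimal sketch. The function below is a hypothetical toy model (not the authors' actual fitted model): the decision weighs face and house evidence equally, while confidence down-weights the decision-incongruent evidence by an assumed factor `w_incongruent < 1`, producing the decision-congruent bias described in the abstract.

```python
def decide_and_rate(e_face, e_house, w_incongruent=0.2):
    """Toy model of decision vs. confidence computation.

    e_face, e_house : evidence strengths for each category
    w_incongruent   : assumed weight (< 1) on decision-incongruent
                      evidence in the confidence computation
    """
    # Decision: optimal, balanced comparison of both evidence sources
    decision = "face" if e_face > e_house else "house"

    # Confidence: full weight on decision-congruent evidence,
    # reduced weight on decision-incongruent evidence
    congruent = max(e_face, e_house)
    incongruent = min(e_face, e_house)
    confidence = congruent - w_incongruent * incongruent
    return decision, confidence

# With equal incongruent down-weighting, two trials with the same
# evidence balance (and thus the same accuracy) can still yield
# different confidence if congruent evidence magnitude differs:
print(decide_and_rate(1.0, 0.5))   # strong congruent evidence
print(decide_and_rate(0.6, 0.1))   # same margin, weaker congruent evidence
```

Note that both calls share the same evidence margin (0.5), so an observer tracking only the balance of evidence would report identical confidence; the congruent-evidence bias breaks that equivalence.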