Stereopsis
Depth perception
Visual search
Artificial intelligence
Computer science
Computer vision
Binocular disparity
Segmentation
Perception
Psychology
Neuroscience
Authors
Bochao Zou, Yue Liu, Jeremy M. Wolfe
Source
Journal: Vision Research
Publisher: Elsevier
Date: 2022-05-13
Volume: 198, Article 108061
Citations: 7
Identifier
DOI: 10.1016/j.visres.2022.108061
Abstract
Stereoscopic depth has a mixed record as a guiding attribute in visual attention. Visual search can be efficient if the target lies at a unique depth, whereas automatic segmentation of search arrays into different depth planes does not appear to be pre-attentive. These prior findings describe bottom-up, stimulus-driven depth guidance. Here, we ask about the top-down selection of depth information. To assess the ability to direct attention to specific depth planes, Experiment 1 used the centroid judgment paradigm, which permits quantitative measures of selective processing of items of different depths or colors. Experiment 1 showed that a subset of observers could deploy specific attention filters for each of eight depth planes, suggesting that at least some observers can direct attention to a specific depth plane quite precisely. Experiment 2 used eight depth planes in a visual search experiment. Observers were encouraged to guide their attention to far or near depth planes with an informative but imperfect cue. The benefits of this probabilistic cue were small. However, this may not be a specific problem with guidance by stereoscopic depth: equivalently poor results were obtained with color. To confirm that depth guidance in search is possible, Experiment 3 presented items in only two depth planes. In this case, information about the target depth plane allowed observers to search more efficiently, replicating earlier work. We conclude that top-down guidance by stereoscopic depth is possible but that it is hard to apply the full range of our stereoscopic ability in search.