Computer science
Autofocus
Optics (focusing)
Context (archaeology)
Process (computing)
Computer vision
Frame (networking)
Frame rate
Artificial intelligence
Interface (matter)
Human-computer interaction
Real-time computing
Biology
Operating system
Optics
Physics
Maximum bubble pressure method
Paleontology
Bubble
Parallel computing
Telecommunications
Authors
Abdullah Abuolaim, Abhijith Punnappurath, Michael S. Brown
Identifiers
DOI:10.1007/978-3-030-01267-0_32
Abstract
Autofocus (AF) on smartphones is the process of determining how to move a camera’s lens such that certain scene content is in focus. The underlying algorithms used by AF systems, such as contrast detection and phase differencing, are well established. However, determining a high-level objective regarding how to best focus a particular scene is less clear. This is evident in part from the fact that different smartphone cameras employ different AF criteria; for example, some attempt to keep items in the center in focus, others give priority to faces, while others maximize the sharpness of the entire scene. The fact that different objectives exist raises the research question of whether there is a preferred objective. This becomes more interesting when AF is applied to videos of dynamic scenes. The work in this paper aims to revisit AF for smartphones within the context of temporal image data. As part of this effort, we describe the capture of a new 4D dataset that provides access to a full focal stack at each time point in a temporal sequence. Based on this dataset, we have developed a platform and associated application programming interface (API) that mimic real AF systems, restricting lens motion within the constraints of a dynamic environment and frame capture. Using our platform, we evaluated several high-level focusing objectives and found interesting insights into what users prefer. We believe our new temporal focal stack dataset, AF platform, and initial user-study findings will be useful in advancing AF research.
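To make the contrast-detection idea mentioned in the abstract concrete, below is a minimal Python sketch of one AF decision over a focal stack, with lens motion limited between frames in the spirit of the platform's constraints. This is not the paper's actual API: the focal-stack array layout, the variance-of-Laplacian focus measure, and the `max_step` motion limit are illustrative assumptions.

```python
import numpy as np

def sharpness(image):
    """Focus measure: variance of a finite-difference Laplacian.
    (A common contrast metric; the paper's platform may use another.)"""
    img = image.astype(np.float64)
    lap = (-4.0 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return lap.var()

def contrast_af_step(focal_stack, current_pos, max_step=2):
    """One contrast-detection AF decision at a single time point.

    focal_stack : array of shape (num_lens_positions, H, W) captured at
                  this frame (hypothetical layout, not the dataset format).
    current_pos : index of the current lens position.
    max_step    : how far the lens may move before the next frame,
                  mimicking a restriction on lens motion.
    Returns the reachable lens position with the highest contrast.
    """
    lo = max(0, current_pos - max_step)
    hi = min(len(focal_stack), current_pos + max_step + 1)
    scores = [sharpness(focal_stack[i]) for i in range(lo, hi)]
    return lo + int(np.argmax(scores))

if __name__ == "__main__":
    # Synthetic demo: a temporal sequence of focal stacks (T frames,
    # P lens positions); real data would come from the paper's 4D dataset.
    rng = np.random.default_rng(0)
    T, P, H, W = 5, 10, 64, 64
    sequence = rng.random((T, P, H, W))
    pos = 0
    for t in range(T):
        pos = contrast_af_step(sequence[t], pos, max_step=2)
        print(f"frame {t}: lens position -> {pos}")
```

In this sketch, the higher-level focusing objective (center priority, face priority, whole-scene sharpness) would enter by restricting or weighting the image region over which `sharpness` is computed.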