Virtual reality
Computer science
Coding (social sciences)
Degree (music)
Computer graphics (images)
Computer vision
Artificial intelligence
Mathematics
Acoustics
Statistics
Physics
Authors
Yuanyuan Xu,Taoyu Yang,Zengjie Tan,Haolun Lan
Identifier
DOI:10.1109/icassp43922.2022.9746379
Abstract
Panoramic or 360-degree virtual reality videos have high resolution, frame rate, and visual quality, which demand efficient coding. Although a user watching a 360-degree video can switch viewing angles, only a portion of the video in the user's Field of View (FoV) is displayed at any time. In this paper, we propose an FoV-based coding scheme for 360-degree videos, which allocates more bits to tiles of the predicted FoV area than to other tiles. Taking possible FoV prediction error into account, the proposed scheme aims to minimize the expected weighted distortion of the FoV region, where different weights are given to tiles at different locations to represent the influence of the projection from the spherical domain to the 2D plane. Accordingly, an adaptive tile-level quantization parameter (QP) selection scheme is derived. Simulation results demonstrate the effectiveness of the proposed scheme.
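To make the idea of FoV-driven tile-level QP selection concrete, below is a minimal sketch, not the authors' algorithm. It assumes an equirectangular projection with a uniform tile grid, uses WS-PSNR-style cosine latitude weights as a stand-in for the projection influence weights, takes a per-tile FoV probability from some external predictor, and maps a tile's expected weighted importance to a QP offset with a simple heuristic; all of these choices are illustrative assumptions, not details from the paper.

```python
"""Hedged sketch of FoV-based tile-level QP selection (illustrative only)."""
import math


def latitude_weight(tile_row: int, n_rows: int) -> float:
    """Cosine weight of the tile row's center latitude (ERP assumption)."""
    # Center latitude of the row, in radians, from +pi/2 (top) to -pi/2 (bottom).
    lat = math.pi / 2 - (tile_row + 0.5) * math.pi / n_rows
    return math.cos(lat)


def tile_qp_map(fov_prob, base_qp: int = 32, max_offset: int = 8):
    """Map per-tile FoV probabilities to per-tile QPs.

    fov_prob: 2D list, fov_prob[r][c] in [0, 1] is the predicted probability
              that tile (r, c) overlaps the user's FoV.
    Tiles with higher expected weighted importance get a lower QP (more bits);
    the rest get up to base_qp + max_offset.
    """
    n_rows = len(fov_prob)
    n_cols = len(fov_prob[0])

    # Expected importance = FoV probability * projection weight.
    importance = [
        [fov_prob[r][c] * latitude_weight(r, n_rows) for c in range(n_cols)]
        for r in range(n_rows)
    ]
    peak = max(max(row) for row in importance) or 1.0

    qp = [[0] * n_cols for _ in range(n_rows)]
    for r in range(n_rows):
        for c in range(n_cols):
            rel = importance[r][c] / peak  # normalize to [0, 1]
            qp[r][c] = base_qp + round((1.0 - rel) * max_offset)
    return qp


if __name__ == "__main__":
    # Toy 4x8 tile grid: predicted FoV near the equator, columns 3-5.
    probs = [[0.9 if 1 <= r <= 2 and 3 <= c <= 5 else 0.1 for c in range(8)]
             for r in range(4)]
    for row in tile_qp_map(probs):
        print(row)
```

In this toy run the equatorial tiles inside the predicted FoV keep the base QP, while polar and out-of-FoV tiles are quantized more coarsely; the actual scheme in the paper derives the QPs from an expected weighted distortion minimization rather than this normalization heuristic.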