Viewport
Computer science
User experience design
Artificial intelligence
Computer graphics (images)
Computer vision
Human-computer interaction
Authors
Jinyu Chen, Xianzhuo Luo, Miao Hu, Di Wu, Yipeng Zhou
Identifier
DOI: 10.1109/TMM.2020.3033127
Abstract
In 360-degree video streaming, users commonly watch a video scene within a Field of View (FoV). This observation provides an opportunity to save bandwidth by predicting and then prefetching only the video tiles within the FoV. However, existing FoV prediction methods seldom consider the diversity of user behaviors and the impact of different video genres. Thus, previous one-size-fits-all models cannot make accurate predictions for users with different behavior patterns. In this paper, we propose a user-aware viewport prediction algorithm called Sparkle, a practical white-box approach for FoV prediction. Instead of training a single learning model to predict the behaviors of all users, our proposed algorithm is tailored to each individual user. In particular, unlike other learning models, our prediction model is completely explainable and all of its parameters have physical meanings. We first conduct a measurement study to analyze real user behaviors and observe that view orientation fluctuates sharply and that user posture has a significant impact on viewport movement. Moreover, cross-user similarity varies across video genres. Inspired by these insights, we design a user-aware viewport prediction algorithm that mimics a user's viewport movement on the tile map and determines how a user will change the viewport angle based on their own trajectory and the behaviors of similar users in the past time window. Extensive evaluations on real datasets demonstrate that our proposed algorithm outperforms state-of-the-art benchmark methods (e.g., LSTM-based methods) by over 5%, and its prediction accuracy is much more stable across various types of 360-degree videos than that of previous methods.
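The abstract describes combining a user's own recent trajectory with the behavior of similar users to predict the next viewport. The sketch below is a hypothetical illustration of that general idea, not the paper's actual Sparkle algorithm: it blends a linear extrapolation of the user's own (yaw, pitch) trajectory with the mean next-step motion of the most similar peer trajectories (the function name, `alpha` weight, and similarity measure are all assumptions).

```python
import numpy as np

def predict_viewport(history, peer_histories, alpha=0.7):
    """Hypothetical viewport predictor (illustrative only).

    history: (T, 2) array of the user's past (yaw, pitch) angles in degrees.
    peer_histories: list of (T, 2) arrays from other users on the same video.
    alpha: weight on the user's own trajectory (1 - alpha goes to peers).
    """
    history = np.asarray(history, dtype=float)
    # Own-motion estimate: extrapolate the most recent angular velocity.
    own_pred = history[-1] + (history[-1] - history[-2])

    # Cross-user estimate: average the next-step displacement of the peers
    # whose recent trajectory is closest (Euclidean) to this user's.
    dists = [np.linalg.norm(np.asarray(p, dtype=float)[-2:] - history[-2:])
             for p in peer_histories]
    nearest = np.argsort(dists)[: max(1, len(peer_histories) // 2)]
    peer_steps = [np.asarray(peer_histories[i], dtype=float)[-1]
                  - np.asarray(peer_histories[i], dtype=float)[-2]
                  for i in nearest]
    peer_pred = history[-1] + np.mean(peer_steps, axis=0)

    pred = alpha * own_pred + (1 - alpha) * peer_pred
    # Wrap yaw into [-180, 180); clip pitch to [-90, 90].
    pred[0] = (pred[0] + 180.0) % 360.0 - 180.0
    pred[1] = np.clip(pred[1], -90.0, 90.0)
    return pred
```

The predicted angle could then be mapped onto the tile grid to decide which tiles to prefetch; the white-box appeal of such a scheme is that both the extrapolation weight and the peer-similarity term have direct physical interpretations.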