Computer science
Benchmark (surveying)
Quality of experience
Domain (mathematical analysis)
Domain adaptation
Adaptation (eye)
Feature (linguistics)
Time domain
Video quality
Convolutional neural network
Artificial intelligence
Machine learning
Real-time computing
Data mining
Quality of service
Computer vision
Computer network
Metric (unit)
Geodesy
Classifier (UML)
Operations management
Geography
Economics
Mathematics
Linguistics
Philosophy
Mathematical analysis
Physics
Optics
Authors
Leida Li, Pengfei Chen, Weisi Lin, Mai Xu, Guangming Shi
Identifier
DOI: 10.1109/tip.2022.3190711
Abstract
Due to the rapid increase in video traffic and relatively limited delivery infrastructure, end users often experience dynamically varying quality over time when viewing streaming videos. The user quality-of-experience (QoE) must be continuously monitored to deliver an optimized service. However, modern approaches for continuous-time video QoE estimation require densely annotating the continuous-time QoE labels, which is labor-intensive and time-consuming. To cope with such limitations, we propose a novel weakly-supervised domain adaptation approach for continuous-time QoE evaluation, by making use of a small amount of continuously labeled data in the source domain and abundant weakly-labeled data (only containing the retrospective QoE labels) in the target domain. Specifically, given a pair of videos from source and target domains, effective spatiotemporal segment-level feature representation is first learned by a combination of 2D and 3D convolutional networks. Then, a multi-task prediction framework is developed to simultaneously achieve continuous-time and retrospective QoE predictions, where a quality attentive adaptation approach is investigated to effectively alleviate the domain discrepancy without hampering the prediction performance. This approach is enabled by explicitly attending to the video-level discrimination and segment-level transferability in terms of the domain discrepancy. Experiments on benchmark databases demonstrate that the proposed method significantly improves the prediction performance under the cross-domain setting.
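The abstract outlines a segment-level 2D+3D convolutional feature extractor feeding a multi-task head (continuous-time and retrospective QoE) together with a domain-adaptation component. The sketch below is an illustrative reading of that pipeline only, not the authors' implementation: all layer sizes, the key-frame pooling, and the use of a plain gradient-reversal domain discriminator (as a stand-in for the paper's quality-attentive adaptation, whose details are not given in the abstract) are assumptions.

```python
# Illustrative sketch (assumed architecture, not the authors' code):
# 2D + 3D convolutional segment features, multi-task QoE heads, and a
# segment-level domain discriminator trained through gradient reversal.
import torch
import torch.nn as nn
from torch.autograd import Function


class GradReverse(Function):
    """Identity in the forward pass; reversed, scaled gradient in the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


class QoENet(nn.Module):
    def __init__(self, feat_dim=128):
        super().__init__()
        # 2D branch: spatial features from one key frame per segment (assumed choice).
        self.conv2d = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # 3D branch: spatiotemporal features over the whole segment clip.
        self.conv3d = nn.Sequential(
            nn.Conv3d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.fuse = nn.Linear(64 + 64, feat_dim)
        # Multi-task heads: continuous-time (per segment) and retrospective (per video) QoE.
        self.continuous_head = nn.Linear(feat_dim, 1)
        self.retrospective_head = nn.Linear(feat_dim, 1)
        # Domain discriminator fed through gradient reversal (segment-level adaptation).
        self.domain_head = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, segments, lambd=1.0):
        # segments: (batch, n_seg, 3, T, H, W) clips per video.
        b, n, c, t, h, w = segments.shape
        clips = segments.reshape(b * n, c, t, h, w)
        key_frames = clips[:, :, t // 2]                        # middle frame of each segment
        f2d = self.conv2d(key_frames).flatten(1)                # (b*n, 64)
        f3d = self.conv3d(clips).flatten(1)                     # (b*n, 64)
        feat = torch.relu(self.fuse(torch.cat([f2d, f3d], dim=1)))
        feat = feat.view(b, n, -1)                              # segment-level features
        continuous_qoe = self.continuous_head(feat).squeeze(-1)         # (b, n_seg)
        retrospective_qoe = self.retrospective_head(feat.mean(dim=1))   # (b, 1)
        domain_logit = self.domain_head(GradReverse.apply(feat, lambd))  # (b, n_seg, 1)
        return continuous_qoe, retrospective_qoe, domain_logit


if __name__ == "__main__":
    model = QoENet()
    video = torch.randn(2, 4, 3, 8, 64, 64)    # 2 videos, 4 segments of 8 frames each
    cont, retro, dom = model(video)
    print(cont.shape, retro.shape, dom.shape)  # (2, 4), (2, 1), (2, 4, 1)
```

In a weakly-supervised setting as described, the continuous head would be supervised only on source-domain videos (which have dense labels), the retrospective head on both domains, and the domain discriminator would push segment features toward domain invariance.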