Dance
Artificial intelligence
Computer science
Invariant (physics)
Computer vision
Tracking (education)
Deep learning
Pose
Identification (biology)
Pattern recognition (psychology)
Mathematics
Psychology
Art
Botany
Biology
Literature
Mathematical physics
Pedagogy
Authors
Hsuan-I Ho,Minho Shim,Dongyoon Wee
Identifier
DOI:10.1109/icassp40776.2020.9054086
Abstract
Most existing multi-person tracking approaches rely on appearance-based re-identification (re-ID) to resolve fragmented tracklets. However, appearance information alone can be insufficient for videos containing severe pose changes, such as sports or dance videos. With the goal of learning pose-invariant representations, we propose an end-to-end deep learning framework, the Sparse-Temporal ReID Network. Our proposed network not only disentangles human pose in an image-recovery manner, but also makes efficient linkages between identical subjects via a unique sparse temporal identity sampling technique across time steps. Experimental results demonstrate the effectiveness of our proposed method on both multi-view re-ID benchmarks and our newly collected dance video dataset, DanceReID.
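The abstract does not detail how sparse temporal identity sampling works; as an illustration only, one plausible reading is that same-identity frame pairs are drawn from tracklets at widely separated time steps, forcing the learned embedding to match identities across large pose changes. The sketch below is a hypothetical implementation under that assumption; the function name, parameters, and data layout are invented for illustration and are not taken from the paper.

```python
import random

def sparse_temporal_sample(tracklets, num_pairs=4, min_gap=10, seed=0):
    """Hypothetical sketch of sparse temporal identity sampling.

    `tracklets` maps an identity id -> sorted list of frame indices where
    that identity appears. We draw same-identity frame pairs that are at
    least `min_gap` frames apart, so a re-ID embedding trained on such
    pairs must be robust to the pose change between distant frames.
    """
    rng = random.Random(seed)
    ids = [i for i, frames in tracklets.items() if len(frames) >= 2]
    pairs, attempts = [], 0
    while len(pairs) < num_pairs and ids and attempts < 100:
        attempts += 1
        pid = rng.choice(ids)
        a, b = sorted(rng.sample(tracklets[pid], 2))
        if b - a >= min_gap:  # keep only temporally distant frame pairs
            pairs.append((pid, a, b))
    return pairs

# Toy usage: two identities visible at sparse, widely spaced frames.
tracklets = {0: [0, 20, 40, 60, 80], 1: [5, 30, 55, 80]}
pairs = sparse_temporal_sample(tracklets)
```

Every returned pair shares one identity but spans a temporal gap, which is the property such a sampler would need to provide pose-varied positive pairs during training.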