Keywords
Artificial intelligence
Computer vision
Computer science
Texture (cosmology)
Image texture
Refining (metallurgy)
Pattern recognition (psychology)
Image (mathematics)
Image processing
Chemistry
Physical chemistry
Authors
Said Fahri Altindis, Adil Meric, Yusuf Dalva, Uğur Güdükbay, Aysegul Dundar
Identifier
DOI: 10.1109/tpami.2024.3456817
Abstract
Estimating 3D human texture from a single image is essential in graphics and vision. It requires learning a mapping function from input images of humans with diverse poses into the parametric (uv) space and reasonably hallucinating invisible parts. To achieve a high-quality 3D human texture estimation, we propose a framework that adaptively samples the input by a deformable convolution where offsets are learned via a deep neural network. Additionally, we describe a novel cycle consistency loss that improves view generalization. We further propose to train our framework with an uncertainty-based pixel-level image reconstruction loss, which enhances color fidelity. We compare our method against the state-of-the-art approaches and show significant qualitative and quantitative improvements. Code and additional results: https://github.com/saidaltindis/RefineTex.
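The abstract names three technical components: adaptive input sampling via a deformable convolution whose offsets are predicted by a network, a cycle consistency loss, and an uncertainty-based pixel-level reconstruction loss. The sketch below illustrates the general form of the first and third in PyTorch. It is not the authors' released implementation; the module names, tensor shapes, and the Laplace-style uncertainty weighting are assumptions made for illustration (see the linked repository for the actual code).

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class AdaptiveSamplingBlock(nn.Module):
    """Generic deformable-convolution block: a small conv head predicts
    per-pixel sampling offsets, which DeformConv2d uses to sample the
    input feature map adaptively. A stand-in for the offset-learning
    network described in the abstract; sizes are illustrative."""
    def __init__(self, channels, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2
        # two offsets (dx, dy) per kernel tap
        self.offset_head = nn.Conv2d(channels, 2 * kernel_size * kernel_size,
                                     kernel_size, padding=pad)
        self.deform_conv = DeformConv2d(channels, channels, kernel_size, padding=pad)

    def forward(self, feat):
        offsets = self.offset_head(feat)        # (B, 2*k*k, H, W) learned offsets
        return self.deform_conv(feat, offsets)  # adaptively sampled features

def uncertainty_recon_loss(pred_rgb, target_rgb, log_b):
    """One common form of an uncertainty-weighted per-pixel reconstruction
    loss (Laplace negative log-likelihood): |x - x_hat| / b + log b,
    where b = exp(log_b) is a predicted per-pixel uncertainty map."""
    b = torch.exp(log_b)                         # ensure positive scale
    return (torch.abs(pred_rgb - target_rgb) / b + log_b).mean()
```

Under this formulation, pixels the network marks as uncertain (large b) contribute a down-weighted color error but pay a log b penalty, which is one standard way an uncertainty-based reconstruction loss can improve color fidelity on confidently predicted regions.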