Computer science
Initialization
Heuristic
Texture mapping
Texture atlas
Artificial intelligence
Texture synthesis
Texture filtering
Computer vision
Appearance
Set (abstract data type)
Rendering (computer graphics)
Segmentation
Computer graphics (images)
Image texture
Image segmentation
Geography
Programming language
Operating system
Archaeology
Authors
Wenhan Xiong, Hongqian Zhang, Biao Peng, Ziyu Hu, Yongli Wu, Jun Guo, Hui Huang
Abstract
Coarse architectural models are often generated at scales ranging from individual buildings to scenes for downstream applications such as Digital Twin City, Metaverse, LODs, etc. Such piece-wise planar models can be abstracted as twins from 3D dense reconstructions. However, these models typically lack realistic texture relative to the real building or scene, making them unsuitable for vivid display or direct reference. In this paper, we present TwinTex, the first automatic texture mapping framework to generate a photorealistic texture for a piece-wise planar proxy. Our method addresses most challenges occurring in such twin texture generation. Specifically, for each primitive plane, we first select a small set of photos with greedy heuristics considering photometric quality, perspective quality and facade texture completeness. Then, different levels of line features (LoLs) are extracted from the set of selected photos to generate guidance for later steps. With LoLs, we employ optimization algorithms to align texture with geometry from local to global. Finally, we fine-tune a diffusion model with a multi-mask initialization component and a new dataset to inpaint the missing regions. Experimental results on many buildings, indoor scenes and man-made objects of varying complexity demonstrate the generalization ability of our algorithm. Our approach surpasses state-of-the-art texture mapping methods in terms of high-fidelity quality and reaches a human-expert production level with much less effort.
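The abstract only names the criteria used in the per-plane photo selection (photometric quality, perspective quality, facade texture completeness) without giving the scoring function or weights. The sketch below is therefore an illustration under assumed scores, not the paper's method: a greedy loop that repeatedly picks the photo with the best combined quality and marginal facade coverage. The `Photo` fields, the weights, and the coverage-cell representation are all hypothetical.

```python
# Hypothetical sketch of a greedy per-plane photo selection in the spirit of
# the abstract. Scoring terms and weights are assumptions for illustration;
# the paper only names the criteria, it does not specify them here.
from dataclasses import dataclass
from typing import List, Set


@dataclass
class Photo:
    name: str
    photometric: float       # assumed sharpness/exposure score in [0, 1]
    perspective: float       # assumed view-angle score w.r.t. the plane in [0, 1]
    covered_cells: Set[int]  # assumed set of facade grid cells this photo covers


def select_photos(photos: List[Photo], total_cells: int,
                  max_photos: int = 5, w_photo: float = 1.0,
                  w_persp: float = 1.0, w_cover: float = 2.0) -> List[Photo]:
    """Greedily pick a small photo set for one plane, trading per-photo
    quality against the marginal facade coverage each new photo adds."""
    selected: List[Photo] = []
    covered: Set[int] = set()
    remaining = list(photos)
    while remaining and len(selected) < max_photos:
        def gain(p: Photo) -> float:
            new_cover = len(p.covered_cells - covered) / max(total_cells, 1)
            return (w_photo * p.photometric
                    + w_persp * p.perspective
                    + w_cover * new_cover)
        best = max(remaining, key=gain)
        if gain(best) <= 0:
            break
        selected.append(best)
        covered |= best.covered_cells
        remaining.remove(best)
    return selected


if __name__ == "__main__":
    photos = [
        Photo("img_001", 0.9, 0.8, {0, 1, 2}),
        Photo("img_002", 0.7, 0.9, {2, 3}),
        Photo("img_003", 0.6, 0.5, {4, 5}),
    ]
    for p in select_photos(photos, total_cells=6):
        print(p.name)
```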