Computer science
Artificial intelligence
Leverage (statistics)
Computer vision
Consistency (knowledge bases)
Inpainting
Prior probability
Representation (politics)
Image (mathematics)
Computer graphics (images)
Bayesian probability
Politics
Political science
Law
Authors
Jingbo Zhang, Xiaoyu Li, Ziyu Wan, Can Wang, Jing Liao
Source
Journal: IEEE Transactions on Visualization and Computer Graphics
[Institute of Electrical and Electronics Engineers]
Date: 2024-01-01
Volume/Issue: 1-14
Citations: 15
Identifier
DOI: 10.1109/tvcg.2024.3361502
Abstract
Text-driven 3D scene generation is widely applicable to video gaming, the film industry, and metaverse applications, all of which have a large demand for 3D scenes. However, existing text-to-3D generation methods are limited to producing 3D objects with simple geometries and dreamlike styles that lack realism. In this work, we present Text2NeRF, which can generate a wide range of 3D scenes with complicated geometric structures and high-fidelity textures purely from a text prompt. To this end, we adopt NeRF as the 3D representation and leverage a pre-trained text-to-image diffusion model to constrain the 3D reconstruction of the NeRF to reflect the scene description. Specifically, we employ the diffusion model to infer the text-related image as the content prior and use a monocular depth estimation method to offer the geometric prior. Both content and geometric priors are utilized to update the NeRF model. To guarantee textural and geometric consistency between different views, we introduce a progressive scene inpainting and updating strategy for novel view synthesis of the scene. Our method requires no additional training data but only a natural language description of the scene as input. Extensive experiments demonstrate that our Text2NeRF outperforms existing methods in producing photo-realistic, multi-view consistent, and diverse 3D scenes from a variety of natural language prompts. Our code and model will be available upon acceptance.
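The abstract describes the pipeline at a level where a small sketch may help the reader follow the loop structure. Below is a minimal Python sketch of the progressive inpainting-and-updating strategy as the abstract describes it; every name here (Diffusion, DepthEstimator, NeRF, generate_scene, and all of their methods) is a hypothetical stand-in inferred from the abstract, not the authors' released implementation.

```python
from typing import List, Protocol, Tuple

# Placeholder types; a real system would use image/depth tensors and camera poses.
Image = object
Mask = object
Depth = object
Pose = object

class Diffusion(Protocol):
    """Hypothetical text-to-image diffusion model with inpainting support."""
    def text_to_image(self, prompt: str) -> Image: ...
    def inpaint(self, image: Image, mask: Mask, prompt: str) -> Image: ...

class DepthEstimator(Protocol):
    """Hypothetical monocular depth estimation network."""
    def estimate(self, image: Image) -> Depth: ...

class NeRF(Protocol):
    """Hypothetical NeRF wrapper: renders a view and flags unseen pixels."""
    def render(self, view: Pose) -> Tuple[Image, Mask]: ...
    def optimize(self, supports: List[Tuple[Pose, Image, Depth]]) -> None: ...

def generate_scene(prompt: str, trajectory: List[Pose], nerf: NeRF,
                   diffusion: Diffusion, depth_net: DepthEstimator) -> NeRF:
    """Progressive scene inpainting-and-updating loop, per the abstract."""
    # Content prior: infer the text-related image for the first view.
    first = diffusion.text_to_image(prompt)
    # Geometric prior: monocular depth estimated from that image.
    supports = [(trajectory[0], first, depth_net.estimate(first))]
    nerf.optimize(supports)  # initialize the NeRF from both priors

    for view in trajectory[1:]:
        # Render the novel view; the mask marks regions the NeRF has not seen.
        rgb, hole_mask = nerf.render(view)
        # Fill the holes with the text-conditioned diffusion model.
        completed = diffusion.inpaint(rgb, hole_mask, prompt)
        # Re-estimate depth so geometry stays consistent with the new content.
        supports.append((view, completed, depth_net.estimate(completed)))
        nerf.optimize(supports)  # update the NeRF with all support views
    return nerf
```

The design point this sketch tries to capture, as stated in the abstract, is that each novel view is completed by the diffusion model and then folded back into the NeRF as an additional supervision view, which is what enforces textural and geometric consistency across views.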