Keywords
Inpainting, Computer science, Artificial intelligence, Generalizability, Scalability, Task (project management), Image (mathematics), Process (computing), Modal, Computer vision, Probabilistic logic, Pattern recognition (psychology), Machine learning, Mathematics, Engineering, Statistics, Chemistry, Systems engineering, Database, Polymer chemistry, Operating system
Authors
Shiyuan Yang,Xiao Chen,Jing Liao
Identifier
DOI:10.1145/3581783.3612200
Abstract
Recently, text-to-image denoising diffusion probabilistic models (DDPMs) have demonstrated impressive image generation capabilities and have also been successfully applied to image inpainting. However, in practice, users often require more control over the inpainting process beyond textual guidance, especially when they want to composite objects with customized appearance, color, shape, and layout. Unfortunately, existing diffusion-based inpainting methods are limited to single-modal guidance and require task-specific training, hindering their cross-modal scalability. To address these limitations, we propose Uni-paint, a unified framework for multimodal inpainting that offers various modes of guidance, including unconditional, text-driven, stroke-driven, exemplar-driven inpainting, as well as a combination of these modes. Furthermore, our Uni-paint is based on pretrained Stable Diffusion and does not require task-specific training on specific datasets, enabling few-shot generalizability to customized images. We have conducted extensive qualitative and quantitative evaluations that show our approach achieves comparable results to existing single-modal methods while offering multimodal inpainting capabilities not available in other methods. Code is available at https://github.com/ysy31415/unipaint.
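The abstract describes inpainting driven by a pretrained Stable Diffusion model under textual (and other) guidance; the authors' released code is at the GitHub link above. As a general illustration only, and not the Uni-paint method itself, the sketch below shows the basic masked, text-driven diffusion inpainting setup using the off-the-shelf Stable Diffusion inpainting pipeline from the diffusers library. The image and mask file names and the prompt are placeholders.

```python
# Minimal sketch of text-driven diffusion inpainting with the off-the-shelf
# Stable Diffusion inpainting pipeline from the diffusers library.
# This only illustrates the general masked-generation setup; it is NOT the
# Uni-paint implementation (see https://github.com/ysy31415/unipaint).
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",  # pretrained inpainting checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# Placeholder inputs: an RGB image and a mask whose white region marks
# the area to be regenerated.
image = Image.open("scene.png").convert("RGB").resize((512, 512))
mask = Image.open("mask.png").convert("RGB").resize((512, 512))

# The text prompt conditions the denoising process inside the masked region,
# while the unmasked region is kept consistent with the input image.
result = pipe(
    prompt="a red vintage car parked on the street",
    image=image,
    mask_image=mask,
).images[0]
result.save("inpainted.png")
```

Uni-paint extends this kind of single-modal (text-only) guidance to stroke- and exemplar-driven modes and their combinations, according to the abstract, without task-specific retraining.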