MMGInpainting: Multi-Modality Guided Image Inpainting Based on Diffusion Models

Keywords: Inpainting, Computer Science, Artificial Intelligence, Image (mathematics), Modality (human-computer interaction), Computer Vision, Pattern Recognition (psychology)
Authors
Cong Zhang, Wenxia Yang, Xin Li, Huan Han
Source
Journal: IEEE Transactions on Multimedia [Institute of Electrical and Electronics Engineers]
Volume 26, pp. 8811-8823. Cited by: 16
Identifier
DOI: 10.1109/tmm.2024.3382484
Abstract

Proper inference of semantics is necessary for realistic image inpainting. Most image inpainting methods use deep generative models, which require large image datasets to predict and generate content. However, predicting the missing regions and generating coherent content is difficult due to limited control. Existing approaches include image-guided or text-guided image inpainting, but to the best of our knowledge none of them takes both image and text as guidance signals. To fill this gap, we propose a multi-modality guided (MMG) image inpainting approach based on the diffusion model. This MMGInpainting method uses both image and text as guidance for generating content within the target area, effectively integrating the semantic information conveyed by the guiding image or text into the content of the inpainted region. To construct MMGInpainting, we start by enhancing the U-Net backbone with a customized Nonlinear Activation Free Network (NAFNet). This adapted NAFNet incorporates an Anchored Stripe Attention mechanism, which utilizes anchor points to effectively model global contextual dependencies. To regulate inpainting, we use a Semantic Fusion Encoder to guide the reverse process of the diffusion model. The process is executed iteratively to denoise and generate the desired inpainting result. Additionally, we explore how the different guidance modalities interact and coordinate, offering users practical guidance for a more controllable inpainting procedure. Experimental results demonstrate that our approach produces faithful results that adhere to the guiding information, while significantly improving computational efficiency. GitHub repository: https://github.com/skipper-zc/MMGInpainting/
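The abstract describes a conditional diffusion pipeline: a denoising backbone whose reverse steps are steered by a fused image/text guidance signal, with the known region of the image preserved and only the masked area synthesized. The sketch below is a minimal, generic PyTorch illustration of such a guided inpainting sampling loop, not the authors' implementation; the ToyDenoiser, SemanticFusion module, noise schedule, and all names are assumptions made purely for illustration.

```python
import torch
import torch.nn as nn

# --- Hypothetical placeholder components (not the paper's actual modules) ---

class SemanticFusion(nn.Module):
    """Fuses an image embedding and a text embedding into one guidance vector."""
    def __init__(self, dim=64):
        super().__init__()
        self.proj = nn.Linear(2 * dim, dim)

    def forward(self, img_emb, txt_emb):
        return self.proj(torch.cat([img_emb, txt_emb], dim=-1))

class ToyDenoiser(nn.Module):
    """Stand-in for the U-Net backbone: predicts the noise in x_t given t and guidance."""
    def __init__(self, channels=3, dim=64):
        super().__init__()
        self.cond = nn.Linear(dim + 1, channels)
        self.net = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x_t, t, guidance):
        # Inject (guidance, timestep) as a per-channel bias; a real model would
        # use cross-attention / adaptive normalization instead.
        c = self.cond(torch.cat([guidance, t.view(-1, 1)], dim=-1))
        return self.net(x_t + c[:, :, None, None])

# --- Generic DDPM-style inpainting sampler with known-region replacement ---

@torch.no_grad()
def inpaint(denoiser, fusion, image, mask, img_emb, txt_emb, steps=50):
    """
    image: (B, C, H, W) original image; mask: (B, 1, H, W), 1 = region to fill.
    At every reverse step, the known region is re-imposed from a noised copy
    of the original image so only the masked area is synthesized.
    """
    betas = torch.linspace(1e-4, 0.02, steps)
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)

    guidance = fusion(img_emb, txt_emb)
    x = torch.randn_like(image)                      # start from pure noise
    for i in reversed(range(steps)):
        t = torch.full((image.size(0),), i / steps)
        eps = denoiser(x, t, guidance)               # predicted noise
        # DDPM mean estimate for x_{t-1} (variance schedule kept simple)
        x = (x - betas[i] / torch.sqrt(1 - alpha_bar[i]) * eps) / torch.sqrt(alphas[i])
        if i > 0:
            x = x + torch.sqrt(betas[i]) * torch.randn_like(x)
        # Pin the known region to a correspondingly-noised copy of the original
        level = alpha_bar[i - 1] if i > 0 else torch.tensor(1.0)
        known = torch.sqrt(level) * image + torch.sqrt(1 - level) * torch.randn_like(image)
        x = mask * x + (1 - mask) * known
    return x

# Tiny smoke test with random tensors
if __name__ == "__main__":
    B, C, H, W, D = 1, 3, 32, 32, 64
    mask = torch.zeros(B, 1, H, W)
    mask[..., :, W // 2:] = 1.0                      # fill the right half
    out = inpaint(ToyDenoiser(C, D), SemanticFusion(D),
                  torch.rand(B, C, H, W), mask,
                  torch.randn(B, D), torch.randn(B, D))
    print(out.shape)  # torch.Size([1, 3, 32, 32])
```

In the paper itself, the denoiser corresponds to the NAFNet-enhanced U-Net with Anchored Stripe Attention and the guidance vector comes from the Semantic Fusion Encoder; the sketch only mirrors that data flow with toy stand-ins.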