Encoder
Computer science
Transformer
Upsampling
Quantization (signal processing)
Inpainting
Artificial intelligence
Security token
Pixel
Fidelity
Computer vision
Image (mathematics)
Physics
Quantum mechanics
Voltage
Operating system
Telecommunications
Computer security
Authors
Qiankun Liu, Yuqi Jiang, Zhentao Tan, Dongdong Chen, Ying Fu, Qi Chu, Gang Hua, Nenghai Yu
Identifier
DOI: 10.1109/TPAMI.2024.3384406
Abstract
Transformer-based methods have achieved great success in image inpainting recently. However, we find that these solutions regard each pixel as a token, and thus suffer from information loss in two respects: 1) they downsample the input image to a much lower resolution for efficiency; 2) they quantize the 256³ possible RGB values to a small number (such as 512) of quantized color values. The indices of the quantized pixels are used as tokens for both the inputs and the prediction targets of the transformer. To mitigate these issues, we propose a new transformer-based framework called "PUT". Specifically, to avoid input downsampling while maintaining computational efficiency, we design a patch-based auto-encoder, P-VQVAE. Its encoder converts the masked image into non-overlapping patch tokens, and its decoder recovers the masked regions from the inpainted tokens while keeping the unmasked regions unchanged. To eliminate the information loss caused by input quantization, an Un-quantized Transformer is applied: it directly takes features from the P-VQVAE encoder as input without any quantization and regards the quantized tokens only as prediction targets. Furthermore, to make the inpainting process more controllable, we introduce semantic and structural conditions as extra guidance. Extensive experiments show that our method greatly outperforms existing transformer-based methods in image fidelity, and achieves much higher diversity and better fidelity than state-of-the-art pluralistic inpainting methods on complex large-scale datasets (e.g., ImageNet). Code is available at https://github.com/liuqk3/PUT.
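The two design choices described in the abstract, patchifying the input instead of downsampling it, and feeding un-quantized features to the transformer while predicting only quantized codebook indices, can be sketched in a few lines of PyTorch. The following is a minimal, hypothetical illustration, not the authors' implementation: all module names, dimensions, and the toy loss are assumptions for exposition; the actual code lives at the linked repository.

```python
# Hypothetical sketch of the ideas from the PUT abstract (not the authors' code):
# (1) a patch encoder that maps non-overlapping patches to features, avoiding
#     downsampling of the image content, and
# (2) an "un-quantized" transformer that consumes continuous features as input
#     but predicts indices over a learned codebook as targets.
import torch
import torch.nn as nn

class PatchEncoder(nn.Module):
    """Maps non-overlapping P x P image patches to feature vectors."""
    def __init__(self, patch_size=8, dim=256):
        super().__init__()
        # A strided conv over the full patch is equivalent to a linear
        # projection of each flattened patch.
        self.proj = nn.Conv2d(3, dim, kernel_size=patch_size, stride=patch_size)

    def forward(self, x):                      # x: (B, 3, H, W)
        f = self.proj(x)                       # (B, dim, H/P, W/P)
        return f.flatten(2).transpose(1, 2)    # (B, N, dim) patch features

class UnquantizedTransformer(nn.Module):
    """Takes continuous patch features as input (no quantization on the
    input side) and predicts codebook indices only as targets."""
    def __init__(self, dim=256, codebook_size=512, depth=4, heads=8):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(dim, codebook_size)  # logits over quantized tokens

    def forward(self, feats):                  # feats: (B, N, dim), un-quantized
        return self.head(self.blocks(feats))   # (B, N, codebook_size)

def nearest_code_indices(feats, codebook):
    """Training targets: index of the nearest codebook entry per patch."""
    # feats: (B, N, dim), codebook: (K, dim) -> (B, N) integer targets
    d = torch.cdist(feats, codebook.unsqueeze(0).expand(feats.size(0), -1, -1))
    return d.argmin(-1)

if __name__ == "__main__":
    enc = PatchEncoder()
    tr = UnquantizedTransformer()
    codebook = torch.randn(512, 256)            # stand-in for a learned codebook
    img = torch.randn(2, 3, 64, 64)             # toy "masked" image batch
    feats = enc(img)                            # continuous features, never quantized
    logits = tr(feats)                          # predictions over codebook indices
    targets = nearest_code_indices(feats.detach(), codebook)
    loss = nn.functional.cross_entropy(logits.reshape(-1, 512), targets.reshape(-1))
    print(loss.item())
```

Note how quantization appears only on the target side (nearest_code_indices); the transformer's input path stays continuous, which is the information-loss fix the abstract describes.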