Computer science
Upsampling
Artificial intelligence
Softmax function
Computer vision
Pixel
Pattern recognition (psychology)
Encoder
Undersampling
Deep learning
Image (mathematics)
Operating system
Authors
Jawadul H. Bappy, Cody Simons, Lakshmanan Nataraj, B.S. Manjunath, Amit K. Roy-Chowdhury
Source
Journal: IEEE Transactions on Image Processing
[Institute of Electrical and Electronics Engineers]
Date: 2019-01-25
Volume/Issue: 28 (7): 3286-3300
Citations: 344
Identifier
DOI: 10.1109/TIP.2019.2895466
Abstract
With advanced image journaling tools, one can easily alter the semantic meaning of an image by exploiting manipulation techniques such as copy-clone, object splicing, and removal, which can mislead viewers. In contrast, identifying these manipulations is a very challenging task because the manipulated regions are not visually apparent. This paper proposes a high-confidence manipulation localization architecture that utilizes resampling features, Long Short-Term Memory (LSTM) cells, and an encoder-decoder network to segment manipulated regions from non-manipulated ones. Resampling features are used to capture artifacts such as JPEG quality loss, upsampling, downsampling, rotation, and shearing. By incorporating the encoder and LSTM networks, the proposed architecture exploits larger receptive fields (spatial maps) and frequency-domain correlations to analyze the discriminative characteristics of manipulated versus non-manipulated regions. Finally, the decoder network learns the mapping from low-resolution feature maps to pixel-wise predictions for image tamper localization. With the predicted mask provided by the final (softmax) layer of the architecture, end-to-end training is performed to learn the network parameters through back-propagation against ground-truth masks. Furthermore, a large image splicing dataset is introduced to guide the training process. The proposed method localizes image manipulations at the pixel level with high precision, which is demonstrated through rigorous experimentation on three diverse datasets.
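To make the encoder + LSTM + decoder pipeline described in the abstract concrete, the following is a minimal PyTorch sketch under assumed shapes and layer sizes; it is not the authors' exact network (layer counts, the patch-level resampling-feature extraction, and hyperparameters are hypothetical), but it illustrates how an encoder's low-resolution feature maps can be scanned by an LSTM, fused, and decoded into a per-pixel softmax mask trained against ground-truth masks.

```python
# Illustrative sketch only: encoder -> LSTM over spatial positions -> decoder
# producing a pixel-wise (softmax) manipulation mask, trained end-to-end.
import torch
import torch.nn as nn

class ManipulationLocalizer(nn.Module):
    def __init__(self, in_ch=3, hidden=64, lstm_hidden=64, n_classes=2):
        super().__init__()
        # Encoder: two strided conv blocks shrink resolution by 4x.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, hidden, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, hidden, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # LSTM over the sequence of spatial positions, standing in for the
        # paper's LSTM over patch-level resampling features.
        self.lstm = nn.LSTM(input_size=hidden, hidden_size=lstm_hidden, batch_first=True)
        # Decoder: transposed convs map low-resolution features back to pixel resolution.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(hidden + lstm_hidden, hidden, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(hidden, n_classes, 4, stride=2, padding=1),
        )

    def forward(self, x):
        feat = self.encoder(x)                      # (B, C, H/4, W/4)
        b, c, h, w = feat.shape
        seq = feat.flatten(2).transpose(1, 2)       # (B, H*W/16, C) as a sequence
        lstm_out, _ = self.lstm(seq)                # (B, H*W/16, lstm_hidden)
        lstm_map = lstm_out.transpose(1, 2).reshape(b, -1, h, w)
        fused = torch.cat([feat, lstm_map], dim=1)  # combine spatial and sequential cues
        return self.decoder(fused)                  # per-pixel class logits

# End-to-end training against ground-truth masks, as in the abstract:
# cross-entropy (softmax) loss over manipulated vs. non-manipulated pixels.
model = ManipulationLocalizer()
images = torch.randn(2, 3, 64, 64)                  # dummy image batch
masks = torch.randint(0, 2, (2, 64, 64))             # dummy ground-truth binary masks
logits = model(images)
loss = nn.CrossEntropyLoss()(logits, masks)
loss.backward()
```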