Fuse (electrical)
Artificial intelligence
Computer science
Layer (electronics)
Image fusion
Image (mathematics)
Fusion
Artificial neural network
Feature (linguistics)
Focus (optics)
Pattern recognition (psychology)
Scale (ratio)
Deep learning
Network architecture
Computer vision
Engineering
Philosophy
Physics
Organic chemistry
Chemistry
Optics
Electrical engineering
Quantum mechanics
Linguistics
Computer security
Authors
Fayez Lahoud, Sabine Süsstrunk
Source
Journal: Cornell University - arXiv
Date: 2019-01-01
Citations: 23
Identifiers
DOI: 10.48550/arxiv.1905.03590
Abstract
We propose a real-time image fusion method using pre-trained neural networks. Our method generates a single image containing features from multiple sources. We first decompose images into a base layer representing large scale intensity variations, and a detail layer containing small scale changes. We use visual saliency to fuse the base layers, and deep feature maps extracted from a pre-trained neural network to fuse the detail layers. We conduct ablation studies to analyze our method's parameters such as decomposition filters, weight construction methods, and network depth and architecture. Then, we validate its effectiveness and speed on thermal, medical, and multi-focus fusion. We also apply it to multiple image inputs such as multi-exposure sequences. The experimental results demonstrate that our technique achieves state-of-the-art performance in visual quality, objective assessment, and runtime efficiency.
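The abstract outlines a two-scale pipeline: each source image is split into a base layer and a detail layer, the base layers are fused with visual-saliency weights, and the detail layers are fused with weights derived from deep feature maps of a pre-trained network. Below is a minimal Python sketch of that pipeline. The filter sizes, the saliency proxy, the choice of the first VGG-19 layers as the feature extractor, and all function names are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch of two-scale image fusion in the spirit of the abstract (assumptions noted above).
import numpy as np
import torch
import torchvision
from scipy.ndimage import uniform_filter, gaussian_filter


def decompose(img, size=31):
    """Split a grayscale image into a base layer (box-filtered) and a detail layer."""
    base = uniform_filter(img, size=size)
    return base, img - base


def saliency_weight(img, sigma=5):
    """Simple visual-saliency proxy: magnitude of the high-frequency residual."""
    return np.abs(img - gaussian_filter(img, sigma=sigma)) + 1e-8


def deep_feature_weight(detail, extractor):
    """Weight map from the channel-wise l1 norm of deep feature maps (assumed VGG-19 conv1_1)."""
    with torch.no_grad():
        x = torch.from_numpy(detail).float()[None, None].repeat(1, 3, 1, 1)
        feats = extractor(x)                      # (1, C, H, W)
        w = feats.abs().sum(dim=1, keepdim=True)  # l1 norm over channels
        w = torch.nn.functional.interpolate(w, size=detail.shape,
                                            mode="bilinear", align_corners=False)
    return w[0, 0].numpy() + 1e-8


def fuse(images, extractor):
    """Fuse a list of aligned grayscale images in [0, 1]."""
    bases, details = zip(*(decompose(im) for im in images))
    sal = np.stack([saliency_weight(im) for im in images])
    sal /= sal.sum(axis=0, keepdims=True)          # normalized base-layer weights
    deep = np.stack([deep_feature_weight(d, extractor) for d in details])
    deep /= deep.sum(axis=0, keepdims=True)        # normalized detail-layer weights
    fused_base = sum(w * b for w, b in zip(sal, bases))
    fused_detail = sum(w * d for w, d in zip(deep, details))
    return np.clip(fused_base + fused_detail, 0.0, 1.0)


if __name__ == "__main__":
    # First conv + ReLU of a pre-trained VGG-19 as a fixed, zero-learning feature extractor.
    vgg = torchvision.models.vgg19(weights="IMAGENET1K_V1").features[:2].eval()
    a = np.random.rand(128, 128)  # stand-ins for two aligned source images
    b = np.random.rand(128, 128)
    print(fuse([a, b], vgg).shape)
```

Because the network is only used as a fixed feature extractor (no training or fine-tuning), the whole pipeline reduces to filtering, weight normalization, and a single forward pass per detail layer, which is consistent with the real-time claim in the abstract.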