Remote Sensing
Change Detection
Computer Science
Computer Vision
Artificial Intelligence
Geology
Authors
Lei Ding, Kun Zhu, Daifeng Peng, Hao Tang, Kuiwu Yang, Lorenzo Bruzzone
Source
Journal: IEEE Transactions on Geoscience and Remote Sensing
[Institute of Electrical and Electronics Engineers]
Date: 2024-01-01
Volume/Issue: 1-1
Citations: 22
Identifier
DOI:10.1109/tgrs.2024.3368168
Abstract
Vision Foundation Models (VFMs) such as the Segment Anything Model (SAM) allow zero-shot or interactive segmentation of visual contents, and thus have been rapidly applied to a variety of visual scenes. However, their direct use in many Remote Sensing (RS) applications is often unsatisfactory due to the special imaging properties of RS images. In this work, we aim to utilize the strong visual recognition capabilities of VFMs to improve change detection (CD) in very high-resolution (VHR) remote sensing images (RSIs). We employ the visual encoder of FastSAM, a variant of SAM, to extract visual representations in RS scenes. To adapt FastSAM to focus on specific ground objects in RS scenes, we propose a convolutional adaptor to aggregate the task-oriented change information. Moreover, to utilize the semantic representations that are inherent to SAM features, we introduce a task-agnostic semantic learning branch to model the latent semantics in bi-temporal RSIs. The resulting method, SAM-CD, obtains superior accuracy compared to SOTA fully-supervised CD methods and exhibits a sample-efficient learning ability comparable to that of semi-supervised CD methods. To the best of our knowledge, this is the first work that adapts VFMs to CD in VHR RS images.
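The abstract describes a pattern of freezing a VFM encoder and training only a lightweight convolutional adaptor over its bi-temporal features. The sketch below illustrates that general idea in PyTorch; it does not load FastSAM itself, and the module names, channel sizes, and the simple concatenation-based change head are all illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of the adaptor pattern described in the abstract:
# a frozen encoder (stand-in for FastSAM's visual encoder) feeds bi-temporal
# features into a small trainable convolutional adaptor, and a change head
# compares the adapted features. All names and sizes are assumptions.
import torch
import torch.nn as nn


class ConvAdaptor(nn.Module):
    """Lightweight trainable projection over frozen encoder features."""

    def __init__(self, in_ch: int = 64, out_ch: int = 32):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.proj(x)


class ChangeDetectorSketch(nn.Module):
    """Frozen encoder + trainable adaptor + per-pixel change head."""

    def __init__(self, encoder: nn.Module, in_ch: int = 64, mid_ch: int = 32):
        super().__init__()
        self.encoder = encoder
        for p in self.encoder.parameters():  # keep the VFM encoder frozen
            p.requires_grad = False
        self.adaptor = ConvAdaptor(in_ch, mid_ch)
        # change head operates on concatenated bi-temporal features
        self.head = nn.Conv2d(mid_ch * 2, 1, kernel_size=1)

    def forward(self, img_t1: torch.Tensor, img_t2: torch.Tensor) -> torch.Tensor:
        f1 = self.adaptor(self.encoder(img_t1))
        f2 = self.adaptor(self.encoder(img_t2))
        logits = self.head(torch.cat([f1, f2], dim=1))
        return torch.sigmoid(logits)  # per-pixel change probability


# Toy stand-in encoder for demonstration (FastSAM is not loaded here).
toy_encoder = nn.Conv2d(3, 64, kernel_size=3, padding=1)
model = ChangeDetectorSketch(toy_encoder)
t1 = torch.randn(1, 3, 64, 64)  # image at time 1
t2 = torch.randn(1, 3, 64, 64)  # image at time 2
out = model(t1, t2)
print(out.shape)  # torch.Size([1, 1, 64, 64])
```

Only the adaptor and change head receive gradients here, which mirrors the sample-efficiency argument in the abstract: the number of trainable parameters stays small while the frozen encoder supplies general-purpose visual features.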