Computer science
Initialization
Discriminative
Artificial intelligence
Feature (linguistics)
Pattern recognition (psychology)
Change detection
Feature extraction
Pixel
Feature learning
Encoder
Convolutional neural network
Machine learning
Philosophy
Linguistics
Programming language
Operating system
Authors
Fenlong Jiang, Maoguo Gong, Hanhong Zheng, Tongfei Liu, Mingyang Zhang, Jialu Liu
Identifier
DOI: 10.1109/TGRS.2023.3238327
Abstract
Self-supervised contrastive learning (CL) can learn high-quality feature representations that benefit downstream tasks without labeled data. However, most CL methods target image-level tasks. Fine-grained change detection (FCD) tasks, such as detecting changes or change trends of specific ground objects, usually require pixel-level discriminative analysis, so feature representations learned by image-level CL may have limited effect on FCD. To address this problem, we propose a self-supervised global–local contrastive learning (GLCL) framework, which extends the instance discrimination task to the pixel level. GLCL follows the current mainstream CL paradigm and consists of four parts: data augmentation to generate different views of the input, an encoder network for feature extraction, a global CL head, and a local CL head, the last two performing image-level and pixel-level instance discrimination, respectively. Through GLCL, features belonging to different views of the same instance are pulled closer, while features of different instances are pushed apart, enhancing the discriminability of feature representations from both global and local perspectives and thereby facilitating downstream FCD tasks. In addition, GLCL adapts its structure specifically to FCD: the encoder role is filled by the backbone networks commonly used in FCD, which accelerates deployment on downstream FCD tasks. Experimental results on several real datasets show that, compared with other parameter initialization methods, FCD models pretrained with GLCL obtain better detection performance.
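The abstract describes a contrastive objective applied at two granularities: a global head matching image-level embeddings across augmented views, and a local head matching pixel-level embeddings. Below is a minimal NumPy sketch of that idea, assuming an InfoNCE-style loss at both levels; the function name `info_nce`, the temperature value, and all tensor shapes are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def info_nce(z1, z2, tau=0.1):
    """InfoNCE-style loss: row i of z1 and row i of z2 are a positive
    pair (two views of the same instance); all other rows are negatives."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)  # L2-normalize
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau                       # (N, N) similarities
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))             # positives on diagonal

rng = np.random.default_rng(0)
# Global head: one embedding per image view (hypothetical batch of 8, dim 32).
g1, g2 = rng.normal(size=(8, 32)), rng.normal(size=(8, 32))
# Local head: one embedding per pixel; a 4x4 feature map flattened per image.
p1, p2 = rng.normal(size=(8 * 16, 32)), rng.normal(size=(8 * 16, 32))
# Combined GLCL-style objective: image-level plus pixel-level terms.
loss = info_nce(g1, g2) + info_nce(p1, p2)
print(float(loss))
```

In a real implementation both heads would share the encoder's feature maps (the global embedding pooled from them, the local embeddings taken per spatial position), and the two loss terms would be weighted and minimized jointly by gradient descent.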