Contrast (vision)
Computer science
Artificial intelligence
Cropping (morphology)
Similarity (geometry)
Convolutional neural network
Feature (linguistics)
Feature extraction
Pattern recognition (psychology)
Machine learning
Field (mathematics)
Sample (material)
Feature learning
Image (mathematics)
Mathematics
Philosophy
Linguistics
Chemistry
Chromatography
Pure mathematics
Authors
Jian Guo, Jiaxin An, Yuna Yu, Aidi Liu, Yabian Liu
Identifier
DOI:10.1109/icaica58456.2023.10405431
Abstract
Recent self-supervised contrastive learning methods have made significant progress in computer vision. These methods aim to learn useful feature representations from large-scale unlabeled data, providing a powerful foundation for a variety of vision tasks. Contrastive learning drives feature learning by maximizing the similarity of positive sample pairs and minimizing the similarity of negative sample pairs, so well-designed contrastive pairs are key to obtaining a framework with superior performance. Traditional methods often apply random cropping to generate different views, which can leave the cropped regions with insufficient semantic information. To solve this problem, the SECL method is proposed: it determines the cropping region by measuring the importance of each region in the image, ensuring that the cropped region contains rich semantic information. In addition, because convolutional neural networks often have difficulty capturing global information, a self-attention mechanism is further designed to enhance the feature extraction network's perception of global information. Experiments show that the method effectively improves the classification accuracy of classical contrastive learning frameworks on multiple datasets.
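To make the two ingredients in the abstract concrete, here is a minimal PyTorch sketch, not the authors' released code: an InfoNCE-style loss that pulls positive pairs together and pushes negative pairs apart, plus an importance-biased crop selector. The use of gradient magnitude as the "importance" of a region is an assumption for illustration, and the names info_nce_loss and importance_weighted_crop are hypothetical, not the paper's API.

```python
# Sketch only: InfoNCE-style contrastive loss + importance-biased cropping.
# Region importance is approximated here by local gradient magnitude; the
# paper's actual importance measure and SECL pipeline may differ.
import torch
import torch.nn.functional as F


def info_nce_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """Maximize similarity of matched views (positive pairs) while minimizing
    similarity to all other samples in the batch (negative pairs)."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                      # (2N, D)
    sim = z @ z.t() / temperature                       # scaled cosine similarities
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))          # drop self-similarity
    # The positive of sample i is its other view, offset by n in the batch.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)


def importance_weighted_crop(img: torch.Tensor, crop: int = 96) -> torch.Tensor:
    """Sample a crop location proportionally to a crude importance map, so
    crops are biased toward semantically rich areas instead of uniform."""
    c, h, w = img.shape
    gy = (img[:, 1:, :] - img[:, :-1, :]).abs().mean(0)   # vertical gradients
    gx = (img[:, :, 1:] - img[:, :, :-1]).abs().mean(0)   # horizontal gradients
    imp = gy[:, : w - 1] + gx[: h - 1, :]                 # (h-1, w-1) importance map
    valid = imp[: h - crop, : w - crop]                   # candidate top-left corners
    probs = valid.flatten().clamp_min(1e-8)
    idx = torch.multinomial(probs / probs.sum(), 1).item()
    top, left = divmod(idx, valid.size(1))
    return img[:, top : top + crop, left : left + crop]
```

In a training loop one would take two importance-biased crops of each image, encode them with the backbone (optionally augmented with a self-attention block for global context, as the abstract suggests) into z1 and z2, and minimize info_nce_loss(z1, z2).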