Computer science
Foundation (evidence)
Artificial intelligence
Geology
Feature extraction
Extraction (chemistry)
Feature (linguistics)
Seismology
Pattern recognition (psychology)
Remote sensing
Linguistics
Chromatography
History
Philosophy
Archaeology
Chemistry
Authors
Xu Si, Xinming Wu, Hanlin Sheng, Jun Zhu, Zefeng Li
Source
Journal: IEEE Transactions on Geoscience and Remote Sensing
[Institute of Electrical and Electronics Engineers]
Date: 2024-01-01
Volume/issue: 62: 1-13
Cited by: 4
Identifier
DOI: 10.1109/tgrs.2024.3354456
Abstract
In seismology, training a separate deep learning model for each task is common, but it often faces challenges such as the scarcity of labeled data and limited regional generalization. Addressing these issues, we introduce SeisCLIP: a foundation model for seismology, leveraging contrastive learning during pre-training on multi-modal data of seismic waveform spectra and the corresponding local and global event information. SeisCLIP consists of a transformer-based spectrum encoder and an MLP-based information encoder that are jointly pre-trained on massive data. During pre-training, contrastive learning enhances the representations by training the two encoders to bring corresponding waveform spectra and event information closer in the feature space while distancing uncorrelated pairs. Remarkably, the pre-trained spectrum encoder offers versatile features, enabling its application across diverse tasks and regions. It therefore requires only modest datasets for fine-tuning to specific downstream tasks. Our evaluations demonstrate SeisCLIP's superior performance over baseline methods in tasks such as event classification, localization, and focal mechanism analysis, even when using distinct datasets from various regions. In essence, SeisCLIP emerges as a promising foundation model for seismology, potentially revolutionizing foundation-model-based research in the domain.
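The CLIP-style pre-training objective described in the abstract, pulling matched spectrum/event-information pairs together while pushing mismatched pairs apart, can be sketched as a symmetric contrastive (InfoNCE-style) loss. This is a minimal NumPy illustration of the general technique, not SeisCLIP's actual implementation; the function name, batch layout, and temperature value are assumptions for the example.

```python
import numpy as np

def clip_contrastive_loss(spec_emb, info_emb, temperature=0.07):
    """Symmetric contrastive loss over a batch of paired embeddings.

    spec_emb: (N, D) embeddings from the spectrum encoder (hypothetical).
    info_emb: (N, D) embeddings from the event-information encoder.
    Row i of each array is assumed to describe the same seismic event.
    """
    # L2-normalize so the dot product is a cosine similarity.
    spec = spec_emb / np.linalg.norm(spec_emb, axis=1, keepdims=True)
    info = info_emb / np.linalg.norm(info_emb, axis=1, keepdims=True)

    # (N, N) similarity matrix; the diagonal holds the matched pairs.
    logits = spec @ info.T / temperature
    labels = np.arange(len(spec))

    def cross_entropy(l):
        # Numerically stable log-softmax over each row.
        l = l - l.max(axis=1, keepdims=True)
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        # Negative log-probability of the correct (diagonal) match.
        return -log_probs[labels, labels].mean()

    # Average the spectrum->info and info->spectrum directions.
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))
```

In this setup the loss is minimized when each spectrum embedding is most similar to its own event's information embedding and dissimilar to every other event in the batch, which is the alignment behavior the abstract attributes to the two jointly pre-trained encoders.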