Zhuoyi Li, B.-Q. Chen, Zhu Ning, Wenjun Li, Tianming Liu, Lei Guo, Junwei Han, Tuo Zhang, Zhi-Qiang Yan
Source
Journal: IEEE Transactions on Instrumentation and Measurement [Institute of Electrical and Electronics Engineers]  Date: 2025-01-01  Volume/Issue: 1-1
Identifier
DOI: 10.1109/tim.2025.3527489
Abstract
High-performance methods for automated seizure detection in stereo-electroencephalography (SEEG) have important clinical research value, improving diagnostic efficiency and reducing the burden on physicians. However, few studies have considered the process of seizure propagation, and thus fail to fully capture the deep representations and variations of SEEG in the temporal, spatial, and spectral domains. In this paper, we construct a novel long-term SEEG seizure dataset (the XJSZ dataset) and propose the Signal Embedding Temporal-Spatial-Spectral Transformer (SE-TSS-Transformer) framework. First, we design a signal embedding module to reduce feature dimensionality and adaptively construct an optimal representation for subsequent analysis. Second, we integrate a unified multi-scale temporal-spatial-spectral analysis to capture multi-level, multi-domain deep features. Finally, we use a transformer encoder to learn the global relevance of features, enhancing the network's ability to represent SEEG characteristics. Experimental results demonstrate state-of-the-art detection performance on the XJSZ dataset, achieving sensitivity, specificity, and accuracy of 99.03%, 99.34%, and 99.03%, respectively. Furthermore, we validate the scalability of the proposed framework on two public datasets with different signal sources, demonstrating the strength of the SE-TSS-Transformer framework in capturing diverse multi-scale temporal-spatial-spectral patterns for seizure detection.
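The abstract outlines a three-stage pipeline (signal embedding, multi-scale temporal-spatial-spectral analysis, transformer encoder) without implementation details. The following is a minimal PyTorch sketch of that flow for orientation only; the layer types, channel counts, kernel sizes, and the class name `SETSSTransformerSketch` are assumptions for illustration and are not taken from the paper.

```python
import torch
import torch.nn as nn


class SETSSTransformerSketch(nn.Module):
    """Hypothetical sketch of the pipeline described in the abstract:
    signal embedding -> multi-scale feature extraction -> transformer
    encoder -> seizure / non-seizure classification head.
    All sizes and layer choices are illustrative assumptions."""

    def __init__(self, n_channels=64, d_model=128, n_classes=2):
        super().__init__()
        # Signal embedding: compress raw SEEG into a compact token sequence.
        self.embed = nn.Sequential(
            nn.Conv1d(n_channels, d_model, kernel_size=16, stride=8),
            nn.GELU(),
        )
        # Multi-scale analysis: parallel convolutions with different
        # receptive fields stand in for the temporal-spatial-spectral branches.
        self.branches = nn.ModuleList([
            nn.Conv1d(d_model, d_model, kernel_size=k, padding=k // 2)
            for k in (3, 7, 15)
        ])
        self.fuse = nn.Conv1d(3 * d_model, d_model, kernel_size=1)
        # Transformer encoder learns global relevance across the token sequence.
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                      # x: (batch, channels, samples)
        z = self.embed(x)                      # (batch, d_model, tokens)
        z = torch.cat([b(z) for b in self.branches], dim=1)
        z = self.fuse(z).transpose(1, 2)       # (batch, tokens, d_model)
        z = self.encoder(z)                    # global attention over tokens
        return self.head(z.mean(dim=1))        # (batch, n_classes)


if __name__ == "__main__":
    # Toy usage: a batch of 8 SEEG windows, 64 contacts x 1024 samples each.
    model = SETSSTransformerSketch()
    logits = model(torch.randn(8, 64, 1024))
    print(logits.shape)  # torch.Size([8, 2])
```

The per-window logits would be thresholded or softmaxed to obtain the seizure/non-seizure decisions against which sensitivity, specificity, and accuracy are reported.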