ResaPred: A Deep Residual Network With Self-Attention to Predict Protein Flexibility
Flexibility (engineering)
Residual
Computer science
Artificial intelligence
Deep learning
Biological systems
Algorithms
Mathematics
Biology
Statistics
Authors
Wei Wang, Shitong Wan, Hu Jin, Dong Liu, Hongjun Zhang, Yun Zhou, Xianfang Wang
Identifier
DOI: 10.1109/tcbbio.2024.3515200
Abstract
Grasping the intrinsic properties of protein structure is crucial for understanding the relevant biological mechanisms, and protein flexibility is a critical such property. Predicting protein flexibility is therefore of great importance for understanding molecular mechanisms. We propose a deep learning method named ResaPred, which extracts diverse features from protein sequences, such as secondary structure, torsion angles, and solvent accessibility. ResaPred is a novel deep network based on a modified 1D residual module and a self-attention mechanism, which together extract deep key features related to flexibility. The modified 1D residual module consists of three convolution layers, each followed by a batch normalization layer and a ReLU activation to mitigate exploding or vanishing gradients. Incorporating a self-attention mechanism into the architecture gives the network a significant advantage in capturing long-range dependencies within sequential data. We conduct experiments under both non-strict and strict settings and achieve state-of-the-art flexibility prediction results compared with existing methods. Furthermore, we extend our analysis to explore the correlation of protein secondary structure and solvent accessibility with flexibility. Finally, we use two important viral proteins as case studies, confirming the effectiveness of our method in recognizing the flexibility of protein structures.
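To make the described architecture concrete, below is a minimal PyTorch sketch of the two components the abstract names: a 1D residual block with three convolution layers, each followed by batch normalization and ReLU, and a self-attention layer over residue positions feeding a per-residue classifier. This is an illustrative sketch, not the authors' implementation; the class names (`ResidualBlock1D`, `ResaPredSketch`), channel widths, block and head counts, input feature dimension, and the two-class output head are all assumptions.

```python
import torch
import torch.nn as nn


class ResidualBlock1D(nn.Module):
    """Modified 1D residual block (assumed form): three Conv1d layers,
    each followed by BatchNorm1d and ReLU, with a skip connection."""

    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        padding = kernel_size // 2  # keep the sequence length unchanged
        layers = []
        for _ in range(3):
            layers += [
                nn.Conv1d(channels, channels, kernel_size, padding=padding),
                nn.BatchNorm1d(channels),
                nn.ReLU(inplace=True),
            ]
        self.body = nn.Sequential(*layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, sequence_length)
        return x + self.body(x)  # residual (skip) connection


class ResaPredSketch(nn.Module):
    """Hypothetical end-to-end sketch: residual blocks, then self-attention
    over residue positions, then a per-residue flexibility classifier."""

    def __init__(self, in_features: int, channels: int = 64,
                 num_blocks: int = 2, num_heads: int = 4, num_classes: int = 2):
        super().__init__()
        self.embed = nn.Conv1d(in_features, channels, kernel_size=1)
        self.blocks = nn.Sequential(
            *[ResidualBlock1D(channels) for _ in range(num_blocks)])
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.head = nn.Linear(channels, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, sequence_length, in_features) per-residue feature vectors
        h = self.blocks(self.embed(x.transpose(1, 2)))  # (B, C, L)
        h = h.transpose(1, 2)                           # (B, L, C)
        h, _ = self.attn(h, h, h)  # every residue attends to every other
        return self.head(h)        # per-residue flexibility logits


# Example: batch of 8 sequences, 200 residues, 40-dim input features (assumed)
model = ResaPredSketch(in_features=40)
logits = model(torch.randn(8, 200, 40))
print(logits.shape)  # torch.Size([8, 200, 2])
```

Here `nn.MultiheadAttention` stands in for whatever self-attention variant the paper actually uses; its role is the same in either case: each residue position can attend to every other position, which is how the model captures the long-range dependencies the abstract highlights.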