Transferability
Computer science
Ab initio
Stability (learning theory)
Molecular dynamics
Scope (computer science)
Artificial intelligence
Algorithm
Machine learning
Computational chemistry
Chemistry
Physics
Quantum mechanics
Programming language
Reuter
Authors
D. Liu, Jianzhong Wu, Diannan Lu
Abstract
Machine learning potentials (MLPs) are poised to combine the accuracy of ab initio predictions with the computational efficiency of classical molecular dynamics (MD) simulations. While great progress has been made over the last two decades in developing MLPs, there is still much to be done to evaluate their model transferability and facilitate their development. In this work, we construct two deep potential (DP) models for liquid water near graphene surfaces, Model S and Model F, with the latter having more training data. A concurrent learning algorithm (DP-GEN) is adopted to explore the configurational space beyond the scope of conventional ab initio MD simulation. By examining the performance of Model S, we find that an accurate prediction of atomic forces does not imply an accurate prediction of the system energy; the relative deviation of the atomic forces alone is insufficient to assess the accuracy of the DP models. Based on the performance of Model F, we propose that the magnitude of the model deviation relative to the corresponding root-mean-square error on the original test dataset, for both energy and atomic forces, can serve as an indicator of the accuracy of the model prediction for a given structure, which is particularly useful for large systems where density functional theory calculations are infeasible. In addition to the prediction accuracy discussed above, we also briefly discuss simulation stability and its relationship to prediction accuracy. Both are important aspects in assessing the transferability of an MLP model.
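To make the proposed indicator concrete, the following is a minimal sketch (not the authors' code) of how a DP-GEN-style force model deviation can be computed from an ensemble of deep potential models and compared with the test-set force RMSE. The function names, the assumed RMSE value of 0.05 eV/Å, and the idea of flagging ratios well above 1 are illustrative assumptions; in practice the force predictions would come from several independently trained DP models, and an analogous comparison can be made for the energy.

# Minimal sketch (assumptions noted below), not the authors' implementation.
import numpy as np

def force_model_deviation(forces_ensemble: np.ndarray) -> float:
    """Maximum over atoms of the ensemble spread of the predicted force.

    forces_ensemble: array of shape (n_models, n_atoms, 3), holding the forces
    predicted by several independently trained DP models for one structure.
    This follows the usual DP-GEN definition:
        eps_f = max_i sqrt( < |F_i - <F_i>|^2 > ),
    where <.> averages over the model ensemble.
    """
    mean_f = forces_ensemble.mean(axis=0)                          # (n_atoms, 3)
    var_f = ((forces_ensemble - mean_f) ** 2).sum(axis=-1).mean(axis=0)  # (n_atoms,)
    return float(np.sqrt(var_f).max())

def reliability_indicator(forces_ensemble: np.ndarray, rmse_force_test: float) -> float:
    """Ratio of the model deviation to the test-set force RMSE.

    A ratio well above 1 (hypothetical threshold) suggests the structure lies
    outside the region where the model is known to be accurate.
    """
    return force_model_deviation(forces_ensemble) / rmse_force_test

# Example usage with random numbers standing in for real DP force predictions.
rng = np.random.default_rng(0)
fake_forces = rng.normal(scale=0.03, size=(4, 192, 3))   # 4 models, 192 atoms
print(reliability_indicator(fake_forces, rmse_force_test=0.05))   # 0.05 eV/A is an assumed test RMSE

The appeal of such a ratio is that it needs no new reference calculations: the ensemble spread is available from the trained models themselves, while the test-set RMSE is fixed once at training time, so the indicator can be evaluated on-the-fly for structures far too large for density functional theory.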