To interpret deep learning based knowledge tracing (DLKT) models, we introduce a generic interpreting method with a four-step procedure. The proposed method and procedure are applicable to DLKT models with diverse internal structures. Experimental results on three existing knowledge tracing models validate the approach, properly quantifying the individual contributions of the input question-answer pairs to each model's decision. By leveraging the calculated interpreting results, we explore the key information hidden in the DLKT models.
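The core idea of attributing a model's decision to individual question-answer pairs can be illustrated with a minimal sketch. The actual four-step procedure and the DLKT architectures are not detailed here, so the snippet below is purely a hypothetical toy: a linear scorer stands in for the model's output layer, and each pair's contribution is its additive share of the pre-sigmoid logit. All names, dimensions, and the decomposition rule are illustrative assumptions, not the paper's method.

```python
import numpy as np

# Hypothetical toy: decompose a prediction into per-input contributions.
rng = np.random.default_rng(0)

# A learner's history: 5 question-answer pairs, each encoded as a
# 4-dimensional embedding (hypothetical encoding).
qa_pairs = rng.normal(size=(5, 4))

# A toy linear scorer standing in for a DLKT model's output layer.
weights = rng.normal(size=(5, 4))

# Contribution of each QA pair = dot product with its weight slice;
# by construction the contributions sum exactly to the logit.
contributions = np.einsum("td,td->t", weights, qa_pairs)
logit = contributions.sum()
prob_correct = 1.0 / (1.0 + np.exp(-logit))

print("per-pair contributions:", np.round(contributions, 3))
print("predicted P(correct):", round(float(prob_correct), 3))
```

In a real DLKT model the mapping from inputs to the prediction is nonlinear, so an interpreting method must propagate the decision back through the network rather than read contributions off a weight matrix; the sketch only shows what a per-pair contribution vector looks like once computed.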