Most existing classification techniques require the assumption that training and testing samples share a consistent distribution. However, recent theoretical results reveal that enhanced classification performance may be achieved by breaking this assumption while satisfying a subtler assumption relating a prediction function to the training and testing samples. Although this subtle assumption is too difficult to leverage as a criterion for designing a single-view classifier, this study, as a first attempt, exhibits its natural yet distinctive value in designing a two-view classifier. Starting from the inconsistent-distribution assumption between training and testing samples, a new mutually teachable classification criterion is proposed, and accordingly a two-view deep interpretable Takagi-Sugeno-Kang fuzzy classifier, called Tvd-TFC, is developed. To retain both promising classification performance and high interpretability, Tvd-TFC simply takes our recent work, the deep Takagi-Sugeno-Kang fuzzy classifier (D-TSK-FC), as the basic component of the deep sub-classifier for each view. The distinctive novelty of Tvd-TFC lies in the fact that its two deep structures, one per view, are learnt alternately in a deep learning manner according to the proposed mutually teachable classification criterion. The proposed learning algorithm not only minimizes the testing error along each view but also ensures the consistency between the two views. Experimental results on two-view datasets demonstrate that Tvd-TFC achieves enhanced or at least comparable classification performance while exhibiting better interpretability than the comparative classifiers.
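The abstract does not detail the learning algorithm, but the idea of alternately training two view-specific models against both the labels and the other view's predictions can be illustrated with a minimal, hypothetical sketch. Everything below (the toy data, the linear models standing in for the deep fuzzy sub-classifiers, the blended target, and the weight `lam`) is an illustrative assumption, not the authors' Tvd-TFC algorithm:

```python
# Hypothetical sketch of alternating two-view learning with a
# consistency term; linear models stand in for the deep TSK fuzzy
# sub-classifiers, and all names/losses are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy two-view data: each view observes a different noisy projection
# of the same underlying binary labels.
n = 200
y = rng.integers(0, 2, n).astype(float)                      # shared labels
X1 = np.c_[y + 0.3 * rng.standard_normal(n), rng.standard_normal(n)]
X2 = np.c_[rng.standard_normal(n), y + 0.3 * rng.standard_normal(n)]

w1 = np.zeros(2)
w2 = np.zeros(2)
lam, lr = 0.5, 0.1            # consistency weight, gradient step size

def grad(X, w, target):
    """Gradient of mean squared error between X @ w and target."""
    return 2.0 * X.T @ (X @ w - target) / len(target)

for _ in range(200):
    # Update view 1 toward a blend of the labels and view 2's predictions.
    t1 = (y + lam * (X2 @ w2)) / (1.0 + lam)
    w1 -= lr * grad(X1, w1, t1)
    # Then update view 2 toward the labels and view 1's fresh predictions.
    t2 = (y + lam * (X1 @ w1)) / (1.0 + lam)
    w2 -= lr * grad(X2, w2, t2)

# Combine the two views by averaging their scores and thresholding.
pred = ((X1 @ w1 + X2 @ w2) / 2.0 > 0.5).astype(float)
acc = (pred == y).mean()
print(f"combined two-view accuracy: {acc:.2f}")
```

The alternating updates mirror the "mutually teachable" flavor described above: each view's target mixes the true labels with the other view's current outputs, so the two models are pushed toward agreement while still fitting the data.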