Authors
Yingjie Tian, Shiding Sun, Jingjing Tang
Identifier
DOI:10.1016/j.neunet.2021.11.002
Abstract
Multi-view learning aims to fully exploit view-consistency and view-discrepancy to improve performance. Knowledge Distillation (KD), characterized by the so-called "Teacher-Student" (T-S) learning framework, can transfer information learned by one model to another. Inspired by knowledge distillation, we propose a Multi-view Teacher-Student Network (MTS-Net), which combines knowledge distillation and multi-view learning in a unified framework. We first redefine the teacher and student for the multi-view case. The MTS-Net is then built by jointly optimizing the view classification loss and the knowledge distillation loss in an end-to-end manner. We further extend MTS-Net to image recognition tasks and present a multi-view Teacher-Student framework with convolutional neural networks, called MTSCNN. To the best of our knowledge, MTS-Net and MTSCNN offer a new perspective on extending the Teacher-Student framework to the multi-view learning problem. We theoretically verify the mechanism of MTS-Net and MTSCNN, and comprehensive experiments demonstrate the effectiveness of the proposed methods.
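The abstract describes jointly optimizing a classification loss and a knowledge distillation loss. As a rough illustration of that idea (a minimal sketch of the standard temperature-scaled KD objective in the style of Hinton et al., not the exact MTS-Net loss; the function names and the weighting parameter `alpha` are illustrative assumptions):

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax; higher T yields a "softer" distribution.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def kd_loss(student_logits, teacher_logits, true_label,
            temperature=2.0, alpha=0.5):
    # Classification loss: cross-entropy of the student on the hard label.
    p_student = softmax(student_logits)
    ce = -math.log(p_student[true_label])
    # Distillation loss: KL divergence from the softened teacher
    # distribution to the softened student distribution.
    p_t = softmax(teacher_logits, temperature)
    p_s = softmax(student_logits, temperature)
    kl = sum(pt * math.log(pt / ps) for pt, ps in zip(p_t, p_s))
    # Combined objective; the T^2 factor keeps gradient magnitudes
    # comparable across temperatures, as in the standard KD formulation.
    return alpha * ce + (1 - alpha) * (temperature ** 2) * kl
```

When the teacher and student agree exactly, the KL term vanishes and only the classification loss remains; the distillation term grows as their softened predictions diverge.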