Computer science
Artificial intelligence
Artificial neural network
Pattern recognition (psychology)
Smoothness
Image (mathematics)
Contextual image classification
Machine learning
Mathematics
Mathematical analysis
Authors
Prashnna Kumar Gyawali,Sandesh Ghimire,Pradeep Bajracharya,Zhiyuan Li,Linwei Wang
Identifier
DOI:10.1007/978-3-030-59710-8_59
Abstract
Computer-aided diagnosis via deep learning relies on large-scale annotated data sets, which can be costly when expert knowledge is involved. Semi-supervised learning (SSL) mitigates this challenge by leveraging unlabeled data. One effective SSL approach is to regularize the local smoothness of neural functions via perturbations around single data points. In this work, we argue that regularizing the global smoothness of neural functions by filling the void in between data points can further improve SSL. We present a novel SSL approach that trains the neural network on linear mixing of labeled and unlabeled data, in both the input and latent spaces, in order to regularize different portions of the network. We evaluated the presented model on two distinct medical image data sets for semi-supervised classification of thoracic diseases and skin lesions, demonstrating its improved performance over SSL with local perturbations and SSL with global mixing at the input space only. Our code is available at https://github.com/Prasanna1991/LatentMixing.
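The core operation described in the abstract, linear mixing (mixup-style interpolation) of two samples and their label distributions, can be sketched as below. This is a minimal NumPy illustration of the general technique, not the authors' implementation; the function names, the Beta(alpha, alpha) sampling of the mixing coefficient, and the choice to keep the coefficient above 0.5 are illustrative assumptions. The same `mix` operation can be applied either to raw inputs or to hidden-layer activations (latent mixing).

```python
import numpy as np

def mix(a, b, lam):
    """Linear interpolation between two arrays with coefficient lam."""
    return lam * a + (1.0 - lam) * b

def mixup_batch(x1, y1, x2, y2, alpha=0.75, rng=None):
    """Mix one batch (e.g. labeled) with another (e.g. unlabeled with
    pseudo-labels). lam ~ Beta(alpha, alpha), folded so lam >= 0.5 to
    keep the mixed sample closer to the first batch."""
    rng = rng or np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)
    lam = max(lam, 1.0 - lam)
    return mix(x1, x2, lam), mix(y1, y2, lam)

# Usage: mix two toy batches of 2 samples with 3 features, 2 classes.
x1, y1 = np.zeros((2, 3)), np.array([[1.0, 0.0], [1.0, 0.0]])
x2, y2 = np.ones((2, 3)), np.array([[0.0, 1.0], [0.0, 1.0]])
x_mixed, y_mixed = mixup_batch(x1, y1, x2, y2)
```

Latent mixing applies the same interpolation to intermediate feature maps of the two samples as they pass through the network, which regularizes the later portion of the network rather than only its input mapping.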