Concatenation (mathematics)
Computer science
Artificial intelligence
Convolutional neural network
Pattern recognition (psychology)
Transfer learning
Contextual image classification
Feature extraction
Deep learning
Feature (linguistics)
Image (mathematics)
Artificial neural network
Mathematics
Linguistics
Combinatorics
Philosophy
Authors
Long D. Nguyen, Dongyun Lin, Zhiping Lin, Jiuwen Cao
Identifiers
DOI:10.1109/iscas.2018.8351550
Abstract
Deep convolutional neural networks (CNNs) have become one of the state-of-the-art methods for image classification in various domains. For biomedical image classification, where the number of training images is generally limited, transfer learning with CNNs is often applied. This technique extracts generic image features learned from natural image datasets, and these features can be directly adopted for feature extraction on smaller datasets. In this paper, we propose a novel deep neural network architecture based on transfer learning for microscopic image classification. In the proposed network, we concatenate the features extracted by three pretrained deep CNNs. The concatenated features are then used to train two fully-connected layers to perform classification. In experiments on both the 2D-HeLa and PAP-smear datasets, the proposed network architecture produces significant performance gains compared to a neural network that uses features extracted from only a single CNN, as well as to several traditional classification methods.
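The following is a minimal sketch of the architecture described in the abstract: three pretrained CNNs used as frozen feature extractors, their pooled features concatenated, and two fully-connected layers trained on top for classification. The specific backbones (VGG16, ResNet50, DenseNet121), the hidden-layer width, and the dropout rate are illustrative assumptions, not details given in the abstract.

```python
import torch
import torch.nn as nn
from torchvision import models

class ConcatTransferNet(nn.Module):
    """Concatenates features from three frozen pretrained CNNs and
    classifies with two fully-connected layers (a sketch, not the
    authors' exact configuration)."""

    def __init__(self, num_classes):
        super().__init__()
        # Backbone choices are assumptions; the abstract only states that
        # three pretrained deep CNNs are used as fixed feature extractors.
        vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT)
        res = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        dense = models.densenet121(weights=models.DenseNet121_Weights.DEFAULT)

        # Drop the original classifier heads; keep only the convolutional
        # feature extractors, each reduced to a flat vector.
        self.extractors = nn.ModuleList([
            nn.Sequential(vgg.features, nn.AdaptiveAvgPool2d(1), nn.Flatten()),    # 512-d
            nn.Sequential(*list(res.children())[:-1], nn.Flatten()),               # 2048-d
            nn.Sequential(dense.features, nn.AdaptiveAvgPool2d(1), nn.Flatten()),  # 1024-d
        ])
        for extractor in self.extractors:
            for p in extractor.parameters():
                p.requires_grad = False  # transfer learning: backbones are not fine-tuned

        concat_dim = 512 + 2048 + 1024
        # Two fully-connected layers trained on the concatenated features.
        self.classifier = nn.Sequential(
            nn.Linear(concat_dim, 512),
            nn.ReLU(inplace=True),
            nn.Dropout(0.5),
            nn.Linear(512, num_classes),
        )

    def forward(self, x):
        feats = [extractor(x) for extractor in self.extractors]
        return self.classifier(torch.cat(feats, dim=1))

# Example: 10 classes, as in the 2D-HeLa dataset.
model = ConcatTransferNet(num_classes=10)
out = model(torch.randn(2, 3, 224, 224))
print(out.shape)  # torch.Size([2, 10])
```

Only the classifier parameters receive gradients here, which mirrors the transfer-learning setup in the abstract where pretrained features are reused directly and only the fully-connected layers are trained on the smaller biomedical dataset.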