Computer science
Artificial intelligence
Image (mathematics)
Pattern recognition (psychology)
Machine learning
Authors
Ilknur Tuncer,Şengül Doğan,Türker Tuncer
Identifier
DOI:10.1016/j.eswa.2024.124685
Abstract
We are living in the information era. Consequently, intelligence-related research areas such as artificial intelligence have become hot topics. Within artificial intelligence, machine learning and deep learning models have frequently been used to create intelligent assistants, and deep learning is the shining star of AI. In computer vision specifically, numerous deep learning models have been proposed, leading to a competition between transformers and convolutional neural networks (CNNs). Since the introduction of Vision Transformers (ViT), many transformer models have been advocated for computer vision, often overshadowing CNNs. It is therefore valuable to propose new CNNs that showcase their prowess in image classification. This research introduces a lightweight CNN named MobileDenseNeXt. The proposed MobileDenseNeXt comprises four main blocks: (i) input, (ii) main, (iii) average pooling-based downsampling, and (iv) output. It incorporates convolution-based residual blocks and uses a depth concatenation layer to increase the number of filters. For downsampling, an average pooling operation is employed, similar to the original DenseNet. Furthermore, the swish activation function is utilized in the presented CNN. MobileDenseNeXt has approximately 1.4 million learnable parameters, categorizing it as a lightweight CNN model. Additionally, a deep feature engineering approach has been developed using MobileDenseNeXt, incorporating two feature extractors with global average pooling and dropout layers, along with 10 feature selectors, to demonstrate the transfer learning capabilities of MobileDenseNeXt. The recommended models achieved over 95% test classification accuracy on the three datasets used, demonstrating the high image classification proficiency of the proposed MobileDenseNeXt. Moreover, to show the general classification ability of the proposed model, MobileDenseNeXt was trained on the CIFAR10 dataset and reached 98.62% accuracy.
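The abstract names three architectural ingredients: depth concatenation to grow the number of filters, average pooling-based downsampling as in DenseNet, and the swish activation. The sketch below illustrates how these pieces fit together in PyTorch; the layer widths, kernel sizes, and class names are illustrative assumptions, not the paper's actual MobileDenseNeXt configuration.

```python
import torch
import torch.nn as nn

class DenseConvBlock(nn.Module):
    """Convolutional block that grows its channel count by depth
    concatenation, as described in the abstract. Sizes are assumptions."""
    def __init__(self, in_channels: int, growth: int):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, growth, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(growth),
            nn.SiLU(),  # SiLU is PyTorch's name for the swish activation
        )

    def forward(self, x):
        # depth concatenation: output has in_channels + growth channels
        return torch.cat([x, self.conv(x)], dim=1)

class AvgPoolDownsample(nn.Module):
    """Average pooling-based downsampling, similar to DenseNet transitions."""
    def __init__(self):
        super().__init__()
        self.pool = nn.AvgPool2d(kernel_size=2, stride=2)

    def forward(self, x):
        return self.pool(x)

# Toy forward pass: channels grow by `growth`, spatial size halves.
x = torch.randn(1, 16, 32, 32)
y = AvgPoolDownsample()(DenseConvBlock(16, growth=8)(x))
print(tuple(y.shape))  # channels 16 + 8 = 24, spatial 32 -> 16
```

Stacking such blocks lets the filter count increase gradually without large convolutions, which is one common way lightweight CNNs keep the parameter count low.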
This research not only demonstrates the efficiency and effectiveness of MobileDenseNeXt in biomedical image classification but also underscores the competitive potential of this model for computer vision.
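The deep feature engineering approach described in the abstract follows a common pipeline: extract deep features (here, after global average pooling), apply a feature selector, and classify the selected features. The sketch below shows one such stage with scikit-learn on synthetic data; the selector choice (mutual information) and all sizes are illustrative assumptions, since the abstract does not specify which 10 selectors are used.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

# Stand-in for deep features taken from a global average pooling layer.
X, y = make_classification(n_samples=300, n_features=256,
                           n_informative=20, random_state=0)

# One example selector out of a bank of candidates: keep the 64
# features with the highest mutual information with the labels.
selector = SelectKBest(mutual_info_classif, k=64)
X_sel = selector.fit_transform(X, y)

# Classify the selected features and report cross-validated accuracy.
clf = KNeighborsClassifier()
acc = cross_val_score(clf, X_sel, y, cv=5).mean()
print(X_sel.shape, round(acc, 3))
```

In a transfer learning setting, the same pipeline would be run on features extracted by the pretrained CNN rather than on synthetic data, with each candidate selector evaluated and the best-performing combination retained.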