Computer science
Artificial neural network
Artificial intelligence
Deep neural network
Scale (ratio)
Object (grammar)
Deep learning
Machine learning
Pattern recognition (psychology)
Quantum mechanics
Physics
Authors
Seokhyeon Ha, Yeongmo Kim, Jungwoo Lee
Source
Journal: IEEE Access
[Institute of Electrical and Electronics Engineers]
Date: 2023-01-01
Volume: 11, Pages: 106289-106298
Identifier
DOI:10.1109/access.2023.3319313
Abstract
Network Augmentation (NetAug) is a recent method used to improve the performance of tiny neural networks on large-scale datasets. This method provides additional supervision to tiny models from larger augmented models, mitigating the issue of underfitting. However, the capacity of the augmented models is not fully utilized, resulting in underutilization of resources. In order to fully utilize the capacity of a larger augmented model without exacerbating the underfitting of a tiny model, we propose a new method called Multi-Input Network Augmentation (MINA). MINA converts tiny neural networks into a multi-input configuration, allowing only the augmented model to receive more diverse inputs during training. Additionally, tiny neural networks can be converted back into their original single-input configuration after training. Our extensive experiments on large-scale datasets demonstrate that MINA is effective in improving the performance of tiny neural networks. We also demonstrate that MINA is consistently effective in downstream tasks, such as fine-grained image classification tasks and object detection tasks.
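The core idea in the abstract — a tiny model temporarily gains a second input path during training, then reverts to its original single-input form — can be illustrated with a minimal, framework-free sketch. This is not the paper's actual implementation; the layer shapes, the auxiliary mixing function, and all names here (`tiny_forward`, `mina_forward`, `w_aux`) are illustrative assumptions.

```python
def tiny_forward(x, w):
    """Original single-input tiny model, sketched as one scalar-weight layer.

    This is the configuration deployed after training.
    """
    return [w * v for v in x]


def mina_forward(x, x_extra, w, w_aux):
    """Hypothetical multi-input configuration used only during training.

    The augmented path mixes in a second, more diverse input (x_extra)
    through an auxiliary weight before reusing the tiny model's layer.
    """
    mixed = [a + w_aux * b for a, b in zip(x, x_extra)]
    return tiny_forward(mixed, w)


# After training, the auxiliary branch is simply dropped: inference calls
# tiny_forward with the original single input, so the deployed model is
# identical in structure to the original tiny network.
print(mina_forward([1.0], [1.0], 2.0, 0.5))  # training-time, two inputs
print(tiny_forward([1.0], 2.0))              # deployment, single input
```

The point of the sketch is that the extra input only ever touches the auxiliary weight, so removing that branch recovers the unmodified single-input model, matching the claim that the conversion is reversible after training.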