Network Augmentation (NetAug) is a recent method for improving the performance of tiny neural networks on large-scale datasets. It provides additional supervision to tiny models from larger augmented models, mitigating underfitting. However, NetAug does not fully exploit the capacity of these augmented models, leaving training resources underutilized. To fully utilize the capacity of a larger augmented model without exacerbating the underfitting of the tiny model, we propose a new method called Multi-Input Network Augmentation (MINA). MINA converts a tiny neural network into a multi-input configuration, so that only the augmented model receives more diverse inputs during training. After training, the tiny neural network can be converted back into its original single-input configuration. Our extensive experiments on large-scale datasets demonstrate that MINA is effective in improving the performance of tiny neural networks. We also show that MINA remains consistently effective on downstream tasks such as fine-grained image classification and object detection.
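The core idea of the multi-input configuration can be illustrated with a minimal sketch. This is not the authors' implementation; all names, dimensions, and the extra-branch design below are illustrative assumptions. It shows a tiny two-layer network whose hidden layer gains an auxiliary input branch during training, and how dropping that branch recovers the original single-input model exactly:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, chosen only for illustration.
D_IN, D_HID, D_OUT = 8, 4, 2

# Weights of the deployable tiny model (single-input configuration).
W1 = rng.normal(size=(D_IN, D_HID)) * 0.1
W2 = rng.normal(size=(D_HID, D_OUT)) * 0.1

# Augmentation-only weights: an extra input branch feeding the hidden
# layer. Only the training-time augmented model uses these.
W_aux = rng.normal(size=(D_IN, D_HID)) * 0.1

def tiny_forward(x):
    """Single-input configuration used at inference time."""
    return np.maximum(x @ W1, 0.0) @ W2

def augmented_forward(x, x_aux):
    """Multi-input configuration used only during training: the
    augmented model additionally receives a second, more diverse
    input view through the auxiliary branch."""
    h = np.maximum(x @ W1 + x_aux @ W_aux, 0.0)
    return h @ W2

x = rng.normal(size=(1, D_IN))

# With the auxiliary input zeroed out (i.e. the branch removed), the
# augmented model reduces to the original tiny model.
y_tiny = tiny_forward(x)
y_aug = augmented_forward(x, np.zeros_like(x))
assert np.allclose(y_tiny, y_aug)
```

In this sketch, "converting back to the single-input configuration after training" amounts to discarding `W_aux`, so deployment cost is identical to the original tiny model.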