Neural networks have been successfully applied in numerous domains, aided by high-quality training samples. However, datasets containing noise and outliers (i.e., corrupted samples) are ubiquitous in the real world, and most neural networks trained on such data exhibit poor predictive performance. In this paper, motivated by modal regression, we propose a Modal Neural Network that is robust to corrupted samples. Specifically, the modal neural network captures the most likely trend of the training samples without overfitting the corrupted ones. On the theoretical side, we establish generalization error bounds for the proposed method via Rademacher complexity. On the experimental side, numerical results demonstrate that our method is substantially effective and robust under different levels of corruption on both synthetic and real-world benchmark datasets. Furthermore, as a plug-and-play algorithm, our method can be readily combined with most neural network architectures and optimizers.
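To make the plug-and-play claim concrete, the sketch below shows one common way a modal regression objective can be attached to an ordinary network: replace the usual mean-squared-error criterion with a Gaussian-kernel loss over the residuals, which concentrates the fit on the conditional mode and down-weights corrupted samples. This is a minimal illustration under that assumption, not the paper's exact formulation; the class name `ModalLoss` and the bandwidth hyperparameter `h` are hypothetical.

```python
import torch
import torch.nn as nn

class ModalLoss(nn.Module):
    """Illustrative kernel-based modal regression loss (a sketch, not
    necessarily the paper's exact objective).

    Minimizing -mean(K_h(y - f(x))) with a Gaussian kernel K_h maximizes
    the average kernel density of the residuals, so samples with large
    residuals (likely corrupted) contribute little gradient.
    """

    def __init__(self, bandwidth: float = 1.0):
        super().__init__()
        self.h = bandwidth  # kernel bandwidth; a tunable hyperparameter

    def forward(self, pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        residual = target - pred
        # Negative mean Gaussian kernel value of the residuals.
        return -torch.exp(-residual.pow(2) / (2 * self.h ** 2)).mean()

# Plug-and-play usage: swap the criterion, keep the architecture and
# optimizer unchanged.
model = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 1))
criterion = ModalLoss(bandwidth=0.5)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x, y = torch.randn(32, 10), torch.randn(32, 1)  # toy batch
loss = criterion(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Because the robustness lives entirely in the loss, the same snippet works with any differentiable architecture and any first-order optimizer, which is the sense in which the method is plug-and-play.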