The proximal support vector machine via generalized eigenvalues (GEPSVM) is one of the most successful methods for classification problems. However, GEPSVM is vulnerable to outliers, since it learns classifiers based on the squared $L_2$-norm distance without any specific strategy for handling outliers. Motivated by existing studies that improve the robustness of GEPSVM via $L_1$-norm or not-squared $L_2$-norm distance formulations, we propose a novel GEPSVM formulation, named $L_{2,p}$-GEPSVM, that minimizes the $p$-th power of the $L_2$-norm distance. This formulation weakens the negative effects of both light and heavy outliers in the data. We design an iterative algorithm to solve the general $L_{2,p}$-norm distance minimization problem and rigorously prove its convergence. In addition, we adjust the parameters of $L_{2,p}$-GEPSVM to balance accuracy against training time, which is especially useful for large datasets. Extensive experimental results indicate that $L_{2,p}$-GEPSVM improves classification performance and robustness in various experimental settings.
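For context, a plausible form of the objective is sketched below, extrapolated from the standard GEPSVM Rayleigh-quotient formulation; the abstract does not state the exact objective, and the symbols $\mathbf{a}_i$, $\mathbf{b}_j$, $m_1$, $m_2$ (the samples and sizes of the two classes) are our assumptions:
\[
\min_{(\mathbf{w},\, b) \neq \mathbf{0}} \;
\frac{\sum_{i=1}^{m_1} \left| \mathbf{w}^{\top} \mathbf{a}_i + b \right|^{p}}
     {\sum_{j=1}^{m_2} \left| \mathbf{w}^{\top} \mathbf{b}_j + b \right|^{p}},
\qquad 0 < p \le 2 .
\]
Setting $p = 2$ would recover the standard squared-$L_2$ GEPSVM objective, which reduces to a generalized eigenvalue problem; choosing $p < 2$ shrinks the contribution of large residuals, which is the presumed source of the robustness to outliers.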