In this paper, we study the performance of robust learning with the Huber loss. As an alternative to traditional empirical risk minimization schemes, Huber regression has been widely used in machine learning. We establish a new comparison theorem that characterizes the gap between the excess generalization error and the prediction error. In addition, we refine the error bounds from the perspective of statistical learning theory and improve the convergence rates in the presence of heavy-tailed noise. It is worth mentioning that a new moment condition, $\mathbb{E}[|Y|^{1+\epsilon} \mid X = x] \in L^2_{\rho_X}$, is employed in the analysis of the error bounds and learning rates.
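For reference, a minimal sketch of the loss under assumed notation (the scale parameter $\sigma > 0$ and the truncation form below are the standard parameterization and may differ from the paper's exact definition):

\[
\ell_\sigma(t) =
\begin{cases}
  \dfrac{t^2}{2}, & |t| \le \sigma, \\[4pt]
  \sigma |t| - \dfrac{\sigma^2}{2}, & |t| > \sigma.
\end{cases}
\]

The loss is quadratic near the origin and linear in the tails, which is what yields robustness to heavy-tailed noise while retaining least-squares behavior on small residuals.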