In this paper, we propose a batch gradient neuro-fuzzy learning algorithm with smoothing L0 regularization (BGNFSL0) for the first-order Takagi-Sugeno system. L0 regularization tends to produce the sparsest solution; however, solving it is an NP-hard problem, so it cannot be directly used in designing a regularized gradient neuro-fuzzy learning method. By exploiting a series of smoothing functions to approximate the L0 regularizer, the proposed BGNFSL0 avoids the NP-hard nature of the original L0 regularization while inheriting its advantage of producing the sparsest solution. In this way, BGNFSL0 can prune the network efficiently during the learning procedure and thus improve its generalization capability. Simulations comparing BGNFSL0 with several other popular regularization learning methods show that it performs best both in producing parsimonious networks and in generalization capability.
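As a minimal sketch of the smoothing idea (not necessarily the paper's exact formulation): one smoothing family commonly used to approximate the L0 norm is the Gaussian surrogate f_sigma(w) = 1 - exp(-w^2 / (2 sigma^2)), whose sum over the weights approaches the count of nonzero weights as sigma -> 0, yet remains differentiable so its gradient can be added to a batch gradient update. The function names, the Gaussian family, and the update rule below are illustrative assumptions.

```python
import numpy as np

def smoothed_l0(w, sigma):
    """Differentiable surrogate for ||w||_0 (assumed Gaussian smoothing family):
    approaches the exact count of nonzero entries as sigma -> 0."""
    return np.sum(1.0 - np.exp(-w**2 / (2.0 * sigma**2)))

def smoothed_l0_grad(w, sigma):
    """Gradient of the surrogate with respect to w, usable in gradient descent."""
    return (w / sigma**2) * np.exp(-w**2 / (2.0 * sigma**2))

def batch_update(w, grad_E, eta=0.01, lam=1e-3, sigma=0.1):
    """One batch gradient step on the regularized objective E(w) + lam * R(w).
    grad_E is the (problem-specific) gradient of the data-fit error, assumed given."""
    return w - eta * (grad_E(w) + lam * smoothed_l0_grad(w, sigma))

# Illustrative usage: with a zero data-fit gradient, the penalty alone
# pushes small weights toward zero, which is the pruning effect.
w = np.array([0.5, -0.02, 0.0, 1.3])
w_new = batch_update(w, grad_E=lambda w: np.zeros_like(w))
```

In SL0-style schemes, sigma is typically decreased gradually over training so that the surrogate tracks the L0 norm ever more closely while the optimization stays smooth; the schedule here is left unspecified, as the abstract does not detail it.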