Abstract

Neural networks and random forests are popular and promising tools for machine learning. This article explores a principled integration of these two approaches for nonparametric regression that improves on the performance of either approach alone. Specifically, we propose a neural network estimator with local enhancement provided by random forests, which naturally synthesizes the local adaptivity of random forests and the strong global approximation ability of neural networks. Within the classical empirical risk minimization framework, we establish a nonasymptotic error bound for the estimator. By utilizing advanced U-process theory and an appropriate network structure, we further improve the convergence rate to the nearly minimax rate. Moreover, with the assistance of random forests, we can implement gradient learning with neural networks. Comprehensive simulation studies and real data applications demonstrate the superiority of our proposal.