This paper deals with model-free adaptive control (MFAC) based on a reinforcement learning (RL) strategy for a family of discrete-time nonlinear processes. The controller is constructed by exploiting the approximation ability of neural networks, and a new actor-critic algorithm for the neural network control problem is developed to estimate the strategic utility function and the performance index function. More specifically, the novel RL-based MFAC scheme allows the controller to be designed without the need to estimate the future output y(k+1). Furthermore, using the Lyapunov stability analysis method, the closed-loop system is shown to be uniformly ultimately bounded. Simulations are presented to validate the theoretical results.
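
To make the scheme concrete, the following is a minimal Python/NumPy sketch of one possible actor-critic loop of this general kind. It is not the paper's algorithm: the network architecture, the Gaussian exploration policy, the quadratic stage cost, the toy plant, and all hyperparameters are illustrative assumptions. The point it demonstrates is only the structural property stated above: the actor computes the control input from currently measured data, so the controller itself never requires an estimate of y(k+1); the plant model appears solely to simulate the process.

# Minimal actor-critic sketch (illustrative assumptions throughout; not
# the paper's design). The critic approximates the performance index and
# the actor approximates the control law, both with single-hidden-layer
# networks whose hidden weights are fixed at random.
import numpy as np

rng = np.random.default_rng(0)
H = 24                        # hidden-layer width (assumed)
Vc = rng.normal(size=(H, 2))  # fixed random hidden weights (assumed)
Va = rng.normal(size=(H, 2))
Wc = np.zeros(H)              # critic output weights: estimated cost-to-go
Wa = np.zeros(H)              # actor output weights: control law
eta_c, eta_a = 0.05, 0.01     # learning rates (assumed)
gamma, sigma = 0.9, 0.3       # discount factor, exploration std (assumed)

def phi(V, x):
    """Hidden features: tanh of a random projection of measured data."""
    return np.tanh(V @ x)

def plant(y, u):
    """Unknown nonlinear process; used only to generate simulation data."""
    return 0.6 * np.sin(y) + 1.2 * u

y, yd = 0.0, 1.0              # plant output and constant reference (assumed)
costs = []
for k in range(3000):
    x = np.array([y, yd - y])            # current measurements only
    mu = Wa @ phi(Va, x)                 # actor mean control
    u = mu + sigma * rng.normal()        # exploratory control input
    y_next = plant(y, u)                 # one simulated plant step
    x_next = np.array([y_next, yd - y_next])

    cost = (yd - y_next) ** 2            # stage cost (assumed quadratic)
    # Critic: temporal-difference update of the estimated cost-to-go.
    J, J_next = Wc @ phi(Vc, x), Wc @ phi(Vc, x_next)
    delta = cost + gamma * J_next - J
    Wc += eta_c * delta * phi(Vc, x)
    # Actor: Gaussian policy gradient; a negative TD error (better than
    # expected) reinforces the action just taken.
    Wa += eta_a * (-delta) * (u - mu) / sigma**2 * phi(Va, x)

    costs.append(cost)
    y = y_next

print(f"mean cost, first 100 steps: {np.mean(costs[:100]):.3f}")
print(f"mean cost, last 100 steps:  {np.mean(costs[-100:]):.3f}")

Note that the update of Wa uses only the measured pair (x, u) and the critic's current estimate, which mirrors, in spirit, how an RL-based MFAC design can avoid explicit prediction of y(k+1) inside the controller.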