Path planning is an important component of navigation, which in turn is a core problem in robotics research. Reinforcement learning is a popular family of algorithms that learns from experience, mimicking the way humans acquire new skills. When learning a new skill, comprehensive and diverse experience helps to refine one's grasp of it; we refer to these two qualities as the depth and the breadth of experience. For the path planning task, this paper proposes an improved learning policy based on the differing demands for depth and breadth of experience at different learning stages, in which the deep Q-network that computes Q-values adopts a densely connected architecture. In the initial stage of learning, an experience value evaluation network is introduced to increase the proportion of deep experience, so that the agent grasps the rules of the environment more quickly. When the path wandering phenomenon occurs, a parallel exploration structure considers both the wandering point and other points, improving the breadth of the experience pool. In addition, the network structure is improved with dense connections, which enhances the learning and expressive capacity of the network. Experimental results show that our model improves convergence speed, planning success rate, and path accuracy. Under identical experimental conditions, the proposed method is compared with a conventional reinforcement learning method based on deep Q-networks, and all indicators of our method are significantly higher.
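To make the "dense network framework" concrete, the following is a minimal sketch (not the authors' code) of a Q-network with DenseNet-style connectivity, where each fully connected layer receives the concatenated outputs of all preceding layers and of the input state. The state and action dimensions, layer widths, and layer count are illustrative assumptions, not values taken from the paper.

```python
# A minimal sketch of a densely connected Q-network in PyTorch.
# All sizes below are assumptions for illustration only.
import torch
import torch.nn as nn


class DenseQNetwork(nn.Module):
    def __init__(self, state_dim: int, action_dim: int,
                 growth: int = 64, num_layers: int = 3):
        super().__init__()
        self.layers = nn.ModuleList()
        in_features = state_dim
        for _ in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.Linear(in_features, growth), nn.ReLU()))
            # Dense connectivity: each subsequent layer sees the
            # input state plus every earlier layer's output.
            in_features += growth
        self.head = nn.Linear(in_features, action_dim)

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        features = [state]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=-1)))
        # One Q-value per discrete action (e.g., movement direction).
        return self.head(torch.cat(features, dim=-1))


# Usage: Q-values for a batch of states (a 4-dimensional state and
# 8 movement directions are assumed here, not specified by the paper).
q_net = DenseQNetwork(state_dim=4, action_dim=8)
q_values = q_net(torch.randn(32, 4))  # shape: (32, 8)
```

Because every layer is connected to all later layers, gradients reach early layers directly and features are reused rather than recomputed, which is the usual rationale for the improved learning and expressive capacity the abstract mentions.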