Ecological cruising control has been studied extensively as a way to reduce vehicle energy consumption by optimizing the speed profile. However, most such controllers are impractical to deploy because they require future terrain data and impose excessive computational demands. In this paper, a real-time-capable eco-cruising strategy based on deep reinforcement learning is proposed for electric vehicles (EVs) propelled by in-wheel motors. The deep deterministic policy gradient (DDPG) algorithm is used to continuously regulate motor torque in response to road elevation changes. The learning ability, optimality, and generalization performance of the proposed strategy are verified by comparing it with the energy-economy benchmark obtained by dynamic programming (DP) and with a conventional constant-speed (CS) strategy. Simulation results show that, without a priori knowledge of the future trip, the proposed strategy saves 3.8% energy relative to the CS strategy and leaves only a small gap to the globally optimal DP solution. When tested on other driving cycles, the trained strategy exhibits good generalization and high computational efficiency (about 2 ms per simulation step), making it practical to implement. In addition, being model-free, the proposed strategy is applicable to EVs with different powertrain topologies.
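To make the control interface concrete, the following is a minimal sketch of the kind of deterministic actor a DDPG-based eco-cruising controller might use, mapping a vehicle state to a continuous motor-torque command. The state vector (speed, road grade, battery state of charge), the layer sizes, and the torque limit are illustrative assumptions and are not taken from the paper.

```python
import torch
import torch.nn as nn

class TorqueActor(nn.Module):
    """Deterministic policy: maps a vehicle state to a continuous motor-torque command.
    State layout and torque limit are illustrative assumptions, not the paper's values."""
    def __init__(self, state_dim=3, hidden=64, torque_max=200.0):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Tanh(),  # normalized action in [-1, 1]
        )
        self.torque_max = torque_max

    def forward(self, state):
        # Scale the normalized action to the physical torque range [-T_max, T_max]
        return self.torque_max * self.net(state)

# Query the policy with a hypothetical state: [speed (m/s), road grade (rad), SOC]
actor = TorqueActor()
state = torch.tensor([[20.0, 0.02, 0.8]])
torque_cmd = actor(state)  # shape (1, 1), torque command in N·m
print(torque_cmd.item())
```

In a full DDPG setup, this actor would be trained jointly with a critic network from replayed transitions; only the forward mapping needs to run on the vehicle, which is consistent with the millisecond-level per-step cost reported above.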