Vehicular edge computing (VEC) is an emerging paradigm that mitigates the performance limitations imposed by the physical distance between vehicles and cloud servers in the Internet of Vehicles (IoV). To improve resource utilization and system scalability, this paper presents a joint communication and computation resource allocation mechanism for VEC-enhanced IoV, in which both vehicles and VEC servers act as computing service nodes. Because the offloading and resource allocation strategy depends on the time-varying environment state, we formulate the problem as a Markov decision process with the objective of minimizing the overall system overhead. We propose a Deep Reinforcement Learning (DRL) scheme that dynamically switches between distributed and centralized decision making according to the task requirements and the computing resources available at the service nodes. Finally, simulation experiments compare the performance of the centralized, multi-agent, and proposed algorithms, and the numerical results confirm that the proposed scheme outperforms the baselines.