We study reinforcement learning for continuous-time Markov decision processes (MDPs) in the finite-horizon episodic setting. In contrast to discrete-time MDPs, in a continuous-time MDP the intertransition times are exponentially distributed, with rate parameters that depend on the state–action pair at each transition. We present a learning algorithm that combines value iteration with upper confidence bounds. We derive an upper bound on the worst-case expected regret of the proposed algorithm and establish a matching worst-case lower bound, with both bounds of the order of the square root of the number of episodes. Finally, we conduct simulation experiments to illustrate the performance of our algorithm.

Funding: X. Gao is supported by the Hong Kong Research Grant Council [Grants 14201421, 14212522, 14200123]. X. Zhou gratefully acknowledges financial support through the Nie Center for Intelligent Asset Management at Columbia.
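
To make the model concrete, the following minimal Python sketch simulates one finite-horizon episode of a continuous-time MDP with exponentially distributed holding times whose rates depend on the current state–action pair. It is an illustrative assumption-laden example, not the paper's algorithm or notation: the names `rates`, `transition_probs`, `rewards`, `policy`, and `horizon`, as well as the reward-accrual convention, are hypothetical placeholders.

```python
import numpy as np

# Minimal sketch (not the paper's algorithm): simulate one finite-horizon
# episode of a continuous-time MDP. Holding times are Exp(lambda(s, a)),
# where the rate depends on the current state-action pair, and the next
# state is drawn from a transition kernel p(. | s, a).

rng = np.random.default_rng(0)

n_states, n_actions = 3, 2
horizon = 10.0                                                  # episode length T (assumed)
rates = rng.uniform(0.5, 2.0, (n_states, n_actions))           # lambda(s, a), hypothetical
transition_probs = rng.dirichlet(np.ones(n_states),
                                 size=(n_states, n_actions))   # p(s' | s, a), hypothetical
rewards = rng.uniform(0.0, 1.0, (n_states, n_actions))         # reward rate r(s, a), hypothetical

def policy(state, t):
    # Placeholder policy; a learning algorithm would instead pick actions
    # from optimistic (UCB-style) value estimates built from observed data.
    return rng.integers(n_actions)

def run_episode():
    t, state, total_reward = 0.0, 0, 0.0
    while t < horizon:
        action = policy(state, t)
        holding = rng.exponential(1.0 / rates[state, action])   # exponential holding time
        dwell = min(holding, horizon - t)                       # truncate at the horizon
        total_reward += rewards[state, action] * dwell          # reward accrued while dwelling
        t += holding
        if t < horizon:
            state = rng.choice(n_states, p=transition_probs[state, action])
    return total_reward

print(run_episode())
```

A learning agent would replace `policy` with action choices derived from estimated rates and transition probabilities plus confidence bonuses, and the regret would compare the reward collected over many such episodes against that of an optimal policy.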