Deep reinforcement learning (DRL) has emerged as a powerful technique for autonomous racing. However, existing studies often discard valuable historical information, relying solely on dense layers to generate actions from the current state. This paper presents Sequential Actor-Critic (Seq-AC), a novel approach for autonomous racing that leverages the historical trajectory to improve learning efficiency. In addition to dense layers, we employ Recurrent Neural Networks (RNNs) and Long Short-Term Memory (LSTM) networks to construct the actor and critic networks within the Deep Deterministic Policy Gradient (DDPG) framework, complemented by the use of a continuous memory buffer. Extensive simulations demonstrate that the proposed Seq-AC method surpasses standard DDPG in both convergence speed and final training performance. By incorporating historical information, our approach enables the agent to capture long-term dependencies and make better-informed decisions. Furthermore, we investigate the impact of time-sequence length on algorithm performance, shedding light on the optimal choice for effective learning.
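
To illustrate the core idea of conditioning the policy on a sequence of past states rather than a single state, the following is a minimal PyTorch sketch of an LSTM-based actor. It is not the paper's implementation; the names (LSTMActor, state_dim, action_dim, hidden_dim) and all hyperparameter values are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LSTMActor(nn.Module):
    """Maps a short sequence of past states to a continuous action.

    state_dim, action_dim, and hidden_dim are illustrative values,
    not hyperparameters taken from the paper.
    """

    def __init__(self, state_dim=24, action_dim=2, hidden_dim=128):
        super().__init__()
        self.lstm = nn.LSTM(state_dim, hidden_dim, batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(hidden_dim, 64),
            nn.ReLU(),
            nn.Linear(64, action_dim),
            nn.Tanh(),  # bound actions (e.g., steering/throttle) to [-1, 1]
        )

    def forward(self, state_seq):
        # state_seq: (batch, seq_len, state_dim) -- a window of past observations
        # drawn from a sequential replay buffer instead of a single state.
        _, (h_n, _) = self.lstm(state_seq)
        return self.head(h_n[-1])  # last hidden state summarizes the trajectory


# Example: a batch of 32 sequences, each with 8 past states of dimension 24.
actor = LSTMActor()
actions = actor(torch.randn(32, 8, 24))  # -> shape (32, 2)
```

Within a DDPG-style training loop, a critic built the same way would take the state sequence together with the action, and the replay buffer would return contiguous windows of transitions so that the sequence length studied in the paper can be varied.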