The twin delayed deep deterministic policy gradient (TD3) algorithm and the genetic (G) algorithm can take significant time to converge. Hence, it is of interest to propose an alternative algorithm for fast gains learning in a high-gain controller, which is reflected in fast trajectory tracking. In a differential evolution (DE) algorithm, the population is initialized, and the mutation, crossover, and selection operations are repeated until convergence is reached. In this way, a DE algorithm can converge faster than the TD3 and G algorithms. In this article, fast gains learning in a DE high-gain controller (DEHGC) is proposed. The DEHGC combines a high-gain controller for trajectory tracking with a DE algorithm for fast gains learning. The error stability of the high-gain controller is assured. The pseudocode of the DEHGC is detailed. The DE, TD3, and G algorithms are compared for fast gains learning in the high-gain controller.
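The DE loop described above (initialize the population, then repeat mutation, crossover, and selection until convergence) can be sketched as follows. This is a minimal, self-contained DE/rand/1/bin sketch, not the article's DEHGC implementation; the cost function, gain bounds, and hyperparameters (`F`, `CR`, population size) are illustrative assumptions.

```python
import random

def differential_evolution(cost, bounds, pop_size=20, F=0.8, CR=0.9, max_gens=200):
    """Minimal DE/rand/1/bin sketch: initialize, then mutate, cross over, select."""
    dim = len(bounds)
    # Initialization: sample the population uniformly inside the gain bounds
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [cost(x) for x in pop]
    for _ in range(max_gens):
        for i in range(pop_size):
            # Mutation: combine three distinct random members (rand/1 strategy)
            a, b, c = random.sample([j for j in range(pop_size) if j != i], 3)
            mutant = [pop[a][d] + F * (pop[b][d] - pop[c][d]) for d in range(dim)]
            # Crossover: binomial mixing of mutant and target vectors
            j_rand = random.randrange(dim)
            trial = [mutant[d] if (random.random() < CR or d == j_rand) else pop[i][d]
                     for d in range(dim)]
            # Keep trial gains inside the bounds
            trial = [min(max(trial[d], bounds[d][0]), bounds[d][1]) for d in range(dim)]
            # Selection: keep the better of trial and target
            f_trial = cost(trial)
            if f_trial <= fit[i]:
                pop[i], fit[i] = trial, f_trial
    best = min(range(pop_size), key=lambda i: fit[i])
    return pop[best], fit[best]

# Hypothetical stand-in cost: squared distance of two controller gains
# from known-good values (a real DEHGC would score tracking error instead)
random.seed(0)
target = (5.0, 1.5)
cost = lambda g: (g[0] - target[0]) ** 2 + (g[1] - target[1]) ** 2
gains, err = differential_evolution(cost, [(0.0, 10.0), (0.0, 10.0)])
```

Because each generation only needs one cost evaluation per trial vector and no gradient or replay buffer, such a loop tends to locate usable gains in far fewer evaluations than TD3-style training, which is the motivation stated above.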