In this paper, we propose a timestamp knowledge distillation (TKD) method that adopts privileged knowledge distillation to enhance the performance of deep neural network (DNN)-based target sound extraction (TSE). While previous studies have mainly used n-hot vectors, termed weak labels (WLs), to indicate the type of target sound events (SEs), recent studies have demonstrated that timestamp knowledge of SEs provides meaningful information for improving TSE performance. To utilize this timestamp knowledge, we use oracle strong labels (OSLs), which indicate when target SEs occur within an audio clip, as privileged information. However, OSLs are more difficult to obtain than WLs in real-world applications. We therefore propose TKD, which transfers the timestamp knowledge from a teacher model trained with both WLs and OSLs to a student model trained with only WLs via a loss function. Experimental results across multiple DNN architectures confirmed that OSLs significantly enhance TSE performance. Moreover, TKD notably improved the student model's performance compared to the baseline trained with only WLs.
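To make the teacher-to-student transfer concrete, the following is a minimal PyTorch sketch of one plausible form of such a distillation objective: a reconstruction term on the student's extracted signal plus a term pulling the student's intermediate representation toward the teacher's. The choice of MSE for both terms, the weight `lambda_kd`, and all module and tensor names (`teacher`, `student`, `student_feat`, etc.) are illustrative assumptions, not the paper's actual formulation.

```python
import torch
import torch.nn as nn


class TKDLoss(nn.Module):
    """Sketch of a privileged knowledge-distillation loss:
    TSE reconstruction loss + feature-matching distillation term."""

    def __init__(self, lambda_kd: float = 0.5):
        super().__init__()
        self.lambda_kd = lambda_kd  # assumed weighting between the two terms
        self.mse = nn.MSELoss()

    def forward(self, student_out, target, student_feat, teacher_feat):
        # Reconstruction loss on the extracted waveform (MSE used here
        # purely for illustration; TSE systems often use SI-SDR instead).
        loss_tse = self.mse(student_out, target)
        # Distillation term: detach the teacher so gradients flow only
        # through the student.
        loss_kd = self.mse(student_feat, teacher_feat.detach())
        return loss_tse + self.lambda_kd * loss_kd


# Hypothetical usage with random tensors standing in for model outputs.
# In the assumed setup, the teacher is conditioned on WLs and OSLs while
# the student is conditioned on WLs only:
#   teacher_feat = teacher(mixture, wl, osl)
#   student_out, student_feat = student(mixture, wl)
batch, frames, dim, samples = 2, 100, 64, 16000
student_out = torch.randn(batch, samples)
target = torch.randn(batch, samples)
student_feat = torch.randn(batch, frames, dim)
teacher_feat = torch.randn(batch, frames, dim)
loss = TKDLoss(lambda_kd=0.5)(student_out, target, student_feat, teacher_feat)
print(loss.item())
```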