Unsupervised Domain Adaptation (UDA) is an effective transfer learning approach that uses labeled source data to improve classification performance on unlabeled target data. Existing UDA methods for Time Series Classification (TSC) take only time-domain data or only frequency-domain data as input and do not fuse the two, which leads to insufficient feature extraction and inaccurate alignment of the source and target distributions. We therefore propose an unsupervised Multimodal Domain Adversarial Network (MDAN) for TSC. Specifically, we adopt two feature extractors to obtain time-domain and frequency-domain feature representations, and employ three classifiers to perform TSC on the source data for training the two feature extractors. We then fuse the time-domain and frequency-domain representations of the source and target data, respectively, feed them into a unified domain discriminator for unsupervised multimodal domain adversarial learning, and combine this with the proposed Time-Frequency-domain Joint Maximum Mean Discrepancy (TF-JMMD) to accurately align the source and target distributions. Finally, we select CNN or ResNet18 as the feature extractors and carry out comprehensive experiments; the results demonstrate the state-of-the-art (SOTA) performance of MDAN.
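To illustrate the joint time-frequency alignment idea, the following is a minimal NumPy sketch of a joint MMD computed over paired time-domain and frequency-domain representations, in the style of JMMD (modality kernels multiplied elementwise before the standard MMD estimate). This is an assumption-laden sketch, not the paper's implementation: the function names, the RBF kernel choice, the bandwidth `sigma`, and the toy features are all hypothetical.

```python
import numpy as np

def rbf_kernel(x, y, sigma=1.0):
    # Pairwise RBF (Gaussian) kernel matrix between rows of x and y.
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def tf_jmmd(src_t, src_f, tgt_t, tgt_f, sigma=1.0):
    # Joint MMD over the (time, frequency) pair: the per-modality
    # kernels are multiplied elementwise (JMMD-style product kernel),
    # then the biased MMD^2 estimate is formed as usual.
    k_ss = rbf_kernel(src_t, src_t, sigma) * rbf_kernel(src_f, src_f, sigma)
    k_tt = rbf_kernel(tgt_t, tgt_t, sigma) * rbf_kernel(tgt_f, tgt_f, sigma)
    k_st = rbf_kernel(src_t, tgt_t, sigma) * rbf_kernel(src_f, tgt_f, sigma)
    return k_ss.mean() + k_tt.mean() - 2 * k_st.mean()

rng = np.random.default_rng(0)
# Toy "features": raw signals stand in for time-domain representations,
# FFT magnitudes stand in for frequency-domain representations.
src = rng.normal(0.0, 1.0, size=(64, 32))
tgt = rng.normal(0.5, 1.0, size=(64, 32))   # shifted target domain
src_t, tgt_t = src, tgt
src_f = np.abs(np.fft.rfft(src, axis=1))
tgt_f = np.abs(np.fft.rfft(tgt, axis=1))

gap = tf_jmmd(src_t, src_f, tgt_t, tgt_f, sigma=4.0)    # cross-domain discrepancy
same = tf_jmmd(src_t, src_f, src_t, src_f, sigma=4.0)   # identical sets -> 0
```

In a training loop, `gap` would be minimized jointly with the classification and adversarial losses, pulling the fused source and target feature distributions together.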