Music generation with artificial intelligence is a complex and captivating task. Generative adversarial networks (GANs) have shown promising results in producing realistic and diverse music. In this paper, we propose a model based on the Wasserstein GAN with gradient penalty (WGAN-GP) for multi-track music generation. The model incorporates self-attention and introduces a novel cross-attention mechanism in the generator to enhance its expressive capability. Additionally, we transpose all training music to C major to improve data consistency and quality. Experimental results demonstrate that our model produces multi-track music with improved rhythmic and timbral characteristics, converges faster, and achieves higher generation quality.
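As background for the WGAN-GP objective referenced above, the following is a minimal sketch of the standard gradient penalty term (Gulrajani et al., 2017) in PyTorch; the `critic` interface, tensor shapes, and the `lambda_gp` default are illustrative assumptions rather than the paper's exact implementation.

```python
# Minimal sketch of the WGAN-GP gradient penalty; the critic module and
# input shapes are placeholders, not the model described in this paper.
import torch

def gradient_penalty(critic, real, fake, lambda_gp=10.0):
    """Penalize deviations of the critic's gradient norm from 1 on points
    interpolated between real and generated samples."""
    batch_size = real.size(0)
    # One random interpolation coefficient per sample, broadcast over the rest.
    eps = torch.rand(batch_size, *([1] * (real.dim() - 1)), device=real.device)
    interp = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
    scores = critic(interp)
    grads = torch.autograd.grad(
        outputs=scores, inputs=interp,
        grad_outputs=torch.ones_like(scores),
        create_graph=True, retain_graph=True,
    )[0]
    grad_norm = grads.reshape(batch_size, -1).norm(2, dim=1)
    return lambda_gp * ((grad_norm - 1.0) ** 2).mean()
```

In training, this term would typically be added to the critic's Wasserstein loss at each critic update step; the generator loss is unaffected by it.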