Adversarial examples pose a serious threat to deep neural networks because of their transferability. Existing transfer-based attacks typically improve the transferability of adversarial examples by corrupting intrinsic features. However, prior work has generally relied on single-dimensional or additive importance estimates, which represent feature importance inaccurately. In this work, we propose the Multi-Feature Attention Attack (MFAA), which fuses feature representations from multiple layers to disrupt category-related features and thereby improve the transferability of adversarial examples. First, MFAA introduces a layer-aggregation gradient (LAG) to obtain guidance maps that reflect feature importance at multiple scales. Second, it generates ensemble attention (EA) from the guidance maps, preserving object-specific features while offsetting model-specific ones. Third, EA is iteratively perturbed to produce highly transferable adversarial examples. Empirical evaluation on the standard ImageNet dataset shows that adversarial examples crafted by MFAA effectively attack a variety of networks. Compared with state-of-the-art transferable attacks, MFAA raises the average success rate against defended black-box models from 88.5% to 94.1% for single-model attacks and from 86.6% to 95.1% for ensemble attacks. Our code is available on GitHub: https://github.com/KWPCCC/MFAA.
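To make the three-step pipeline above concrete, the following is a minimal PyTorch sketch of one plausible reading of it: guidance maps estimated by averaging intermediate-feature gradients over randomly masked inputs (step 1), a guidance-weighted feature sum fused across layers as the attention-style objective (step 2), and iterative sign-gradient perturbation under an L-infinity budget (step 3). The model, layer choices, masking scheme, and all hyperparameters are illustrative assumptions, not the paper's exact formulation; see the linked repository for the authors' implementation.

```python
import torch
import torchvision.models as models

# Illustrative hyperparameters (assumptions, not the paper's settings)
EPS, ALPHA, STEPS = 16 / 255, 1.6 / 255, 10
N_MASKS, KEEP_PROB = 8, 0.7  # random masking used to estimate guidance maps

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
layers = {"mid": model.layer2, "deep": model.layer3}  # multi-scale feature taps (assumed)

feats = {}
def tap(name):
    def hook(_module, _inp, out):
        out.retain_grad()       # keep .grad on this intermediate feature map
        feats[name] = out
    return hook
for name, layer in layers.items():
    layer.register_forward_hook(tap(name))

def guidance_maps(x, y):
    """Step 1 (one possible reading of LAG): average the gradients of the
    true-class logit w.r.t. each tapped feature map over masked copies of x."""
    maps = {name: 0.0 for name in layers}
    for _ in range(N_MASKS):
        model.zero_grad()
        masked = x * torch.bernoulli(torch.full_like(x, KEEP_PROB))
        logits = model(masked)
        logits.gather(1, y[:, None]).sum().backward()
        for name in layers:
            maps[name] = maps[name] + feats[name].grad.detach()
    # Normalize per layer so multi-scale maps contribute on a comparable scale
    return {n: g / (g.flatten(1).norm(dim=1).view(-1, 1, 1, 1) + 1e-12)
            for n, g in maps.items()}

def mfaa_style_attack(x, y):
    g = guidance_maps(x, y)
    x_adv = x.clone()
    for _ in range(STEPS):
        x_adv.requires_grad_(True)
        model.zero_grad()
        model(x_adv)  # features captured by the forward hooks
        # Steps 2-3: fuse guidance-weighted features across layers and push the
        # aggregate down, suppressing category-related evidence in x_adv
        loss = sum((g[n] * feats[n]).sum() for n in layers)
        loss.backward()
        with torch.no_grad():
            x_adv = x_adv - ALPHA * x_adv.grad.sign()
            x_adv = x + (x_adv - x).clamp(-EPS, EPS)  # project into the eps-ball
            x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()
```

Because the objective is a guidance-weighted feature sum rather than the classification loss of one surrogate model, minimizing it degrades the mid-level evidence that different networks tend to share, which is the intuition behind the transferability gains the abstract reports.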