Deep learning has achieved remarkable success on unstructured data, but its inherent black-box nature limits its use in security-sensitive domains. Existing interpretable machine learning methods partially address this issue, yet they often suffer from restrictions on the models they can explain, randomness in the explanations they produce, and a lack of global interpretability. To address these challenges, this paper introduces EnEXP, an interpretable ensemble tree method. EnEXP generates a sample set by applying fixed perturbations to an individual sample, constructs multiple decision trees over that set using bagging and boosting, and derives the sample's explanation from the feature-importance outputs of those trees; aggregating the explanations of all samples then yields a global interpretation of the entire dataset. Experimental results demonstrate that EnEXP offers stronger explanatory power than comparable interpretability methods. In text-processing experiments, a bag-of-words model optimized with EnEXP outperformed a fine-tuned GPT-3 Ada model.
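To make the perturb-fit-aggregate pipeline concrete, the sketch below illustrates one plausible reading of the procedure; it is not the authors' implementation. It assumes a black-box classifier exposed as a callable `black_box` that returns labels, uses Gaussian noise as the fixed perturbation, and stands in for the bagging and boosting steps with scikit-learn's `DecisionTreeClassifier` (on bootstrap resamples) and `GradientBoostingClassifier`. The function names (`local_importance`, `global_importance`) and parameters (`noise`, `n_perturb`, `n_trees`) are hypothetical choices for illustration.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import GradientBoostingClassifier

def perturb_sample(x, n_perturb=200, noise=0.1, rng=None):
    """Generate a local sample set by applying fixed Gaussian perturbations to x."""
    rng = np.random.default_rng(rng)
    return x + noise * rng.standard_normal((n_perturb, x.shape[0]))

def local_importance(x, black_box, n_trees=25, rng=0):
    """Explain one sample: fit trees on its perturbed neighborhood, average importances."""
    X_pert = perturb_sample(x, rng=rng)
    y_pert = black_box(X_pert)  # labels from the model being explained
    boot = np.random.default_rng(rng)

    # Bagging: fit decision trees on bootstrap resamples of the perturbed set.
    importances = []
    for _ in range(n_trees):
        idx = boot.integers(0, len(X_pert), len(X_pert))
        tree = DecisionTreeClassifier(max_depth=4).fit(X_pert[idx], y_pert[idx])
        importances.append(tree.feature_importances_)

    # Boosting: a gradient-boosted ensemble on the same perturbed set.
    # (Assumes the perturbed labels contain at least two classes.)
    gb = GradientBoostingClassifier(n_estimators=n_trees, max_depth=3)
    gb.fit(X_pert, y_pert)
    importances.append(gb.feature_importances_)

    return np.mean(importances, axis=0)

def global_importance(X, black_box):
    """Aggregate per-sample explanations into a global feature ranking."""
    return np.mean([local_importance(x, black_box) for x in X], axis=0)
```

Given a trained classifier `clf`, calling `global_importance(X_test, clf.predict)` would return one averaged importance score per feature, mirroring the paper's aggregation of all per-sample explanations into a dataset-level interpretation.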