Imbalanced data significantly impacts the efficacy of machine learning models. When one class greatly outnumbers the other in sample count, models tend to become biased toward the majority class, degrading performance on the minority class. Imbalanced data also increase the risk of overfitting, since the model may memorize majority-class samples instead of learning the underlying patterns. This paper addresses these challenges in classification by exploring a range of solutions, including under-sampling, oversampling, SMOTE, cost-sensitive learning, and ensemble deep learning methods. We evaluate the performance of these methods on different datasets and provide insights into their strengths and limitations. The paper presents a taxonomy of strategies for imbalanced binary and multi-class classification problems, spanning resampling, algorithmic, and hybrid methods. Finally, the paper offers guidelines for selecting the most appropriate method for mitigating imbalanced-data challenges in a given classification context.