TY - JOUR
T1 - A Comprehensive Study on the Interplay between Dataset Characteristics and Oversampling Methods
AU - Yang, Yue
AU - Fang, Tangtangfang
AU - Hu, Jinyang
AU - Goh, Chang Chuan
AU - Zhang, Honghao
AU - Cai, Yongmei
AU - Bellotti, Anthony Graham
AU - Lee, Boon Giin
AU - Ming, Zhong
N1 - Publisher Copyright:
© 2025 University of Nottingham Ningbo China.
PY - 2025
Y1 - 2025
N2 - Addressing class imbalance through oversampling with machine learning methods requires careful selection of techniques and classifiers for optimal outcomes. While the importance of technique choice is well recognized, research on how dataset characteristics affect classification results remains limited. This study fills this gap by analyzing 16 datasets, categorized by financial relevance, temporal relevance, minority rate, minority sample count, and feature count. The effectiveness of various oversampling techniques is systematically evaluated and ranked using F1 and AUC, providing a structured framework for assessing the suitability of these techniques across diverse datasets. The evaluation involved 15 classifiers, resulting in 75 models that combine four oversampling techniques and a baseline classifier. A ranking mechanism identified five top-performing models, emphasizing that classifier performance depends on the choice of oversampling method and the dataset type. Notably, the traditional Synthetic Minority Oversampling Technique (SMOTE) outperformed Generative Adversarial Network (GAN)-based approaches across different classifiers and datasets. Among classifiers, random forest proved the most robust across all dataset types, surpassing boosting-based classifiers. Overall, this study provides valuable insights into selecting optimal oversampling methods and classifiers for specific dataset characteristics, offering a framework for addressing class imbalance in various contexts.
AB - Addressing class imbalance through oversampling with machine learning methods requires careful selection of techniques and classifiers for optimal outcomes. While the importance of technique choice is well recognized, research on how dataset characteristics affect classification results remains limited. This study fills this gap by analyzing 16 datasets, categorized by financial relevance, temporal relevance, minority rate, minority sample count, and feature count. The effectiveness of various oversampling techniques is systematically evaluated and ranked using F1 and AUC, providing a structured framework for assessing the suitability of these techniques across diverse datasets. The evaluation involved 15 classifiers, resulting in 75 models that combine four oversampling techniques and a baseline classifier. A ranking mechanism identified five top-performing models, emphasizing that classifier performance depends on the choice of oversampling method and the dataset type. Notably, the traditional Synthetic Minority Oversampling Technique (SMOTE) outperformed Generative Adversarial Network (GAN)-based approaches across different classifiers and datasets. Among classifiers, random forest proved the most robust across all dataset types, surpassing boosting-based classifiers. Overall, this study provides valuable insights into selecting optimal oversampling methods and classifiers for specific dataset characteristics, offering a framework for addressing class imbalance in various contexts.
KW - Class imbalance
KW - data characteristics
KW - machine learning
KW - oversampling
UR - http://www.scopus.com/inward/record.url?scp=85215119515&partnerID=8YFLogxK
U2 - 10.1080/01605682.2025.2450060
DO - 10.1080/01605682.2025.2450060
M3 - Article
AN - SCOPUS:85215119515
SN - 0160-5682
JO - Journal of the Operational Research Society
JF - Journal of the Operational Research Society
ER -