Bibliographic Details
| Title: | Meta simulation approach for evaluating machine learning method selection in data limited settings. |
| Authors: | Alwash, Mostafa (malwash@gmail.com); Al Hajj, Ghadi S. (ghadia@uio.no); Grytten, Ivar (ivargry@ifi.uio.no); Sandve, Geir Kjetil (geirksa@ifi.uio.no) |
| Source: | Scientific Reports, 11/19/2025, Vol. 15, Issue 1, p1-17, 17p. |
| Subject Terms: | *MACHINE learning, *CAUSAL models, *CALIBRATION, *MEDICAL care, *BENCHMARK problems (Computer science), *COMPUTER simulation |
| Abstract: | Selecting appropriate machine learning (ML) methods for domain-specific tasks remains a persistent challenge, particularly in medicine, where datasets are often small, heterogeneous, and incomplete. Traditional benchmarking strategies rely on limited observational samples, which may not capture the complexity of the underlying data-generating process (DGP). As a result, methods that perform well on available data may generalise poorly in real-world practice. We present SimCalibration, a meta-simulation framework that leverages structural learners (SLs) to infer an approximation of the data-generating process from limited data and generate synthetic datasets for large-scale benchmarking. This framework enables systematic evaluation of machine learning method selection strategies in settings where the true data-generating process is either known or can be approximated, allowing both validation against the ground truth and the generation of synthetic observations inferred from sparse samples. In rare disease research, for example, where patient cohorts are inherently small, causal relationships are often conceptualised as directed acyclic graphs (DAGs). In this work, such structures are approximated directly from observational data, extending the utility of small datasets by enabling investigators to benchmark ML methods in a controlled simulation setting before deploying them in practice. This reduces the risk of selecting models that generalise poorly and supports more reliable decision-making in sensitive healthcare contexts. Experiments demonstrate that (a) structural learners vary in their ability to recover representative simulations for benchmarking, (b) structural learner-based benchmarking reduces variance in performance estimates compared to traditional validation, and (c) in some cases, structural learner-based approaches yield rankings that more closely match true relative performance than those derived from limited datasets. These findings highlight the value of simulation-based benchmarking for domains where drawing generalisable conclusions is critical, such as medicine, and offer greater transparency into the assumptions underlying predictive decisions. [ABSTRACT FROM AUTHOR] |
| Database: | Academic Search Index |
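
The sketch below illustrates the meta-simulation loop the abstract describes: fit an approximate DGP to a small observational sample with a structural learner, simulate a large synthetic cohort from it, and benchmark candidate ML methods on the simulation. This is a hypothetical illustration, not the authors' SimCalibration code: the `LinearGaussianSL` class, the toy three-variable DAG in `true_dgp`, and the fixed topological order standing in for structure learning are all assumptions made here for demonstration.

```python
# Minimal sketch of the meta-simulation loop described in the abstract.
# NOTE: hypothetical illustration only -- LinearGaussianSL, true_dgp, and the
# fixed topological order are assumptions, not the SimCalibration framework.
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def true_dgp(n):
    """Ground-truth DGP: a toy DAG X1 -> X2 -> Y (plus a direct X1 -> Y edge)."""
    x1 = rng.normal(size=n)
    x2 = 0.8 * x1 + rng.normal(scale=0.5, size=n)
    y = 1.5 * x2 - 0.4 * x1 + rng.normal(scale=0.3, size=n)
    return np.column_stack([x1, x2]), y

class LinearGaussianSL:
    """Toy stand-in for a structural learner: one linear-Gaussian
    equation per node along a fixed topological order."""

    def fit(self, X, y):
        data = np.column_stack([X, y])
        # Root node: marginal Gaussian; later nodes: regression on predecessors.
        self.mu0, self.s0 = data[:, 0].mean(), data[:, 0].std()
        self.models, self.scales = [], []
        for j in range(1, data.shape[1]):
            m = LinearRegression().fit(data[:, :j], data[:, j])
            resid = data[:, j] - m.predict(data[:, :j])
            self.models.append(m)
            self.scales.append(resid.std())
        return self

    def simulate(self, n):
        """Draw a synthetic dataset from the fitted approximate DGP."""
        cols = [rng.normal(self.mu0, self.s0, size=n)]
        for m, s in zip(self.models, self.scales):
            parents = np.column_stack(cols)
            cols.append(m.predict(parents) + rng.normal(scale=s, size=n))
        sim = np.column_stack(cols)
        return sim[:, :-1], sim[:, -1]  # features, target

# (1) A small "observed" sample, as in data-limited settings.
X_small, y_small = true_dgp(n=80)

# (2) Approximate the DGP from the small sample, then simulate at scale.
sl = LinearGaussianSL().fit(X_small, y_small)
X_sim, y_sim = sl.simulate(n=20_000)

# (3) Benchmark candidate ML methods on the large synthetic cohort.
candidates = {
    "ridge": Ridge(),
    "random_forest": RandomForestRegressor(random_state=0),
}
X_tr, X_te, y_tr, y_te = train_test_split(X_sim, y_sim, random_state=0)
for name, model in candidates.items():
    mse = mean_squared_error(y_te, model.fit(X_tr, y_tr).predict(X_te))
    print(f"{name}: MSE on simulated benchmark = {mse:.3f}")
```

In the setting the paper targets, the fixed ordering would be replaced by an actual structural learner that recovers the DAG from observational data, and method rankings on the simulated benchmark would be compared against rankings under the true DGP, as in the paper's experiments (a)-(c).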