BioData Min. 2017 Dec 11;10:36. doi: 10.1186/s13040-017-0154-4. eCollection 2017.

PMLB: A Large Benchmark Suite for Machine Learning Evaluation and Comparison

Randal S Olson et al. BioData Min.

Abstract

Background: The selection, development, or comparison of machine learning methods in data mining can be a difficult task, depending on the target problem and the goals of a particular study. Numerous publicly available real-world and simulated benchmark datasets have emerged from different sources, but their organization and adoption as standards have been inconsistent. As such, selecting and curating specific benchmarks remains an unnecessary burden on machine learning practitioners and data scientists.

Results: The present study introduces an accessible, curated, and developing public benchmark resource to facilitate identification of the strengths and weaknesses of different machine learning methodologies. We compare meta-features among the current set of benchmark datasets in this resource to characterize the diversity of available data. Finally, we apply a number of established machine learning methods to the entire benchmark suite and analyze how datasets and algorithms cluster in terms of performance. From this study, we find that existing benchmarks lack the diversity to properly benchmark machine learning algorithms, and there are several gaps in benchmarking problems that still need to be considered.

Conclusions: This work represents another important step towards understanding the limitations of popular benchmarking suites and developing a resource that connects existing benchmarking standards to more diverse and efficient standards in the future.

Keywords: Benchmarking; Data repository; Machine learning; Model evaluation.
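The meta-feature comparison described in the Results (visualized in Fig. 2) amounts to standardizing per-dataset meta-features and projecting them onto the first two principal components. A minimal sketch of that projection step, using an invented random meta-feature matrix in place of the actual PMLB meta-features (the 165-dataset count is taken from the paper; the four column names are illustrative assumptions):

```python
import numpy as np

# Hypothetical meta-feature matrix: rows = datasets, columns = meta-features
# (e.g. n_instances, n_features, n_classes, class imbalance). Values are
# randomly generated here purely for illustration.
rng = np.random.default_rng(0)
meta = rng.normal(size=(165, 4))  # 165 datasets, 4 meta-features

# Standardize each meta-feature, then project onto the first two
# principal components via SVD (the "PCA 1" / "PCA 2" axes of Fig. 2).
z = (meta - meta.mean(axis=0)) / meta.std(axis=0)
_, _, vt = np.linalg.svd(z, full_matrices=False)
projected = z @ vt[:2].T  # shape (165, 2): one 2-D point per dataset
```

In the study these projected points are then clustered to characterize the diversity of the benchmark suite; any off-the-shelf clustering method (e.g. k-means) can be applied to `projected`.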

Conflict of interest statement

Ethics approval and consent to participate: Not applicable. All data used in this study was publicly available online and does not contain private information about any particular individual.

Consent for publication: Not applicable.

Competing interests: The authors declare that they have no competing interests.

Publisher's Note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Figures

Fig. 1 Histograms showing the distribution of meta-feature values from the PMLB datasets. Note the log scale of the y axes.

Fig. 2 Clustered meta-features of datasets in the PMLB projected onto the first two principal component axes (PCA 1 and PCA 2).

Fig. 3 Mean values of each meta-feature within PMLB dataset clusters identified in Fig. 2.

Fig. 4 (a) Biclustering of the 13 ML models and 165 datasets according to the balanced accuracy of the models using their best parameter settings. (b) Deviation from the mean balanced accuracy across all 13 ML models, highlighting datasets on which all ML methods performed similarly versus those where certain ML methods performed better or worse than others. (c) Boundaries of the 40 contiguous biclusters identified from the 4 ML-wise clusters by the 10 data-wise clusters.

Fig. 5 Accuracy of the tuned ML models on each dataset across the PMLB suite of problems, sorted by the maximum balanced accuracy obtained for that dataset.
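Figures 4 and 5 report balanced accuracy, which is the mean of per-class recalls and is therefore insensitive to class imbalance, unlike plain accuracy. A minimal sketch of the metric (not the authors' implementation; the toy labels below are invented for illustration):

```python
import numpy as np

def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recalls: each class contributes equally,
    regardless of how many instances it has."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    recalls = [np.mean(y_pred[y_true == c] == c) for c in np.unique(y_true)]
    return float(np.mean(recalls))

# Imbalanced toy example: a majority-class guesser gets 90% plain
# accuracy but only 50% balanced accuracy.
y_true = [0] * 9 + [1]
y_pred = [0] * 10
```

This is why balanced accuracy is the more meaningful comparison metric across a suite whose datasets vary widely in class balance.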

Cited by 11 articles

