MLcps: machine learning cumulative performance score for classification problems.

Akshay, Akshay; Abedi, Masoud; Shekarchizadeh, Navid; Burkhard, Fiona C; Katoch, Mitali; Bigger-Allen, Alex; Adam, Rosalyn M; Monastyrskaya, Katia; Hashemi Gheinani, Ali (2022). MLcps: machine learning cumulative performance score for classification problems. GigaScience, 12. Oxford University Press. 10.1093/gigascience/giad108

giad108.pdf - Published Version
Available under License Creative Commons: Attribution (CC-BY).

BACKGROUND

Assessing the performance of machine learning (ML) models requires careful consideration of the evaluation metrics used. It is often necessary to utilize multiple metrics to gain a comprehensive understanding of a trained model's performance, as each metric focuses on a specific aspect. However, comparing the scores of these individual metrics for each model to determine the best-performing model can be time-consuming and susceptible to subjective user preferences, potentially introducing bias.

RESULTS

We propose the Machine Learning Cumulative Performance Score (MLcps), a novel evaluation metric for classification problems. MLcps integrates several precomputed evaluation metrics into a unified score, enabling a comprehensive assessment of the trained model's strengths and weaknesses. We tested MLcps on 4 publicly available datasets, and the results demonstrate that MLcps provides a holistic evaluation of the model's robustness, ensuring a thorough understanding of its overall performance.

CONCLUSIONS

By utilizing MLcps, researchers and practitioners no longer need to individually examine and compare multiple metrics to identify the best-performing models. Instead, they can rely on a single MLcps value to assess the overall performance of their ML models. This streamlined evaluation process saves valuable time and effort, enhancing the efficiency of model evaluation. MLcps is available as a Python package at https://pypi.org/project/MLcps/.
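The unified-score idea described above can be illustrated with a short sketch. Note that this is not the MLcps package's actual interface, which the abstract does not specify; the metric names, example values, and the simple mean-of-metrics aggregation below are assumptions chosen purely for illustration.

```python
# Conceptual sketch only: collapsing several precomputed classification
# metrics into one cumulative score. This is NOT the MLcps API; the
# aggregation rule (unweighted mean, all metrics assumed on a 0-1 scale)
# is an assumption for illustration.

def cumulative_score(metrics: dict[str, float]) -> float:
    """Average a dict of metric-name -> score values (each in [0, 1])."""
    if not metrics:
        raise ValueError("at least one metric is required")
    return sum(metrics.values()) / len(metrics)

# Precomputed evaluation metrics for two hypothetical models.
model_a = {"accuracy": 0.91, "f1": 0.88, "precision": 0.90, "recall": 0.86}
model_b = {"accuracy": 0.89, "f1": 0.91, "precision": 0.87, "recall": 0.95}

# A single number per model replaces side-by-side metric comparison.
scores = {name: cumulative_score(m)
          for name, m in [("model_a", model_a), ("model_b", model_b)]}
best = max(scores, key=scores.get)
```

With these example values, model_b ranks higher overall despite its lower accuracy, which is exactly the kind of trade-off a single cumulative score is meant to surface without manual metric-by-metric comparison.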

Item Type:

Journal Article (Original Article)

Division/Institute:

04 Faculty of Medicine > Pre-clinic Human Medicine > BioMedical Research (DBMR) > DBMR Forschung Mu35 > Forschungsgruppe Urologie

04 Faculty of Medicine > Pre-clinic Human Medicine > BioMedical Research (DBMR)
04 Faculty of Medicine > Department of Dermatology, Urology, Rheumatology, Nephrology, Osteoporosis (DURN) > Clinic of Urology

Graduate School:

Graduate School for Cellular and Biomedical Sciences (GCB)

UniBE Contributor:

Akshay, Akshay, Burkhard, Fiona Christine, Monastyrskaya-Stäuber, Katia, Hashemi Gheinani, Ali

Subjects:

600 Technology > 610 Medicine & health
600 Technology > 630 Agriculture

ISSN:

2047-217X

Publisher:

Oxford University Press

Language:

English

Submitter:

Pubmed Import

Date Deposited:

14 Dec 2023 14:29

Last Modified:

15 Dec 2023 06:01

Publisher DOI:

10.1093/gigascience/giad108

PubMed ID:

38091508

Uncontrolled Keywords:

Python package; classification problems; machine learning; model evaluation; unified evaluation score

BORIS DOI:

10.48350/190326

URI:

https://boris.unibe.ch/id/eprint/190326
