Hyper-parameter Tuning of Machine Learning Models

To improve the performance of a machine learning model, Data Scientists focus on tuning and fine-tuning the hyper-parameters of Machine Learning (ML) models, in addition to Feature Handling and Model Ensembling. Hyper-parameter tuning plays a vital role in achieving higher accuracy from an ML model.

In this post, we will apply the XGBoost algorithm to the Breast Cancer dataset published in the UCI Machine Learning Repository, tune various hyper-parameters, and see how this improves the performance of the model.
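As a starting point, here is a minimal baseline sketch of training XGBoost on the data, not the exact notebook from the PDF. The file name `breast-cancer.csv`, the `Class` label column, and the starting parameter values are illustrative placeholder assumptions.

```python
# Hedged baseline sketch: assumes the UCI breast cancer data has been
# downloaded to "breast-cancer.csv" with a categorical "Class" label
# column; file name and column name are illustrative placeholders.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier

df = pd.read_csv("breast-cancer.csv")

# One-hot encode categorical features and encode the label as integers.
X = pd.get_dummies(df.drop(columns=["Class"]))
y = df["Class"].astype("category").cat.codes

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y
)

# Baseline model with arbitrary ("random") starting parameters.
model = XGBClassifier(
    n_estimators=100,
    max_depth=3,
    learning_rate=0.1,
    eval_metric="logloss",
)
model.fit(X_train, y_train)

print("Baseline accuracy:", accuracy_score(y_test, model.predict(X_test)))
```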

The complete notebook, with code and outputs, is available as a downloadable PDF document.

The initial accuracy of the XGBoost model, reported in the PDF document above, is 73.26% with arbitrary starting parameters. After tuning 6 different parameters, the accuracy increased by 1.16 percentage points to 74.42%. Although the improvement is marginal, largely because the dataset is very small, the document shows how hyper-parameters can be tuned with GridSearchCV to improve model performance.
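The sketch below shows what such a GridSearchCV tuning step could look like, assuming the `X_train`/`y_train`/`X_test`/`y_test` split from the baseline sketch above. The six parameters and their candidate values are illustrative assumptions, not the exact grid used in the PDF.

```python
# Hedged tuning sketch: a 6-parameter grid searched with GridSearchCV.
from sklearn.model_selection import GridSearchCV
from xgboost import XGBClassifier

param_grid = {
    "max_depth": [3, 5, 7],
    "learning_rate": [0.01, 0.1, 0.3],
    "n_estimators": [100, 200],
    "min_child_weight": [1, 5],
    "subsample": [0.8, 1.0],
    "colsample_bytree": [0.8, 1.0],
}

grid = GridSearchCV(
    estimator=XGBClassifier(eval_metric="logloss"),
    param_grid=param_grid,
    scoring="accuracy",
    cv=5,
    n_jobs=-1,
)
grid.fit(X_train, y_train)

print("Best parameters:", grid.best_params_)
print("Best CV accuracy:", grid.best_score_)
print("Test accuracy:", grid.best_estimator_.score(X_test, y_test))
```

With a grid this size, GridSearchCV fits every combination (144 here) across 5 folds, so on larger datasets a randomized or staged search over one or two parameters at a time is a common way to keep the run time manageable.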