# Chapter 6: Learning Best Practices for Model Evaluation and Hyperparameter Tuning

## Chapter Outline

- Streamlining workflows with pipelines
  - Loading the Breast Cancer Wisconsin dataset
  - Combining transformers and estimators in a pipeline (see the first sketch below)
- Using k-fold cross-validation to assess model performance
  - The holdout method
  - K-fold cross-validation
- Debugging algorithms with learning and validation curves
  - Diagnosing bias and variance problems with learning curves
  - Addressing over- and underfitting with validation curves
- Fine-tuning machine learning models via grid search
  - Tuning hyperparameters via grid search (see the second sketch below)
  - Exploring hyperparameter configurations more widely with randomized search
  - More resource-efficient hyperparameter search with successive halving
  - Algorithm selection with nested cross-validation
- Looking at different performance evaluation metrics
  - Reading a confusion matrix (see the third sketch below)
  - Optimizing the precision and recall of a classification model
  - Plotting a receiver operating characteristic
  - Scoring metrics for multiclass classification
- Dealing with class imbalance
- Summary
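
## Code sketches

As a quick taste of the pipeline and cross-validation topics above, here is a minimal sketch that chains a scaler and a classifier into a pipeline and scores it with stratified k-fold cross-validation. For brevity it loads the Breast Cancer Wisconsin dataset from scikit-learn's built-in copy, which may differ from how the chapter itself loads the data, and the logistic regression estimator is illustrative rather than the chapter's exact choice.

```python
# A minimal sketch, assuming scikit-learn is installed.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

# Chaining the scaler and classifier means the scaler is re-fit on each
# training fold only, so no information leaks into the test folds.
pipe = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

scores = cross_val_score(pipe, X, y, cv=StratifiedKFold(n_splits=10))
print(f"CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```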
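The second sketch tunes hyperparameters of a pipelined SVM via grid search. The parameter grid shown is an illustrative assumption, not necessarily the grid used in the chapter; note how pipeline step parameters are addressed with the `<step name>__<parameter>` convention.

```python
# An illustrative grid-search sketch, assuming scikit-learn is installed.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
pipe = make_pipeline(StandardScaler(), SVC())

# Example grid only; the chapter's actual search space may differ.
param_grid = {
    "svc__C": [0.1, 1.0, 10.0, 100.0],
    "svc__kernel": ["linear", "rbf"],
}

gs = GridSearchCV(pipe, param_grid, scoring="accuracy", cv=5, n_jobs=-1)
gs.fit(X, y)
print(gs.best_score_, gs.best_params_)
```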
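The third sketch computes a confusion matrix and the ROC AUC on a held-out test split, previewing the evaluation-metrics topics; the train/test split parameters here are arbitrary choices for illustration.

```python
# Evaluation-metrics sketch, assuming scikit-learn is installed.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=1)

pipe = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
pipe.fit(X_train, y_train)

# Rows of the confusion matrix are true classes, columns are predictions.
print(confusion_matrix(y_test, pipe.predict(X_test)))

# ROC AUC is computed from predicted probabilities, not hard labels.
print(roc_auc_score(y_test, pipe.predict_proba(X_test)[:, 1]))
```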

Please refer to the README.md file in ../ch01 for more information about running the code examples.