6.1.1 Overview: Cross Validation & Regularization
Course subject(s)
Module 06. Cross Validation & Regularization
In section 6, we will cover cross validation and regularization techniques for linear regression models. Cross validation is essential for tuning hyperparameters and for estimating the performance of machine learning models; we discuss its main variants and their trade-offs. Regularization techniques constrain a model's complexity in order to avoid overfitting. We discuss two techniques, ridge regression and Lasso regression, and we explain their differences.
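To make the trade-offs concrete, the sketch below compares the three validation strategies mentioned above on a synthetic dataset. This is a minimal illustration using scikit-learn, not part of the course materials; the dataset, model, and parameter values are arbitrary choices for demonstration.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import (train_test_split, cross_val_score,
                                     KFold, LeaveOneOut)

# Synthetic regression data (assumption: any regression dataset would do).
X, y = make_regression(n_samples=60, n_features=5, noise=5.0, random_state=0)

model = LinearRegression()

# Holdout: one random split; cheap, but the estimate varies with the split.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
holdout_r2 = model.fit(X_tr, y_tr).score(X_te, y_te)

# 5-fold CV: every sample is validated exactly once; averages 5 estimates.
kfold_r2 = cross_val_score(
    model, X, y, cv=KFold(n_splits=5, shuffle=True, random_state=0)).mean()

# LOOCV: n fits, nearly unbiased but expensive. We report MSE here because
# R^2 is undefined on a single held-out point.
loocv_mse = -cross_val_score(model, X, y, cv=LeaveOneOut(),
                             scoring="neg_mean_squared_error").mean()

print(f"holdout R^2:     {holdout_r2:.3f}")
print(f"5-fold mean R^2: {kfold_r2:.3f}")
print(f"LOOCV mean MSE:  {loocv_mse:.3f}")
```

The holdout estimate depends on a single random split, while the cross-validation variants trade extra model fits for a lower-variance estimate.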
After this section you can:
- Explain the pros and cons of the holdout method, k-fold cross validation (CV), and leave-one-out cross validation (LOOCV)
- Explain when validation data or nested cross validation is necessary and why
- Explain what regularization is
- Recognize when it’s useful to apply regularization
- Explain the difference between Lasso and Ridge regularization
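One difference between Lasso and ridge can be shown directly: Lasso's L1 penalty drives some coefficients to exactly zero, while ridge's L2 penalty only shrinks them. The snippet below is a minimal sketch using scikit-learn; the dataset and the penalty strength `alpha=10.0` are illustrative assumptions, not values from the course.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge, Lasso

# Data where only 3 of the 10 features actually influence the target.
X, y = make_regression(n_samples=100, n_features=10, n_informative=3,
                       noise=1.0, random_state=0)

ridge = Ridge(alpha=10.0).fit(X, y)
lasso = Lasso(alpha=10.0).fit(X, y)

# Ridge shrinks all coefficients toward zero but rarely to exactly zero;
# Lasso sets the uninformative ones to exactly zero (built-in feature selection).
n_zero_ridge = int(np.sum(ridge.coef_ == 0))
n_zero_lasso = int(np.sum(lasso.coef_ == 0))
print("ridge coefficients equal to zero:", n_zero_ridge)
print("lasso coefficients equal to zero:", n_zero_lasso)
```

Inspecting `lasso.coef_` shows which features survive the penalty, which is why Lasso is often used when a sparse, interpretable model is desired.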
AI skills for Engineers: Supervised Machine Learning by TU Delft OpenCourseWare is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Based on a work at https://online-learning.tudelft.nl/courses/ai-skills-for-engineers-supervised-machine-learning/