7.1.1 Overview of Classifier Evaluation


In this section, you will learn about classifier evaluation. A classifier can be evaluated using its accuracy or error rate, but what counts as a “good” accuracy? 90%? 99.9%? How can we judge this? Furthermore, accuracy is a single number, while a thorough evaluation often requires more detail. For this we use a confusion matrix, from which we can determine the numbers of true and false positives and true and false negatives. We will explain this terminology and why we need it. Finally, many classifiers do not predict a label directly but first output a numerical value called the score; the score is compared to a threshold to arrive at a prediction. Using the concept of the ROC (receiver operating characteristic) curve, we discuss how to find the optimal threshold for a classifier.
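As a concrete illustration, the minimal sketch below thresholds classifier scores into predictions, reads off the true and false positives and negatives from a confusion matrix, and sweeps the threshold to obtain the points of an ROC curve. It assumes Python with NumPy and scikit-learn (not prescribed by this section), and the labels and scores are invented for illustration only.

    # Illustrative sketch: confusion matrix, accuracy, and ROC points.
    # The labels and scores below are invented example data.
    import numpy as np
    from sklearn.metrics import confusion_matrix, accuracy_score, roc_curve

    y_true = np.array([0, 0, 1, 1, 0, 1, 1, 0])                    # true labels
    scores = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.9, 0.6, 0.55])  # classifier scores

    # A prediction is made by comparing each score to a threshold.
    threshold = 0.5
    y_pred = (scores >= threshold).astype(int)

    # Confusion matrix for a binary problem: rows are true classes,
    # columns are predicted classes.
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    print(f"TP={tp}  FP={fp}  TN={tn}  FN={fn}")
    print("accuracy:", accuracy_score(y_true, y_pred))
    print("error rate:", 1 - accuracy_score(y_true, y_pred))

    # Sweeping the threshold over all scores traces out the ROC curve:
    # false positive rate (FPR) versus true positive rate (TPR).
    fpr, tpr, thresholds = roc_curve(y_true, scores)
    for f, t, th in zip(fpr, tpr, thresholds):
        print(f"threshold={th:.2f}  FPR={f:.2f}  TPR={t:.2f}")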

After this section, you can:

    • Place the accuracy or error rate in context using class priors or dummy classifiers (see the sketch after this list)
    • Compute an ROC curve given some data and their scores
    • Use a confusion matrix to compute the accuracy or error rate, overall or per class
    • Use an ROC curve to determine optimal operating points given costs
    • Perform model selection using an ROC curve
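For the first objective, the following sketch (again assuming scikit-learn; the imbalanced data is synthetic) shows why accuracy needs context: a dummy classifier that always predicts the most frequent class reaches an accuracy equal to the largest class prior without learning anything.

    # Illustrative sketch: a majority-class baseline with a dummy classifier.
    # The synthetic data is deliberately imbalanced (about 90% class 0).
    import numpy as np
    from sklearn.dummy import DummyClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 2))             # features (ignored by the dummy)
    y = (rng.random(1000) < 0.1).astype(int)   # ~10% positives, class prior ~0.9

    dummy = DummyClassifier(strategy="most_frequent").fit(X, y)
    print("baseline accuracy:", dummy.score(X, y))   # close to 0.9

    # On this data, 90% accuracy is no better than always predicting
    # class 0; a real classifier should clearly beat this baseline.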
AI skills for Engineers: Supervised Machine Learning by TU Delft OpenCourseWare is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Based on a work at https://online-learning.tudelft.nl/courses/ai-skills-for-engineers-supervised-machine-learning/