4.6.2 Out-of-sample validation

Course subject(s) Module 4. Data analysis using Excalibur and Validation

We then construct the DM (Decision Maker) using a training set of calibration questions and evaluate the performance of the DM on the test set of calibration questions.

This technique is called out-of-sample validation or cross-validation.

The natural next question is: how large should the training and test sets be?

Typically, 80% of the calibration questions are used for training (to construct the DM) and the remaining 20% are used for testing, that is, for evaluating the DM's performance.
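The 80/20 split can be sketched in a few lines of code. This is an illustrative sketch only, not part of Excalibur: the function name `train_test_split`, the example question labels, and the random seed are all assumptions introduced here to show how calibration questions might be partitioned before constructing and validating a DM.

```python
import random

def train_test_split(questions, train_frac=0.8, seed=0):
    """Randomly split calibration questions into training and test sets.

    Hypothetical helper for illustration; Excalibur does not expose
    this API. The default 80/20 split follows the rule of thumb above.
    """
    rng = random.Random(seed)
    shuffled = questions[:]          # copy so the input list is untouched
    rng.shuffle(shuffled)
    n_train = round(train_frac * len(shuffled))
    return shuffled[:n_train], shuffled[n_train:]

# Example: 10 calibration questions -> 8 for training, 2 for testing
questions = [f"Q{i}" for i in range(1, 11)]
train, test = train_test_split(questions)
print(len(train), len(test))
```

Shuffling before splitting avoids accidentally grouping related questions into the same set; fixing the seed makes the split reproducible so that different DMs can be compared on identical training and test questions.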

Decision Making Under Uncertainty: Introduction to Structured Expert Judgment by TU Delft OpenCourseWare is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Based on a work at https://online-learning.tudelft.nl/courses/decision-making-under-uncertainty-introduction-to-structured-expert-judgment//.