4.6.1 In-sample validation
In Module 3, you learned to evaluate the performance of the Decision Maker (DM) with the help of two objective measures:
- the calibration score (or statistical accuracy), and
- the information score.
This evaluation is commonly referred to as in-sample validation.
The term “in-sample” refers to the fact that the DM is evaluated on the same data that was used to construct it!
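As a rough illustration, the sketch below shows how the two scores can be computed from 5%/50%/95% quantile assessments, following the formulas of Cooke's classical model. It is not the Excalibur implementation: the function names, the per-expert intrinsic range, the uniform background measure, and the 10% overshoot are assumptions made for this example.

```python
# Minimal sketch of the two in-sample measures for a single expert,
# assuming 5%/50%/95% quantile assessments and a uniform background measure.
import numpy as np
from scipy.stats import chi2

P_INTER = np.array([0.05, 0.45, 0.45, 0.05])  # theoretical inter-quantile masses

def calibration_score(quantiles, realizations):
    """quantiles: (N, 3) array of 5/50/95% assessments; realizations: (N,) true values."""
    q = np.asarray(quantiles)
    x = np.asarray(realizations)
    # bin index (0..3) of each realization among the expert's quantiles
    bins = np.sum(x[:, None] > q, axis=1)
    s = np.bincount(bins, minlength=4) / len(x)      # empirical bin frequencies
    # relative information of the empirical frequencies w.r.t. the theoretical ones
    mask = s > 0
    rel_info = np.sum(s[mask] * np.log(s[mask] / P_INTER[mask]))
    # calibration score = p-value of the chi-square statistic 2*N*I(s, p)
    return 1.0 - chi2.cdf(2 * len(x) * rel_info, df=len(P_INTER) - 1)

def information_score(quantiles, realizations, overshoot=0.10):
    """Mean relative information w.r.t. a uniform background on the intrinsic range."""
    q = np.asarray(quantiles)
    x = np.asarray(realizations)
    scores = []
    for qi, xi in zip(q, x):
        lo, hi = min(qi[0], xi), max(qi[-1], xi)
        k = overshoot * (hi - lo)                    # assumed 10% overshoot
        l, u = lo - k, hi + k                        # intrinsic range of this item
        edges = np.concatenate(([l], qi, [u]))
        r = np.diff(edges) / (u - l)                 # background mass in each bin
        scores.append(np.sum(P_INTER * np.log(P_INTER / r)))
    return float(np.mean(scores))
```

A high calibration score (a p-value close to 1) means the realizations are statistically compatible with the stated quantiles, while a high information score means the assessments are concentrated relative to the background measure.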
But how would the DM perform on external data, that is, on questions which have not been used to construct it?
Truly external data would require the experts to answer additional questions, and that is often simply not feasible.
The standard statistical approach is to split the available data into two sets: a training set and a test set.
The training set is used to construct the DM, while the test set then plays the role of external data.
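As an illustration of this idea, the toy sketch below (an assumed workflow, not a feature of Excalibur) randomly partitions the indices of the calibration questions into a training set and a test set; the function name, the test fraction, and the number of items are hypothetical choices for this example.

```python
# Toy split of calibration-question indices into training and test sets.
import numpy as np

rng = np.random.default_rng(seed=42)

def split_items(n_items, test_fraction=0.3):
    """Randomly partition item indices into (train, test)."""
    idx = rng.permutation(n_items)
    n_test = int(round(test_fraction * n_items))
    return idx[n_test:], idx[:n_test]

train_idx, test_idx = split_items(n_items=16)
# 1. use only the training items to compute the experts' calibration and
#    information scores, and hence the DM weights;
# 2. score the resulting DM on the test items, which now act as external data.
```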
Decision Making Under Uncertainty: Introduction to Structured Expert Judgment by TU Delft OpenCourseWare is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Based on a work at https://online-learning.tudelft.nl/courses/decision-making-under-uncertainty-introduction-to-structured-expert-judgment//.