3.8.4 Combined score of the Decision Maker (2)

Course subject(s) Module 3. Performance-based weights and the Decision Maker

So, what we saw in the last exercise was that removing an expert who has a low calibration score might actually decrease the performance of the PWDM.

This is, of course, not always the case!

But can we find a subset of experts for which the combined score of the Decision Maker is the highest possible?

What are the possible subsets?
– all three experts,
– two experts, and
– one expert.

Experts are removed using the calibration score criterion: the expert with the lowest calibration score is removed first.

Well, let’s see that for our example!

The options are then
– Expert A, Expert B and Expert C ==> PWDM3 (PWDM computed using all 3 experts)
– Expert B and Expert C ==> PWDM2 (PWDM computed using 2 experts)
– Expert B (since Expert B has the highest calibration score!) ==> PWDM1 (PWDM computed using 1 expert)
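The removal order above can be sketched in a few lines of Python. This is a minimal illustration, not course code; the calibration values are taken from the experts' score table further down the page.

```python
# Sketch: build the candidate expert subsets by repeatedly removing the
# expert with the lowest calibration score (values from the course table).
calibration = {"Expert A": 3.621e-5, "Expert B": 0.608, "Expert C": 0.327}

subsets = []
pool = dict(calibration)
while pool:
    subsets.append(sorted(pool))       # record the current subset
    weakest = min(pool, key=pool.get)  # expert with the lowest calibration
    del pool[weakest]                  # remove that expert

for s in subsets:
    print(s)
# ['Expert A', 'Expert B', 'Expert C']
# ['Expert B', 'Expert C']
# ['Expert B']
```

The three subsets printed correspond exactly to PWDM3, PWDM2 and PWDM1 above.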

Recall the calibration and information scores of the PWDM and EWDM, computed once with Expert A's assessments and once without them (PWDM_updated and EWDM_updated).

                                  Calibration score   Information score
PWDM                              0.608               0.52
EWDM                              0.411               0.12
PWDM_updated (without Expert A)   0.608               0.22
EWDM_updated (without Expert A)   0.411               0.17
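The combined score of a Decision Maker is the product of its calibration and information scores. Assuming that definition, the table's values can be multiplied out in a quick sketch:

```python
# Combined score = calibration score x information score
# (values taken from the table above).
dms = {
    "PWDM":                            (0.608, 0.52),
    "EWDM":                            (0.411, 0.12),
    "PWDM_updated (without Expert A)": (0.608, 0.22),
    "EWDM_updated (without Expert A)": (0.411, 0.17),
}

combined = {name: cal * info for name, (cal, info) in dms.items()}
for name, score in combined.items():
    print(f"{name}: {score:.5f}")
# PWDM: 0.31616
# EWDM: 0.04932
# PWDM_updated (without Expert A): 0.13376
# EWDM_updated (without Expert A): 0.06987
```

Note that the PWDM computed with all experts keeps the highest combined score here, consistent with the observation at the start of this section.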

In considering PWDM1, we look for the expert with the highest calibration score in the initial pool of experts. For this, consider all experts’ calibration scores along with their information scores.

           Calibration score   Information score
Expert A   3.621E-005          1.418
Expert B   0.608               0.558
Expert C   0.327               1.354

Which one has the highest combined score?
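Before checking, try computing the answer yourself. Assuming again that the combined score is the product of calibration and information, the experts' scores from the table can be multiplied out as follows (the winner is left for the code to reveal):

```python
# Combined score = calibration score x information score
# (values taken from the experts' score table above).
experts = {
    "Expert A": (3.621e-5, 1.418),
    "Expert B": (0.608, 0.558),
    "Expert C": (0.327, 1.354),
}

combined = {name: cal * info for name, (cal, info) in experts.items()}
best = max(combined, key=combined.get)
print(best, round(combined[best], 4))
```

Notice that the expert with the highest calibration score is not necessarily the one with the highest combined score.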

Decision Making Under Uncertainty: Introduction to Structured Expert Judgment by TU Delft OpenCourseWare is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Based on a work at https://online-learning.tudelft.nl/courses/decision-making-under-uncertainty-introduction-to-structured-expert-judgment//.