The aggregation of expert judgment: do good things come to those who weight?

Risk Anal. 2015 Jan;35(1):5-11. doi: 10.1111/risa.12272. Epub 2014 Aug 25.

Abstract

Good policy making should be based on available scientific knowledge. Sometimes this knowledge is well established through research, but often scientists must simply express their judgment, particularly in risk scenarios characterized by high levels of uncertainty. Usually in such cases, the opinions of several experts will be sought in order to pool knowledge and reduce error, raising the question of whether individual expert judgments should be given different weights. We argue, against the commonly advocated "classical method," that no significant benefits are likely to accrue from unequal weighting in mathematical aggregation. Our argument hinges on the difficulty of constructing reliable and valid measures of substantive expertise upon which to base weights. Practical problems associated with attempts to evaluate experts are also addressed. While our discussion focuses on one specific weighting scheme that is currently gaining in popularity for expert knowledge elicitation, our thesis applies to externally imposed unequal weighting schemes more generally.
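
To make the contrast concrete, the sketch below illustrates what mathematical aggregation with equal versus unequal weights amounts to: a weighted linear pool of expert probability judgments. It is not taken from the paper; the expert estimates, the linear_pool helper, and the performance-based weights are hypothetical stand-ins for the calibration-derived weights that schemes such as the classical method would supply.

# Illustrative sketch (not from the paper): linear opinion pooling of expert
# probability judgments under equal vs. unequal weights. The expert estimates
# and the weights below are hypothetical.

def linear_pool(estimates, weights):
    """Combine point probability estimates as a weighted average.

    estimates: list of experts' probabilities for the same event
    weights:   non-negative weights, one per expert (normalized here)
    """
    if len(estimates) != len(weights):
        raise ValueError("one weight per expert is required")
    total = sum(weights)
    if total <= 0:
        raise ValueError("weights must sum to a positive value")
    return sum(w * p for w, p in zip(weights, estimates)) / total

# Three hypothetical experts judging the probability of the same event.
expert_probs = [0.10, 0.25, 0.40]

# Equal weighting: every expert counts the same.
equal = linear_pool(expert_probs, [1.0, 1.0, 1.0])

# Unequal (performance-based) weighting, e.g. weights derived from
# calibration scores on seed questions, as classical-method schemes propose.
performance_weights = [0.6, 0.3, 0.1]
unequal = linear_pool(expert_probs, performance_weights)

print(f"equal-weight pool:   {equal:.3f}")    # 0.250
print(f"unequal-weight pool: {unequal:.3f}")  # 0.175

With equal weights the pool is simply the mean of the experts' estimates (0.250 here); unequal weights shift the pooled value toward the more heavily weighted experts (0.175). The paper's question is whether such externally imposed unequal weights can be justified in practice.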

Keywords: Aggregation; calibration; expert judgment; knowledge elicitation; risk assessment.