Elicitation and evidence

An important part of the SHELF process is the use of an “evidence dossier”: a summary of the quantitative and qualitative evidence related to the quantity of interest. The evidence dossier is sent to the experts before the elicitation workshop, and is available to them during the workshop itself. Typically, the data presented in such a dossier would not be ‘directly’ related to the quantity of interest, in the sense that we could not write down a likelihood function for them (otherwise, we might choose not to use expert judgement in the first place). There is a sample evidence dossier in the SHELF package.

The evidence dossier can provide structure to the (facilitated) discussion between the experts, in that they can debate the quality and relevance of each item of evidence. It does, however, raise the question that I think is at the core of the elicitation problem:

How do we decide on ‘appropriate’ probability values, given the knowledge and evidence available to us?

Of course, defining ‘appropriate’ is difficult, and no one is going to come up with mechanistic rules for the ‘correct’ distribution given a particular evidence dossier. But it is not hard to imagine inappropriate distributions, given particular evidence. For example, suppose there were to be a second referendum on the UK leaving the European Union. In the first referendum, the result was leave 52%, remain 48%. Now suppose an individual has been following developments in the news, monitoring opinion polls, and knows the first result, yet declares a uniform distribution over the interval 0% to 100% for the percentage of leave voters in a second referendum. I don’t think anyone could justify being that uncertain, given the individual’s knowledge.
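To make the contrast concrete, here is a minimal sketch in Python. The Beta(200, 185) distribution is purely an assumption I have made up for illustration (roughly centred on 52%, with a standard deviation of about 2.5 percentage points); the point is only that the available evidence clearly rules out the uniform distribution, not that this particular alternative is ‘correct’.

```python
# Contrast a uniform distribution with an illustrative concentrated one
# for the 'leave' share in a hypothetical second referendum.
# The Beta(200, 185) parameters are invented purely for illustration.
from scipy import stats

candidates = {
    "uniform": stats.uniform(loc=0, scale=1),   # flat over 0% to 100%
    "concentrated": stats.beta(a=200, b=185),   # mean ~0.52, sd ~0.025
}

for name, dist in candidates.items():
    p_below_40 = dist.cdf(0.40)          # P(leave share below 40%)
    lo, hi = dist.ppf([0.05, 0.95])      # central 90% interval
    print(f"{name:>12}: P(share < 40%) = {p_below_40:.3f}, "
          f"90% interval = ({lo:.2f}, {hi:.2f})")
```

The uniform distribution puts probability 0.4 on the leave share falling below 40%, which the individual’s knowledge surely does not support.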

One interesting paper is ‘Expert information’ versus ‘expert opinions’: another approach to the problem of eliciting/combining/using expert knowledge in PRA, by Stan Kaplan. He takes the view that it is the role of the experts to assess the evidence, and for a (suitably trained) facilitator to make the probability judgements, given the experts’ assessments. I like his emphasis on getting the experts to debate each piece of evidence, although personally, as a facilitator, I would find it too hard to propose probabilities myself (I couldn’t have done so in his first example).

I also like Bias modelling in evidence synthesis by Turner et al. They consider the problem of meta-analysis, where each item of ‘evidence’ is a published study, and experts assess each study for ‘relevance’ (was the study measuring the quantity we are interested in?) and ‘rigour’ (was the study well designed, or was it likely to produce a biased estimate of the quantity the study investigators were interested in?). Structure is given to the process by considering what an ‘ideal trial’ would be: a well-designed (bias-free) trial, constructed to estimate the target quantity of interest.
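As a rough sketch of the general idea (not the authors’ exact model, and with numbers invented purely for illustration), one can shift each study’s estimate by an elicited expected bias, inflate its variance by the elicited uncertainty about that bias, and then pool the adjusted estimates by inverse-variance weighting:

```python
# Minimal sketch of bias-adjusted pooling in meta-analysis: shift each
# study's estimate by an elicited expected bias, inflate its variance by
# the elicited bias variance, then pool by inverse-variance weighting.
# All numbers below are invented purely for illustration.
import numpy as np

estimates = np.array([0.30, 0.45, 0.10])    # reported study estimates
variances = np.array([0.02, 0.05, 0.01])    # reported sampling variances

# Elicited bias judgements, informed by assessments of relevance and rigour
bias_means = np.array([0.05, 0.00, -0.10])
bias_vars = np.array([0.01, 0.03, 0.02])

adj_estimates = estimates - bias_means      # correct for expected bias
adj_variances = variances + bias_vars       # add uncertainty about the bias

weights = 1.0 / adj_variances
pooled = np.sum(weights * adj_estimates) / np.sum(weights)
pooled_sd = np.sqrt(1.0 / np.sum(weights))
print(f"pooled estimate = {pooled:.3f}, pooled sd = {pooled_sd:.3f}")
```

A study judged less relevant or less rigorous gets a larger bias variance, so it contributes less to the pooled estimate.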

In general, though, I think elicitation methodology tends to focus on how to construct a probability distribution from a set of probability judgements, and not on how one might make those judgements in the first place. Papers that report the results of elicitation exercises tend not to say much about the rationale behind the judgements (and I’m no better at this than anyone else). I think it would be interesting to take some elicitation problems and see how far we can really go in justifying the probability judgements that are made; perhaps in time some themes will emerge. I may try to do this in some later posts…
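For contrast, the ‘constructing a distribution’ step is comparatively mechanical. As a rough sketch in Python (mimicking the general spirit of the fitting step in tools such as the SHELF package, not any particular implementation, and using hypothetical judgements), one might fit a Beta distribution to three elicited quantiles by least squares:

```python
# Fit a Beta distribution to three hypothetical elicited quantile judgements
# for a proportion, by minimising the squared distance between the fitted
# and elicited quantiles.
import numpy as np
from scipy import optimize, stats

probs = np.array([0.05, 0.50, 0.95])       # cumulative probabilities
quantiles = np.array([0.45, 0.52, 0.60])   # hypothetical elicited values

def loss(log_params):
    a, b = np.exp(log_params)              # keep shape parameters positive
    return np.sum((stats.beta.ppf(probs, a, b) - quantiles) ** 2)

result = optimize.minimize(loss, x0=np.log([10.0, 10.0]), method="Nelder-Mead")
a_hat, b_hat = np.exp(result.x)
print(f"fitted Beta({a_hat:.1f}, {b_hat:.1f}), quantiles = "
      f"{stats.beta.ppf(probs, a_hat, b_hat).round(3)}")
```

The hard part is not this fitting step, but deciding that 0.45, 0.52 and 0.60 were defensible judgements in the first place.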
