Klaus Nehring (UC Davis) “Aggregating Experts, Aggregating Sources: The Diversity Value”

22 May 2019, 4:30 pm – 6:00 pm

Abstract: A decision maker (DM) needs to come up with a probability judgment over a set of events based on the judgments of a set of information sources such as experts. How?

There are two basic approaches, often referred to as (Supra-)Bayesian vs. “mechanical” or “axiomatic”. More appropriate terms and conceptualizations for this distinction may be “belief revision” vs. “belief (prior) construction”, or “maximalist” vs. “minimalist”.

The Bayesian approach assumes that the DM already has a prior over the joint distribution of facts (states of nature) and expert beliefs. This requires substantial prior information to ground assumptions about the prior, and in this sense a fair amount of expertise on the part of the DM himself. If the DM can claim (justified) confidence in possessing such information, the Bayesian approach is justified as a special instance of Bayesian rationality. However, such claims may rest on thin grounds; in particular, the task of estimating the relevant joint distribution from the past joint track record of expert judgments is often fraught with difficulty, not least because there is a significant risk of overfitting when expert judgments are highly correlated.

So there will be many situations in which a Bayesian approach is not workable or not viewed as sufficiently reliable. To deal with such situations, we propose a constructivist approach, which assumes the least possible input from the DM. Its minimalist stance comes in two parts. Given a set of expert weights, probabilities are aggregated by some form of weighted averaging, say linear or logarithmic opinion pooling. Such schemes can be supported on axiomatic grounds and have also proven strikingly successful in practice, typically with equal weights. But equal weights have significant limitations of their own, which may limit or even hurt estimation performance. First, some experts may have something to add, but little compared to others; adding such weak experts to the pool with equal weights may increase the reliance on noisy signals and thus, on balance, diminish performance. Second, some experts may be strong by themselves but largely duplicate the expertise of others; adding strong yet similar experts to the pool with equal weights may lead to an overreliance on certain signals compared to others, again impairing performance.
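To fix ideas, here is a minimal Python sketch of the two pooling schemes just mentioned. The function names and the numerical example are ours, purely for illustration; they are not taken from the paper.

    import numpy as np

    def linear_pool(probs, weights):
        """Weighted arithmetic average of expert probability vectors."""
        probs = np.asarray(probs, dtype=float)    # shape (n_experts, n_events)
        w = np.asarray(weights, dtype=float)
        w = w / w.sum()                           # normalise the weights
        return w @ probs

    def log_pool(probs, weights):
        """Weighted geometric average, renormalised (logarithmic pooling)."""
        probs = np.asarray(probs, dtype=float)
        w = np.asarray(weights, dtype=float)
        w = w / w.sum()
        pooled = np.exp(w @ np.log(probs))        # prod_i p_i^{w_i}, eventwise
        return pooled / pooled.sum()              # renormalise to a distribution

    # Three experts' probabilities over two exhaustive, exclusive events,
    # pooled with equal weights:
    experts = [[0.7, 0.3], [0.6, 0.4], [0.9, 0.1]]
    print(linear_pool(experts, [1, 1, 1]))   # [0.7333... 0.2666...]
    print(log_pool(experts, [1, 1, 1]))      # approx. [0.76 0.24]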

Thus, to realize the potential gains from a diverse set of experts without dilution, one needs to allow for unequal weights reflecting the differential quality and/or similarity of experts. This is where some subjective input from the DM is indispensable. To allow this input to capture judgments/information of differential quality and similarity, we require the DM to specify a “reliability function” that maps sets of experts to positive real numbers measuring their “reliability”. Heuristically, “reliability” can be thought of as “expected precision”: the reliability of a subset of experts measures the expected precision that can be obtained by aggregating their judgments (using optimal weights). Mathematically, reliability functions are non-additive set functions that are assumed to have the properties of diversity functions in the sense of Nehring and Puppe (2002, A Theory of Diversity, Econometrica). That is, they are monotone and totally submodular; the latter means that a given expert adds (weakly) less potential precision the more experts, and especially the more similar experts, there already are in the pool.
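The abstract characterizes reliability functions only by their properties, so as a concrete stand-in the sketch below uses a log-determinant set function computed on an invented expert correlation matrix; log det(I + K_S) is a standard example of a monotone submodular set function and reproduces the intended behaviour: a near-duplicate expert adds little reliability, a distinct one adds a lot. None of this is the paper's own construction.

    import numpy as np

    # Invented correlation structure: experts 0 and 1 are near-duplicates,
    # expert 2 is largely independent of both.
    K = np.array([[1.0, 0.9, 0.1],
                  [0.9, 1.0, 0.1],
                  [0.1, 0.1, 1.0]])

    def reliability(S):
        """Illustrative reliability function: log det(I + K_S).
        Monotone and submodular, so an expert's marginal contribution
        shrinks as the pool grows."""
        S = sorted(S)
        if not S:
            return 0.0
        sub = K[np.ix_(S, S)]
        return float(np.log(np.linalg.det(np.eye(len(S)) + sub)))

    # Marginal contributions to the pool {0}:
    print(reliability({0, 1}) - reliability({0}))  # ~0.47: 1 duplicates 0
    print(reliability({0, 2}) - reliability({0}))  # ~0.69: 2 is distinct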

The core task of this paper is to determine the optimal weights to be assigned to the experts on the basis of their characterization in terms of a reliability function. For this purpose, we propose and axiomatize a weighting rule called the “Diversity Value”. The Diversity Value is given by a logarithmic scoring criterion and can be viewed as minimizing a generalized relative entropy. Heuristically, the Diversity Value selects those weights that best reflect the distinct marginal contributions of each expert to the overall reliability of the available set.

We also show that the Diversity Value can be characterized as a weighted Shapley value in which the source weights are determined endogenously as a fixed point. In addition, we establish a number of other properties of interest. Notably, the Diversity Value obeys the desideratum that larger weights should be assigned to more distinct sources (the “Similarity Principle”).
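The endogenous-weight fixed point is not spelled out in this abstract, so the sketch below uses the ordinary (unweighted) Shapley value of the illustrative reliability function from above as a simplified stand-in, not the paper's Diversity Value. Even this crude version exhibits the Similarity Principle: the most distinct expert receives the largest weight.

    import numpy as np
    from itertools import permutations

    K = np.array([[1.0, 0.9, 0.1],    # experts 0 and 1 near-duplicates,
                  [0.9, 1.0, 0.1],    # expert 2 largely distinct
                  [0.1, 0.1, 1.0]])

    def reliability(S):
        S = sorted(S)
        if not S:
            return 0.0
        sub = K[np.ix_(S, S)]
        return float(np.log(np.linalg.det(np.eye(len(S)) + sub)))

    def shapley(f, n):
        """Unweighted Shapley value of the set function f on {0, ..., n-1}:
        each player's marginal contribution averaged over all join orders."""
        phi = [0.0] * n
        perms = list(permutations(range(n)))
        for order in perms:
            coalition = set()
            for i in order:
                phi[i] += f(coalition | {i}) - f(coalition)
                coalition.add(i)
        return [p / len(perms) for p in phi]

    phi = shapley(reliability, 3)
    print([p / sum(phi) for p in phi])  # expert 2 gets the largest share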

In the present paper, the characterization of experts by reliability functions is taken as a primitive. There are different ways in which one might conceptualize and structure the reliability assessment. This opens up a rich set of questions for future theoretical work and practical application.

Details

Date:
22 May 2019
Time:
4:30 pm – 6:00 pm

Organiser

CPNSS

Venue

LAK 2.06
Lakatos Building
London, WC2A 2AE, United Kingdom
Website:
http://www.lse.ac.uk/