Preventing the Forecaster's Evaluation Dilemma
Authors: Malte C. Tichy
Abstract: Assume that a grocery item is sold 1'234 times on a given day. What should an ideal forecast have predicted for such a well-selling item, on average? More generally, when considering a given outcome value, should the empirical average of forecasted expectation values for that outcome ideally match it? Many people will intuitively answer the first question with "1'234, of course", and affirm the second. Perhaps surprisingly, such grouping of data by outcome induces a bias in the evaluation. An evaluation procedure aimed at verifying the absence of bias across velocities, when based on such segregation by outcome, therefore fools forecast evaluators and incentivizes forecasters to produce exaggerated (extreme) forecasts. Such anticipatory adjustments jeopardize forecast calibration and clearly worsen forecast quality; this problem was named the "Forecaster's Dilemma" by Lerch et al. in 2017 (Statistical Science 32, 106). To check for bias across velocities, forecast evaluators should instead group pairs of forecasts and outcomes by the predicted values and evaluate the empirical mean outcome per prediction bucket. Within a simple mathematical treatment of the number of items sold in a supermarket, the reader is walked through the dilemma and its circumvention.
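The contrast between the two grouping strategies can be illustrated with a minimal Python sketch. It is not taken from the paper; it simply assumes Poisson-distributed sales whose true expectation values follow a Gamma distribution, with the "ideal" forecast reporting the true expectation. Grouping by observed outcome makes the ideal forecast look biased, while grouping by forecast bucket confirms its calibration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup (assumption, not from the paper): true expected daily
# sales per item drawn from a right-skewed Gamma distribution.
n = 200_000
lam = rng.gamma(shape=2.0, scale=5.0, size=n)   # true expectation values
forecast = lam                                   # the ideal forecast predicts the expectation
outcome = rng.poisson(lam)                       # observed sales

# 1) Grouping by OUTCOME (the flawed check): for a given observed value,
#    the average ideal forecast does NOT match that value (regression to the mean).
for y in (0, 5, 20, 40):
    mask = outcome == y
    print(f"outcome = {y:>3}: mean ideal forecast = {forecast[mask].mean():6.2f}")

# 2) Grouping by FORECAST bucket (the recommended check): the average outcome
#    per bucket matches the average forecast, confirming calibration.
edges = np.array([0, 2, 5, 10, 20, 40, np.inf])
bucket = np.digitize(forecast, edges)
for b in range(1, len(edges)):
    mask = bucket == b
    if mask.any():
        print(f"forecast in [{edges[b-1]:4.0f}, {edges[b]:4.0f}): "
              f"mean forecast = {forecast[mask].mean():6.2f}, "
              f"mean outcome = {outcome[mask].mean():6.2f}")
```

Running the sketch shows average forecasts above the outcome for low observed values and below it for high observed values, even though the forecast is perfect by construction, whereas the per-bucket mean outcomes track the mean forecasts closely.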