Abstract

Subjective judgment, from expert and lay sources, is part of all human knowledge. Surveys of behaviors, attitudes, and intentions are standard in political science, psychology, and economics; subjective expert judgment informs environmental risk analysis, public policy, economic forecasts, scientific hypotheses, military doctrine, and artistic and legal interpretations. Although essential for science and policy, subjective judgment is also problematic, because it is hard to know whether experts’ judgments are truthful – stated without deception or self-deception – and whether what the experts say is true. Hence, there is a problem both of truthfulness and of truth.
I present a scoring method for collecting subjective data (forecasts, estimates) from experts, designed for situations where objective truth is intrinsically or practically unknowable. In such situations, the opinions of other experts provide the only index of judgmental quality. The method assigns high scores, not to the most common answers, but to answers that are ‘more common than collectively predicted,’ with predictions drawn from the same group that generates the answers. This simple adjustment of the scoring criterion removes any potential bias in favor of the ‘average opinion.’ Truthful answers maximize expected score even for an expert – a ‘Cassandra’ – who is sure that her judgment represents a minority view.
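The ‘more common than collectively predicted’ criterion can be illustrated numerically. The Python sketch below is only an illustration, assuming each respondent submits both an answer and a predicted distribution of the group’s answers; the function name, the log-ratio form of the information score, and the alpha-weighted prediction-accuracy term are assumptions of this sketch, not formulas stated in the abstract.

```python
import numpy as np

def information_scores(answers, predictions, alpha=1.0):
    """Score answers against the group's own predictions (illustrative sketch).

    answers     : length-n sequence of answer indices in {0, ..., m-1}
    predictions : (n, m) array; row i is respondent i's predicted
                  distribution of answers across the group
    alpha       : weight on the prediction-accuracy term (assumed parameter)
    """
    answers = np.asarray(answers)
    predictions = np.clip(np.asarray(predictions, dtype=float), 1e-9, None)
    n, m = predictions.shape

    # Empirical frequency of each answer, floored to avoid log(0).
    x_bar = np.clip(np.bincount(answers, minlength=m) / n, 1e-9, None)

    # Log of the geometric mean of the predicted frequencies for each answer.
    log_y_bar = np.log(predictions).mean(axis=0)

    # Information score: positive when your answer turns out to be
    # more common than the group collectively predicted.
    info = np.log(x_bar[answers]) - log_y_bar[answers]

    # Prediction score: rewards predictions close to the realized
    # distribution (a KL-divergence-style penalty, taken negatively).
    pred = (x_bar * (np.log(predictions) - np.log(x_bar))).sum(axis=1)

    return info + alpha * pred
```

The key design choice mirrors the abstract: what matters is not the raw frequency of an answer but the ratio of its realized frequency to the frequency the group predicted, so a truthful minority answer can still earn a high score when the group underestimates how common it is.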

*Colloquium of the Ruđer Bošković Institute and the Croatian Biophysical Society