The Dunning-Kruger effect probably is real

Figure 1 from Kruger & Dunning (1999).
Figure from this blog post, generated by Patrick McKnight.
A noise + bias model of a participant, where the participant's true ability is x. The objective measure of ability, o, is a noisy measurement of x, and the subjective self-estimate of ability, s, is a noisy measurement of x plus a bias term.
import numpy as np
from scipy.stats import norm
import matplotlib.pyplot as plt


def generate_data(N=1_000_000, bias=0, σo=1, σs=1):
    # true ability
    x = norm.rvs(size=N)
    # objective measure of ability: true ability plus measurement noise
    o = x + norm.rvs(loc=0, scale=σo, size=N)
    # subjective self-estimate: true ability plus bias plus noise
    s = x + bias + norm.rvs(loc=0, scale=σs, size=N)
    # group participants into quartiles based on the objective measure
    q = np.digitize(o, np.percentile(o, [25, 50, 75])) + 1
    return (x, o, s, q)
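As a quick sanity check (my own addition, not code from the post), we can confirm that the quartile grouping splits participants evenly and that, with bias=0, the subjective estimates and objective scores agree on average:

```python
import numpy as np
from scipy.stats import norm

def generate_data(N=1_000_000, bias=0, σo=1, σs=1):
    # same noise + bias model as above
    x = norm.rvs(size=N)                                # true ability
    o = x + norm.rvs(loc=0, scale=σo, size=N)           # objective measure
    s = x + bias + norm.rvs(loc=0, scale=σs, size=N)    # subjective estimate
    q = np.digitize(o, np.percentile(o, [25, 50, 75])) + 1
    return (x, o, s, q)

N = 100_000
x, o, s, q = generate_data(N=N, bias=0, σo=2, σs=2)
counts = np.bincount(q)[1:]       # participants per quartile
mean_diff = np.mean(s - o)        # average self-estimate minus observed score
print(counts, round(mean_diff, 2))
```

With a continuous objective score there are essentially no ties, so each quartile holds almost exactly N/4 participants, and with no bias the mean difference between subjective and objective scores is close to zero.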
def plot_subjective_ability(o, s, q, ax):
    # calculate mean subjective and objective scores for each quartile
    s_mean = [np.mean(s[q == group]) for group in [1, 2, 3, 4]]
    o_mean = [np.mean(o[q == group]) for group in [1, 2, 3, 4]]
    # convert to percentiles, based on the distribution of observed scores
    s_mean = norm.cdf(s_mean, loc=0, scale=np.std(o)) * 100
    # o_mean is not plotted here; the identity line serves as the
    # objective reference
    ax.plot([1, 2, 3, 4], s_mean, "o-", lw=6, ms=12,
            label="subjective ability")
def format_quartile_plot(ax=None):
    if ax is None:
        ax = plt.gca()
    ax.plot([1, 4], [12.5, 87.5], "k-", label="identity line")
    ax.set(
        xlabel="Quartile of observed performance",
        ylabel="Percentile estimate",
        xticks=[1, 2, 3, 4],
        yticks=np.linspace(0, 100, 11),
        ylim=[0, 100],
    )
    ax.legend()
fig, ax = plt.subplots(figsize=(6, 6))
x, o, s, q = generate_data(bias=0, σo=2, σs=2)
plot_subjective_ability(o, s, q, ax)
format_quartile_plot(ax)
A random model with measurement error and no estimation bias. This model captures the basic Dunning-Kruger effect: people of low measured ability seemingly overestimate their performance, and people of high measured ability seemingly underestimate theirs.
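To quantify that claim (a sketch of my own, with an explicit seed for reproducibility): with bias set to zero, the bottom quartile's average self-estimate lands well above its average observed percentile, and the top quartile's lands well below it, purely through regression to the mean:

```python
import numpy as np
from scipy.stats import norm

# Noise-only model: bias = 0, so any apparent over/under-estimation
# is regression to the mean, not a real self-assessment bias.
rng = np.random.default_rng(42)
N = 200_000
x = rng.normal(size=N)                   # true ability
o = x + rng.normal(scale=2, size=N)      # objective measure
s = x + rng.normal(scale=2, size=N)      # subjective estimate (no bias)
q = np.digitize(o, np.percentile(o, [25, 50, 75])) + 1

results = {}
for group in [1, 4]:
    o_pct = norm.cdf(np.mean(o[q == group]), scale=np.std(o)) * 100
    s_pct = norm.cdf(np.mean(s[q == group]), scale=np.std(o)) * 100
    results[group] = (round(o_pct, 1), round(s_pct, 1))
print(results)  # quartile -> (objective percentile, subjective percentile)
```

Because s and o are independent noisy readings of the same x, conditioning on a low o drags the group's expected x (and hence its expected s) back toward the mean, and symmetrically for a high o.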
fig, ax = plt.subplots(figsize=(6, 6))
# define parameters, each tuple is (bias, σo, σs)
parameter_set = [(0, 2, 2), (+1, 2, 2), (-1, 2, 2)]
for θ in parameter_set:
    bias, σo, σs = θ
    x, o, s, q = generate_data(bias=bias, σo=σo, σs=σs)
    plot_subjective_ability(o, s, q, ax)
format_quartile_plot(ax)
The blue line corresponds to a medium-difficulty task with no bias. The orange line corresponds to an easy task where participants overestimate their ability (bias = +1 std). The green line corresponds to a hard task where participants underestimate their ability (bias = -1 std). This replicates the simulations, and the empirical results, of Burson et al. (2006).
  • A noise only model, with no bias, is capable of generating systematic over- and under-estimation of one’s abilities even though there is no systematic bias present in the model.
  • This alone may lead one to believe that the Dunning-Kruger effect is artifactual, a result of measurement error alone. However, this conclusion is incorrect.
  • The empirical observations are better accounted for by a noise + bias model as I presented above. But serious readers should check out Burson et al (2006) for a much more in-depth treatment of this topic.
  • Take home message: the Dunning-Kruger effect probably is real, but psychological interpretations of it are best grounded in the noise + bias model.
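One way to see the difference between the two models (again a sketch of my own, not analysis from the post): averaging s − o cancels the zero-mean noise and recovers the bias term directly, unaffected by the regression-to-the-mean artifact. A noise-only model therefore cannot reproduce the systematic shifts seen in the easy- and hard-task curves:

```python
import numpy as np

def estimate_bias(bias, σo=2, σs=2, N=200_000, seed=0):
    # s - o = bias + (zero-mean noise), so mean(s - o) estimates the
    # bias term regardless of how noisy either measurement is
    rng = np.random.default_rng(seed)
    x = rng.normal(size=N)
    o = x + rng.normal(scale=σo, size=N)
    s = x + bias + rng.normal(scale=σs, size=N)
    return np.mean(s - o)

estimates = {b: estimate_bias(b) for b in (0, +1, -1)}
print({b: round(e, 2) for b, e in estimates.items()})
```

The recovered values sit close to 0, +1, and −1 respectively, matching the bias parameters fed into the three simulated tasks above.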




Benjamin Vincent
Lecturer at University of Dundee, Scotland, UK