Effectively Communicating Effect Sizes
How do people form impressions of effect size when reading the results of scientific experiments? We present a series of studies about how people perceive treatment effectiveness when scientific results are summarized in various ways. We first show that a prevalent form of summarizing scientific results—presenting mean differences between conditions—can lead to substantial overestimation of treatment effectiveness, and that including confidence intervals can, in some cases, exacerbate the problem. We next attempt to remedy these misperceptions by displaying information about variability in individual outcomes in different formats: explicit statements about variance, a quantitative measure of standardized effect size, and analogies that compare the treatment with more familiar effects (e.g., differences in height by age). We find that all of these formats substantially reduce initial misperceptions, and that effect size analogies can be as helpful as more precise quantitative statements of standardized effect size.
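To make the "quantitative measure of standardized effect size" concrete, the sketch below computes one common such measure, Cohen's d (the pooled-standard-deviation standardized mean difference). This is an illustrative choice on our part; the abstract does not name the specific measure used in the studies.

```python
import math

def cohens_d(treatment, control):
    """Standardized mean difference: (mean_t - mean_c) / pooled SD.

    Unlike a raw mean difference, this expresses the treatment effect
    relative to the variability in individual outcomes.
    """
    n_t, n_c = len(treatment), len(control)
    mean_t = sum(treatment) / n_t
    mean_c = sum(control) / n_c
    # Sample variances (Bessel-corrected)
    var_t = sum((x - mean_t) ** 2 for x in treatment) / (n_t - 1)
    var_c = sum((x - mean_c) ** 2 for x in control) / (n_c - 1)
    pooled_sd = math.sqrt(((n_t - 1) * var_t + (n_c - 1) * var_c) / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled_sd

# Two groups with a mean difference of 1 and pooled SD of 2 -> d = 0.5
print(cohens_d([2, 4, 6], [1, 3, 5]))  # 0.5
```

The same raw mean difference can correspond to a large or a small d depending on outcome variability, which is exactly the distinction the variability-aware formats above are meant to convey.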