Same data, same question, different results: New research in the Journal of Finance documents non-standard errors

When researchers analyze data to test a hypothesis, they make many small decisions about how to handle the data and how to run the analysis. Variation in these small decisions adds up to considerable differences across findings.


In statistics, samples are drawn from a population in a data-generating process (DGP). Standard errors measure the uncertainty in estimates of population parameters. In science, evidence is generated to test hypotheses in an evidence-generating process (EGP). We claim that EGP variation across researchers adds uncertainty: nonstandard errors (NSEs). We study NSEs by letting 164 teams test the same hypotheses on the same data. NSEs turn out to be sizable, and smaller for more reproducible or higher-rated research. Adding peer-review stages reduces NSEs. We further find that participants underestimate this type of uncertainty. LINK
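The distinction can be illustrated with a small simulation. A standard error captures sampling uncertainty within a single analysis; a nonstandard error can be proxied by the dispersion of point estimates across teams that analyze the same sample but make different analytic choices. The sketch below is hypothetical (the trimming rules and numbers are illustrative assumptions, not taken from the paper):

```python
import random
import statistics

random.seed(7)

# Hypothetical setup: one shared sample drawn from a population (the DGP).
population_mean = 1.0
sample = [random.gauss(population_mean, 2.0) for _ in range(500)]

def team_estimate(data, trim_fraction):
    """One team's analysis: trim the most extreme observations, then average.

    The trimming rule stands in for the many small evidence-generating
    choices (EGP) a real team would make.
    """
    k = int(len(data) * trim_fraction)
    trimmed = sorted(data)[k:len(data) - k] if k > 0 else data
    return statistics.mean(trimmed)

# Each team picks its own trimming rule -- same data, different choices.
trim_choices = [0.0, 0.01, 0.02, 0.05, 0.10]
estimates = [team_estimate(sample, t) for t in trim_choices]

# Standard error: sampling uncertainty within a single team's analysis.
standard_error = statistics.stdev(sample) / len(sample) ** 0.5

# Nonstandard error (proxy): dispersion of estimates ACROSS teams,
# driven by analytic choices rather than by sampling.
nonstandard_error = statistics.stdev(estimates)

print("team estimates:    ", [round(e, 3) for e in estimates])
print(f"standard error:     {standard_error:.3f}")
print(f"nonstandard error:  {nonstandard_error:.3f}")
```

Even with identical data, the teams' estimates disagree, and that disagreement is a separate layer of uncertainty on top of the usual standard error.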

The page was last edited by: Department of International Economics, Government and Business // 04/25/2024