I investigated the possible causes of the 2020 polling error in the US elections and argued that, in addition to 2020-specific errors, the 2020 and 2016 errors shared a common cause, most likely a form of response bias. Since response rates to phone polls continue to plummet, I argued that this is only the beginning of a polling crisis.
The article’s clear and simple language allowed it to reach an audience that didn’t necessarily know anything about polling to begin with. It informed the public of a problem that is set to get worse, and which affects politicians’ understanding of the populace. It also provided a moderating influence on media narratives that claim polling is already broken, arguing instead that while polling’s accuracy is about average by historical standards, its future prospects are bleak.
I calculated the correlation coefficient between the 2016 and 2020 polling errors, and combined an extensive literature review with interviews.
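The correlation step above can be sketched as follows. This is a minimal illustration, not the article’s actual analysis: the state abbreviations and error values below are hypothetical placeholders, and the article’s real dataset and methodology are not reproduced here.

```python
from statistics import mean

# Hypothetical state-level polling errors (poll margin minus actual margin),
# in percentage points. Illustrative numbers only, NOT the article's data.
err_2016 = {"WI": -6.5, "MI": -3.7, "PA": -2.6, "FL": -1.7, "NV": 1.6, "AZ": -1.9}
err_2020 = {"WI": -8.3, "MI": -5.2, "PA": -3.9, "FL": -4.3, "NV": -3.0, "AZ": -3.5}

# Align the two error series on the same set of states.
states = sorted(err_2016)
x = [err_2016[s] for s in states]
y = [err_2020[s] for s in states]

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

r = pearson(x, y)
print(f"correlation between 2016 and 2020 errors: r = {r:.2f}")
```

A strongly positive r on real data would indicate that states where polls missed in 2016 tended to miss in the same direction in 2020, which is the signature of a shared cause rather than independent, election-specific noise.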
What was the hardest part of this project?
Explaining the topic in simple language. Communicating the importance of the problem without catastrophizing. Making a strong claim (that polling is in big trouble) while being clear about the uncertainty involved. (My treatment of Ann Selzer’s polls is a notable example of this striving for epistemic humility.) The jury should know that, despite the heavy media coverage of polling error, my article added value by shifting the focus of attention onto the correlation between the 2016 and 2020 errors.
What can others learn from this project?
That heavily covered topics are worth pursuing if they are approached from a place of genuine curiosity. That it’s possible to express uncertainty and pieces of countervailing evidence without diluting one’s writing.