My career in journalism began at roughly the same time as this pandemic did. “Boring” is a word that may spring to mind for many people when I try to explain what I’ve been doing since: if you are interested in calculating confidence intervals, methodologies, the importance of saying “I am really unsure”, and the difference between knowing what to expect for a group and what to expect for an individual member of that group, you are probably an outlier. But even if you are not, please read on anyway. What I’ve spent much of my time doing – statistical modelling – may not be worthy of a thriller in terms of the action it entails: mostly thinking, sometimes coding. But the journalism it has enabled me to create is, at least to me, as important and exciting as anything.
Short bio: I joined The Economist in February of 2020. Before that I was at Princeton University (PhD research, completed in December of 2020), and before that I did a BA at NYU.
Description of portfolio:
I am the author of all the linked projects: I did the writing, research and modelling. The visuals (including the interactives) were created – in my view masterfully – by our designers and visual data journalists, based on the data I provided.
This submission has links from two big projects, both about the pandemic, and two smaller ones.
The first is a covid-19 risk estimator, with an associated article and methodology. The estimator is an online tool that lets people enter pre-existing conditions (e.g. asthma, diabetes and hyperlipidemia) and see the risk of hospitalization and death, by age and gender, should they be diagnosed with covid-19. Each time someone uses the tool, a machine-learning model running in the cloud generates a prediction. The tool, and the data used to construct it, was also the basis of an investigation into covid-19 illness.
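The model behind the published tool is far more elaborate than anything that fits here, but the general shape of such a risk estimator can be sketched. Everything below is illustrative only: the coefficients, variable names and logistic form are placeholder assumptions, not the actual model or its estimates.

```python
import math

# Hypothetical per-condition effects on the log-odds of hospitalization.
# These numbers are made up for illustration, not real estimates.
CONDITION_COEFS = {"asthma": 0.2, "diabetes": 0.6, "hyperlipidemia": 0.3}
BASE = -6.0       # hypothetical intercept
AGE_COEF = 0.07   # hypothetical per-year increase in log-odds
MALE_COEF = 0.4   # hypothetical effect of being male

def hospitalization_risk(age, male, conditions):
    """Return a probability in (0, 1) from a toy logistic model."""
    logit = BASE + AGE_COEF * age + (MALE_COEF if male else 0.0)
    logit += sum(CONDITION_COEFS.get(c, 0.0) for c in conditions)
    return 1.0 / (1.0 + math.exp(-logit))

# Risk rises with age and with each added condition.
low = hospitalization_risk(30, False, [])
high = hospitalization_risk(70, True, ["diabetes", "hyperlipidemia"])
```

In the real tool, a request with these inputs would be sent to the model running in the cloud, which returns the prediction shown to the user.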
The estimator has been used by academic researchers at Brown, Harvard, New York and other universities, as well as by America’s Centers for Disease Control and Prevention (CDC). Researchers at NYU Langone are currently working with me to explore how the tool could be used to optimize the allocation of booster doses of vaccine. These researchers turned to us because, as far as they knew, no better tool was available.
The second is my estimate of covid-19’s true death toll for every country and territory. This includes an article, a methodology, an online interactive (which updates daily), and an article centered on South Asia (with reporting). The project aimed to estimate and show a more accurate picture of the pandemic’s death toll. The effort has been widely praised by international organizations: the WHO sent a thank-you and called it “heroic”; the UN sent a letter praising it as “exemplary”; the World Bank used it as the basis for analysis; and it was acknowledged by the Global Fund, one of the largest distributors of pandemic aid. Researchers at the University of Oxford call it “the most comprehensive and rigorous attempt to understand how mortality has changed during the pandemic at the global level.”
The project’s estimates have also been widely used, even becoming part of the covid-19 “data” tracked by Our World In Data. I know that at least one of the world’s largest aid organizations has used them in discussions of how to allocate pandemic aid. They are also being used by the World Health Organization: upon request, I regularly advise the WHO on related questions, and have joined its working group on the topic. I think it has been important to show the death toll in developing countries: relying on official numbers of confirmed cases and deaths, both dependent on testing, means underestimating the severity of the pandemic in precisely the places with the least resources to fight it. Better estimates enable people to respond better.
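The core idea behind a “true death toll” estimate is excess mortality: deaths observed during the pandemic minus a baseline of deaths that would have been expected without it. The published methodology is far more sophisticated than this, but as a deliberately simplified sketch – with a baseline that is just a pre-pandemic average, and entirely made-up numbers – the idea looks like:

```python
def expected_deaths(pre_pandemic_history):
    """Toy baseline: average deaths over the same period in pre-pandemic years.
    (The real model builds a far richer expected-deaths estimate.)"""
    return sum(pre_pandemic_history) / len(pre_pandemic_history)

def excess_deaths(observed, pre_pandemic_history):
    """Excess mortality = observed deaths minus the expected baseline."""
    return observed - expected_deaths(pre_pandemic_history)

# Made-up example: 1,000 deaths observed against a baseline of 800.
result = excess_deaths(1000, [790, 800, 810])  # 200.0
```

Because excess mortality does not depend on testing, it captures deaths that never appeared in official covid-19 counts, which is what makes it useful in exactly the places official figures miss.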
This project also links to a GitHub repository: both big projects are fully open source.
Finally, there are two smaller projects, included to show breadth. The first, “the beef with beef”, looks at climate change, and at how, by cooking so many cows, humans are cooking themselves too. The second, “coming clean”, looks at how Twitter’s algorithm favors right-wing politicians and unreliable media. That analysis expanded on an academic paper by Twitter’s own researchers, who in turn had used a method I pioneered as a journalist in 2020.