2020 Shortlist
G. Elliott Morris
Category: Young journalist
Country/area: United States
Organisation: The Economist
Organisation size: Big

Cover letter:
My name is G. Elliott Morris and I am a 23-year-old data journalist for The Economist. Combining data with statistical analysis, I write stories for our newspaper that illuminate topics ranging from politics and economics to religion and mountaineering to climate change and technology. These pieces are published both in print, in our Graphic Detail section, and online, where our visualizers bring the full force of interactive design to our work.
A Texan who studied history, political science and consumer science in college, I devote most of my work to answering interesting questions about American politics. I have written about how the Senate protects Republican politicians—such as defending Donald Trump against the US House’s impeachment inquiry—and about how Americans who aren’t politically active are more likely to be Democratic, leaving liberals under-represented in our government. I crunch the numbers on The Economist’s raw polling data from YouGov, yielding insights into a wide range of issues in US public opinion. I also write about global topics, such as why China’s greenhouse-gas emissions are lower than you might expect for a country of its size and why rentier economies in the Middle East have produced authoritarian governments.
Almost all of my work is buttressed by the sophisticated tools of modern computational social science. For example, I have used multinomial non-linear regression to tease out the relationships linking music preferences to political behavior, and autoregressive models to analyze what events do—and do not—influence US presidential nominating contests. My experience in statistical model-building has also fed into the paper’s election-forecasting models in both the United States and the United Kingdom. In December 2019, I developed an election-night model to predict, live as new results came in, each party’s chance of winning a majority of seats in the UK parliament; I had built a similar model for the 2018 US House mid-term elections. My work on state-level attitudes about impeachment and on the 2016 presidential election was built with Stan, a Bayesian programming language that runs models using Markov chain Monte Carlo, which allowed us to distill national-level survey data into state-level estimates using multilevel regression and post-stratification. These tools are at the cutting edge of data journalism today, and I hope to develop them further in the future.
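To give a flavor of the post-stratification step, here is a toy sketch in Python with invented numbers (not our production pipeline): estimates for demographic “cells”, which in practice come out of a multilevel model fitted in Stan, are averaged with census weights to produce state-level figures.

```python
import numpy as np

# Hypothetical demographic "cells" (e.g. state x age x education). The support
# estimates stand in for the output of a multilevel model fitted in Stan.
cell_support = np.array([0.62, 0.48, 0.55, 0.41])   # modeled P(support) per cell
cell_counts  = np.array([1200, 800, 1500, 500])     # census count for each cell
cell_state   = np.array(["TX", "TX", "WI", "WI"])   # state each cell belongs to

# Post-stratification: weight each cell's estimate by its share of the state's
# population to get a state-level estimate.
for state in ["TX", "WI"]:
    mask = cell_state == state
    weights = cell_counts[mask] / cell_counts[mask].sum()
    print(state, round(float(weights @ cell_support[mask]), 3))
```

The modeling effort lives almost entirely in estimating the cell-level numbers well; the post-stratification itself is just a weighted average over the census.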
A benefit of The Economist’s data team being relatively small for a big paper is that I get to work closely with all of my colleagues. Most of my work is thus created in collaboration with my peers. And though I am but one cog in the machine, I have helped the paper push the boundaries of what is possible with data journalism.
I see my job as asking normatively important questions about how people and their governments work together, and explaining the answers to our readers. I hope to continue helping readers learn more about the world around them through data. I believe my endeavors in this mission so far warrant consideration for this year’s Young Journalist award.
Description of portfolio:
Throughout the last year, I have produced ambitious data-driven stories across a variety of subjects. By far the most work, and the piece of which I am proudest, went into exploring how the 2016 election could have gone differently if more voters had turned out to the polls. We found that higher turnout benefits Democrats, but that they face a disadvantage in the electoral college because voters in swing states are more Republican than those in large, non-competitive states. The project took months of work: we had to clean thousands of rows of raw census data and estimate nested multilevel regression models for both voter turnout and voter preferences. We turned to Stan, a sophisticated statistical language, to run Bayesian models that were too complex and high-dimensional for our normal tools. In the end, we used these data to create an engaging interactive visualization that lets Americans explore how their behavior affects the world around them (sometimes in big, consequential ways!).
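The counterfactual logic behind that piece can be sketched in a few lines of Python (the numbers are invented for illustration; the real models were far richer and ran state by state): each demographic cell gets a modeled turnout probability and a modeled party preference, so raising turnout re-weights the electorate and shifts the two-party vote share.

```python
import numpy as np

population = np.array([900, 1100, 700])     # people in each hypothetical cell
turnout    = np.array([0.70, 0.45, 0.55])   # modeled P(casts a vote)
dem_pref   = np.array([0.40, 0.65, 0.58])   # modeled P(votes Democratic | votes)

def dem_share(turnout_probs):
    voters = population * turnout_probs
    return (voters * dem_pref).sum() / voters.sum()

print("modeled turnout:  ", round(dem_share(turnout), 3))
print("universal turnout:", round(dem_share(np.ones_like(turnout)), 3))
```

Because the low-turnout cells in this toy example lean Democratic, pushing everyone’s turnout to 100% raises the Democratic share, which is the pattern our full state-level models let readers explore.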
I used similar modeling in December to analyze how the pro-Republican bias of America’s Senate has influenced the (still-ongoing) impeachment of President Donald Trump. Because smaller states have more rural and white voters, who lean Republican, they tend to elect Republican senators. Yet these small states enjoy the same representation in the Senate as big ones, which distorts the will of the American people.
Both of these projects used sophisticated analyses of polling microdata—tens of thousands of interviews with Americans—to derive insights into how America’s systems of representation work: it is the states that matter, not the nation as a whole. Few other newspapers have made this kind of contribution to our understanding of American politics.
I also worked on global pieces. My work on country-by-country emissions efficiency revealed, surprisingly, that China is more carbon-efficient at its point in economic development than most Western nations were at the same point in their histories; it is the sheer size of the country that makes its total emissions so large.
I also used data that researchers living near Mount Everest hand-collected over the past 100 years to show how commercialization has simultaneously crowded the mountain and made climbing much safer than before. We published this piece just after news broke of massive crowding on the route to Everest’s summit, making the article both timely and helpful for people trying to understand the broader story.
My work on the 2020 Democratic primary also began in mid-2019. It required building complex models of polling that allow readers to see the uncertainty underlying the race for president. Our models are new to the scene, so to speak, not least because we have chosen more complex statistical processes (Dirichlet distributions with splines, anyone?) to better represent both the underlying data and theories of public opinion. My visualization-focused colleagues and I built a pipeline for data gathering, modeling and visualization that will help our readers understand the Democratic primary throughout the race.
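As a stylized illustration of why a Dirichlet is a natural fit for a many-candidate race, the sketch below turns one hypothetical poll into posterior draws over vote shares that are always positive and sum to one (the published model is far richer, pooling many polls and smoothing over time with splines).

```python
import numpy as np

rng = np.random.default_rng(0)
candidates  = ["Candidate A", "Candidate B", "Candidate C", "Other"]
respondents = np.array([280, 230, 190, 300])          # one hypothetical poll
draws = rng.dirichlet(1 + respondents, size=10_000)   # posterior draws over shares

# Probability that each candidate has the largest share in a given draw.
p_leads = (draws.argmax(axis=1)[:, None] == np.arange(len(candidates))).mean(axis=0)
for name, share, p in zip(candidates, draws.mean(axis=0), p_leads):
    print(f"{name:12s} share ≈ {share:.2f}  P(leads) ≈ {p:.2f}")
```

Summaries such as the probability that each candidate leads fall straight out of the draws, which is what lets the visualization show uncertainty rather than a single point estimate.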
In December, I also built a live election-night model of Britain’s 2019 parliamentary election. It helped readers understand that early projections from the exit poll and the BBC were likely to change. I published the methodology for this model, as well as for the 2016 voting analysis, on our Medium blog.
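The core of the live-updating idea can be sketched as follows (a schematic with made-up win probabilities that treats seats as independent, a simplification the real model did not make): declared seats are fixed, undeclared seats are simulated, and the forecast tightens as results come in.

```python
import numpy as np

rng = np.random.default_rng(1)
n_seats, majority = 650, 326
declared_wins    = 120                                  # seats the party has already won
n_undeclared     = n_seats - 200                        # 200 seats declared so far
p_win_undeclared = rng.uniform(0.2, 0.8, n_undeclared)  # hypothetical per-seat odds

# Simulate final seat totals: declared wins plus simulated undeclared outcomes.
sims = declared_wins + rng.binomial(1, p_win_undeclared, (10_000, n_undeclared)).sum(axis=1)
print("P(majority) ≈", (sims >= majority).mean())
```

Each time another constituency declares, a seat moves from the simulated pool to the fixed tally, so the distribution of final totals narrows, which is why early projections can still move.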
Project links:
www.economist.com/graphic-detail/2019/05/11/how-mount-everest-went-mainstream
projects.economist.com/democratic-primaries-2020/
medium.economist.com/would-donald-trump-be-president-if-all-americans-actually-voted-95c4f960798
medium.economist.com/forecasting-britains-election-in-real-time-bfcb8d395fa2