Disaster After Disaster

Entry type: Single project

Country/area: United States

Publishing organisation: ProPublica, The Times-Picayune | The Advocate, WWL-TV

Organisation size: Big

Publication date: 2022-12-11

Language: English

Authors: Jeff Adelson, Sophie Chou, Sophia Germer, Chris Granger, Lena Groeger, David Hammer, Joel Jacobs, Dan Swenson, Nina Tran, Richard Webster


Jeff Adelson is a data reporter at The Times-Picayune | The Advocate.

Sophie Chou and Joel Jacobs are data reporters at ProPublica.

Lena Groeger is the graphics director at ProPublica.

Sophia Germer and Chris Granger are photojournalists at The Times-Picayune | The Advocate.

David Hammer is an investigative reporter at WWL-TV who started covering the Road Home program for The Times-Picayune in 2006.

Dan Swenson is graphics editor at The Times-Picayune | The Advocate.

Nina Tran is a software engineer at The Times-Picayune | The Advocate.

Richard A. Webster is an investigative reporter who participated in ProPublica’s Local Reporting Network.

Project description:

“Disaster After Disaster” is a three-newsroom investigation into the tapestry of dysfunction that has characterized how the U.S. aids disaster survivors, told through the lens of failures and inequities in the programs implemented in Louisiana after Hurricane Katrina. Through a novel analysis of state and federal data, we showed that homeowners in poorer and less-white portions of Louisiana, particularly New Orleans, received a smaller share of the resources they needed.

Impact reached:

The impacts of “Disaster After Disaster” began almost immediately after the first story was published.

The initial story focused on people facing lawsuits after they used post-Katrina aid through the Road Home program to repair damage to their houses, rather than elevate them to prevent future flooding. Many of those homeowners had been reassured by Road Home representatives that such repairs were allowed.

A decade and a half later, about 3,500 grantees — or 1 of every 9 households that received elevation grants — faced lawsuits for spending money to make their homes livable. Confronted with evidence from our reporting showing that collection efforts were accelerating, and that these lawsuits mostly targeted residents of poor and Black neighborhoods, the state paused collections as it sought federal approval to drop the suits altogether.

The final story of our project has shown signs of potential impact as well. An analysis of the Road Home’s main program, which made payments for repairs to 119,000 Louisiana residents, showed that its grant formula resulted in homeowners in poorer and less-white parts of the state getting a smaller share of the resources they needed to rebuild. In the course of our reporting, architects of the Road Home program acknowledged that shortcoming, and one suggested the state should seek federal funding to provide more money to those who had been shortchanged.

Techniques/technologies used:

Most of the analyses for this project were conducted in Python, primarily using the Pandas library. All portions of “Disaster After Disaster” involved linking state-generated data about the Road Home program to U.S. Census Bureau statistics on income, poverty and race at the census block group or tract level.
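That linkage step can be sketched in Pandas as a join on a shared geographic identifier. This is an illustrative example only: the column names, GEOIDs and dollar figures below are hypothetical, not the actual Road Home schema.

```python
import pandas as pd

# Hypothetical sketch: joining program records to Census block-group
# statistics on a shared GEOID key. All values are illustrative.
grants = pd.DataFrame({
    "geoid": ["220710017001", "220710017002", "220710033001"],
    "grant_total": [45_000, 30_000, 82_000],
})

census = pd.DataFrame({
    "geoid": ["220710017001", "220710017002", "220710033001"],
    "median_income": [28_000, 41_000, 75_000],
    "pct_black": [0.92, 0.61, 0.12],
})

# validate="m:1" guards against duplicate Census rows silently
# multiplying grant records during the join.
merged = grants.merge(census, on="geoid", how="left", validate="m:1")
print(merged[["geoid", "grant_total", "median_income"]])
```

Using a left join keeps every grant record even when a Census match is missing, which makes gaps in the geographic data visible rather than silently dropping rows.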

For some analyses, we geocoded addresses using Python and the APIs of various public geocoding services.
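One free public option for this kind of work is the Census Bureau's geocoder. As a hedged sketch (the helper function is our own, not from the project's code), here is how a request URL for its documented "onelineaddress" endpoint can be built; the response, fetched with any HTTP client, returns matched coordinates as JSON.

```python
from urllib.parse import urlencode

# Sketch of building a request to the U.S. Census Bureau's free
# geocoding service, one example of the public geocoders available.
BASE = "https://geocoding.geo.census.gov/geocoder/locations/onelineaddress"

def geocode_url(address: str) -> str:
    """Return the request URL for a single free-form address."""
    params = {
        "address": address,
        "benchmark": "Public_AR_Current",  # current address-range benchmark
        "format": "json",
    }
    return f"{BASE}?{urlencode(params)}"

url = geocode_url("1500 Sugar Bowl Dr, New Orleans, LA")
print(url)
```

Batch endpoints exist as well, which matter at the scale of a program with over a hundred thousand recipients; rate limits and match quality vary by service, so results from any geocoder should be spot-checked.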

We used QGIS and GeoPandas for geographic data exploration and to generate preliminary graphics. These tools were also used to generate the files for an interactive map produced with Mapbox.

To analyze the Road Home rebuilding grants, we needed to develop a way to compare payments made to homeowners whose houses differed in size and value. To do so, we developed a metric that compared the total cost of repairing or rebuilding a home (as estimated by state agencies) to the total amount of Road Home funding, other disaster assistance and insurance money the homeowner received.

This allowed us to calculate how close the Road Home program came to meeting its goal, which was to cover the gap between the cost of repairs and money received from insurance and FEMA. The total dollar amounts were aggregated over census block groups so income and race data could be used in the analysis.
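The metric described above can be sketched as follows. The key design choice, per the description, is to sum the dollar amounts within each block group before taking the ratio, so that larger and costlier homes are weighted by their actual repair costs rather than each home counting equally. Column names and figures here are hypothetical.

```python
import pandas as pd

# Illustrative sketch of the gap-coverage metric: total resources
# received versus the state-estimated cost of repairs, aggregated
# by census block group. All values are made up.
homes = pd.DataFrame({
    "block_group": ["A", "A", "B", "B"],
    "est_repair_cost": [150_000, 90_000, 120_000, 200_000],
    "road_home": [60_000, 40_000, 30_000, 50_000],
    "other_assistance": [5_000, 0, 10_000, 5_000],
    "insurance": [40_000, 20_000, 20_000, 60_000],
})

homes["total_resources"] = (
    homes["road_home"] + homes["other_assistance"] + homes["insurance"]
)

# Sum dollars per block group first, then divide, so the ratio is
# dollar-weighted rather than an average of per-home ratios.
by_bg = homes.groupby("block_group")[["est_repair_cost", "total_resources"]].sum()
by_bg["share_of_need_met"] = by_bg["total_resources"] / by_bg["est_repair_cost"]
print(by_bg)
```

The resulting share-of-need-met figure can then be joined to income and race statistics for each block group, as described above, to test whether need was met unevenly across neighborhoods.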

Additional analysis was conducted to compare the impacts of various measures that officials had put in place to mitigate inequities. While those steps had an effect, they did not fully solve the problem, our analysis showed.

Context about the project:

The Road Home program — the largest home rebuilding project in U.S. history — was hugely consequential in the remaking of New Orleans and the surrounding area after Hurricane Katrina. The $9 billion effort was designed to ensure that residents who wanted to return to their homes after the flood were able to do so. But the program suffered from a fatal flaw: It capped payments at property value rather than the cost of rebuilding. This meant that wealthier areas, which had higher property values, got more of what they needed, while poorer areas got less.

Some neighborhoods rebounded quickly, while others languished. Today, New Orleans has just 80% of its pre-Katrina population, and some of the neighborhoods where homeowners got less of what they needed are still hollowed out.

Though this approach was never used again in other recovery programs, the failure we highlighted stems from the same philosophy that hampers U.S. disaster response to this day. Programs are primarily designed to prevent recipients from getting more than their property was worth beforehand, rather than ensuring they get what they need to recover.

From the outset 16 years ago, people complained that the Road Home program was unfair. But before our reporting, those allegations were never proven, even after a federal lawsuit, news reports and years of advocacy. The key hurdle was a lack of solid data: State officials refused to share a database with detailed information on payments, and there was no other way to get a comprehensive look at the program.

After persistent attempts by our team, the state eventually relented and released portions of the data. Reporters identified problems with that data as they cleaned and analyzed it. Through an iterative process, the state made additional concessions, and reporters eventually obtained nearly the entire dataset.

It was only at that point that a full analysis could be conducted and a full accounting of the Road Home program finally be brought to light. The project revealed that long-held beliefs about the program were well-founded and provided valuable context for the ongoing patterns of displacement and poverty in New Orleans.

What can other journalists learn from this project?

The “Disaster After Disaster” project provided several lessons to our team, both in reporting and data analysis.

The initial dataset of records of Road Home recipients we received from the state of Louisiana, aggregated on the Census block level, contained errors and omissions that could have led to misinterpretation. After we shared our initial findings, state officials claimed that our analysis was both incorrect and impossible based on the dataset they provided. They were at first unwilling to give us better data.

We analyzed key variables using Python and Pandas, examining their distributions and flagging cases where values did not add up or implied logically impossible outcomes. This analysis, plus persistent reporting, eventually convinced the state to send better, cleaner data. In the end, we received anonymized, individual-level data with additional variables to address the errors — something the state initially refused to provide.
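The consistency checks described above can be sketched with simple boolean filters. The specific rules below are hypothetical examples of "logically impossible" records, not the project's actual checks.

```python
import pandas as pd

# Hypothetical sanity checks: flag rows whose values are logically
# impossible, e.g. negative grants or a component larger than its total.
df = pd.DataFrame({
    "grant_total": [50_000, -1_200, 30_000],
    "elevation_portion": [10_000, 0, 45_000],  # cannot exceed the total
})

impossible = df[
    (df["grant_total"] < 0)
    | (df["elevation_portion"] > df["grant_total"])
]
print(f"{len(impossible)} of {len(df)} rows fail basic consistency checks")
```

Presenting officials with a concrete list of rows that violate their own data's internal logic is far more persuasive than asserting, in general terms, that a dataset seems wrong.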

This would not have been possible if we had taken the dataset at face value. Instead, we worked to understand the dataset and the architecture of the records so we could tell government employees how to properly query their own data.

Additionally, we consulted a Census expert to make sure we correctly joined race and socioeconomic data to our Road Home dataset. Because we were working with vintage Census data, we had to sift through old documentation and academic papers explaining how to use and aggregate the data at the block level. Through this process, we realized we needed to use race figures from the sampled Census data, rather than the main survey data, so they would match up correctly with the socioeconomic data.

Project links: