2023 Shortlist
Broken Adoptions
Entry type: Single project
Country/area: United States
Publishing organisation: USA TODAY
Organisation size: Big
Publication date: 2022-05-19
Language: English
Authors: Aleszu Bajak, Marisa Kwiatkowski, Ramon Padilla, Javier Zarracina

Biography:
In 2022, Aleszu Bajak was a data reporter at USA TODAY. He was an MIT Knight Science Journalism Fellow and a writer at Undark magazine. He has taught and managed graduate journalism programs at Northeastern University and is now the Urban Institute’s director of data visualization.
Marisa Kwiatkowski is a USA TODAY investigative reporter and previously worked for media outlets in Michigan, South Carolina and Indiana. Her work has spurred investigations, criminal charges, resignations and changes to federal law and state policy. She and IndyStar colleagues earned an IRE award for reporting on USA Gymnastics’ handling of child sexual abuse allegations.
Project description:
Broken Adoptions revealed how and why adoptions fail, creating anguish for everyone involved. It tapped a massive federal database of foster children to quantify the problem and identify, using statistical regression, key risk factors for failure. The project highlighted how states have been allowed to supply bad data that undermines any effort to judge the efficacy of $3 billion a year in federal adoption subsidies.
Impact reached:
This project exposed breakdowns at every point in the adoption process. Many parents wait years for children who never arrive because of failures by adoption agencies. Social workers and agencies lie about children’s backgrounds to move their adoptions along faster. And when something goes wrong, a common pattern emerges: Parents, prospective parents and the children often find themselves without support or recourse.
The project produced the first (and highly conservative) estimate of failed adoptions in America. It also identified major shortcomings in the collection of data on children returned to foster care from failed adoptions, pointing the way toward potential reforms.
There was an overwhelming response from readers. The series was one of USA TODAY’s top subscription drivers of the year, and it also ranked highly in subscriber pageviews. Each installment of Broken Adoptions drew an outpouring of appreciation from social workers, policymakers, parents and adoptees around the country. Some became sources for the ongoing work. Others simply wanted to thank the reporters for uncovering such a hidden tragedy and making them feel seen.
The series also set the stage for data reporters at other organizations to plumb the Adoption and Foster Care Analysis and Reporting System for accountability stories no one has yet told. AFCARS can show how long a child remains in foster care, how many children are diverted for years to group homes state by state, and how these outcomes vary by the race, ethnicity, gender and age of the child.
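As an illustration of the kind of summary AFCARS supports, a query like the following could be run in R; the data frame afcars and the column names (state, race, months_in_care) are assumptions for the sketch, not actual AFCARS field names.

    library(dplyr)

    # Median time in foster care, broken out by state and race
    afcars %>%
      group_by(state, race) %>%
      summarise(median_months_in_care = median(months_in_care, na.rm = TRUE),
                .groups = "drop")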
Techniques/technologies used:
We learned early on that no one had quantified how many U.S. adoptions fail. Building on the work of previous USA TODAY reporters, Aleszu Bajak scoured the obscure Adoption and Foster Care Analysis and Reporting System, or AFCARS. The multi-million-record database shows the status of each child in foster care each year, including whether a child was previously adopted. Although unique identifiers are meant to link records across years, Bajak found that states were inconsistent about flagging a child’s past adoption and preserving IDs. Any responsible estimate required painstakingly weeding out nonsensical cases, such as those that went from “Adopted” to “Never adopted” in one year.
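A minimal sketch of that kind of consistency check in R, assuming a data frame afcars with hypothetical columns child_id, fy (fiscal year) and adoption_status; this is illustrative, not USA TODAY’s actual cleaning code.

    library(dplyr)

    flagged <- afcars %>%
      arrange(child_id, fy) %>%
      group_by(child_id) %>%
      # A record that flips from "Adopted" back to "Never adopted" in a
      # later year is internally contradictory and cannot be trusted.
      mutate(contradiction = adoption_status == "Never adopted" &
                             lag(adoption_status) == "Adopted") %>%
      ungroup()

    # Drop every child whose history contains a contradiction
    clean <- flagged %>%
      group_by(child_id) %>%
      filter(!any(contradiction, na.rm = TRUE)) %>%
      ungroup()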
Tallying 66,000 failed adoptions was only the start, however. Bajak wanted to know why they failed. The breakthrough came when he unearthed a study identifying 16 states whose child IDs in AFCARS remained reliable over time. Suddenly, we could track trips from foster care to adoption and back with confidence. Bajak decided to follow a cohort of children adopted from 2008 to 2010. Using Cox proportional hazards regression, he found statistically significant, independent risk factors predicting failure. The results might help child welfare workers know which kids need the most support at the time of adoption placement. Yet no one had done this analysis to tell them.
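In R, a Cox proportional hazards model of this kind is typically fit with the survival package. The sketch below is illustrative: the covariates and the data frame cohort are assumed stand-ins, not the risk factors USA TODAY actually tested.

    library(survival)

    # Time-to-event data: years from adoption placement until return to
    # foster care (event = 1) or end of observation (event = 0)
    fit <- coxph(
      Surv(years_observed, returned_to_care) ~
        age_at_adoption + prior_placements + adopted_with_siblings,
      data = cohort
    )
    summary(fit)  # hazard ratios identify independent risk factors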
The reporting raised one last question. Why did states so frequently delete child identifiers from AFCARS, seemingly breaking from guidance laid down for this federally funded database? It was one of those irritating data flaws that, in this case, had newsworthy consequences. Bajak tracked down the bureaucrat who designed AFCARS three decades ago, and she had a lot to say. Through this and other historical research, Bajak showed how a deliberately obfuscated dataset undermines our ability to see whether billions in adoption subsidies are working.
Context about the project:
The federal data underlying this series included records of 3.4 million children who spent time in foster care from 2008 to 2020, culled from the Adoption and Foster Care Analysis and Reporting System. The data was complex and dirty, requiring months of effort to wrangle.
The database was enormous, requiring us first to pare it down to the pieces we needed using Google BigQuery. Although extensive guidance exists for states to follow when submitting data, our quality checks (and numerous federal audits) showed that their submissions often go awry. All children entering foster care are supposed to be flagged if they have been through a prior adoption, but reporters detected many erroneous flags and had to painstakingly weed them out. Washington officials acknowledged, and promised to fix, a major flaw in their reporting system that we brought to their attention.
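Paring down a large table like this from R might look as follows with the bigrquery package; the project, dataset, table and column names here are assumptions for the sake of the sketch.

    library(bigrquery)

    # Pull only the years and columns needed for the analysis
    sql <- "
      SELECT child_id, fy, adoption_status, age_at_entry, state
      FROM `my-project.afcars.foster_care`
      WHERE fy BETWEEN 2008 AND 2020
    "
    job <- bq_project_query("my-project", sql)
    afcars <- bq_table_download(job)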
Using and describing the results of a Cox proportional hazards regression was also an enormous challenge. Aleszu Bajak learned from his research that this method was appropriate for our longitudinal dataset, but he wanted to be confident he did it right. For this, he consulted closely and repeatedly with an array of statisticians who understand the technique.
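One standard diagnostic such consultations typically cover is testing the proportional hazards assumption with Schoenfeld residuals, shown here on the illustrative fit from the earlier sketch.

    library(survival)

    # Non-significant p-values are consistent with the proportional
    # hazards assumption holding for each covariate
    cox.zph(fit)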
To present the data in an accessible way, we worked with USA TODAY’s Storytelling Studio to create “scrollytelling” animations that wove together dataviz and illustrations in narrative form. These scrollies were built using Adobe Illustrator, After Effects and Svelte.
In story form, the reporters breathed life into their data and findings by telling deeply personal narratives of adoptive parents and adoptees themselves. Those stories came through extensive efforts to seek out those affected, including creating an online questionnaire for adoptees, adoptive parents and officials, and mailing postcards to thousands of adoption center clients with a QR code that linked to a survey.
Building trust took time and determination, as did unearthing records to bolster – or sometimes debunk – shaky memories of childhood and trauma.
One article recounted the tragedy of a family that adopted out of foster care but could not get government approval for the mental health care their daughter needed. Before her adoption became final, 11-year-old Becca took her own life.
Another looked back at a California-based adoption agency that went bankrupt, leaving hundreds of families in the lurch. The investigation found the Independent Adoption Center’s leadership knew what was coming but continued to enroll new clients up to the week it closed its doors. California state officials knew, too, but their investigation went nowhere.
The final installment traveled home to Africa with Charles, whose American adoptive mother discovered she had been misled: His biological mother had not agreed to give him up, and he was four years older than his records indicated. The State Department and the adoption agency offered no help.
Because reliable resources are hard for adoptive parents to come by, USA TODAY published links to nonprofit advocacy groups that specialize in this field as well as testimonials from adult adoptees who have drawn lessons from their own broken adoptions to help others fare better.
What can other journalists learn from this project?
Understanding the shortcomings of your data is essential. Get to know the people who built a database, the user manuals for people who submit data to it, the academic experts who use it for research and the process that its owners use to test it for errors. We made all of these contacts, and it ensured we were on solid ground.
Seek out a second set of eyes for your analysis, if you can. We ran all of our regressions by experts in pure statistics, not just people who study this particular type of data. We also had a second data reporter internally review the R analytical code for potential flaws.
Find ways to keep numbers to a minimum in your story text. Usually, data results are better told visually, as we did with our series of “scrollies” that animated our data visualizations.
Project links: