Operation Lone Star

Entry type: Single project

Country/area: United States

Publishing organisation: Texas Tribune, ProPublica, The Marshall Project, The Military Times

Organisation size: Big

Publication date: 2022-03-21

Language: English

Authors: Perla Trevizo, Lomi Kriel, Andrew Rodriguez Calderón, Jolie McCullough, Keri Blakinger, James Barragán, Davis Winkie, Marilyn Thompson


Perla Trevizo is a reporter with the Texas Tribune/ProPublica Investigative Unit.

Lomi Kriel is a reporter with the Texas Tribune/ProPublica Investigative Unit.

Andrew Rodriguez Calderón is a computational journalist for The Marshall Project.

Jolie McCullough is a reporter for The Texas Tribune.

Keri Blakinger is a reporter for The Marshall Project.

James Barragán is a reporter for The Texas Tribune.

Davis Winkie is a reporter for The Military Times.

Marilyn Thompson is a reporter for ProPublica.

Project description:

In March 2021, Texas Gov. Greg Abbott launched a multibillion-dollar border crackdown that quickly became central to his reelection campaign. The governor repeatedly proclaimed the success of the initiative, dubbed Operation Lone Star, saying it had made 11,000 criminal arrests and seized millions of lethal doses of drugs.

But this was not true, an investigation by The Texas Tribune, in collaboration with ProPublica, The Marshall Project and The Military Times, found.

Impact reached:

Through dogged reporting, Perla Trevizo, Lomi Kriel, Andrew Rodriguez Calderón and Keri Blakinger exposed that the state’s claim of success was based on shifting metrics that included crimes with no connection to the border and drug seizures and arrests from counties that received no additional funding or resources from the operation. They uncovered that the state was bolstering its numbers by including cases such as that of Thomas King-Randall, a Black man who lived 250 miles from the border and was arrested on a family violence assault charge.

Reporter Jolie McCullough’s work showed, for the first time, how misdemeanor trespassing charges against migrants entering the country quickly became the largest share of the operation’s arrests despite the governor’s claims that it was capturing dangerous criminals.

After months of questioning from reporters, the state Department of Public Safety (DPS) acknowledged that it had incorporated arrests with no connection to the border in its metrics of success, promoting inflated numbers of criminals the operation claimed to have nabbed. The agency stopped counting more than 2,000 charges, including some for cockfighting, sexual assault and stalking. Of those, about 270 charges were for violent crimes, which are defined by the FBI as murder, manslaughter, rape, robbery and aggravated assault.

The state’s largest newspapers, including The Dallas Morning News and The Austin American-Statesman, published editorials calling for accountability.

In July, The Texas Tribune, through its joint investigative unit with ProPublica, broke news that the Department of Justice had launched an investigation into allegations of civil rights abuses that had come to light through Operation Lone Star reporting.

Techniques/technologies used:

Public records requests were the primary engine of data acquisition.

Python, with the Pandas library, was the primary language for cleaning and processing the data; GNU Make handled orchestration, and Amazon Web Services provided cloud storage.
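A minimal sketch of the kind of Pandas cleaning step this pipeline would involve; the column names and sample rows below are hypothetical stand-ins, not the actual DPS data:

```python
import pandas as pd
from io import StringIO

# Hypothetical extract resembling a monthly arrest spreadsheet,
# with inconsistent whitespace, casing and date formats.
raw = StringIO(
    "County,Charge,Arrest Date\n"
    " Kinney ,CRIMINAL TRESPASS,03/15/2021\n"
    "Val Verde,criminal trespass ,2021-03-16\n"
)

df = pd.read_csv(raw)

# Normalize whitespace and casing so identical charges compare equal.
df["County"] = df["County"].str.strip()
df["Charge"] = df["Charge"].str.strip().str.upper()

# Parse each date individually so mixed formats coerce cleanly.
df["Arrest Date"] = df["Arrest Date"].apply(pd.to_datetime)

print(df["Charge"].value_counts())
```

With the casing and whitespace normalized, the two trespass rows count as the same charge instead of two distinct ones.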

We used a JavaScript environment for visualization and analysis. Our toolchain included Arquero for data analysis, SVGs generated with D3 for mapping and charting, and Observable notebooks for sharing with collaborators.

Context about the project:

The data posed two major challenges: with every request, it was clear that the guidelines for data collection were changing, and charges were neither standardized nor entered in a consistent way.

This made cleaning the data an exercise in disentangling monthly records and then reconciling them to create a coherent picture of Operation Lone Star’s evolution. What’s more, when we asked DPS to clarify the inconsistencies in the data, officials often didn’t help us better understand why changes were being made. At one point, they told us that the data was inscrutable and unanalyzable even as the agency published monthly reports using the very same data.

We overcame this issue in part by using anti-joins to compare data sets over time to make sure that we understood how the data was changing from one month to the next. We also put together a manual protocol with all reporters to inspect the data, row by row, to make sure that our code was parsing the charges in accurate, consistent ways. The combination of experience, subject matter expertise and additional reporting along with automation ensured that we could explain our methodology and decision-making at every point in the data processing pipeline.
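An anti-join of the kind described can be sketched with Pandas' merge indicator; the case IDs and charges below are hypothetical illustrations, not actual Operation Lone Star records:

```python
import pandas as pd

# Hypothetical rows from two successive monthly extracts.
march = pd.DataFrame(
    {"case_id": [101, 102, 103], "charge": ["TRESPASS", "ASSAULT", "STALKING"]}
)
april = pd.DataFrame(
    {"case_id": [102, 103, 104], "charge": ["ASSAULT", "STALKING", "TRESPASS"]}
)

# Anti-join: rows present in March but missing from April (removed charges).
removed = (
    march.merge(april, on=["case_id", "charge"], how="left", indicator=True)
    .query("_merge == 'left_only'")
    .drop(columns="_merge")
)

# Reverse anti-join: rows appearing only in April (added charges).
added = (
    march.merge(april, on=["case_id", "charge"], how="right", indicator=True)
    .query("_merge == 'right_only'")
    .drop(columns="_merge")
)

print(removed)  # the March-only row
print(added)    # the April-only row
```

Running this comparison month over month surfaces exactly which records an agency has quietly added or dropped between extracts.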

What can other journalists learn from this project?

People in the data journalism community can learn from our analytical techniques and how they drove our reporting.

We started with a classic journalistic question: the state government said Operation Lone Star was a success. Did it have the evidence to back that up?

However, the evidence itself changed as the reporting unfolded. We decided, then, to analyze how each dataset provided by the state was different from the last. This approach, described above, gave us a way to contend with an agency that was changing its definitions as we reported the story.

Before publication, we shared our findings with DPS and asked for a response. Agency officials said we needed to account for the fact that “each spreadsheet represents an extract from a live database, and information is subject to change.” They challenged our characterization of records being added or removed, saying we assumed “that any row that does not appear exactly the same in each spreadsheet can be described as either ‘added’ or ‘removed.’”

We went back through our analysis and verified its accuracy and again shared our findings with DPS, explaining how we reached our conclusions and asking the agency to point specifically to any inaccuracies. It did not.

We wrote a detailed methods box, included with the story, that explained our process and quoted the department’s criticisms. DPS did not challenge our findings after publication.

Project links: