I have delivered more than five Telegraph front pages on rising crime and declining charge rates in the UK, using open data and freedom of information requests. The UK’s Home Office publishes quarterly and annual data on the number of crimes recorded and the police outcome of each, broken down by police force area and crime type. This is a wealth of data which I’ve used to write multiple public interest stories on the country’s growing crime problem. These are topics that interest our readers, and so we use data journalism to surface these stories.
My data-led stories and interactives on crime have highlighted how Government decisions have directly impacted the likelihood of catching criminals. This is something that Telegraph readers, and wider society, care a lot about.
We created two interactives this year on the topic – one of which is a tool in which our subscribers can type in their postcode and find out how the story affects them personally. Readers can find out how many crimes take place in their area, how many result in the criminal being charged, and how this rate compares to the national average.
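The calculation behind such a lookup tool is simple at heart: an area's charge rate compared against the national one. A minimal sketch of that idea – using made-up figures and hypothetical area names, not the Telegraph's actual tool or real Home Office data:

```python
# Sketch of the postcode-lookup calculation: crimes in an area, the share
# that end in a charge, and how that compares to the national average.
# All figures and names below are invented for illustration.

crimes = [
    # (police force area, crimes recorded, outcomes that were a charge)
    ("Metropolitan Police", 900_000, 45_000),
    ("Greater Manchester", 330_000, 23_000),
    ("West Midlands", 360_000, 21_000),
]

national_crimes = sum(recorded for _, recorded, _ in crimes)
national_charges = sum(charged for _, _, charged in crimes)
national_rate = national_charges / national_crimes

def area_summary(area_name):
    """Return (crimes recorded, charge rate, difference vs national rate)."""
    for name, recorded, charged in crimes:
        if name == area_name:
            rate = charged / recorded
            return recorded, rate, rate - national_rate
    raise ValueError(f"No data for {area_name}")

recorded, rate, vs_national = area_summary("Greater Manchester")
print(f"{recorded:,} crimes, {rate:.1%} charged ({vs_national:+.1%} vs national)")
```

In the real interactive the postcode would first be mapped to a police force area; here that lookup step is skipped to keep the sketch focused on the comparison itself.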
This is what we try to do at The Telegraph’s Data Journalism team. We aim to surface exclusive news stories that our readers will devour on the front page, but also to pair them with interactives that have genuine utility and a connection to our readers’ lives. The impact of this is clear: we can surface important public interest stories and show personalised data to our readers, giving them the tools to hold their local authorities to account.
Most of my data analysis is done in either R or Microsoft Excel, depending on the size of the dataset and how big the story seems. Dplyr, ggplot2 and the wider tidyverse are my go-to packages in R. Static visualisations are created with ggplot2 and Adobe Illustrator, while interactives are built with D3.
What was the hardest part of this project?
Collaborating with journalists in the newsroom who aren’t data specialists can be a challenge. On all of these crime stories, I’ve worked with a journalist who specialises in the crime or home affairs patch. This improves the story by making sure every skillset is covered.
When I started out as a data journalist, there were plenty of times when people didn’t understand what I did. The response was usually one of two: assuming I was a number cruncher helping research reporters’ stories, or assuming I was an extension of the graphics team – again, for other reporters’ stories.
It took me a while to establish that, as a data journalist, I was indeed a journalist in my own right – that my primary function was to source stories in exactly the same way as any other reporter, and that the only real difference was the primary source of my stories.
The way I, and later the wider Data Journalism team, tackled this was simply to get our heads down and produce good stories. The more exclusives we delivered for the newspaper, and the more innovative visuals we produced for the website, the more people understood what we did and how they could collaborate with us.
What can others learn from this project?
Open data is a treasure trove of stories. Most journalists overlook this source, either because it’s often hidden behind multiple tabs in a spreadsheet or because the skills to dig into a wall of numbers and find a story still aren’t widespread in newsrooms. But often a simple pivot table in Microsoft Excel can be the key to a front-page story. Go a step further and use R to dig into even larger open datasets, and you can find even more stories and visualise them in innovative ways.
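The pivot-table step described above boils down to grouping rows and counting them. A minimal sketch of that idea – with invented outcome records, since the team's own work is done in Excel or with dplyr's `group_by()`/`summarise()` in R:

```python
from collections import defaultdict

# Invented outcome records: (police force area, crime type, outcome).
# An Excel pivot table, or group_by/summarise in R, does the same job:
# count rows per group and compute a rate from the counts.
records = [
    ("Avon and Somerset", "Burglary", "charged"),
    ("Avon and Somerset", "Burglary", "no suspect identified"),
    ("Avon and Somerset", "Burglary", "no suspect identified"),
    ("Sussex", "Burglary", "charged"),
    ("Sussex", "Vehicle offences", "no suspect identified"),
    ("Sussex", "Vehicle offences", "no suspect identified"),
]

# Pivot: rows keyed by (area, crime type), values = [total crimes, charges].
totals = defaultdict(lambda: [0, 0])
for area, crime_type, outcome in records:
    totals[(area, crime_type)][0] += 1
    if outcome == "charged":
        totals[(area, crime_type)][1] += 1

for (area, crime_type), (n, charged) in sorted(totals.items()):
    print(f"{area} / {crime_type}: {charged}/{n} charged ({charged / n:.0%})")
```

Scanning a table like this for groups with unusually low charge rates, or rates falling over time, is often exactly where the story hides.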