2020

Big little lies

Category: Best data-driven reporting (small and large newsrooms)

Country/area: United Kingdom

Organisation: Tortoise

Organisation size: Small

Publication date: 12 Nov 2019

Credit: Matthew d’Ancona, Basia Cummings, Ella Hollowood, Chris Newell

Project description:

We decided to track and visualise all the ‘untruths’ told by parties and politicians during the UK general election in 2019.

Using the UK’s leading fact-checking organisations as our source, we began tracking untruths from the start of the election (6 November) up to two days before polls opened (10 December).

The result is an interactive timeline in which readers can tap each untruth to discover the claim made, who made it, on which platform, and the fact-checker’s verdict. We also gave each untruth a ‘severity’ score so that we could assess where the ‘worst’ untruths were coming from.
 

Impact reached:

The piece – and screenshots of the graphics – went viral on Twitter the day before the election. Tortoise’s tweets about the article received around 800,000 impressions in total. The evening the piece was published, it featured as a top UK news item on Twitter, before being featured on the BBC programme Outside Source.

Techniques/technologies used:

We began by getting a list of all the URLs under the fact-checking sites’ base URLs (e.g. “fullfact.org/election-2019”) using ‘xml-sitemaps.com’. We then converted these XML files to CSV, scraped the web pages using rvest to extract key information (such as the date published) and filtered out any articles published before the election began. The URLs and dates we were left with were essentially the list of lies we would assess. We then described, categorised and scored each one manually in a Google spreadsheet. To create the visualisation, we used JavaScript (D3) and Flourish.
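A minimal sketch of that scraping-and-filtering step in R (the language rvest belongs to). The sitemap path, the “election-2019” URL filter and the <time> selector are assumptions for illustration, not the fact-checking sites’ actual markup:

# Rough sketch of the pipeline described above; paths and selectors are assumed.
library(xml2)
library(rvest)

# 1. Pull article URLs out of the site's XML sitemap
sitemap <- read_xml("https://fullfact.org/sitemap.xml")          # assumed path
urls <- xml_text(xml_find_all(sitemap, "//*[local-name()='loc']"))
urls <- urls[grepl("election-2019", urls)]                       # election coverage only

# 2. Scrape each page for its publication date
scrape_date <- function(url) {
  read_html(url) |>
    html_element("time") |>        # assumed selector for the publish date
    html_attr("datetime")
}
dates <- as.Date(vapply(urls, scrape_date, character(1)))

# 3. Keep only articles published during the campaign window
in_window <- dates >= as.Date("2019-11-06") & dates <= as.Date("2019-12-10")
articles <- data.frame(url = urls[in_window], published = dates[in_window])

# 4. Export for manual description, categorisation and scoring in a spreadsheet
write.csv(articles, "untruths_to_assess.csv", row.names = FALSE)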

What was the hardest part of this project?

Working out a way to systematically measure lying.

The first challenge here was the most fundamental: establishing what actually counts as an “untruth”. We ended up using a broad definition to cover any statement, manipulation or misrepresentation where politicians strayed from a known truth, ranging from misleading remarks to outright lies.

The next challenge was to assess the severity of each lie fairly. We came up with a scoring system that took into account both the significance of the original claim (for example, is this about a relatively trivial matter, such as a post-Olympic baby boom, or a major electoral issue that will likely affect how people vote, such as spending on the NHS?) and the untruthfulness of the lie. If a claim was untrue simply because no one knows the real answer, we gave it a ‘1’. If it was just factually wrong, we gave it a ‘2’. And if there was strong reason to believe it was a more deliberate act to deceive or distort the truth, we gave it a ‘3’. We then multiplied the significance score by the untruthfulness score to get an overall severity score.
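As a toy illustration of that arithmetic in R – the 1–3 significance scale and the example claims here are assumptions for the sketch, not the published rubric:

# Severity = significance x untruthfulness, sketched in base R.
# The 1-3 significance scale shown here is an assumption for illustration.
untruths <- data.frame(
  claim          = c("Post-Olympic baby boom claim", "NHS spending claim"),
  significance   = c(1, 3),   # trivial matter vs major electoral issue
  untruthfulness = c(2, 3)    # factually wrong vs likely deliberate distortion
)
untruths$severity <- untruths$significance * untruths$untruthfulness
untruths
# A trivial but factually wrong claim scores 2; a major, deliberate one scores 9.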
 

What can others learn from this project?

You shouldn’t be put off trying to measure something that might seem ‘unmeasurable’ – the key is to come up with systematic criteria, sense-check them with others and be upfront with the reader about the methodology.

Project links:

members.tortoisemedia.com/?article=page-111193&edition=com.tortoisemedia.tortoise.timelinetoday_tortoise_today

twitter.com/caitlinmoran/status/1204722900965744644

twitter.com/DeborahMeaden/status/1204751841797562368

twitter.com/HPIAndyCowper/status/1204710227242897408