Big little lies
Category: Best data-driven reporting (small and large newsrooms)
Country/area: United Kingdom
Organisation size: Small
Publication date: 12 Nov 2019
Credit: Matthew d’Ancona, Basia Cummings, Ella Hollowood, Chris Newell
We decided to track and visualise all the ‘untruths’ told by parties and politicians during the UK general election in 2019.
Using the UK’s leading fact-checking organisations as our source, we began tracking untruths from the start of the election (6 November) up to two days before polls opened (10 December).
The result is an interactive timeline of untruths: readers can tap each one to discover the claim made, who made it, on what platform, and the fact-checker’s verdict. We also gave each untruth a ‘severity’ score so that we could assess where the ‘worst’ untruths were coming from.
The piece – and screenshots of the graphics – went viral on Twitter the day before the election. Tortoise’s tweets about the article received around 800,000 impressions in total. The evening the piece was published, it featured as a top UK news item on Twitter, before being shown on the BBC programme Outside Source.
What was the hardest part of this project?
Working out a way to systematically measure lying.
The first challenge here was the most fundamental: establishing what actually counts as an “untruth”. We ended up using a broad definition to cover any statement, manipulation or misrepresentation where politicians strayed from a known truth, ranging from misleading remarks to outright lies.
The next challenge was to assess the severity of each lie fairly. We came up with a scoring system that took into account both the significance of the original claim (for example, is this about a relatively trivial matter, such as a post-Olympic baby boom, or a major electoral issue that will likely affect how people vote, such as spending on the NHS?) and the untruthfulness of the lie. If the claim was untrue simply because no one knows the real answer, we gave it a ‘1’. If it was just factually wrong, we gave it a ‘2’. And if there was a strong reason to believe it was a more deliberate act to deceive or distort the truth, we gave it a ‘3’. We then multiplied the significance score by the untruthfulness score to get an overall severity score.
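The scoring logic above can be sketched in a few lines. This is a minimal illustration, not Tortoise’s actual code: the sample claims and the 1–3 significance scale are assumptions for demonstration (the article does not specify the range of the significance score).

```python
# Sketch of the severity scoring described above (illustrative only).
# Assumed scales: significance 1 (trivial) to 3 (major electoral issue);
# untruthfulness 1 (unknowable), 2 (factually wrong), 3 (deliberate deception).

def severity(significance: int, untruthfulness: int) -> int:
    """Overall severity = significance multiplied by untruthfulness."""
    if untruthfulness not in (1, 2, 3):
        raise ValueError("untruthfulness must be 1, 2 or 3")
    return significance * untruthfulness

# Hypothetical example claims, echoing the examples in the text.
claims = [
    {"claim": "post-Olympic baby boom", "significance": 1, "untruthfulness": 2},
    {"claim": "NHS spending figure",    "significance": 3, "untruthfulness": 3},
]

for c in claims:
    c["severity"] = severity(c["significance"], c["untruthfulness"])
    print(c["claim"], "->", c["severity"])
```

Multiplying rather than adding the two scores means a highly deceptive claim about a trivial topic still ranks below a deliberate lie about a major electoral issue.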
What can others learn from this project?
You shouldn’t be put off from trying to measure something that might seem ‘unmeasurable’ – the key is to come up with systematic criteria, sense-check them with others, and be upfront with the reader about the methodology.