Newsworthy’s corona coverage
Organisation size: Small
Newsworthy is a Swedish news service dedicated to local data-driven journalism. We want to nominate our 2020 coverage of the coronavirus, which has helped local newsrooms, public servants and the general public understand the pandemic. We think our portfolio is worthy of an award because of its innovation in methods and high journalistic ambitions.
Newsworthy is built upon a highly automated technological platform for data collection, analysis and text generation. Using natural language generation (NLG) technologies, we are able to write articles that come in hundreds of local versions and that re-write themselves whenever new data is published.
This allows us to produce a large amount of high-quality data journalism with very limited resources. At the time of writing, Newsworthy employs three journalists.
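The idea of one article existing in hundreds of local versions that regenerate on new data can be sketched roughly as follows. This is an illustrative toy, not Newsworthy's actual system; the function and data names are hypothetical.

```python
# Hypothetical sketch of template-driven, localized article generation.
# One template, many local versions; re-run whenever the data updates.

def generate_article(municipality: str, data: dict) -> str:
    """Render one local version of an article from the latest figures."""
    trend = "risen" if data["new_cases"] > data["prev_cases"] else "fallen"
    return (
        f"In {municipality}, {data['new_cases']} new covid-19 cases were "
        f"reported last week, meaning case numbers have {trend} compared "
        f"with the previous week ({data['prev_cases']} cases)."
    )

# Illustrative data for two municipalities (made up for the example):
latest = {
    "Stockholm": {"new_cases": 412, "prev_cases": 389},
    "Uppsala": {"new_cases": 97, "prev_cases": 120},
}
articles = {m: generate_article(m, d) for m, d in latest.items()}
print(articles["Uppsala"])
```

In a real system the template logic would of course be far richer, but the principle is the same: the text is a function of the data, so updated data means updated articles, for every locality at once.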
Newsworthy was founded in 2016 by Journalism++ Stockholm after receiving a grant from Google’s Digital News Initiative. The initial ambition was to serve local newsrooms with automated story leads found through statistical trend and anomaly detection. Since then Newsworthy has found a broader audience among local civil servants and elected representatives, as well as the general public.
As data journalists our mission is to understand the world and make the world understandable through the lens of data. We want to empower organizations such as small newsrooms that lack the resources and skills to collect and interpret data.
This is achieved through automation. We build pipelines that collect data, determine newsworthiness and ultimately produce ready-to-publish articles. However, we use the buzzword “robot journalism” carefully. We want to make journalism that reacts to real world events, interprets numbers (with human input!) and says something meaningful – for example about the coronavirus pandemic. We see ourselves as a little more “traditional” than some of the actors offering fully automated media products, yet clearly more automated than traditional data journalism organizations.
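The "determine newsworthiness" step mentioned above can be illustrated with a simple statistical anomaly check of the kind referred to earlier (trend and anomaly detection). The threshold and data below are assumptions for the sake of the example, not a description of Newsworthy's actual model.

```python
# Illustrative sketch of the collect -> assess newsworthiness -> write
# pipeline: flag a new data point as a potential story lead if it
# deviates strongly from the historical trend (a simple z-score check).
from statistics import mean, stdev

def is_newsworthy(history: list, latest: float, z_threshold: float = 2.0) -> bool:
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return latest != mu
    return abs(latest - mu) / sigma >= z_threshold

# Made-up monthly unemployment rates (per cent) for one municipality:
unemployment = [7.1, 7.0, 7.2, 7.1, 7.0]
print(is_newsworthy(unemployment, 9.4))  # sharp rise -> story lead
print(is_newsworthy(unemployment, 7.2))  # within normal variation
```

A lead flagged this way would then feed the text-generation step, with human-written interpretation layered on top.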
Much of the challenge in developing a service like Newsworthy has revolved around business, rather than content development. We have been able to grow with a business model founded in commissioned work supporting our own, independent reporting. This model allows us to keep the news service openly available to organizations, professionals and individuals with limited resources.
Description of portfolio:
With this nomination we specifically want to highlight our coverage of the covid-19 pandemic. The coronavirus news cycle was, in many ways, tailored for our highly automated workflow. Every day, large amounts of new data were published – data that needed to be interpreted and put into sensible context. Our coverage has consisted of:
Daily/weekly articles about the spread of the virus in all 290 Swedish municipalities.
Weekly analysis on mobility trends.
Reports on the mounting effects of the pandemic on healthcare waiting times.
Weekly and monthly articles about the economic effects on unemployment, company liquidations, housing prices, etc.
Pan-European comparisons of regional excess deaths.
All in all our small team has been able to produce tens of thousands of articles describing the impact of the coronavirus on society.
We are especially proud of our monitoring of regional excess deaths. In three larger publications we have shown how differently regions across Europe have been affected by the pandemic. While some regions have been largely untouched in terms of deaths, others have seen unprecedented excess mortality. This project was produced as part of the European Data Journalism Network and published by media outlets across the continent. These are just a few: SvD (SWE), YLE (FIN), Alternatives Economiques (FRA), OBCT (ITA).
Our coverage was much appreciated among small and hyperlocal newsrooms and local decision-makers. The number of subscribers increased threefold in just nine months. In November, Newsworthy was named one of Sweden’s top 5 news sites by IDG.se.
One of the challenges in the project has been maintaining data quality. Early on, we were dependent on scraping more than 20 regional websites every day to get the latest number of hospitalized patients. At least one or two of these scrapers would break on any given day, and regions tended to report their numbers slightly differently, making regional comparisons rather challenging.
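With one or two of twenty scrapers breaking on any given day, a key design concern is that a single failure must not halt the whole collection run. A minimal sketch of that error-isolation pattern (with hypothetical region names and a simulated broken scraper) might look like this:

```python
# Hypothetical sketch: run each regional scraper in isolation so one
# broken scraper (e.g. after a page-layout change) does not stop the
# daily collection of hospitalization figures from the other regions.

def collect_hospitalizations(scrapers: dict):
    results, failures = {}, []
    for region, scrape in scrapers.items():
        try:
            results[region] = scrape()
        except Exception:
            failures.append(region)  # flag for manual repair, keep going
    return results, failures

def scrape_region_b():
    raise ValueError("page layout changed")  # simulated broken scraper

scrapers = {
    "Region A": lambda: 42,       # pretend scraper returning a count
    "Region B": scrape_region_b,  # this one fails today
}
results, failures = collect_hospitalizations(scrapers)
print(results)   # figures from the regions that still work
print(failures)  # regions whose scrapers need fixing
```

The differing reporting practices between regions are a harder problem that no amount of error handling solves; they require per-region normalization and editorial judgment before figures can be compared.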