2021 was a bumper election year ("Superwahljahr") in Germany, culminating in the Bundestag elections in September. Beginning in January, Funke Mediengruppe's 18 newspapers provided election-related figures and information in one online dashboard. While the dashboard itself remained in place throughout, its content shifted across three phases:
before the elections, the dashboard showed current polls and historical election results
on election night, maps and news tickers publishing automated written updates were live
after the elections, the dashboard shared results and analyses using maps and various charts
Additionally, there were five regional variants of essentially the same dashboard, each presenting results for smaller regions.
Our dashboards were the backbone of the election coverage of 18 different newspapers. Having published the national dashboard as early as January 2021, we were the first data journalism team in Germany to prepare for the elections; overall, the regional dashboards drew almost one million visits before, during and after election night.
On top of that, many more articles across the company's newspapers either based their reporting on our work or embedded parts of the dashboards as modular iframes in online articles, increasing the reach of our work.
Our graphics and data were also adapted into multiple print graphics, for example the maps we created from very granular historical election results that were hard to convert (link 7). Visualizations from our dashboards were featured by multiple blogs (e.g. Datawrapper) and data visualization professionals.
The project is based on a state-of-the-art frontend technology stack using React with next.js, Mapbox for maps, emotion for CSS-in-JS, and d3 for data visualization. As most of our users visited the page on their smartphones, we paid extra attention to providing a good user experience on small devices with low bandwidth. Additionally, we split parts of the story into smaller pieces that could be embedded on other sites using iframes.
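The embedding workflow can be sketched as a small helper that generates the iframe code an editor pastes into an article. This is a hedged illustration, not the production code: the URL, module names and helper function are invented, though `loading="lazy"` is a real iframe attribute that fits the low-bandwidth goal described above.

```javascript
// Hypothetical helper that builds an embed snippet for one dashboard module.
// The base URL and module IDs are placeholders, not the real endpoints.
function embedSnippet(moduleId, { height = 480 } = {}) {
  const src = `https://interaktiv.example.com/wahl2021/embed/${encodeURIComponent(moduleId)}`;
  return (
    `<iframe src="${src}" width="100%" height="${height}" ` +
    `loading="lazy" style="border:0" title="Wahl-Dashboard"></iframe>`
  );
}
```

`loading="lazy"` defers loading the widget until it scrolls into view, which keeps article pages light on slow mobile connections.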
We created a database with a consistent structure, into which various scrapers (written in Node.js) fed data from different sources and formats (JSON and CSV files, but also emails with spreadsheets that were parsed automatically). This one database then powered all the different localized dashboards.
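The core idea of that pipeline is that every scraper, whatever its input format, emits records in one shared shape. A minimal sketch, assuming illustrative field names (the actual production schema is not shown in this write-up):

```javascript
// One consistent record shape, regardless of the source format.
function toResultRecord({ regionId, party, votes, source }) {
  return {
    regionId: String(regionId),
    party: party.trim().toUpperCase(), // normalize party labels across sources
    votes: Number(votes),
    source,
    fetchedAt: new Date().toISOString(),
  };
}

// A JSON feed might already be structured (German keys are invented examples) ...
function fromJsonFeed(row) {
  return toResultRecord({
    regionId: row.ags,
    party: row.partei,
    votes: row.stimmen,
    source: "json",
  });
}

// ... while a line from an emailed CSV spreadsheet needs splitting first.
function fromCsvLine(line, delimiter = ";") {
  const [regionId, party, votes] = line.split(delimiter);
  return toResultRecord({ regionId, party, votes, source: "csv" });
}
```

Because every dashboard reads the same record shape, a broken source only requires fixing one scraper, never the dashboards themselves.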
For the historical as well as the analytical parts of the dashboards, large amounts of data needed to be wrangled. To display historical election results in comparable geometries, historically different election district geometries had to be reconciled. For analytical elements, for example comparing election results by district population (e.g. by age or income) or by how rural or urban districts were, a lot of additional data was gathered and matched to election districts, so that these comparisons could be generated automatically (saving time and manual capacity) once the results were complete.
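The automated-comparison step can be illustrated as two small functions: one classifies a district from matched structural data, the other aggregates a party's result per category. The thresholds and field names here are invented for the sketch, not the team's actual methodology.

```javascript
// Hypothetical urban/rural classification by population density
// (thresholds are illustrative, not the ones used in production).
function classifyDensity(popPerKm2) {
  if (popPerKm2 >= 1500) return "urban";
  if (popPerKm2 >= 300) return "intermediate";
  return "rural";
}

// Average a party's vote share across all districts in each category.
function averageShareByCategory(districts, party) {
  const sums = {};
  for (const d of districts) {
    const cat = classifyDensity(d.popPerKm2);
    if (!sums[cat]) sums[cat] = { total: 0, n: 0 };
    sums[cat].total += d.shares[party];
    sums[cat].n += 1;
  }
  return Object.fromEntries(
    Object.entries(sums).map(([cat, { total, n }]) => [cat, total / n])
  );
}
```

Once districts are categorized this way, the "how did party X do in rural vs. urban areas" charts fill themselves in as soon as results arrive.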
What was the hardest part of this project?
The overall challenge of the election coverage was the scope of work relative to our small four-person team. The difficulty was exacerbated both by the chaos in available data caused by German federalism and by catering to the 18 different local newspapers that belong to Funke Mediengruppe.
As our product included national-level results, in an ideal world we would simply have created smaller derivatives for more local regions based on the same data and source. Unfortunately, the data published by the federal German election office was not granular enough for small-scale maps of e.g. Berlin, Hamburg, or the municipalities of other federal states. The more granular local data for the regions our newspapers cover therefore had to be collected from various local offices, each with differing formats and different disruptions during election night (e.g. data formatted differently than previously communicated). This required a lot of fixing and adjusting of numerous scrapers for different locations in real time during election night, while results were already flooding in.
The regional variance in data availability also impacted our production flow, as one of our approaches to making the workload manageable was to clone the dashboard we had created for the national overview and derive regional versions by altering some location parameters while otherwise keeping the (data) structure and functions the same. However, we had to make revisions and adjustments for each location, as data was not consistently available across these places. Where one federal state offered only the election districts (299 across all of Germany), other federal states had data at the level of municipalities (a few hundred per state) or even over 1,000 different neighborhoods, which allowed for a very detailed map in the case of Berlin.
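The clone-and-parameterize approach can be sketched as a shared defaults object plus per-region parameters; the dashboard code stays identical and only the configuration differs. All names, titles and geography levels below are invented placeholders.

```javascript
// Hypothetical per-region parameters; the real configs are not published here.
const REGION_PARAMS = {
  national: { title: "Bundestagswahl 2021", levels: ["wahlkreis"] },
  berlin: { title: "Die Wahl in Berlin", levels: ["wahlkreis", "bezirk", "kiez"] },
};

const DEFAULTS = { locale: "de-DE", showPolls: true };

// Merge shared defaults with one region's parameters and pick the
// finest geography level that region's data actually supports.
function dashboardConfig(regionKey) {
  const params = REGION_PARAMS[regionKey];
  return {
    ...DEFAULTS,
    ...params,
    finestLevel: params.levels[params.levels.length - 1],
  };
}
```

Deriving the map detail from the `levels` list is one way to absorb the regional variance described above: a state with only election districts and a city with over 1,000 neighborhoods run the same code with different parameters.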
What can others learn from this project?
The project shows that an efficient set-up and thorough preparation make it possible to cover a lot in a short amount of time, even with a very small team. Strategies that helped us achieve this included:
using a template structure that can be copied as a blueprint for various elections and/or geographies and allows for multiple localized editions of content in the same structure (with the data structured consistently as well)
a modular set-up that allowed for more flexibility when handling, updating, fixing or maintaining multiple pages at the same time. For example, erroneous modules could be commented out if they were not high-priority at that moment of election night, allowing us to attend to more urgent issues without having to shut down a whole page or keep errors online
modules also made it possible to create stand-alone widgets from parts of our dashboards, which could in turn be used in other articles and front-page teasers across the company. Without a modular set-up, it would have been impossible for our small team to create interactive visualizations for as many articles and pages as our widgets were embedded into
using a central data repository with a strict data structure (including polling results, election results, and results of individual candidates as well as parties, for various geographies), into which scrapers fed data from various sources. This made it possible to keep the upper hand in complicated data situations where we had to switch between seven different dashboards within minutes or work on them in parallel for months
we created non-public, self-service web interfaces that allowed colleagues from other desks to export custom charts with our election data, e.g. for specific municipalities, or maps as printable SVGs
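The printable-SVG export in the last point comes down to generating SVG markup on the server. The production charts used d3; the sketch below hand-rolls a tiny bar chart instead so it stays dependency-free, and its layout values are invented for illustration.

```javascript
// Minimal server-side SVG bar chart: one labelled bar per party result.
// Dimensions and styling are placeholders, not the production design.
function barChartSvg(results, { width = 300, barHeight = 20 } = {}) {
  const max = Math.max(...results.map((r) => r.share));
  const rows = results.map((r, i) => {
    const w = (r.share / max) * (width - 80); // leave room for the label
    const y = i * (barHeight + 6);
    return (
      `<text x="0" y="${y + barHeight - 5}">${r.party}</text>` +
      `<rect x="80" y="${y}" width="${w.toFixed(1)}" height="${barHeight}"/>`
    );
  });
  const height = results.length * (barHeight + 6);
  return (
    `<svg xmlns="http://www.w3.org/2000/svg" width="${width}" height="${height}">` +
    rows.join("") +
    `</svg>`
  );
}
```

Because the output is a plain SVG string, a self-service interface can offer it directly as a download that print colleagues open in their layout software.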