2020
Data-driven videos with D3
Category: Innovation (small and large newsrooms)
Country/area: United States
Organisation: The Pudding
Organisation size: Small
Publication date: 28/03/2019

Credit: Russell Goldenberg, Jan Diehm
Project description:
At The Pudding, we’re known for our data-driven visual essays. At first, that was synonymous with scrollytelling, where we would interweave text and graphics, but in 2019 we also expanded into video storytelling. None of us were video editing experts, so we created a command line interface tool that allowed us to program in JavaScript and D3.js and export to video — no extra skills required.
Impact reached:
The tool allowed us to build our first-ever gallery installation in partnership with the Smithsonian’s National Portrait Gallery. For the 100th anniversary of women’s suffrage, we examined the number of women-related words in political party platforms. The result was this video (displayed in person at the gallery exhibit in DC), and an accompanying online Spanish translation video.
We have since used it to build two more videos — one on the NBA’s 3-second rule and one on yearbook hairstyles — and we will continue to use it for future projects.
Techniques/technologies used:
Our open source CLI tool was built off of previous open work by Adam Pearce and Noah Veltman — a real testament to the industry’s collective power. The tool generates videos from a locally running server, using D3.js to control time. JavaScript animations are captured, saved as individual JPEGs, and then stitched together into a video using ffmpeg.
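The core idea described above — stepping an animation frame by frame instead of letting the browser clock drive it, so each frame can be saved as an image — can be sketched as follows. This is an illustrative Node script, not the tool’s actual API; the names (FPS, DURATION_MS, stateAtFrame) and the linear interpolation standing in for d3.interpolate are assumptions for the example.

```javascript
// Sketch: render an animation deterministically, one frame at a time.
// Real D3 transitions are clock-driven; for video capture you instead
// compute the state at each frame index so every frame is reproducible.

const FPS = 30;            // frames per second of the output video
const DURATION_MS = 2000;  // total animation length in milliseconds
const TOTAL_FRAMES = Math.round((DURATION_MS / 1000) * FPS);

// Linear interpolation, standing in for d3.interpolate / easing.
const lerp = (a, b, t) => a + (b - a) * t;

// State of one animated element at a given frame index.
function stateAtFrame(frame) {
  const t = frame / (TOTAL_FRAMES - 1); // normalized time in [0, 1]
  return { x: lerp(0, 100, t), opacity: lerp(0, 1, t) };
}

// In a real pipeline each frame would be rendered to the DOM,
// screenshotted as frame-0000.jpg, frame-0001.jpg, ..., and then
// stitched with something like:
//   ffmpeg -framerate 30 -i frame-%04d.jpg -pix_fmt yuv420p out.mp4
for (let frame = 0; frame < TOTAL_FRAMES; frame += 1) {
  const { x, opacity } = stateAtFrame(frame);
  console.log(`frame ${frame}: x=${x.toFixed(1)} opacity=${opacity.toFixed(2)}`);
}
```

Because every frame is a pure function of its index, a dropped or rerendered frame always comes out identical — which is what makes the capture-then-stitch approach reliable.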
What was the hardest part of this project?
We began building this tool in tandem with building our first video, around women’s issues in political party platforms. It was a trial-by-fire experience, and we often had to rework and retool the infrastructure of the build to accommodate how we wanted to animate different elements. While that sometimes created frustrating situations where we felt like we were duplicating work, it also gave us the space to really design for all use cases. The end result is a battle-tested tool with clear documentation and examples. It’s given us a new storytelling technique in our arsenal, and we hope it has empowered others to wade into areas that they once thought weren’t a match for their skillset.
What can others learn from this project?
We’re a group of journalist-engineers who are constantly trying to push the boundaries of storytelling. We know that what makes for a good text-driven story might also work as a photo-driven story, a data-driven story, or a video-driven story. This was an exercise in pacing and audience attention for us. While scrollytelling requires the user to take a somewhat active approach to digest content, videos are a more passive experience. Scrollytelling and video have a lot of overlap — in both, you’re leading the audience through a linear experience. With scrollytelling that often means presenting one element at a time, while in video the motion and presentation is much more overlapped. For the videos, we also had to consider things like how sound interacted with motion and how to calculate optimal reading time. We have the technical tools to seamlessly switch from medium to medium, but we should also build out our thoughtfulness in other areas.
Project links:
github.com/russellgoldenberg/render-d3-video
www.youtube.com/watch?v=-DXKDw8l0wY
pudding.cool/2019/05/three-seconds/