On Dec 2, 2019, The Star published a special report made up of two stories: a data story on flood hotspots in Kuala Lumpur and its surrounding districts, and a story on sea-level rise affecting padi farmers. Flash floods are a longstanding problem for the Malaysian city of Kuala Lumpur and its surrounding areas, collectively known as the Klang Valley. The floods, which often occur after sudden heavy downpours, are worsened by poorly planned development. The aim of the project was to show readers which areas in the Klang Valley are most flood prone and to highlight the importance of prioritising flood mitigation in the Klang Valley.
Data-driven content with interactive visuals is new territory for The Star, and this special report is one of our earliest attempts at it.
We received a good response from our readers in terms of pageviews as well as social media comments. This particular project gave us the motivation to continue learning and scale up our data and visual stories.
Flood hotspot story:
We analysed four years of flood data recorded by Malaysia’s Department of Irrigation and Drainage.
Data analysis was done on Microsoft Excel, then mapped and visualised using Flourish.
We also embedded a Google Earth Timelapse image of the Klang Valley to show the rapid development in the area over the years.
A video story was also created by our video team for the flood hotspot story, with video effects done using Adobe software.
Sea-level rise story:
We put together the story, which comprises interviews with affected padi farmers, photos and videos, using Shorthand.
An interactive graphic in the story was done using Genial.ly. We used the free versions of Shorthand and Genial.ly after doing some research on which tools we could use, and then learning how to use them on our own.
What was the hardest part of this project?
For the flood hotspot story, data scraping and cleanup was done by a two-person team, and it was the hardest part of the project. The data consisted of flood incident reports from Malaysia’s Department of Irrigation and Drainage.
The data was in PDF tables, so we had to scrape it using Tabula. We then had to clean up the data and geocode all the addresses and locations in order to map them.
We scraped, cleaned up and analysed four years of data, not just for the Klang Valley but for all of Malaysia.
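To give a flavour of the cleanup step, here is a minimal sketch of the kind of normalisation needed before counting incidents per location and geocoding them. The sample rows, column names and `clean_location` helper are hypothetical illustrations, not the actual data or scripts we used:

```python
from collections import Counter

# Hypothetical sample rows of the kind Tabula produces when scraping
# PDF incident tables: inconsistent casing, whitespace and punctuation.
raw_rows = [
    {"location": "  Jalan Klang Lama, KUALA LUMPUR ", "date": "12/11/2017"},
    {"location": "Jalan Klang Lama, Kuala Lumpur", "date": "03/10/2018"},
    {"location": "Taman Sri Muda, Shah Alam,", "date": "15/09/2019"},
]

def clean_location(raw: str) -> str:
    """Normalise a scraped location string: trim whitespace, drop
    trailing punctuation and apply title case, so that different
    spellings of the same place collapse into one key."""
    return raw.strip().rstrip(",").title()

# Count incidents per cleaned location -- the basis for ranking
# hotspots before geocoding each address for the map.
counts = Counter(clean_location(r["location"]) for r in raw_rows)
print(counts.most_common(1))
# [('Jalan Klang Lama, Kuala Lumpur', 2)]
```

In practice the deduplicated location strings would then be fed to a geocoding service to get coordinates for the Flourish map.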
This is because we wanted to do another story looking at flood-prone areas on the east coast of Malaysia.
However, since that story on flood-prone areas on the east coast of Malaysia (https://www.thestar.com.my/news/nation/2020/01/02/34-deaths-and-rm153mil-in-losses) was only published on January 2, 2020, we were not able to submit it.
As such, our entry comprises only the two stories from our special report, both of which were published on Dec 2, 2019, before the deadline for entries.
We hope our special report will be considered because it is part of our effort to raise awareness of the environment and climate change, topics that news organisations in Malaysia do not report on enough.
What can others learn from this project?
I am a beginner in data journalism. I learnt it on my own, mostly online, apart from attending a four-day journalism training course conducted by Malaysiakini.
It was quite scary to promise my bosses that I could learn data journalism, create data-driven content and also help my colleagues with it – part of my pitch for how our news organisation could adopt data journalism.
I made mistakes, but I am learning from them. So if the question is “what can others learn from this project”, my answer is this: if I could do this much armed only with a deep interest in data journalism and a refusal to give up, then any journalist considering learning and applying data journalism can do just as well, and even better!