2020

Karen Hao

Category: Young journalist

Country/area: United States

Organisation: MIT Technology Review

Organisation size: Small

Cover letter:

Growing up, I didn’t know I wanted to be a journalist. There wasn’t a single one in my Chinese-American community. Instead, I wanted to be a writer because I was obsessed with reading fiction. In high school, I discarded those dreams under the pressure of applying to colleges that could lead to a stable career. So I found a new love, engineering, and headed to MIT.

In 2015, I graduated with a degree in mechanical engineering, and settled in at a comfortable tech job in Silicon Valley. But my nagging desire to write came back to haunt me as I grew increasingly disenchanted by what I saw around me. The engineers of the tech world were rewiring the rules of society without really engaging with and understanding society itself. A year after I started, I left to become a journalist, determined to find a platform for examining and exposing this reality and its consequences.

This is what drew me back to my alma mater to become the artificial intelligence reporter at MIT Technology Review in October 2018. The expansiveness and influence of AI across industries, domains, and aspects of people’s lives provided a perfect way for me to do this kind of examination. I quickly developed my coverage around a driving thesis: technology people (e.g., researchers and engineers) and humanities people (e.g., lawyers, policymakers, and social scientists) need to talk more to one another. With my background, I can offer them a common language. I can unpack complicated technical concepts for nontechnical people to understand, as well as demonstrate how the details and decisions made in the process of technology development translate directly into impact on the ground.

As a visual and kinesthetic learner, I also believe there’s more to communication than words. So I am constantly seeking novel ways to use my visualization, data analysis, and coding skills to tell stories more clearly and powerfully along the way.

My voice and these approaches have resonated with my readers. In less than a year, the weekly AI newsletter I write for Tech Review tripled its readership (reaching close to 100,000) and earned me a nomination for best newsletter on the internet in the 2018 Webby Awards. My articles are also referenced in nonprofit, government, and intergovernmental reports and have become assigned reading in many university classrooms, including policy classes at Harvard Kennedy School, law classes at Duke University, and technology ethics classes at Stanford University.

On top of that, I have become the go-to speaker about AI and society on podcasts, radio programs, and stages around the world. Last year, I gave an opening keynote at the Marketing Artificial Intelligence Conference, moderated a panel for the UN Foundation with former US CTO Megan Smith, and chaired the AI and education track at the Middle East’s premier AI conference in Dubai, just to name a few highlights. In February of this year, I will also take the stage at TEDxGateway, India’s largest ideas festival, in front of an audience of 6,000.

Acknowledging the authoritativeness, influence, and quality of my work, Tech Review fast-tracked my promotion in January to senior artificial intelligence reporter.

I believe, too, that these factors qualify me to be considered for this award. I am less than halfway through my fourth year as a journalist, and I still have so much further to go. I am determined to continue telling provocative stories that force technology builders and regulators alike to confront technology’s impact on society. And I am committed to preparing them with a common language to do something about it.

Description of portfolio:

I am submitting two samples as part of my data and visual portfolio. I describe them both below.

Can you make AI fairer than a judge? Play our courtroom algorithm game
This is an interactive narrative that shows why decision-making algorithms can’t be completely fair when operating on data from an unfair world. It visualizes real data used by a criminal justice algorithm to predict whether a defendant will be re-arrested. It then challenges readers to adjust the algorithm to make those predictions fairer. But after every adjustment, a new notion of fairness is revealed, showing that the outcomes still aren’t fair. This buildup leads to the punchline: in practice, these notions can never all be satisfied at once.

I wanted to do this story to illustrate the complexity of algorithmic discrimination—one of the most important debates currently happening at the intersection of AI and society. To technologists, I wanted to make the point that AI bias is not just a theoretical math problem. Solving it requires grappling with social and historical factors. Data points aren’t just numbers; they’re actually people. To social scientists and policymakers, I wanted to create a tool that helps them think through the nuances of impact and regulation. While the story used criminal justice as a case study, decision-making algorithms are used across different contexts, including in hiring, healthcare, and lending. It’s also timely because the US Congress is currently trying to tackle this problem.
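The core tension behind the story can be sketched with toy numbers (my own illustrative figures, not the article’s data): if a risk score is equally precise for two groups whose underlying re-arrest base rates differ, its false positive rates for the two groups cannot also be equal.

```python
# Toy illustration of incompatible fairness notions. All numbers are
# hypothetical; the article used real criminal justice data.

def false_positive_rate(base_rate, precision, recall=0.6):
    """Derive the false positive rate implied by a given base rate,
    precision (predictive parity), and recall."""
    tp = recall * base_rate          # true positives, as fraction of population
    fp = tp / precision - tp         # false positives implied by the precision
    negatives = 1 - base_rate        # fraction of people who won't be re-arrested
    return fp / negatives

# Same precision for both groups, but different base rates...
fpr_a = false_positive_rate(base_rate=0.5, precision=0.7)
fpr_b = false_positive_rate(base_rate=0.3, precision=0.7)

# ...forces unequal false positive rates: group A is wrongly flagged more often.
print(fpr_a, fpr_b)
```

The arithmetic, not any particular algorithm, is what makes the trade-off unavoidable: holding precision equal while base rates differ mechanically pushes the false positive rates apart.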

I wrote, coded, and designed the visualizations for this story with a collaborator. We conducted every interview, wrote every line of code, and workshopped every sentence together. I principally drove the narrative flow and drafted the original story with the idea of an interactive buildup. I also did most of the production coordination because my collaborator was remote and not part of the publication.

To date, the story has received over 80,000 page views, with an average read time of 2 minutes 20 seconds. It has been publicly commended by leading experts and institutions like AI Now, Azeem Azhar, Janelle Shane, and Mary Gray. It has also been featured on radio and podcasts and become assigned reading in university courses, including at Stanford University.

 

We analyzed 16,625 papers to figure out where AI is headed next
This story presents a data-driven, visual, and narrative history of the field of artificial intelligence. I scraped the largest open database of AI research papers and analyzed the evolving language in the abstracts, then visualized the most salient trends in a series of charts.
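The analysis behind the story boils down to a simple idea: count how often key terms appear in paper abstracts, grouped by year, and watch the vocabulary shift. A minimal sketch, using a hypothetical mini-corpus in place of the 16,625 scraped abstracts:

```python
from collections import defaultdict

# Hypothetical stand-in for the scraped abstracts: (year, abstract text).
abstracts = [
    (1998, "knowledge-based expert systems for symbolic reasoning"),
    (2008, "support vector machines for text classification"),
    (2017, "deep neural networks trained with reinforcement learning"),
    (2018, "neural machine translation with attention mechanisms"),
]

def term_trend(corpus, term):
    """Count occurrences of `term` in each year's abstracts."""
    counts = defaultdict(int)
    for year, text in corpus:
        counts[year] += text.lower().count(term)
    return dict(counts)

print(term_trend(abstracts, "neural"))  # the term appears only in recent years
```

The real analysis operated on tens of thousands of abstracts and many terms at once, but the year-by-term counting at its heart is the same.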

I wanted to show the transformation of the field over its nearly 60-year existence to make the technology less mysterious to lay readers who may be approaching the subject for the very first time. Seeing how AI was created helps elucidate why it looks as it does today, and clarify how far its capabilities are from any true intelligence. It also illustrates the fact that it has always been the creation of humans, born from and shaped by a rich debate of ideas. In this way, it makes AI (and its decisions) feel more tangible than magical, more work-in-progress than perfect, and more contestable than final.

To date, the story has received 131,000 page views, with an average read time of 2 minutes. It was also shared widely on social media, with over 1,000 unique tweets from both AI experts and lay readers. I also published the majority of the code on GitHub, where the repo was starred 38 times and forked four times.

Project links:

www.technologyreview.com/s/613508/ai-fairer-than-judge-criminal-risk-assessment-algorithm/

www.technologyreview.com/s/612768/we-analyzed-16625-papers-to-figure-out-where-ai-is-headed-next/