Coronavirus Tracking

    • The New York Times
    • June 2020 - Dec. 2020
    • Data reporting
    • Web scraping
    • (plenty of) Google Sheets
    • Node.js

Filling a gap in our understanding of the pandemic.

My work focused on tracking coronavirus clusters at colleges, nursing homes and other hot spots. Part-time, I also worked on the covid data acquisition team, mostly maintaining and fixing the 300+ Node.js web scrapers that feed the Times' U.S. covid database. The rest of the time, I worked on a team of about two dozen journalists, fact-checking data scraped from state long-term care reports or collecting the data myself directly from state and local health departments. We updated data on 1,800+ colleges one row at a time, every day. For some schools, that meant checking public covid dashboards; for others, it meant emails, phone calls and public records requests.
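
The scraping work was simple in principle and brittle in practice: fetch a state or school page, pull the rows out of whatever markup it uses, and pass them along to the database. Below is a minimal sketch of that pattern in Node.js using only the standard library; the URL, table layout and field names are hypothetical placeholders, not the Times' actual code.

```js
// A minimal sketch of the kind of scraper described above.
// The URL and the table structure it parses are hypothetical.
const https = require("https");

const DASHBOARD_URL = "https://health.example.gov/covid/long-term-care";

// Fetch the raw HTML of a report page.
function fetchPage(url) {
  return new Promise((resolve, reject) => {
    https
      .get(url, (res) => {
        if (res.statusCode !== 200) {
          return reject(new Error(`Unexpected status: ${res.statusCode}`));
        }
        res.setEncoding("utf8");
        let body = "";
        res.on("data", (chunk) => (body += chunk));
        res.on("end", () => resolve(body));
      })
      .on("error", reject);
  });
}

// Pull facility name / case count pairs out of a simple HTML table.
// Real state pages vary wildly and change without notice, which is
// why scrapers like this break and need constant repair.
function parseRows(html) {
  const rowPattern = /<tr>\s*<td>(.+?)<\/td>\s*<td>(\d+)<\/td>/g;
  const rows = [];
  let match;
  while ((match = rowPattern.exec(html)) !== null) {
    rows.push({ facility: match[1].trim(), cases: Number(match[2]) });
  }
  return rows;
}

fetchPage(DASHBOARD_URL)
  .then((html) => console.log(JSON.stringify(parseRows(html), null, 2)))
  .catch((err) => console.error("Scrape failed:", err.message));
```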

Adapting to the evolving nature of the pandemic, I also spearheaded several workflow improvements, including the design of the system that dozens of reporters now use to organize thousands of source files across all of our cluster-tracking efforts. These projects fill a huge gap in our understanding of the pandemic that would otherwise go unfilled, and our data are widely used by researchers and others to inform public health decisions.