Sora is the latest AI-powered tool from OpenAI, so new that it’s available only to a few select researchers, academics and visual artists. The software generates highly realistic videos from short snippets of text, like “historical footage of California in the gold rush.” Type in that description and, presto, Sora generates a high-resolution video in a fraction of the time it would take a digital artist to create one, and far faster than actually filming the scene on-site with actors, props, lighting and cameras.

This is not a hypothetical scenario or a distant possibility; AI-generated content is already influencing voters. Although many state and federal lawmakers are scrambling to safeguard the upcoming election, a growing number of experts are sounding the alarm, warning that the U.S. is woefully unprepared for the mounting threat of AI-generated propaganda and disinformation. In the 14 months since ChatGPT’s debut, this new technology has been flooding the internet with lies, reshaping the political landscape and even challenging our concept of reality.

One cold morning in November 2020, I was running in Brooklyn with my friend James. We often talked about the various clubs that made up the competitive running scene we both inhabited. Sometimes we shared gossip about who was switching to a new club, or which club was falling apart, or who was dating whom. On this particular run we wondered whether one of New York’s most dominant clubs, West Side Runners, would still exist after the pandemic.

Reporting often involves presenting conflicting information from different sources. When this happens, reporters must be diligent in verifying the facts and seeking out multiple perspectives. Ultimately, the goal is to provide the context, evidence, and analysis the audience needs to make an informed judgment. And when information is hard to verify, it is critical for reporters to explain to their readers why they couldn’t confirm it.