Sports Illustrated's AI Scandal: The Fall of a Journalism Giant

In November 2023, the iconic sports magazine Sports Illustrated found itself at the center of a media firestorm when it was revealed that the publication had been using artificial intelligence to generate articles and attribute them to fake author profiles. The scandal, exposed by the tech news site Futurism, raised serious questions about the ethics of AI in journalism and the importance of transparency in an era of rapidly advancing language models.

The Exposé: Uncovering a Web of AI Deception

Futurism's investigation into Sports Illustrated began when the outlet noticed unusual patterns in the magazine's content. "We started to see articles that followed a very specific formula," explained Futurism editor-in-chief Jon Christian in an interview with The Guardian. "They would introduce a topic, provide a few bullet points, and then end with a call-to-action or product recommendation. It read like something generated by a machine."

Digging deeper, Futurism discovered that many of these formulaic articles were attributed to author profiles that seemed to exist only on Sports Illustrated's website. Reverse image searches of the author headshots revealed that they were synthetic images created using AI tools like Midjourney or Stable Diffusion. The author bios were also suspicious, often including irrelevant personal details or clichéd phrases.

For example, an article titled "The 10 Best Running Shoes for Flat Feet" was attributed to "Drew Ortiz," a supposed fitness enthusiast who, according to his bio, loved hiking and camping in his free time. However, no record of a journalist named Drew Ortiz could be found anywhere else online, and his author photo was confirmed to be an AI-generated face.

[Image: An example of a fake Sports Illustrated author profile generated by AI. (Source: Futurism)]

As Futurism continued its investigation, the scale of Sports Illustrated's AI operation became clear. The outlet identified over 2,000 articles across various verticals like sports equipment, health and fitness, and betting that appeared to be wholly or partially generated by AI. These articles were attributed to a rotating cast of fake AI authors, each with their own made-up backstory and byline.

The Fallout: Sports Illustrated's Response and Industry Reactions

Once Futurism's story broke, Sports Illustrated scrambled to contain the damage. In a statement, the company claimed that the AI-generated content was the work of a third-party vendor, AdVon Commerce. "We were unaware of the scope of AI-generated content being published and are taking immediate steps to address the issue," said Ross Levinsohn, CEO of The Arena Group, Sports Illustrated's publisher.

However, many questioned how such a large-scale deception could have gone unnoticed by Sports Illustrated's editorial leadership. "It stretches credulity to believe that SI had no idea this was happening," remarked Bill Adair, a journalism professor at Duke University. "You're telling me no editor ever googled one of their writers or thought it was strange that an author's bio changed every few months?"

The scandal sparked outrage among journalists and media watchdogs, who condemned Sports Illustrated for misleading readers and compromising journalistic integrity. "This is a betrayal of trust," said Kelly McBride, senior vice president at the Poynter Institute. "Readers expect that the bylines on articles represent real people who stand behind their work. Using fake AI authors is a form of deception that undermines the credibility of journalism."

Other industry leaders expressed concerns about the growing use of AI in publishing and the need for clear standards around disclosure and transparency. "If you're using AI to generate content, you have an obligation to clearly label it as such," argued Rishad Tobaccowala, former chief growth officer at Publicis Groupe. "Trying to pass off AI as human writing is not only unethical, it's a losing strategy in the long run."

The Bigger Picture: AI's Disruption of Journalism

The Sports Illustrated scandal is just one example of how artificial intelligence is rapidly transforming the media landscape. In recent years, advances in natural language processing have enabled AI models to generate increasingly sophisticated text, from news articles and blog posts to scripts and poetry. Some publications have begun experimenting with using AI to assist with tasks like data analysis, story templates, and even drafting entire articles.

However, the rise of AI in journalism has also raised concerns about job displacement, content quality, and the erosion of public trust. A 2021 study by the Pew Research Center found that 65% of Americans believe that the increasing use of AI in news will lead to a decrease in the accuracy and reliability of information.

Expected impact      Increase   Decrease   No impact
Accuracy               12%        65%        22%
Reliability            14%        63%        23%

Pew Research Center survey on American attitudes toward AI in news.

As generative AI tools become more powerful and accessible, experts say it is critical for media organizations to develop clear guidelines and best practices for their use. "AI can be a valuable tool for journalists, but it's not a replacement for human judgment, expertise, and ethics," said Anjana Susarla, a professor of responsible AI at Michigan State University. "Publishers need to be transparent about how they are using AI and ensure that there is human oversight and accountability at every stage of the process."

Some outlets, like the Associated Press and The Guardian, have begun experimenting with AI while implementing strict disclosure policies. Articles that use AI are clearly labeled as such, and human editors review all machine-generated content before publication. Other organizations, such as the BBC and NPR, have launched dedicated teams to study the implications of AI for journalism and develop internal guidelines.

However, the Sports Illustrated debacle demonstrates the risks of using AI without adequate safeguards and transparency. By covertly deploying AI at scale and attributing machine-written content to fake personas, the magazine not only deceived its audience but also undermined the fundamental tenets of journalistic integrity.

The Path Forward: Transparency, Standards, and Accountability

As the media industry grapples with the opportunities and challenges posed by artificial intelligence, experts say that transparency, ethical standards, and accountability will be key to maintaining public trust.

"The worst thing publications can do is try to hide their use of AI or pass it off as human-generated content," said Kelly McBride of the Poynter Institute. "Whenever AI is used to produce publishable content, it should be clearly disclosed to readers. There needs to be a human byline taking responsibility for the work."

Industry groups and academic institutions are also working to establish best practices and guidelines for the responsible use of AI in journalism. The Partnership on AI, a consortium of technology companies and media organizations, has convened a working group to develop recommendations around transparency, accountability, and oversight of AI-generated content.

Meanwhile, journalism schools are beginning to incorporate AI ethics and literacy into their curricula. "We need to train the next generation of journalists to understand both the capabilities and limitations of AI," said Meredith Broussard, a data journalism professor at New York University. "They need to know how to use these tools in a way that enhances their reporting while upholding the highest standards of accuracy, fairness, and integrity."

Ultimately, the path forward for AI in journalism will require a commitment to transparency, human judgment, and the unique value proposition of original, expert reporting. While AI can assist and augment the work of journalists, it cannot replace the essential human skills of curiosity, critical thinking, and storytelling.

As The Atlantic's Ethan Zuckerman writes, "The rise of AI should be a reminder of what makes journalism valuable in the first place. It's not just about producing content at scale, but about providing context, analysis, and accountability. It's about asking tough questions, uncovering hidden stories, and earning the trust of readers through honest, independent reporting."

The Sports Illustrated scandal may have exposed the pitfalls of AI gone awry, but it also underscores the enduring importance of human-driven journalism in the age of algorithms. By embracing transparency, ethics, and editorial oversight, news organizations can harness the power of AI while preserving the integrity and impact of their work. The future of media depends on it.