Chicago Sun-Times Publishes Fake AI-Generated Books and Experts: What Happened?
If you’re searching for details on the Chicago Sun-Times AI scandal, you’ve come to the right place. The widely read newspaper recently published a summer reading list and lifestyle articles featuring entirely fabricated books and nonexistent experts. This AI-generated content slipped past editorial checks and has raised serious questions about journalistic integrity and the use of artificial intelligence in media. Readers want to know: How did fake AI-written books and fake expert quotes make it into a trusted publication’s print edition? What does this mean for news credibility and AI content regulation? Here’s a clear breakdown of the incident and its implications.
Fake Books and Nonexistent Experts Flood the Summer Reading Guide
The May 18th Chicago Sun-Times issue included a summer activities guide packed with trendy book recommendations, outdoor tips, and food trends. However, many of the books featured were entirely fabricated. Real authors like Min Jin Lee and Rebecca Makkai were credited with titles that do not exist, such as "Nightshade Market" and "Boiling Point," deceiving readers and damaging trust. The list mixed genuine titles like André Aciman’s "Call Me By Your Name" with these AI-invented works, making the deception harder to detect.
Similarly, lifestyle articles attributed quotes to supposed experts who do not appear to exist, such as "Dr. Jennifer Campos, professor of leisure studies," and "Dr. Catherine Furst, food anthropologist at Cornell University." Even celebrity quotes, including some credited to Padma Lakshmi, were misrepresented. This blend of AI-generated misinformation erodes confidence in the editorial process and highlights the risks of automated content creation.
Sun-Times Response and Editorial Accountability
The Chicago Sun-Times quickly acknowledged the blunder on social media, stating the content was neither created nor approved by their newsroom. Victor Lim, senior director of audience development, called the incident “unacceptable” and promised a thorough investigation. While the guide prominently features the Sun-Times logo, it remains unclear whether this content was sponsored or a result of external third-party involvement.
Marco Buscaglia, a writer linked to some of the suspect pieces, admitted to relying on AI for background material and acknowledged that he failed to verify the facts this time, describing the oversight as "completely embarrassing." The case mirrors a growing trend in which news outlets publish AI-generated content alongside genuine journalism, sometimes blaming third-party vendors when errors surface. Either way, the damage to reputation and reader trust is significant.
Why This Matters: The Growing Risk of AI-Generated Misinformation
As AI tools become more common in content creation, the risk of fabricated information slipping into mainstream media increases. This incident at the Chicago Sun-Times joins a string of controversies, including similar problems at Gannett and Sports Illustrated, where AI “sludge” content created by marketing firms ran alongside real news.
Publishers must implement stronger editorial oversight, fact-checking, and transparency around AI-generated content. Without these safeguards, audiences risk exposure to fake experts, false facts, and misleading narratives—threatening the foundation of trustworthy journalism.