The New York Times is taking its fight against AI startup Perplexity to court, alleging the company produces content that is “verbatim or substantially similar” to its articles. The lawsuit, filed in New York federal court, claims Perplexity has been profiting from reproducing NYT work without permission. This move comes after months of cease-and-desist demands, highlighting the growing tension between traditional media and AI-driven platforms.
According to the NYT, Perplexity “unlawfully crawls, scrapes, copies, and distributes” content from its website. The lawsuit asserts that the AI startup bypassed technical protections like the robots.txt file, which signals which parts of a website are off-limits to automated crawlers. This claim points to a deliberate effort to access and use content despite legal and technical barriers.
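For readers unfamiliar with the mechanism at issue: robots.txt is a plain-text file at a site's root that tells crawlers which paths they may fetch. As a minimal illustration (not drawn from the lawsuit — the bot name and URL below are hypothetical), Python's standard `urllib.robotparser` shows how a compliant crawler checks these rules before requesting a page:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt a publisher might serve at example.com/robots.txt,
# blocking a crawler named "ExampleBot" from the entire site
rules = """\
User-agent: ExampleBot
Disallow: /
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# A compliant crawler asks can_fetch() before requesting a URL
print(parser.can_fetch("ExampleBot", "https://example.com/article"))  # False
print(parser.can_fetch("OtherBot", "https://example.com/article"))    # True
```

The file is purely advisory — nothing technically stops a crawler from ignoring it, which is why the complaint frames bypassing it as deliberate rather than accidental.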
Perplexity isn’t the first AI startup to face such claims. The Chicago Tribune filed a similar copyright lawsuit against the company, and the NYT itself sued OpenAI in December 2023. These legal battles reflect broader concerns over AI tools summarizing or directly reproducing content from established publishers without compensation.
Investigations by Forbes and Wired revealed that Perplexity had been skirting paywalls to provide AI-generated summaries of articles—and in some instances, nearly identical copies. The NYT lawsuit echoes these concerns, claiming that the startup has ignored restrictions meant to protect digital content. Such practices have triggered warnings from multiple publishers and heightened scrutiny over AI’s role in content distribution.
Perplexity has reportedly tried to ease tensions, launching programs aimed at governing how its AI handles copyrighted content. Despite these efforts, the NYT argues that the startup continues to profit from material it did not produce, leaving litigation as the next step. The dispute underscores the challenge AI companies face in balancing innovation with intellectual property rights.
This lawsuit highlights the delicate line AI companies tread when using publicly available content. As media organizations push back against automated summarization and replication, startups may need to adopt stricter compliance measures or face significant legal consequences. The case also raises broader questions about how AI can ethically interact with copyrighted materials in the future.
The NYT’s legal action is likely to influence how AI answer engines operate moving forward. Companies like Perplexity may need to implement more robust safeguards to prevent copyright violations. Meanwhile, publishers will continue defending their content, seeking both financial restitution and legal clarity on the rights of AI platforms in the digital age.