What happened to New York’s landmark AI safety legislation, and why it no longer includes the safeguards many experts and parents demanded, comes down to a surprising alliance. In late December 2025, Governor Kathy Hochul signed a significantly softened version of the Responsible AI Safety and Education (RAISE) Act, months after both state legislative chambers had passed it with stronger oversight provisions. The rewrite followed intense lobbying from a coalition that included not only Silicon Valley giants but also some of the nation’s top universities.
The RAISE Act originally aimed to hold developers of large AI models, such as those from OpenAI, Anthropic, and Google, accountable by requiring detailed safety evaluations and mandatory incident reporting to the state attorney general. It was seen as a potential blueprint for national AI regulation. But the version signed into law omits critical enforcement mechanisms and replaces specific mandates with voluntary guidelines. Advocates say this drastically reduces its ability to prevent real-world harms, from biased hiring algorithms to deepfake-driven misinformation.
Behind the push to water down the bill was the AI Alliance, a coalition that includes Meta, IBM, Intel, Uber, and academic powerhouses like Stanford and MIT. The group spent an estimated $17,000–$25,000 on a targeted Meta ad campaign in December alone—reaching over two million New Yorkers—to frame the original bill as “unworkable” and harmful to innovation. In a June letter to state lawmakers, the alliance warned that strict requirements would stifle research and place undue burdens on developers, including university labs exploring frontier AI.
Critics argue this narrative conveniently overlooks the bill’s original intent: to protect the public, not punish innovation. Over 150 parents and advocacy groups had urged Hochul to sign the unaltered version, citing growing concerns about AI’s impact on children’s mental health, education, and privacy. “This wasn’t anti-tech—it was pro-accountability,” said one coalition organizer. “Yet universities that claim to prioritize ethics joined forces with corporations to gut it.”
The episode highlights a growing tension in AI governance: who gets to define “safety”? While tech companies and academic institutions tout responsible AI principles in public statements, their lobbying efforts often prioritize flexibility over enforceable standards. In New York’s case, the final law lacks teeth—no penalties for noncompliance, no clear timeline for safety assessments, and no independent oversight body. Instead, it leans on industry self-reporting, a model many watchdogs say has repeatedly failed in other tech domains.
What’s especially striking is the role of universities, traditionally seen as neutral ground for ethical inquiry. Their alignment with Big Tech on this issue raises questions about the influence of corporate funding on academic policy positions. Several alliance-member schools receive millions of dollars in AI research grants from the very companies that benefit from the bill’s dilution. That blurring of the line between public and private interests has left many New Yorkers wondering whose safety the “safety bill” truly serves.
For now, New York’s revised RAISE Act stands as a cautionary tale for other states drafting AI legislation. Without robust public pressure and transparent policymaking, even well-intentioned laws risk being reshaped by the very entities they aim to regulate. As AI systems grow more powerful—and more embedded in daily life—the stakes for getting regulation right have never been higher. And in Albany, many feel that moment was just missed.