Anthropic revealed that Chinese state-backed hackers leveraged its AI model Claude to automate nearly 30 cyberattacks in September. The attacks targeted corporations and government entities, with 80–90% of the process running automatically. Anthropic’s head of threat intelligence, Jacob Klein, described it as “literally with the click of a button,” requiring minimal human intervention. This raises urgent concerns about the growing role of AI in cybercrime.
The hackers used Claude to manage multiple hacking tasks, including data exfiltration and command execution. Humans intervened only at critical decision points, approving or halting actions. Such AI-assisted automation represents a significant escalation over previous cyberattacks, showing that AI can now handle complex operations with minimal oversight.
AI-powered hacking is becoming more common, and this campaign highlights how advanced state-backed attackers have become. Google recently reported Russian hackers using large language models to generate malware commands, suggesting the Claude campaign is part of a broader trend of AI-driven cyber threats. Experts warn that automation could increase the speed, scale, and sophistication of attacks.
While Anthropic confirmed that sensitive data from four victims was stolen, the U.S. government was not compromised. AI cybersecurity experts emphasize proactive monitoring, AI threat detection tools, and international cooperation to prevent state-sponsored attacks. Companies using AI models must strengthen security and ensure responsible deployment to mitigate risks.