Hackers are exploiting the growing popularity of AI developer tools to spread dangerous infostealers, putting software developers' data and credentials at risk. Security researchers have uncovered a malvertising campaign disguised as legitimate downloads for Claude Code, Anthropic's coding-focused AI assistant. The alert comes as more developers turn to AI tools to streamline coding, debug software, and collaborate on projects.
Security firm Kaspersky recently reported that fake websites claiming to offer Claude Code downloads are delivering malware instead: the Amatera infostealer on Windows and the AMOS (Atomic macOS Stealer) infostealer on macOS. These malicious programs can silently collect sensitive information, including source code, login credentials, and other confidential corporate data.
The campaign is particularly concerning because it leverages the trust developers place in AI tools. As Claude Code and similar platforms gain traction, hackers are exploiting that trust to lure victims into downloading malicious software disguised as legitimate AI assistants.
Claude Code for Enterprise and Teams is designed to help developers write, edit, and debug code efficiently. The tool extends Anthropic's Claude GenAI chatbot with a focus on coding tasks, comparable to GitHub Copilot or ChatGPT's coding features. Because developers often handle proprietary projects and sensitive corporate data, attackers see Claude Code users as lucrative targets for stealing intellectual property and credentials.
Fake download links exploit this by appearing professional and trustworthy, often using official-sounding names or design elements mimicking the legitimate software. Once executed, infostealers can transmit confidential information to remote servers controlled by hackers, leaving developers and organizations exposed to data breaches and financial loss.
Developers can reduce the risk of falling victim by learning to identify suspicious download sources. Some common warning signs include:
Download sites not linked from official channels.
Requests for unusual permissions or system access.
Lack of reviews, a valid HTTPS certificate, or other trust indicators on the website.
File names or extensions that differ slightly from official releases.
Verifying downloads through official websites or trusted marketplaces is essential to preventing malware infections.
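Beyond checking the source, developers can verify that a downloaded installer matches the checksum a vendor publishes on its official release page. The snippet below is a minimal sketch of that check; the file name and the expected hash are placeholders, and the real value must come from the vendor's own site, never from the page that served the download.

```python
import hashlib


def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def matches_published_hash(path: str, expected_hex: str) -> bool:
    """Compare a local file's digest against the vendor-published value.

    expected_hex should be copied from the official release page or
    signed release notes -- a hash hosted alongside a malicious download
    proves nothing.
    """
    return sha256_of(path) == expected_hex.strip().lower()
```

A mismatch means the file should be deleted, not run: for example, `matches_published_hash("claude-code-installer.pkg", "<hash from the official site>")` returning `False` is a strong sign the download was tampered with or came from a fake mirror.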
Security experts advise developers to use robust endpoint protection and multi-factor authentication to secure accounts. Regularly updating software and scanning downloads before installation can mitigate risk. Organizations should also educate employees about malvertising campaigns and the tactics hackers use to exploit current trends in AI and coding tools.
This wave of infostealers highlights a broader trend: cybercriminals increasingly follow technological trends to find vulnerable targets. As AI development tools continue to grow in popularity, developers must stay vigilant, balancing the convenience of AI coding assistants with proactive security practices.
Cybersecurity is an ongoing challenge, especially as threat actors adapt quickly to new technologies. By remaining informed about current threats, verifying download sources, and implementing strong security measures, developers can safely harness AI tools like Claude Code without compromising sensitive information. Vigilance and awareness are key to protecting both personal credentials and corporate data from evolving malware campaigns.