Microsoft says attackers are abusing OpenAI’s Assistants API to hide command-and-control (C2) traffic, allowing malware to receive encrypted instructions and exfiltrate data. This technique turns legitimate-looking AI API calls into a stealthy relay, making detection harder for traditional network monitoring.
Researchers detail that the backdoor, tracked as SesameOp, uses the Assistants API as a storage and relay channel for commands, fetching and executing instructions embedded in API responses. Because the traffic travels over encrypted HTTPS to a trusted AI endpoint, attackers can blend malicious activity with normal cloud traffic.
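To see why this pattern blends in so well, consider the benign sketch below. It uses the official openai Python SDK (an assumption; the report does not publish the malware's code) simply to write and read a text message on an Assistants API thread. Nothing here fetches or executes commands, yet the resulting HTTPS requests to api.openai.com look identical to an ordinary application integration.

```python
# Benign sketch: using an Assistants API thread as a shared "mailbox".
# This only stores and retrieves a text string; it illustrates why such
# traffic resembles normal API usage, not how the backdoor itself works.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# One party creates a thread and drops a message into it.
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="status: hello from host A",  # arbitrary text
)

# Anyone holding the same API key and thread ID can read it back later.
messages = client.beta.threads.messages.list(thread_id=thread.id)
for msg in messages.data:
    print(msg.content[0].text.value)
```

The point for defenders: the only observable difference between this and abuse is which hosts are making the calls and what the stored content is used for, which is why the mitigations below focus on egress control and usage monitoring.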
Microsoft recommends immediate steps: audit firewall rules, enforce tamper protection, tighten credential storage, and enable robust endpoint detection and response (EDR). Prioritize least-privilege API keys, monitor unusual API usage patterns, and block or throttle unexpected external AI API communications.
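One way to act on the "block or throttle unexpected external AI API communications" advice is to compare egress logs against an allowlist of hosts that are actually expected to reach AI endpoints. The sketch below is a minimal, hypothetical example: the CSV column names, allowlist, and file path are assumptions you would adapt to your own proxy or firewall export.

```python
# Hypothetical sketch: flag hosts making outbound calls to AI API endpoints
# that are not on an approved allowlist. Assumes proxy logs exported as CSV
# with "src_host" and "dest_host" columns; adjust field names to your logs.
import csv

AI_API_DOMAINS = {"api.openai.com"}                        # endpoints to watch
APPROVED_HOSTS = {"build-server-01", "ml-workstation-07"}  # hosts expected to call them

def unexpected_ai_egress(log_path: str) -> set[str]:
    flagged = set()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["dest_host"] in AI_API_DOMAINS and row["src_host"] not in APPROVED_HOSTS:
                flagged.add(row["src_host"])
    return flagged

if __name__ == "__main__":
    for host in sorted(unexpected_ai_egress("proxy_logs.csv")):
        print(f"Review outbound AI API traffic from: {host}")
```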
If you manage endpoints or networks, rotate API keys, enable multifactor authentication and tamper protection, and review logs for anomalous outbound calls to AI services. For everyday readers: keep devices patched, back up important data, and ask your IT team whether they have implemented Microsoft's suggested mitigations.
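When reviewing historical logs, a useful triage signal is which hosts have only recently started talking to AI API endpoints at all. The sketch below is a hypothetical example, not a vendor tool: it assumes JSON-lines logs with timezone-aware ISO 8601 "timestamp", "src_host", and "dest_host" fields, and flags hosts whose first such connection falls inside a recent window.

```python
# Hypothetical sketch: find hosts whose *first* outbound call to an AI API
# endpoint is recent -- a triage signal when reviewing historical egress logs.
# Assumes JSON-lines records with timezone-aware ISO 8601 "timestamp",
# plus "src_host" and "dest_host" fields.
import json
from datetime import datetime, timedelta, timezone

AI_API_DOMAINS = {"api.openai.com"}
RECENT = timedelta(days=14)  # "new" means first seen within the last two weeks

def newly_seen_callers(log_path: str) -> dict[str, datetime]:
    first_seen: dict[str, datetime] = {}
    with open(log_path) as f:
        for line in f:
            event = json.loads(line)
            if event["dest_host"] not in AI_API_DOMAINS:
                continue
            ts = datetime.fromisoformat(event["timestamp"])
            host = event["src_host"]
            if host not in first_seen or ts < first_seen[host]:
                first_seen[host] = ts
    cutoff = datetime.now(timezone.utc) - RECENT
    return {h: t for h, t in first_seen.items() if t >= cutoff}

if __name__ == "__main__":
    for host, ts in newly_seen_callers("egress.jsonl").items():
        print(f"{host} first reached an AI API endpoint on {ts:%Y-%m-%d} -- review")
```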
