Cybersecurity experts are raising alarms after a surge in attacks on major Large Language Model (LLM) services, including OpenAI's models and Google's Gemini. Hackers are increasingly exploiting misconfigured proxies to probe AI systems, revealing a worrying trend in AI security. Between October 2025 and January 2026, GreyNoise recorded over 91,000 attack sessions targeting exposed AI endpoints and uncovered two major hacking campaigns. Part of this activity fell over the Christmas 2025 holidays, underlining that cybercriminals exploit even holiday downtime to target critical AI infrastructure.
The first wave of attacks focused on tricking LLM servers into connecting to attacker-controlled systems. Hackers exploited features like webhooks and model downloads to force servers to “phone home” without alerting owners. These callbacks allowed attackers to confirm whether underlying AI systems were vulnerable, providing a foothold for further exploitation. Experts warn that this approach demonstrates not just technical skill but also a growing understanding of LLM architectures, making even minor misconfigurations a serious threat.
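To make the mechanics concrete, the sketch below shows how a defender might test their own deployment for this kind of behaviour: a small listener logs any inbound connection, and a request asks the target service to fetch a "model" from that listener. The target URL, endpoint path, and parameter name are placeholders rather than any vendor's actual API; if the listener records a hit, the service will reach out to arbitrary URLs on request.

```python
# Hedged sketch: out-of-band callback test against YOUR OWN deployment.
# The endpoint path and "source_url" parameter are hypothetical placeholders.
import http.server
import json
import threading
import urllib.request

CALLBACK_PORT = 8000
TARGET = "http://localhost:8080/v1/models/download"  # placeholder endpoint


class CallbackLogger(http.server.BaseHTTPRequestHandler):
    """Any request landing here proves the target will 'phone home'."""

    def do_GET(self):
        print(f"callback from {self.client_address[0]}: {self.path}")
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):
        pass  # suppress default console noise


# Run the listener in the background.
listener = http.server.HTTPServer(("0.0.0.0", CALLBACK_PORT), CallbackLogger)
threading.Thread(target=listener.serve_forever, daemon=True).start()

# Ask the target to download a "model" from the listener we control.
payload = json.dumps(
    {"source_url": f"http://listener.example.com:{CALLBACK_PORT}/probe"}
).encode()
request = urllib.request.Request(
    TARGET, data=payload, headers={"Content-Type": "application/json"}
)
try:
    urllib.request.urlopen(request, timeout=10)
except Exception as exc:
    print("request to target failed:", exc)

input("watching for callbacks; press Enter to stop\n")
```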
A second campaign revealed a more systematic method. GreyNoise observed two IP addresses repeatedly probing AI endpoints tens of thousands of times. Instead of exploiting anything immediately, the attackers sent simple queries such as “How many states are there in the US?” to identify which AI models were active. This mapping process helped them understand configurations and accessibility without triggering alarms. Researchers emphasize that this low-profile reconnaissance can lay the groundwork for more sophisticated attacks on AI platforms later.
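A defender can often spot this kind of probing in ordinary gateway logs, because the same short prompt arriving thousands of times from one address stands out. The following sketch assumes a JSON-lines access log with "ip" and "prompt" fields, which is an assumption about your logging setup rather than any standard format; adapt the parsing to whatever your proxy actually records.

```python
# Hedged sketch: flag source IPs that repeat the same prompt suspiciously often.
# Log format (JSON lines with "ip" and "prompt" fields) is an assumption.
import json
from collections import Counter, defaultdict

REPEAT_THRESHOLD = 50  # same prompt from one IP this many times looks like probing


def find_probing_ips(log_path: str) -> dict:
    prompts_by_ip = defaultdict(Counter)
    with open(log_path) as fh:
        for line in fh:
            try:
                event = json.loads(line)
            except json.JSONDecodeError:
                continue  # skip malformed lines
            # Truncate prompts so near-identical queries group together.
            prompts_by_ip[event.get("ip", "?")][event.get("prompt", "")[:80]] += 1

    suspects = {}
    for ip, counts in prompts_by_ip.items():
        prompt, hits = counts.most_common(1)[0]
        if hits >= REPEAT_THRESHOLD:
            suspects[ip] = (prompt, hits)
    return suspects


if __name__ == "__main__":
    for ip, (prompt, hits) in find_probing_ips("gateway_access.jsonl").items():
        print(f"{ip}: {hits}x identical prompt -> {prompt!r}")
```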
Misconfigured proxies remain one of the most overlooked vulnerabilities in AI deployments. Many organizations unintentionally leave proxies open or improperly configured, giving attackers easy access to LLM systems. Once a proxy is exposed, even basic hacking scripts can start probing for weaknesses. GreyNoise’s study shows that these vulnerabilities are not theoretical; thousands of real-world attack sessions exploit them every month, putting sensitive AI operations at risk.
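A basic audit along these lines can be scripted in a few lines. The sketch below checks whether endpoints from your own inventory answer well-known discovery paths (the OpenAI-compatible /v1/models and Ollama-style /api/tags conventions) without any credentials; the host name is a placeholder for your own systems, and anything that answers 200 unauthenticated is exposed in exactly the way described above.

```python
# Hedged audit sketch: do any of OUR endpoints answer without credentials?
# The HOSTS list is a placeholder for your own inventory.
import urllib.error
import urllib.request

HOSTS = ["https://ai-gateway.example.com"]   # replace with your endpoints
PATHS = ["/v1/models", "/api/tags"]          # OpenAI-compatible / Ollama conventions


def check(host: str, path: str) -> None:
    url = host.rstrip("/") + path
    try:
        # Deliberately send no API key or auth header.
        with urllib.request.urlopen(url, timeout=5) as resp:
            if resp.status == 200:
                print(f"[!] {url} answered {resp.status} without authentication")
    except urllib.error.HTTPError as err:
        print(f"[ok] {url} rejected the unauthenticated request ({err.code})")
    except Exception as exc:
        print(f"[?] {url} unreachable: {exc}")


for host in HOSTS:
    for path in PATHS:
        check(host, path)
```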
As LLMs increasingly integrate into businesses, healthcare, and other critical sectors, these security gaps carry higher stakes. Hackers exploiting misconfigured proxies can potentially access proprietary data, manipulate outputs, or even launch automated cyberattacks. Experts advise organizations to audit proxy configurations, implement strict access controls, and continuously monitor AI endpoints to mitigate emerging threats. Ignoring these risks could result in not only financial losses but also reputational damage.
GreyNoise’s proactive research underscores the importance of ethical hacking and real-time monitoring. By setting up fake AI systems, known as honeypots, researchers can identify attack patterns and expose vulnerabilities before they are exploited in the wild. This method provides valuable insights for AI developers and organizations, helping them patch weak points and strengthen defenses. Industry observers stress that continuous AI security research is critical as LLMs become more widespread and integral to digital infrastructure.
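As a rough illustration of the honeypot idea, and not GreyNoise's actual tooling, the sketch below imitates an OpenAI-compatible /v1/chat/completions endpoint, records every probe it receives, and replies with a plausible-looking answer so that scanners keep talking.

```python
# Toy honeypot sketch: a fake OpenAI-compatible endpoint that logs every probe.
import json
import time
from http.server import BaseHTTPRequestHandler, HTTPServer


class FakeLLMEndpoint(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0) or 0))
        # Record who probed us, which path they hit, and what they sent.
        with open("honeypot.log", "a") as log:
            log.write(json.dumps({
                "ts": time.time(),
                "ip": self.client_address[0],
                "path": self.path,
                "body": body.decode("utf-8", errors="replace"),
            }) + "\n")
        # Return a plausible-looking completion so scanners keep engaging.
        reply = json.dumps({"choices": [{"message": {
            "role": "assistant", "content": "There are 50 states."}}]})
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(reply.encode())

    def log_message(self, *args):
        pass  # keep stdout quiet; everything goes to honeypot.log


if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), FakeLLMEndpoint).serve_forever()
```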
With AI adoption growing rapidly, the threat landscape is evolving equally fast. Hackers are not just targeting individual systems—they are learning how to exploit AI logic and architecture at scale. Organizations must treat LLM security as a top priority, combining robust infrastructure, vigilant monitoring, and employee awareness. While no system is invulnerable, proactive defenses against misconfigured proxies and other attack vectors can significantly reduce risks, ensuring AI tools remain both powerful and safe.

