Dozens of live feeds from Flock Safety’s AI-powered surveillance cameras were recently found completely exposed on the open web—no username, no password, just a clickable link. According to a joint investigation by tech YouTuber Benn Jordan and 404 Media, anyone with the right URL could watch real-time video from over 60 Flock Condor cameras, raising urgent questions about privacy, oversight, and the risks of AI-driven public surveillance.
The exposed feeds didn’t just offer static street views. Flock’s Condor cameras use advanced AI to pan, tilt, and zoom automatically, tracking people and vehicles in real time. In one chilling example, Jordan described watching a man leave his New York home in the morning. In another, he observed a woman jogging alone on a Georgia forest trail—followed by multiple cameras. The system even zoomed in on a man watching rollerblading videos on his phone during a break. At a street market in Atlanta, the AI locked onto a couple in the middle of an argument. These weren’t grainy, distant shots—they were intimate, detailed, and utterly unsecured.
The breach was uncovered using Shodan, a specialized search engine that indexes internet-connected devices. Working alongside cybersecurity expert Jon “GainSec” Gaines—who previously flagged Flock vulnerabilities—Jordan identified not only live video streams but also unprotected administrator dashboards. These panels granted full control: downloading 30-day video archives, adjusting camera settings, deleting footage, and accessing system logs. The implications go beyond voyeurism; bad actors could erase evidence or manipulate surveillance data with ease.
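For readers unfamiliar with how such exposures surface, Shodan passively indexes the service banners of internet-facing devices and offers an official Python client. The sketch below shows the general discovery pattern only: the query string and fingerprint are illustrative assumptions, not the investigators' actual (unpublished) search terms, and SHODAN_API_KEY is a placeholder.

```python
# Minimal sketch of device discovery with Shodan's official Python
# client (pip install shodan). The query below is purely illustrative;
# real research keys on distinctive HTTP headers, page titles, or TLS
# certificate fields that fingerprint a specific product.
import shodan

SHODAN_API_KEY = "YOUR_API_KEY"  # placeholder, not a real key
api = shodan.Shodan(SHODAN_API_KEY)

try:
    # Search indexed banners for a hypothetical camera fingerprint.
    results = api.search('http.title:"camera" port:443')
    print(f"Devices matching query: {results['total']}")
    for match in results["matches"][:5]:
        # Each match carries the raw banner plus metadata Shodan
        # recorded when it scanned the host: IP, port, organization.
        print(match["ip_str"], match["port"], match.get("org", "n/a"))
except shodan.APIError as exc:
    print(f"Shodan API error: {exc}")
```

Because Shodan only catalogs what is already publicly reachable, a feed or dashboard that appears in such results was exposed the moment it went online, not at the moment a researcher found it.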
Flock Safety partners with thousands of law enforcement agencies across the U.S. and recently integrated with Ring’s Neighbors app, letting police request user-submitted footage. While Flock markets the network as a tool to “solve and prevent crime,” this leak reveals how fragile the security behind these systems can be. When AI-powered cameras trained on residential neighborhoods, parks, and commercial areas are left wide open, the line between public safety and unwarranted surveillance blurs dangerously.
Flock has since taken down the exposed feeds and issued a statement acknowledging the issue, attributing it to a “misconfigured deployment” by a third-party contractor. However, this isn’t the first time Flock’s security has been called into question. Critics argue that as AI surveillance expands, oversight and transparency haven’t kept pace. With real-time tracking of everyday citizens now just a link away, many are asking: who’s watching the watchers?
Digital rights organizations warn this incident is a symptom of a larger trend: the rapid rollout of smart surveillance without adequate safeguards. “When cameras can identify, follow, and record individuals without consent—and then leave that data exposed—it’s not just a bug, it’s a systemic failure,” said one privacy researcher. As cities and police departments deepen their reliance on private surveillance networks, incidents like this could erode public trust in both tech companies and public safety institutions.
The Flock leak arrives at a pivotal moment in 2025, as lawmakers debate AI regulation and public awareness of digital privacy grows. This breach underscores a harsh reality: even well-funded, law enforcement–backed systems are vulnerable to basic security oversights. For everyday people captured on these feeds—walking dogs, returning home, or simply existing in public—the violation feels deeply personal. As AI cameras become more autonomous and ubiquitous, ensuring they’re also secure isn’t optional—it’s essential.