Google has expanded its Gemini app to let users verify whether a video was made or edited using Google AI. Many content creators and consumers have been asking, “Was this video generated using AI?” Now, Gemini can answer that question, specifically for Google AI-generated content. The move builds on the app’s existing image-verification feature, giving users more confidence in spotting AI-generated media.
Gemini identifies AI-generated videos by scanning for Google’s proprietary watermark, called SynthID. This watermark is embedded in both visuals and audio, making it detectable even in short clips. Users won’t just get a simple yes or no; Gemini highlights exact timestamps where the watermark appears, giving a detailed map of AI-generated segments.
While some AI watermarks can be removed or altered, Google describes SynthID as “imperceptible” to viewers and designed to survive common edits such as compression and cropping. Even so, it remains unclear how resilient the watermark is against sophisticated editing tools. Platforms like OpenAI’s Sora app have shown that AI-generated content can slip through detection, highlighting the ongoing challenge of verifying AI media across multiple platforms.
Gemini can handle videos up to 100 MB in size and 90 seconds in duration for verification. This range covers most social media clips and short-form content, making it practical for everyday users. The verification feature is available in every language and location where Gemini is offered, ensuring global accessibility.
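For anyone building an upload flow around these constraints, the stated limits are easy to check before submitting a clip. A minimal sketch, assuming only the figures the article cites (the function name and API are illustrative, not Google's):

```python
# Pre-upload check against the verification limits cited above:
# up to 100 MB in size and up to 90 seconds in duration.
# The function name is hypothetical; it is not part of any Google API.

MAX_SIZE_BYTES = 100 * 1024 * 1024  # 100 MB
MAX_DURATION_S = 90                 # 90 seconds

def within_verification_limits(size_bytes: int, duration_s: float) -> bool:
    """Return True if a clip fits the stated verification limits."""
    return size_bytes <= MAX_SIZE_BYTES and duration_s <= MAX_DURATION_S

# A typical 30-second social clip of ~25 MB fits comfortably:
print(within_verification_limits(25 * 1024 * 1024, 30))   # True
print(within_verification_limits(150 * 1024 * 1024, 30))  # False: too large
```

As the article notes, these bounds cover most short-form social clips, so in practice the check rarely rejects everyday content.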
The rise of deepfakes and AI-edited content has made verification increasingly important. With Gemini, users can quickly assess whether a video was generated or altered using Google AI, helping to prevent misinformation and maintain trust in digital media. While it’s limited to Google AI content for now, it marks a significant step toward broader AI media accountability.
Google’s approach combines watermarks with metadata through models like Nano Banana, embedding C2PA information for transparency. However, the lack of universal AI content tagging means many deepfakes may remain undetected on other platforms. Experts suggest a coordinated system across social media networks could be the next step for effective AI verification.
By enabling video verification, Gemini strengthens Google’s toolkit for responsible AI deployment. It gives creators and viewers alike more control and clarity over AI-generated content. As AI content grows in popularity and complexity, features like Gemini’s video check will likely become standard in media verification apps.
Users can access the verification feature directly through the Gemini app. Simply upload a video and ask, “Was this generated using Google AI?” The app will analyze the content and provide a timestamped report of any detected SynthID watermarks. This ease of use ensures both casual users and professionals can quickly verify AI-generated content.
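The timestamped report described above can be thought of as a list of detection intervals within the clip. A hedged sketch of that idea (Google has not published the report's actual format, so this data structure and the summary function are assumptions for illustration):

```python
from dataclasses import dataclass

@dataclass
class SynthIDHit:
    # Hypothetical record for one detected watermark segment; the real
    # Gemini report format is not published, so this shape is assumed.
    start_s: float  # segment start, in seconds
    end_s: float    # segment end, in seconds

def summarize(hits: list[SynthIDHit], total_s: float) -> str:
    """Summarize what fraction of a clip carried a detected watermark."""
    flagged = sum(h.end_s - h.start_s for h in hits)
    pct = 100.0 * flagged / total_s if total_s else 0.0
    return f"{len(hits)} segment(s), {flagged:.1f}s flagged ({pct:.0f}% of clip)"

# Example: two flagged segments in a 90-second clip.
report = [SynthIDHit(2.0, 10.0), SynthIDHit(45.0, 60.0)]
print(summarize(report, total_s=90.0))  # 2 segment(s), 23.0s flagged (26% of clip)
```

Representing the result as intervals rather than a single yes/no mirrors what the article describes: a map of which portions of a video are AI-generated, not just a verdict on the whole file.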