A developer posting under the username Aloshdenny just published what they say is a working method to strip watermarks from Google DeepMind’s SynthID system — the watermarking tech built into Gemini and other Google AI models. Google disputes the claim. But the demonstration raises a real question: how much trust should we place in watermarking as a defense against AI-generated content?
What SynthID Does (And What It’s Supposed to Stop)
SynthID embeds imperceptible watermarks into AI-generated images during the generation process itself. The watermark survives light editing — crops, compression, color shifts — and can be detected to prove an image came from Google’s models. It’s one of the few technically sound approaches to the “AI-generated content” detection problem, because it doesn’t rely on post-hoc analysis of suspicious patterns. The watermark is baked in from the start.
In theory, this makes it harder to:
- Pass off AI-generated images as human-created work
- Distribute AI images while hiding their origin
- Claim an AI image is real when it isn’t
On paper, it’s solid. In practice, according to Aloshdenny’s published work, defeating it took about 200 Gemini-generated images, basic signal processing, and patience.
The Claimed Method: Averaging and Pattern Extraction
Aloshdenny’s approach, detailed publicly on Medium and GitHub, bypassed neural networks entirely. Instead, the developer averaged multiple AI-generated images to isolate the repeating watermark pattern, then extracted and analyzed it using signal processing. Once the pattern was isolated, they claim the method can either remove the watermark from existing images or insert it into images that never came from Google’s models.
The simplicity is the concerning part. No proprietary access. No machine learning. The developer described the process as requiring “way too much free time” — not cutting-edge technical capability.
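To make the averaging idea concrete, here is a toy sketch of the general technique the write-up describes: if many images share one fixed additive pattern, averaging them suppresses the independent image content while the shared pattern survives. This is a simplified model with made-up numbers, not Aloshdenny’s actual code and not how SynthID embeds its watermark.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 200 "generated" images that all carry one fixed
# additive pattern, standing in for a deterministic watermark.
H, W, N = 64, 64, 200
pattern = 0.05 * np.sin(np.linspace(0, 8 * np.pi, H * W)).reshape(H, W)
images = [rng.normal(0.5, 0.2, (H, W)) + pattern for _ in range(N)]

# Averaging: independent per-image noise shrinks toward a constant,
# while the shared pattern is preserved in every sample.
mean_img = np.mean(images, axis=0)
estimate = mean_img - mean_img.mean()   # de-mean to isolate the pattern
target = pattern - pattern.mean()

# How well does the recovered residual match the true pattern?
corr = np.corrcoef(estimate.ravel(), target.ravel())[0, 1]
print(f"correlation with true pattern: {corr:.3f}")

# In this toy model, "removal" is just subtracting the estimate,
# and "insertion" would be adding it to a foreign image.
cleaned = images[0] - estimate
```

With these parameters the recovered residual correlates strongly with the planted pattern, which is the core of why a deterministic, shared watermark is vulnerable to this kind of attack.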
Google’s response was direct: the claim isn’t accurate. A Google spokesperson stated that the demonstrated method doesn’t actually extract or insert SynthID watermarks. Without seeing the full technical details of Google’s rebuttal, it’s hard to assess whether this is a credibility issue or a genuine misunderstanding of what was claimed.
Why This Matters for Watermarking as Defense
Whether or not Aloshdenny’s specific method works, the incident surfaces a real vulnerability in any watermarking system: if the watermark pattern is deterministic and consistent across many images, statistical analysis becomes a viable extraction tool. This is a known problem in digital watermarking research — most academic watermarking work assumes the watermark pattern itself remains secret. Once it’s reverse-engineered, the security collapses.
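The statistical mechanics behind that vulnerability are easy to check: averaging k independent samples shrinks uncorrelated content by roughly a factor of 1/sqrt(k), so any fixed pattern eventually rises above the noise floor if an attacker can collect enough samples. A small numerical illustration (toy noise model, arbitrary parameters):

```python
import numpy as np

rng = np.random.default_rng(1)

# Residual noise in an average of k independent samples falls as
# 1/sqrt(k); a deterministic pattern, by contrast, does not shrink.
noise_std, n_pixels = 0.2, 10_000
residuals = {}
for k in (10, 100, 1000):
    avg = np.mean(rng.normal(0, noise_std, (k, n_pixels)), axis=0)
    residuals[k] = avg.std()
    theory = noise_std / np.sqrt(k)
    print(f"k={k:5d}  measured residual std {residuals[k]:.4f}  (theory ~{theory:.4f})")
```

The practical upshot: the defender cannot rate-limit statistics. Each extra generated image an attacker collects quietly lowers the noise floor hiding the watermark.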
For SynthID specifically, Google likely has to balance two competing demands: the watermark must be robust enough to survive common image edits (crops, compression, noise), but that same robustness makes it harder to keep the pattern itself hidden if an attacker has enough samples.
What This Doesn’t Solve (And Still Needs To)
Even if watermark removal or insertion were trivial, it wouldn’t solve the core problem of detecting AI-generated images without a watermark. A bad actor could simply use a different model: Midjourney, DALL-E, or any open-source Stable Diffusion variant. Watermarking only works if most AI image generation eventually includes compatible watermarking. That requires industry coordination, which doesn’t exist.
The real value of watermarking isn’t perfect detection. It’s attribution — proving where an image came from when a watermark is actually present.
Do This Today: Test Your Watermark Assumptions
If you’re building systems that rely on watermark detection as a trust signal, start investigating whether the watermark implementation you depend on has published security analysis. Check if the researchers published their methodology openly. Ask the model provider directly: what’s the threat model, and has it been tested against extraction attacks? Don’t assume watermarking is unbreakable — assume it’s one layer of a larger verification strategy, because it is.