A developer known as Aloshdenny claims to have reverse-engineered Google DeepMind's SynthID watermarking system, demonstrating how to extract watermark mechanics from AI-generated images using signal processing and sample data. Google disputes the claim, stating that SynthID remains robust and hasn't been systematically broken. The incident highlights potential vulnerabilities in AI content attribution systems and raises questions about watermarking as a safeguard against synthetic media misuse.
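SynthID's actual embedding scheme is unpublished, so any reconstruction is speculative. But the general signal-processing idea the claim rests on, that a repeated, image-independent watermark pattern can be surfaced by averaging frequency-domain differences across many sample pairs, can be illustrated with a minimal, hypothetical sketch. Everything below (the synthetic cosine "watermark", the `spectral_residual` helper, the array sizes) is an assumption for illustration, not SynthID's method:

```python
import numpy as np

def spectral_residual(watermarked, clean):
    """Average magnitude-spectrum difference across paired images.

    A fixed, image-independent pattern survives the averaging,
    while image content tends to cancel out.
    """
    diffs = [
        np.abs(np.fft.fft2(w)) - np.abs(np.fft.fft2(c))
        for w, c in zip(watermarked, clean)
    ]
    return np.mean(diffs, axis=0)

# Synthetic demo: a faint fixed cosine stands in for a watermark.
rng = np.random.default_rng(0)
pattern = 0.05 * np.cos(2 * np.pi * 12 * np.arange(64) / 64)[None, :]
clean = [rng.random((64, 64)) for _ in range(32)]
marked = [img + pattern for img in clean]

residual = spectral_residual(marked, clean)
peak = np.unravel_index(np.argmax(residual), residual.shape)
print(peak)  # the injected frequency dominates the residual spectrum
```

With enough paired (or near-paired) samples, the peak in the averaged residual reveals where the embedded pattern lives in the spectrum; real watermarks are designed to resist exactly this kind of averaging attack, which is why Google disputes that such an analysis breaks SynthID.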
Has Google’s AI watermarking system been reverse-engineered?
A developer claims to have reverse-engineered Google DeepMind's SynthID watermarking system, potentially undermining AI content attribution as a safeguard against synthetic media misuse.
Tuesday, April 14, 2026, 12:00 PM UTC · 2 min read · Source: The Verge
Tags
safety