
Google announced this week that it will add a digital watermark to images edited with its Magic Editor AI feature. This technology is specifically designed for images modified by the Reimagine feature on Pixel 9 devices, with the goal of making it easier for the public to identify AI-generated or edited content.
The line between AI image editing and reality is becoming increasingly blurred
Since the launch of Google’s Reimagine feature in 2024, generative AI has taken photo editing to a new level. Like other Magic Editor features, Reimagine is primarily meant to enhance image quality, but it also enables far more drastic modifications, potentially blurring the line between an edited photo and a fully AI-generated image.
As AI-generated images become ever more realistic, digital-rights advocates have called for a unified way to help the public identify which photos were created or modified by AI. Digital watermarking has emerged as one of the most popular solutions because a mark can be embedded in the image file without visibly altering the picture.
SynthID: An invisible but identifiable AI watermarking technology
Google will use SynthID, a technology from its DeepMind division, to mark these images. SynthID can “directly embed a digital watermark into AI-generated content without affecting its appearance.” Beyond scanning images for watermarks, it can also be applied to AI-generated text and video.
To check a photo’s digital watermark, users can open “About this image” and view the metadata. Google notes, however, that not all edits will trigger SynthID: if a user only changes the color of a small flower in the background, for example, the change may be too minor to be marked or detected.
The launch of this feature is part of Google’s efforts to promote transparency in AI image editing. Google also emphasized that the AI principles they have published are the guidelines for such decisions.
How does the SynthID image watermark work?
At its core, SynthID uses deep learning to embed digital watermarks into AI-generated images, videos, music, and text without affecting the human visual or auditory experience.
- Images and Videos: The watermark is embedded directly into the pixels and can be detected even after cropping, filter changes, color adjustments, or compression.
- Music: The audio is converted to a spectrogram, the watermark is embedded there, and the result is converted back to a waveform; the mark remains detectable even after compression, noise reduction, or playback-speed changes.
- Text: By adjusting the probability distribution of AI-predicted words, an invisible identification mark is embedded in the AI-generated text.
The core concept of these technologies is to ensure that AI-generated content can be identified without affecting the quality of the content, thereby enhancing information credibility.
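The text case above can be illustrated with a toy sketch. Google has not published SynthID's exact algorithm, so this is not its real implementation; it is a minimal, hypothetical demonstration of the general idea behind probability-based text watermarking: at each step, word choice is quietly biased toward a secret "green list" derived from the previous word, and a detector later checks whether the statistical bias is present. All function names and parameters here are invented for illustration.

```python
import hashlib
import random

def green_list(prev_token, vocab, fraction=0.5):
    """Deterministically partition the vocabulary based on the previous token.

    Hashing (prev_token + candidate) gives a secret, reproducible ranking;
    the top `fraction` of the vocabulary forms the "green list" for this step.
    """
    ranked = sorted(
        vocab,
        key=lambda t: hashlib.sha256((prev_token + t).encode()).hexdigest(),
    )
    return set(ranked[: int(len(ranked) * fraction)])

def generate(vocab, length=50, bias=0.9, seed=0):
    """Generate tokens, steering word choice toward each step's green list.

    `bias` is the probability of sampling from the green list instead of the
    full vocabulary -- this slight skew IS the watermark signal.
    """
    rng = random.Random(seed)
    out = [rng.choice(vocab)]
    for _ in range(length - 1):
        greens = green_list(out[-1], vocab)
        pool = greens if rng.random() < bias else vocab
        out.append(rng.choice(sorted(pool)))  # sorted for reproducibility
    return out

def green_fraction(tokens, vocab):
    """Detector: fraction of transitions landing in the green list.

    Watermarked text scores well above 0.5; unbiased text hovers near 0.5.
    """
    hits = sum(1 for a, b in zip(tokens, tokens[1:]) if b in green_list(a, vocab))
    return hits / (len(tokens) - 1)
```

A detector only needs the hashing secret, not the original model, to test a passage, which is why such a mark can survive copy-and-paste while staying invisible to readers.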
What’s next for AI content labeling?
Currently, SynthID is in the testing phase and has been integrated into several of Google’s AI content generation tools, such as Imagen 3, ImageFX, and the latest AI video generation tool, Veo. In the future, this technology will be further expanded to more AI products to ensure the traceability of AI-generated content.
Although digital watermarking cannot solve all the problems of misleading information about AI content, it is indeed an important step towards AI transparency. With the development of AI technology, we may see more innovative technologies like SynthID that help people maintain trust and information reliability in the digital world.
Key takeaways
- Google is adding digital watermarks to AI-edited images to increase transparency.
- DeepMind’s SynthID technology can seamlessly embed watermarks into AI-generated content without perceptibly altering it.
- Users can check the watermark through metadata, but some minor modifications may not be marked.
- SynthID has been extended to AI-generated images, music, videos, and text, and will be applied to more Google AI tools in the future.
For technology companies and digital content creators, this technology may not only be a tool to prevent misinformation, but may also become a standard for AI content verification.


