
Google’s SynthID is the latest tool for catching AI-made content. What is AI ‘watermarking,’ and does it work?


Last month, Google announced SynthID Detector, a new tool to detect AI-generated content. Google claims it can identify AI-generated content in text, image, video or audio.

But there are some caveats. One of them is that the tool is currently only available to “early testers” through a waitlist.

The main catch is that SynthID primarily works for content that’s been generated using a Google AI service—such as Gemini for text, Veo for video, Imagen for images, or Lyria for audio.

If you use Google’s detector on something you’ve generated with ChatGPT, for example, it won’t be flagged.

That’s because, strictly speaking, the tool can’t detect the presence of AI-generated content or distinguish it from other kinds of content. Instead, it detects the presence of a “watermark” that Google’s AI products (and a couple of others) embed in their output through the use of SynthID.

A watermark is a special machine-readable element embedded in an image, video, sound or text. Digital watermarks have been used to ensure that information about the origins or authorship of content travels with it, to assert authorship in creative works, and to address misinformation in the media.

SynthID embeds watermarks in the output from AI models. The watermarks are not visible to readers or audiences, but can be used by other tools to identify content that was made or edited using an AI model with SynthID on board.
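
To make this concrete, here is a deliberately simplified sketch of the kind of scheme generation-time text watermarks build on. It is not Google’s actual SynthID algorithm, which is more sophisticated and only partly public; the key name and scoring rule below are illustrative assumptions. The idea is that generation pseudo-randomly favours a secret, keyed subset of word choices, and a detector later measures how often that subset appears.

```python
import hashlib

# Toy illustration only, NOT Google's SynthID scheme: a keyed hash splits the
# vocabulary into a "green" half for each context, a watermarking generator
# would favour green words, and a detector measures how often they occur.

SECRET_KEY = "demo-key"  # hypothetical secret shared by embedder and detector

def is_green(prev_word: str, word: str) -> bool:
    """Pseudo-randomly assign each (context, word) pair to the green set."""
    digest = hashlib.sha256(f"{SECRET_KEY}|{prev_word}|{word}".encode()).digest()
    return digest[0] % 2 == 0  # about half of all words are green for any context

def green_fraction(text: str) -> float:
    """Fraction of words falling in the green set. Unwatermarked text sits
    near 0.5; a generator that favours green words pushes this higher."""
    words = text.lower().split()
    if len(words) < 2:
        return 0.0
    hits = sum(is_green(prev, word) for prev, word in zip(words, words[1:]))
    return hits / (len(words) - 1)

# A real detector would convert this fraction into a significance test
# (for example a z-score over the word count) before flagging anything.
print(green_fraction("the quick brown fox jumps over the lazy dog"))
```

Because the favoured subset depends on a secret key, only someone holding that key can run the check, which is one reason each company’s detector works mainly on content watermarked with its own scheme.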

SynthID is among the latest of many such efforts. But how effective are they?

There’s no unified AI detection system

Several AI companies, including Meta, have developed their own watermarking tools and detectors, similar to SynthID. But these are “model specific” solutions, not universal ones.

This means users have to juggle multiple tools to verify content. Despite researchers calling for a unified system, and major players like Google seeking to have their tool adopted by others, the landscape remains fragmented.

A parallel effort focuses on metadata—encoded information about the origin, authorship and editing history of media. For example, the Content Credentials inspect tool allows users to verify media by checking the edit history attached to the content.

However, metadata can be easily stripped when content is uploaded to social media or converted into a different file format. This is particularly problematic if someone has deliberately tried to obscure the origin and authorship of a piece of content.
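
As a rough illustration of how fragile embedded metadata is, the sketch below reads an image’s EXIF metadata with the Pillow library and then re-encodes the file, which silently drops it. Content Credentials attach a richer, cryptographically signed manifest rather than plain EXIF, and the file names here are hypothetical placeholders, but the fragility point is the same.

```python
from PIL import Image  # pip install pillow

# Read whatever EXIF metadata currently travels with the file.
# "photo.jpg" is a hypothetical local image used only for illustration.
original = Image.open("photo.jpg")
print(dict(original.getexif()) or "no EXIF metadata found")

# Re-encoding to another format without explicitly copying the metadata
# drops it, roughly what happens when content is uploaded, screenshotted
# or converted, and why provenance information is easy to lose or strip.
original.save("reuploaded.png")
print(dict(Image.open("reuploaded.png").getexif()))  # usually {} once re-encoded
```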

There are also detectors that rely on forensic cues, such as visual inconsistencies or lighting anomalies. While some of these tools are automated, many depend on human judgment and common-sense checks, like counting the number of fingers in AI-generated images. Such methods may become obsolete as AI models improve.






How effective are AI detection tools?

Overall, AI detection tools vary dramatically in their effectiveness. Some work best when content is entirely AI-generated, such as an essay written from scratch by a chatbot.

The situation becomes murkier when AI is used to edit or transform human-created content. In such cases, AI detectors can get it badly wrong. They can fail to detect AI or flag human-created content as AI-generated.

AI detection tools often don’t explain how they arrived at a decision, which adds to the confusion. When used for plagiarism detection in university assessment, they are considered an “ethical minefield” and are known to discriminate against non-native English speakers.

Where AI detection tools can help

A wide variety of use cases exist for AI detection tools. Take insurance claims, for example: knowing whether an image a client submits genuinely depicts what it claims to can help insurers decide how to respond.

Journalists and fact checkers might draw on AI detectors, in addition to their other approaches, when trying to decide if potentially newsworthy information ought to be shared further.

Employers and job applicants alike increasingly need to assess whether the person on the other side of the recruiting process is genuine or an AI fake.

Users of dating apps need to know whether the profile of the person they’ve met online represents a real romantic prospect, or an AI avatar, perhaps fronting a romance scam.

If you’re an emergency responder deciding whether to send help to a call, confidently knowing whether the caller is human or AI can save resources and lives.

Where to from here?

As these examples show, the challenges of authenticity are now happening in real time, and static tools like watermarking are unlikely to be enough. AI detectors that work on audio and video in real time are a pressing area of development.

Whatever the scenario, it is unlikely that judgments about authenticity can ever be fully delegated to a single tool.

Understanding the way such tools work, including their limitations, is an important first step. Triangulating these with other information and your own contextual knowledge will remain essential.


This article is republished from The Conversation under a Creative Commons license. Read the original article.
