Google Photos might get a new way to show if images were made or edited with AI

AI and non-AI edits could be flagged directly in the photo details view

Google Photos could soon make it easier to see whether a photo or video has been altered with AI tools. The unreleased feature, spotted in a breakdown of the app's code, aims to address the growing difficulty of telling whether a photo or video is genuine or a deepfake.

The discovery was made in version 7.41 of the app, where code references a feature codenamed "threepio." It would add a new "How was this made" section to the photo details view: swiping up on a photo or video would reveal details about how it was created or edited.

The labels it might include are:

  • "Media created with AI"
  • "Edited with AI tools"
  • "Edited with multiple AI tools"
  • "Edited with non-AI tools"
  • "Media captured with a camera without software adjustments"

It may also detect when multiple editing tools were used or when several images were combined. Additionally, if the file’s edit history is missing or has been changed, Google Photos would show an error message instead.

According to the source, this functionality appears to be powered by Content Credentials, a system that attaches a persistent history of edits to a file. That information stays with the media even when shared, unless it’s removed.
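For the curious, Content Credentials is built on the open C2PA standard, which embeds a signed manifest of capture and edit "actions" directly in the file. Below is a minimal sketch of what inspecting that history could look like, assuming the Content Authenticity Initiative's open-source c2pa Python SDK; the class and method names (Reader.from_file, json) and the manifest field names follow the SDK's documented output but may differ between versions, and the label mapping is purely illustrative:

```python
# Sketch: read a file's Content Credentials (C2PA) manifest, if one is present.
# Assumes the open-source "c2pa" Python SDK; names may vary by SDK version.
import json

from c2pa import Reader


def describe_provenance(path: str) -> str:
    """Return a rough 'how was this made' summary for a media file."""
    try:
        # Parse the manifest store embedded in the file.
        reader = Reader.from_file(path)
    except Exception:
        # No manifest, or the edit history was stripped or tampered with --
        # roughly the case where Google Photos would show an error instead.
        return "No verifiable edit history found"

    store = json.loads(reader.json())
    active = store["manifests"][store["active_manifest"]]

    # Each "c2pa.actions" assertion records a step such as capture,
    # AI generation, or an edit made in a specific tool.
    actions = []
    for assertion in active.get("assertions", []):
        if assertion.get("label", "").startswith("c2pa.actions"):
            actions += [a.get("action") for a in assertion["data"].get("actions", [])]

    if actions == ["c2pa.created"]:
        return "Media captured with a camera without software adjustments"
    return "Recorded steps: " + (", ".join(actions) or "unknown")


if __name__ == "__main__":
    print(describe_provenance("photo.jpg"))
```

A labeling feature like the rumored one would presumably layer friendlier wording on top of this kind of manifest data rather than exposing the raw action list.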

The idea isn’t entirely new for Google. The company has already developed SynthID, a DeepMind project that invisibly watermarks AI-generated images. While it’s unclear if SynthID is being used here, both approaches aim to give people more context about the origins of visual content.


Other companies have been working on similar solutions. Adobe’s Content Authenticity Initiative tracks edits in image metadata, while Meta has committed to labeling AI-generated images across Facebook and Instagram. Together, these projects show that the tech industry sees transparency around AI edits as increasingly important.

If Google ships this feature in Google Photos, it could become a quick way to check whether photos and videos are authentic. The irony is not lost on me that this is the same company building tools that generate strikingly realistic AI images and videos, but I can see the value in areas like journalism, education, and online sales, where trust matters. Including such a tool in one of the most widely used photo apps could also set an example for others to follow.
