Google Messages rolls out new safety feature that detects and blurs explicit images
The new safety setting blurs explicit photos and works entirely on-device.

Google is now widely rolling out its Sensitive Content Warnings in Messages for Android, a safety feature designed to detect and blur nude images before you view or send them.
The feature works by processing and classifying images entirely on-device through Android System SafetyCore. According to Google, no identifiable data or the classified content itself is sent to the company’s servers, and users must be signed in to their Google Account in Messages for the tool to function.
When a blurred image is detected, you can choose to:
- Learn why nude images can be harmful
- Block the sender's number
- View the image after confirming your choice
- Return to the conversation without opening it
The system also intervenes when users attempt to send or forward a nude image: they are reminded of the risks and must confirm before the message goes through.
For adults (18+), the feature is turned off by default but can be enabled via Google Messages Settings > Protection & Safety > Manage sensitive content warnings > Warnings in Google Messages. The rules differ for younger users: supervised accounts cannot turn it off without parental control via Family Link, while unsupervised teens aged 13–17 can disable it in their Google Account settings.
Settings to manage sensitive content warnings. | Image credit — 9to5Google
Google first announced this feature in October 2024 and began an initial rollout to beta testers in April. It is now available more broadly with the stable releases of Google Messages and Play services.
Apple introduced a similar system called Communication Safety in iMessage, which blurs sexually explicit images for children’s accounts and provides safety resources. Like Google's approach, Apple's detection also happens on-device, aiming to protect privacy while adding an extra layer of safety. However, Apple’s version is primarily aimed at minors, whereas Google’s covers both adult and teen users, with different default settings based on age.
On one hand, Google’s Sensitive Content Warnings could help reduce harmful or unwanted exposure, especially for younger users. Having the detection happen on-device with no image data sent to servers should also help ease privacy concerns. On the other hand, some users may find the prompts intrusive, particularly in adult conversations where consent is already established.
The fact that adults must enable the feature manually might also limit its adoption. That said, this rollout targets a real problem that needs a solution — particularly when it comes to minors. If tweaking your settings or putting up with some extra prompts is the price to pay, it's up to each individual to decide whether it's worth it.