Meta is cashing in billions on the scam ads you keep seeing on Facebook and Instagram
Billions of high-risk scam ads are shown to users every day, leaving them exposed to fraud.
Last year, Meta estimated that scam ads could make up as much as 10% of its revenue – that is roughly $16 billion.
Researchers at Meta even estimate that its apps were involved in a third of all successful scams in the US.
A look behind the curtain at Meta’s scam ad problem
A new report shines a light on just how big the scam ad issue is across Meta’s platforms, and why the company hasn’t really gotten it under control. According to documents seen by Reuters, the $16 billion figure comes from ads for fraudulent e-commerce and investment schemes, illegal online casinos, and banned medical products.
The documents, some never reported before, show that for at least three years, Meta failed to stop a flood of scam ads exposing billions of Facebook, Instagram, and WhatsApp users to fraud.
One internal report from December 2024 notes that users were shown an estimated 15 billion “higher risk” scam ads per day – ads with clear signs of fraud. Meta made around $7 billion in annualized revenue from these scam ads alone.
The documents span 2021 to this year and come from Meta’s finance, lobbying, engineering, and safety teams. They show both how the company measures the scale of the abuse and how reluctant it has been to crack down in ways that might hurt its revenue.
Much of the fraud came from marketers who triggered Meta’s warning systems, yet the company bans advertisers only if its automated systems are 95% sure they’re committing fraud.
Apparently, if the system is less certain but still suspects foul play, Meta doesn’t block the ad – instead, it charges higher rates as a “penalty,” hoping to scare away the scammers. To make matters worse, the ad-personalization system can keep showing scam ads to users who have already clicked on one.
These are just a few of the scam ads you might see running on Meta’s platforms. | Image credit – Reuters
The report highlights processes that allow repeat offenders to keep buying ads. For example, a “small advertiser” promoting financial fraud might not be blocked until flagged eight times, and “bigger spenders” reportedly could rack up more than 500 strikes before removal.
It is a stark contrast to how Meta treats ordinary users, and it’s easy to see why: just four ad campaigns removed this year alone brought in $67 million, and internal memos reportedly warned managers not to “take actions that could cost Meta more than 0.15% of the company’s total revenue.”
In response to all this, a Meta spokesperson said the company “aggressively” addresses scam and fraud ads. They also noted that the 10% estimate was “rough and overly-inclusive,” and that later reviews found many of those ads weren’t actually violating any rules.
Unfortunately, the leaked documents present a selective view that distorts Meta’s approach to fraud and scams by focusing on our efforts to assess the scale of the challenge, not the full range of actions we have taken to address the problem.
– Meta spokesperson, November 2025
Regulators are turning up the heat
These revelations come as regulators worldwide push Meta to do more to protect users from online scams.
The company is racing competitors, investing heavily in AI, and planning as much as $72 billion in capital expenditures this year, all while promising to cut the share of Facebook and Instagram revenue that comes from scam ads. Whether that actually happens remains to be seen.
Users need real protection
Honestly, I think it’s clear that stronger regulatory pressure is needed on tech giants like Meta – and Google too. Billions of people use their platforms every day, and these companies make huge money off them.
But too often, that comes at the expense of users’ wallets, as scams slip through. Social media companies can’t keep treating fraud as a side issue – protecting users has to finally be a top priority.
And on a related front, Meta plans to tackle spam on WhatsApp by limiting how many people can ghost you. Essentially, the app will cap the number of messages you can send to strangers without getting a reply.