I’m thrilled to sit down with Anastasia Braitsik, a global leader in SEO, content marketing, and data analytics, whose expertise in digital marketing offers unparalleled insights into the complex world of online advertising. Today, we’re diving into a pressing issue: the prevalence of scam ads on social media platforms like Meta. Our conversation explores the scale of fraudulent advertising, the mechanisms that allow it to persist, and the delicate balance between revenue goals and user safety. Anastasia sheds light on internal policies, enforcement challenges, and what this means for advertisers and users alike.
How do internal documents shed light on the scale of scam ads on Meta’s platforms, and what do they suggest about the financial impact?
These internal documents, as reported, paint a staggering picture. They estimate that around 10% of Meta’s ad revenue for 2024—roughly $16 billion—could be tied to scam ads and banned goods. That’s a massive chunk of income coming from questionable sources. Meta disputes the exact figures, calling them overly broad, but the reporting still highlights a critical issue: the sheer volume of problematic ads slipping through the cracks on platforms like Facebook and Instagram.
Can you break down what ‘higher-risk’ scam ads are and how prevalent they are on Meta’s platforms?
‘Higher-risk’ scam ads are those flagged as showing clear signs of fraud, like misleading claims or outright deception. According to the documents, Meta reportedly displays about 15 billion of these ads daily across its platforms, including Facebook, Instagram, and WhatsApp. That’s an overwhelming number, and it’s no surprise that they’re estimated to bring in around $7 billion annually for the company. It shows how deeply embedded these ads are in the ecosystem.
What can you tell us about Meta’s penalty bid system for suspected scam advertisers and how it operates?
Meta’s penalty bid system is a fascinating, if controversial, approach. Instead of outright banning suspected scammers, Meta charges them higher rates to run their ads. The idea is to make it less profitable for them while still keeping them in the auction system. It’s a financial deterrent, but it also means these ads are still visible to users, competing against legitimate advertisers who might not even realize they’re up against inflated bids.
How does this penalty system impact legitimate advertisers who are trying to reach their audience?
For legitimate advertisers, this system can be a real headache. They’re often unaware they’re bidding against suspected scammers who are paying penalty rates, which can drive up costs per thousand impressions, or CPM. It creates an uneven playing field and raises concerns about brand safety, as their ads might appear alongside fraudulent content. It’s a hidden cost that many businesses don’t account for when planning campaigns.
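To see how penalized bids can inflate what legitimate advertisers pay, here is a minimal sketch of a simple second-price auction. This is purely illustrative: the function names, the penalty multiplier, and the second-price model itself are my assumptions for demonstration, not details from the documents.

```python
def second_price_clearing(bids):
    """Return (winner, price) in a simple second-price auction:
    the highest bidder wins but pays the second-highest bid."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]
    return winner, price

# Without a penalized suspected scammer in the auction:
second_price_clearing({"brand_a": 4.0, "brand_b": 3.0})  # brand_a pays 3.0

# A suspected scammer's effective bid is raised by a penalty multiplier
# (the 1.5x value is hypothetical). The legitimate winner now pays more:
penalty_multiplier = 1.5
second_price_clearing({"brand_a": 4.0, "brand_b": 3.0,
                       "scam_x": 2.5 * penalty_multiplier})  # brand_a pays 3.75
```

The point of the toy model: even when the legitimate brand still wins, the penalized bid sits higher in the ranking and sets a higher clearing price, which is one way penalty rates can quietly raise CPMs.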
Why do you think Meta opts to keep suspected scammers in the system at higher rates rather than banning them outright?
It seems to come down to a balance of revenue and enforcement. Banning advertisers outright means cutting off a revenue stream, even if it’s from questionable sources. By imposing higher rates, Meta can still profit while theoretically discouraging bad actors. But it also suggests a hesitation to take decisive action, possibly due to the sheer scale of the problem or the risk of impacting financial targets. It’s a pragmatic approach, but it raises ethical questions.
Can you explain Meta’s threshold for banning advertisers suspected of fraud and how it’s applied?
Meta’s policy, as revealed in the documents, is pretty specific. They only ban advertisers when their automated systems are at least 95% certain that fraud is occurring. If an advertiser falls below that threshold, they’re hit with higher ad rates as a penalty but can keep running their campaigns. It’s a high bar for a ban, which means a lot of suspicious activity might continue under the radar for longer than users or legitimate advertisers would like.
How does this policy disproportionately affect smaller advertisers compared to larger ones on the platform?
Smaller advertisers get hit hard by this. They’re often banned after just eight flags for financial fraud, while larger ‘High Value Accounts’ can accumulate hundreds of strikes—sometimes over 500—without being shut down. It’s a clear double standard, likely driven by the revenue these big accounts bring in. Smaller players don’t have the same wiggle room, which can stifle their ability to compete or recover from even minor missteps.
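Taken together, the reported ban threshold and strike limits amount to a fairly simple decision rule. Here is a minimal sketch assuming the figures as reported; the function name, field names, and structure are hypothetical, not Meta’s actual code:

```python
def enforcement_action(fraud_confidence: float, strikes: int,
                       high_value_account: bool) -> str:
    """Illustrative decision rule built from the reported figures (names hypothetical)."""
    BAN_CONFIDENCE = 0.95  # reported automated-certainty bar for an outright ban
    # Reported tolerances: ~8 flags for small accounts, while 'High Value
    # Accounts' reportedly accumulate 500+ strikes before action.
    strike_limit = 500 if high_value_account else 8
    if fraud_confidence >= BAN_CONFIDENCE or strikes >= strike_limit:
        return "ban"
    if fraud_confidence > 0:
        return "penalty_bid"  # suspected but below the bar: higher rates, ads stay live
    return "allow"
```

The asymmetry is the point: the same eight flags that ban a small advertiser leave a High Value Account merely paying penalty rates.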
What insights do the internal reviews provide about why it might be easier to run scam ads on Meta compared to other platforms like Google?
The internal review mentioned in the documents concluded that it’s simply easier to advertise scams on Meta’s platforms than on Google. While the specifics aren’t detailed, it could point to differences in ad review processes, enforcement rigor, or even the sheer volume of ads Meta handles daily. Google might have stricter upfront vetting or faster response mechanisms, whereas Meta’s scale and policies—like the penalty bid system—might create loopholes that scammers exploit.
Can you elaborate on the revenue guardrails Meta has set for anti-scam enforcement and what they imply?
Meta reportedly capped anti-scam enforcement actions at no more than 0.15% of total revenue, which translates to about $135 million based on their forecasts. It’s a tiny fraction when you consider their overall earnings, and it signals a limit on how much they’re willing to invest—or lose—in the fight against scams. It’s a stark reminder that financial priorities can shape how aggressively a platform tackles fraud, even when user trust is on the line.
How do you see Meta balancing their financial goals with the pressing need to curb scam ads on their platforms?
It’s a tightrope walk. On one hand, Meta has taken steps like reducing scam ad reports by 58% over the past 18 months and removing over 134 million pieces of fraudulent content in 2025, according to their spokesperson. On the other, policies like revenue guardrails and penalty bids suggest a reluctance to sacrifice too much income. They’re aiming to lower scam revenue from 10.1% in 2024 to under 6% by 2027, but whether that’s ambitious enough—or prioritizes users over profit—remains a debate.
What is your forecast for the future of scam prevention on social media platforms like Meta, given these challenges?
I think we’re at a crossroads. Platforms like Meta have the tech and data to significantly reduce scam ads, but it’ll require a shift in priorities—potentially at the expense of short-term revenue. With increasing regulatory scrutiny, like investigations from the SEC and pressure from bodies like the UK Payment Systems Regulator, I expect tighter policies and more transparency in the next few years. But the real change will come from user and advertiser demand for safer spaces. If they push hard enough, platforms will have to adapt faster to maintain trust and market share.