“If regulators wouldn’t tolerate banks profiting from fraud, they shouldn’t tolerate it in tech,”
— Sandeep Abraham, former Meta safety investigator.
Editor’s Note: This is the second article in our series on the real-world impacts of Artificial Intelligence. This post examines the technical settings and financial models that allow synthetic deception to bypass traditional safety filters.

As an IT professional and newspaper owner, I find that protecting our community requires looking past the user interface and into the actual logic of the platforms we visit daily. Recent disclosures from thousands of pages of internal documents—spanning engineering, finance, and safety teams—reveal that the persistence of online scams is often the result of specific, calculated technical thresholds.
The Technical Threshold: The “95% Certainty” Gap
The core reason you likely see suspicious advertisements in your feed is a technical configuration known internally as the 95% Certainty Rule. Meta’s automated AI detection systems are built with a wide tolerance: the platform generally bans an advertiser only if its machine-learning models estimate at least a 95% probability of fraudulent behavior. If there is a 94% chance it is a scam…you see it in your feed.
| AI Certainty Score | Internal Classification | Platform Action |
| 95% – 100% | Confirmed Violation | ACCOUNT BANNED (Unless HVA) |
| Below 95% | “Likely Scammer” | STAYS ACTIVE + “Penalty Bid” Fees |
| Any score (HVA) | “High-Value Account” | 500+ STRIKES allowed before removal |
This configuration creates a massive “grey zone” for any account that falls below that 95% mark. From an IT perspective, here is how that technical choice functions:
- The “Likely Scammer” Label: Accounts that the system flags as suspicious but cannot confirm with 95% certainty are not removed. Instead, they are kept active.
- The Penalty Bid System: Rather than blocking these “likely” scammers, the platform’s auction algorithm often charges them higher rates—called a “penalty bid”—to continue running their ads. This essentially creates a system where the platform monetizes suspicion while the user remains exposed.
- The 500-Strike Loophole: While a standard small business account might be shut down after 8 fraud flags, “High-Value Accounts” (those spending significant amounts) have been allowed to accrue over 500 strikes for fraudulent activity without being disabled.
By taxing suspicious activity instead of eliminating it, the platform extracts more revenue per impression from bad actors than it does from legitimate local businesses.
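To make that logic concrete, here is a minimal sketch in Python. The 95% bar and the 8/500 strike limits come from the reporting; the “likely scammer” floor, the penalty multiplier, and all of the names are my own illustrative assumptions, not Meta’s actual code:

```python
from dataclasses import dataclass

BAN_THRESHOLD = 0.95         # reported certainty bar for a confirmed violation
LIKELY_SCAMMER_FLOOR = 0.50  # hypothetical floor for the "likely scammer" label
STD_STRIKE_LIMIT = 8         # reported strike limit for standard accounts
HVA_STRIKE_LIMIT = 500       # reported strike allowance for High-Value Accounts

@dataclass
class Advertiser:
    fraud_probability: float  # model-estimated probability of fraud
    strikes: int              # prior confirmed violations
    is_high_value: bool       # HVA status

def enforce(adv: Advertiser) -> str:
    """Ban only at >=95% certainty; below that, keep the account live
    and surcharge it via a 'penalty bid' instead of removing it."""
    limit = HVA_STRIKE_LIMIT if adv.is_high_value else STD_STRIKE_LIMIT
    if adv.fraud_probability >= BAN_THRESHOLD:
        adv.strikes += 1  # a confirmed violation only adds a strike
        return "BANNED" if adv.strikes > limit else "ACTIVE (strike recorded)"
    if adv.fraud_probability >= LIKELY_SCAMMER_FLOOR:
        return "ACTIVE + penalty bid"  # suspicion is monetized, not removed
    return "ACTIVE"

# A 94%-certain scammer stays live and simply pays more per impression:
print(enforce(Advertiser(fraud_probability=0.94, strikes=120, is_high_value=True)))
# -> ACTIVE + penalty bid
```

Note how the ban decision requires both near-certainty and an exhausted strike budget; for a High-Value Account, that budget is effectively unlimited.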
Cross-Platform Risks: It’s Not Just One App
While news reports have focused on Meta, the scam economy is a systemic issue across the social media landscape. Research from early 2026 indicates that malicious ads in Europe alone generated approximately $5.2 billion in revenue.

- TikTok: In-feed video ads on TikTok show a fraud rate between 15% and 26%. During the 2025 holiday season, 78% of fraudulent advertisers on TikTok remained active even after being previously flagged for counterfeit goods.
- YouTube: Fraud rates for YouTube advertisements are currently estimated at 17% to 28%. Undetected fraud across Google-owned platforms is estimated to cost advertisers roughly $35 billion annually.
- X (formerly Twitter): Scammers have exploited display URL loopholes to spoof trusted news domains like “cnn.com,” tricking users into visiting fraudulent crypto sites that use deepfake impersonations.
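The display-URL loophole is easy to illustrate. The sketch below is my own simplification (function names are invented, and a production system would normalize domains against the Public Suffix List rather than rely on this naive suffix check):

```python
from urllib.parse import urlparse

def registrable_host(url: str) -> str:
    """Extract a lowercase hostname, tolerating scheme-less display URLs."""
    parsed = urlparse(url if "//" in url else "//" + url)
    return (parsed.hostname or "").removeprefix("www.")

def display_url_spoofed(display_url: str, destination_url: str) -> bool:
    """True when the ad's user-facing URL and its real click-through
    destination resolve to different domains."""
    shown = registrable_host(display_url)
    actual = registrable_host(destination_url)
    return shown != actual and not actual.endswith("." + shown)

print(display_url_spoofed("cnn.com", "https://cnn-breaking-crypto.example"))  # True
print(display_url_spoofed("cnn.com", "https://edition.cnn.com/article"))      # False
```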
Federal Defense: 2026 Legislative Update
The scale of these losses led to a significant federal response this month. On February 12, 2026, the National Strategy for Combating Scams Act of 2025 passed the Senate (49-47) after passing the House earlier in the month.
This legislation legally obligates the FBI to lead a unified national strategy, coordinating with more than a dozen agencies to harmonize data collection and push platforms toward standardized reporting. It also facilitates rapid data-sharing from tech companies to law enforcement to help identify cross-border criminal networks.
IT-Backed Steps to Protect Your Household
To safeguard your data and finances, the following measures are recommended:
- Clear Your “Ad Interests”: If you click on a single suspicious ad, the platform’s personalization system will likely tag you as “susceptible” and flood your feed with more scams. Periodically clear your ad preferences in your platform privacy settings.
- Verify via Off-Platform Channels: If an ad offers an 80% discount or a celebrity endorsement, never click the link. Open your browser and type the official URL yourself.
- Assume “Verified” is No Longer Binary: Professional language, polished branding, and even video calls can now be generated by AI in seconds. Treat all unsolicited requests for sensitive data as high-risk.
- Monitor the Emotional Hook: Scammers use AI to manufacture “urgency.” If a post or message pressures you to act within minutes to avoid a loss, it is almost certainly a scam.
Warning: The following section contains a significant amount of technical detail and financial data from the original investigative reports. It is provided here as an addendum for readers who want a deeper look at the specific documents and depositions that brought these issues to light.
Technical Appendix: In-Depth Investigative Details
The Monetization of Malfeasance: A Special Investigation into Social Media Revenue Structures and the Proliferation of Systematic Fraud
The intersection of global digital advertising and organized financial crime reached a critical inflection point in late 2025. Following a series of landmark investigative disclosures, it became evident that the operational foundations of major social media platforms were inextricably linked to the monetization of fraudulent content. The revelation that Meta Platforms Inc. had internally projected that approximately 10.1% of its annual revenue—amounting to an estimated $16 billion—would be derived from “violating revenue,” including scams and banned goods, fundamentally altered the discourse surrounding platform accountability. This report examines the multi-faceted architecture of this fraud ecosystem, tracing its evolution from internal financial projections in late 2024 to the industrialization of deepfake-enabled deception in early 2026.
The Financial Architecture of the Fraud Ecosystem
The structural reliance on high-risk advertising is not merely a byproduct of automated systems but a calculated component of corporate financial forecasting. Internal documents reviewed in November 2025 revealed that as early as late 2024, Meta executives were quantifying the value of problematic advertisements with high precision. These documents highlighted a bifurcated revenue stream where “higher risk” scam advertisements alone generated roughly $7 billion in annualized revenue. This category specifically includes promotions for fraudulent e-commerce, sophisticated investment schemes, illegal online casinos, and the sale of prohibited pharmaceuticals.
The internal logic governing these revenue streams suggests a prioritization of profit over platform integrity. While the platform earns approximately $3.5 billion every six months from scam advertisements that carry significant legal risk, it simultaneously estimated that total regulatory fines for these activities would likely top out at $1 billion. This created a rationalized economic model where the financial benefit of maintaining the fraud pipeline outweighed the anticipated costs of litigation and regulatory penalties by a factor of seven.
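The arithmetic behind that “factor of seven,” using the figures above:

$$
2 \times \$3.5\,\text{B (per half-year)} = \$7\,\text{B per year},
\qquad
\frac{\$7\,\text{B scam revenue}}{\$1\,\text{B anticipated fines}} = 7
$$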
| Revenue Metric (Meta 2024-2025) | Estimated Value |
| Total Projected “Violating Revenue” (annual) | $16,000,000,000 |
| Annualized Revenue: “Higher Risk” Scams | $7,000,000,000 |
| Semiannual High-Legal-Risk Scam Revenue | $3,500,000,000 |
| Internal Safety Action Cost Cap (H1 2025) | $135,000,000 |
| Anticipated Regulatory Penalties (cumulative) | $1,000,000,000 |
The disparity between revenue and safety investment is perhaps best illustrated by the internal enforcement caps discovered in late 2025. In the first half of that year, the teams responsible for vetting questionable advertisers were reportedly restricted from taking any enforcement actions that would cost the company more than 0.15% of its total revenue, or roughly $135 million. This artificial ceiling on safety expenditure ensured that the volume of banned ads could never reach a level that would meaningfully threaten the 10% revenue contribution from high-risk sources.
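Working backward from the cap shows the scale involved; the implied base is consistent with Meta’s reported revenue for the first half of 2025:

$$
\text{Implied revenue base} = \frac{\$135\,\text{M}}{0.0015} \approx \$90\,\text{B per half-year}
$$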
Algorithmic Arbitrage and the Penalty Bid Mechanism
The monetization of fraud is facilitated by a sophisticated “penalty bid” system that effectively taxes scammers rather than excluding them. Internal documentation indicates that Meta’s automated detection systems are calibrated to a 95% certainty threshold. An advertiser is only removed if the system is at least 95% certain that they are committing fraud; if the certainty level falls below this mark, the advertiser is not banned but is instead charged significantly higher ad rates as a “penalty”.

This mechanism creates a perverse incentive for the platform. By allowing likely scammers to remain active while charging them a premium, the platform maximizes revenue from suspicious actors. This practice also has a deleterious effect on the broader ad auction. Because scam ads are forced to bid higher and represent a significant portion of the total ad volume, they artificially inflate the auction floor for all participants. Legitimate small and medium-sized businesses (SMBs) are consequently forced to pay higher prices for reach, as they are competing against bad actors who are willing to pay “penalty” rates to access vulnerable victims.
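A toy second-price auction makes that inflation effect concrete. The bids and the 1.5x penalty multiplier below are invented for illustration; nothing here reflects Meta’s actual auction code:

```python
def clearing_price(bids: list[float]) -> float:
    """Winner pays the second-highest bid (simplified second-price rule)."""
    top_two = sorted(bids, reverse=True)[:2]
    return top_two[-1]

smb_bids = [1.00, 0.90, 0.85]   # legitimate advertisers' bids ($ CPM)
scam_bids = [0.95, 0.88]        # suspicious accounts' base bids
PENALTY = 1.5                   # hypothetical penalty-bid multiplier

print(clearing_price(smb_bids))                                     # 0.90
print(clearing_price(smb_bids + [b * PENALTY for b in scam_bids]))  # 1.32
# The penalized scam bid (0.95 * 1.5 = 1.425) now wins the auction, and the
# price every remaining participant competes against has risen by ~47%.
```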
Furthermore, a “two-tiered” enforcement system was revealed to exist for “High-Value Accounts” (HVAs). While a standard advertiser might be permanently blocked after eight strikes for financial fraud, certain HVAs were permitted to accrue more than 500 strikes for fraudulent activity without being shut down. This preferential treatment for large-scale revenue contributors suggests that the platform’s integrity systems were designed with a “monetization-first” bias, where the value of the account frequently dictated the level of scrutiny applied to its content.
The algorithmic amplification of fraud extends to the user experience through the “vortex of fraud.” Ad-personalization systems, designed to show users more of what they interact with, frequently fail to distinguish between legitimate interest and accidental engagement with a scam. Users who click on a single fraudulent advertisement are algorithmically identified as being susceptible to such content, leading to a feedback loop where they are served an increasing volume of high-risk advertisements. Internal engineers reportedly acknowledged in April 2025 that this system made it “easier to advertise scams on Meta platforms than Google,” reinforcing the perception of the platform as a high-efficiency conduit for illicit activity.
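The loop is simple enough to model directly. In this illustrative sketch every parameter is invented; it shows only the shape of the feedback, where a single click raises the modeled susceptibility, which raises the share of scam ads served:

```python
import random

random.seed(7)                 # deterministic for the example
susceptibility = 0.05          # initial modeled interest in scam-like content
LEARNING_RATE = 0.30           # how strongly one click shifts the model

for impression in range(10):
    scam_share = min(0.90, susceptibility * 4)   # more signal, more scam ads served
    served_scam = random.random() < scam_share
    clicked = served_scam and random.random() < 0.5
    if clicked:                                  # a single click is read as "interest"
        susceptibility += LEARNING_RATE * (1 - susceptibility)
    print(f"impression {impression}: scam share of feed = {scam_share:.0%}")
```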
Case Studies in Victimization: From Stock Manipulation to E-Commerce Deception
The human cost of these systemic failures became a central theme of judicial proceedings in early 2026. A prominent class-action lawsuit filed on February 5, 2026, details how Meta facilitated sophisticated “pump and dump” stock manipulation schemes. The complaint highlights a specific incident in April 2025 involving the Chinese stock Jayud Global Logistics (JYD), which allegedly cost Facebook and Instagram users over $500 million. Scammers utilized Meta’s tracking technologies and AI-driven Ads Manager to curate targeted advertisements for users interested in investing. Once a user clicked the ad, they were funneled into private WhatsApp groups where they were pressured into investing in the low-value stock, only for the scammers to “pull the rug” once the price had been artificially inflated.
This case emphasizes a recurring pattern: the use of unlicensed celebrity likenesses and the promise of “life-changing” investment opportunities to lure vulnerable demographics. The plaintiffs argue that Meta’s recent investments in generative AI tools actually worsened the problem, as these tools allowed scammers to generate hundreds of variations of advertisements optimized for engagement, bypassing rudimentary detection filters.
Smaller-scale, but equally damaging, e-commerce scams have also proliferated. The case of Calise v. Meta Platforms, which progressed through the Northern District of California in late 2025, centers on a user who was defrauded after responding to a Facebook ad for a car-engine assembly kit. Despite the platform’s public claims in its terms of service and community standards that it would “take appropriate action” against harmful and fraudulent content, the lawsuit alleges that Meta routinely fails to uphold these contractual obligations. In September 2025, a federal judge ruled that the company’s terms of service could indeed be viewed as a legally enforceable contract, a decision that Meta has since sought to appeal, arguing that such statements are merely “aspirational”.
| Major Fraud Incidents & Legal Actions (2025-2026) | Reported Impact / Status |
| Jayud Global Logistics (JYD) Stock Scam | $500,000,000 in losses |
| Calise v. Meta (E-commerce Fraud) | Proceeding to 9th Circuit Appeal |
| South Carolina Gym Auction Lawsuit | Alleged $4B in overcharges |
| Spanish GDPR Battle (Meta) | €479,000,000 penalty |
| Arkansas TikTok Deceptive Trade Lawsuit | Trial set for October 2026 |
The volume of user complaints underscores the magnitude of the issue. By late 2025, reports indicated that Meta’s platforms were involved in approximately one-third of all successful scams in the United States. In the United Kingdom, the situation was even more dire, with Meta linked to 54% of all social media-related scams. Despite this, internal staff estimates showed that the company ignored or incorrectly rejected 96% of the 100,000 valid reports filed by users every week.
The Regulatory Counter-Offensive: 2026 Legislative Developments
The systemic nature of the problem prompted a coordinated federal response in the United States. In December 2025, the “National Strategy for Combating Scams Act of 2025” (S. 3355) was introduced by Senator Kirsten Gillibrand and Congressman Gabe Amo. The legislation aims to rectify the fragmented nature of federal anti-fraud efforts, where at least 13 different agencies currently operate under separate mandates with minimal coordination.
The Act legally obligates the FBI to lead a unified national strategy, creating a federal working group to coordinate efforts between tech companies, financial institutions, and law enforcement. The bill specifically addresses the disproportionate impact of scams on older adults, who account for approximately 30% of financial losses from fraud, with an average loss of $83,000 per incident.
| Legislative Milestone (S. 3355) | Date | Chamber / Action |
| Initial Introduction | Dec 4, 2025 | Senate / Press Conference |
| House Passage (Roll No. 56) | Feb 4, 2026 | House / Passed (215-210) |
| Senate Passage (Record Vote 37) | Feb 12, 2026 | Senate / Passed (49-47) |
| Presentation to President | Feb 12, 2026 | Executive / Presented |
Simultaneously, the Federal Trade Commission (FTC) has intensified its oversight. In November 2025, the FTC moved to appeal a court ruling in its long-standing monopolization case against Meta, arguing that the company’s dominance in personal social networking was maintained by “buying significant competitive threats” such as Instagram and WhatsApp. The FTC’s Bureau of Competition Director, Daniel Guarnera, emphasized that this dominance has allowed the platform to maintain record profits while neglecting consumer protection.
On the state level, attorneys general have filed independent lawsuits. In February 2026, the Arkansas Attorney General’s lawsuit against TikTok was set for trial in October 2026, focusing on the platform’s alleged violation of the Deceptive Trade Practices Act. The suit characterizes the app as a “Chinese Trojan horse” that falsely claims mature content is appropriate for teenagers. Similarly, New Mexico has pursued Meta for its alleged failure to protect children from sexual exploitation, with a trial scheduled for early 2026.
Global Investigations and the Walled Garden Crisis
The European Commission has pioneered a different approach, utilizing the Digital Markets Act (DMA) and antitrust rules to challenge the “closed platforms” of big tech. In October 2025, the Commission opened a formal investigation into Meta’s WhatsApp Business Solution terms, which effectively banned third-party AI assistants from the platform. The Commission argues that by excluding competitors from WhatsApp while maintaining its own “Meta AI,” the company is abusing its dominant position to gain an unfair advantage in the AI market.
This regulatory pressure has forced some concessions. Starting in January 2026, Meta users in the EU were given the choice to opt for “less personalized ads,” a measure intended to comply with the DMA after the company was fined €200 million earlier in the year. However, internal sources suggest the company remains resistant to fundamental changes, stating they will not propose new modifications “unless circumstances change” despite the risk of further fines reaching 5% of global daily turnover.
The challenges for digital advertising extend to verification services. DoubleVerify, a major player in the fraud-detection space, became the target of a shareholder derivative lawsuit in December 2025. The complaint alleges that DoubleVerify misled investors about its ability to track fraud within the “walled gardens” of Meta, Google, and TikTok. Research published by Adalytics in March 2025 claimed that these verification services frequently missed non-human traffic, billing customers for ad impressions served to bots that had openly identified themselves as such. This has led to a broader crisis of confidence, with the World Federation of Advertisers estimating that ad fraud will exceed $50 billion globally in 2025, making it the second-largest source of income for organized crime after the narcotics trade.
The Industrialization of Deepfakes: The 2026 Fraud Tipping Point
As we move through 2026, the nature of digital fraud has shifted to the large-scale “industrialization” of deepfakes. Analysis from early 2026 indicates a “fraud tipping point” where agentic AI and high-fidelity video/voice cloning have overwhelmed legacy controls. The cost of creating these fakes has plummeted; “Deepfake-as-a-Service” platforms now offer hyper-realistic video generation for as little as $10 per month on dark web markets.
| Deepfake & AI Fraud Trends (2025-2026) | Statistic / Projection |
| Deepfake Scam Success Rate | 77% of victims lost money |
| Human Detection Rate (High-Quality Video) | 24.5% |
| Increase in Deepfake Biometric Bypass | 704% (in 2023) |
| Est. US Fraud Losses from GenAI (2027) | $40,000,000,000 |
| UK AI Scam Losses (9 mo. to Nov 2025) | £9,400,000,000 |
The sophistication of these tools allows for real-time replication of voices and gestures. Multimodal AI integration enables “CEO video call” scams where an impersonator can conduct a live interview in real-time. Scammers now require as little as three seconds of audio to create a voice clone that is 85% accurate, a tool that was famously used in early 2024 to create a robocall of President Joe Biden for less than $1.
A disturbing new frontier is the “ghost employee” threat. Criminal networks—including those linked to the North Korean regime—are using AI avatars to interview for remote engineering roles. These deepfake employees infiltrate US companies to steal salaries and proprietary data. By late 2025, UK consumers alone had lost an estimated £9.4 billion to AI-based scams over a nine-month period.
Platform Pivot: Superintelligence and Predictive Defenses
In the face of these escalating threats, platforms have attempted to pivot toward “AI-first” safety models. Meta’s 2026 capital expenditure guidance of $115-$135 billion reflects a massive investment in automated defense systems and its “Superintelligence Labs”. The company reported that in 2025, its teams removed more than 134 million scam advertisements across its platforms.
The shift in defense strategy is moving from reactive, rules-based thresholds to “fully predictive” behavioral intelligence. By 2026, the goal for high-performing fraud programs is to model normal user and device behavior in real-time, catching subtle anomalies—such as unusual mouse movements—before a transaction is completed. Meta has claimed that the expansion of its facial recognition technology more than doubled the volume of celebrity-impersonation ads it was able to detect during testing.
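As a sketch of what modeling “normal behavior” means in practice, assume each user has a simple statistical baseline and score deviations from it (production systems use far richer features and models than this):

```python
from statistics import mean, stdev

def z_score(history: list[float], observed: float) -> float:
    """How many standard deviations the observed value sits
    from this user's own baseline."""
    mu, sigma = mean(history), stdev(history)
    return abs(observed - mu) / sigma if sigma else 0.0

# Per-session average cursor speeds (px/s) from a user's past sessions:
baseline = [310.0, 295.0, 330.0, 305.0, 318.0]

print(z_score(baseline, 312.0))  # ~0.03 -> consistent with this user
print(z_score(baseline, 980.0))  # ~50   -> scripted/anomalous; hold the transaction
```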
However, these technological solutions are not without their own problems. AI-driven automation has drawn skepticism from legitimate advertisers due to a lack of transparency and frequent “false positives”. Furthermore, the “95% certainty rule” remains a point of contention; as long as the ban threshold stays that high to protect revenue, scammers will continue to operate inside the grey zone below it.
Market Impact and the Erosion of Digital Trust
The cumulative effect of systematic fraud monetization is a profound erosion of digital trust. By early 2026, surveys indicated that Americans were receiving an average of 14 scam messages per day across text, email, and social media. The “time tax” required to distinguish real from fake online content was estimated at 114 hours per year for the average person—the equivalent of nearly three full workweeks.
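Checking that arithmetic against the survey figures (assuming a 40-hour workweek):

$$
\frac{114\ \text{hours/year}}{40\ \text{hours/workweek}} \approx 2.85\ \text{workweeks},
\qquad
\frac{114 \times 60\ \text{minutes}}{14 \times 365\ \text{messages}} \approx 1.3\ \text{minutes per message}
$$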
For the advertising industry, this environment creates a “vortex of audience pollution”. As scammers infiltrate the platform and inflate auction prices, the return on ad spend (ROAS) for legitimate businesses becomes increasingly difficult to measure. A former Meta product manager filed a whistleblower complaint in August 2025 alleging that the company artificially inflated ROAS metrics by counting shipping fees and taxes as revenue.
Efforts to bring transparency to the system, such as the non-profit CollectiveMetrics.org launched by former Meta executives Rob Leathern and Rob Goldman, represent an attempt to hold platforms accountable through independent data analysis. Their mission is to reveal how opaque ad systems generate massive profits for tech giants by providing clear metrics on the prevalence of online fraud.
Future Outlook: Agentic AI and the Decentralization of Identity
As we look toward 2027, the battle against fraudulent advertising is expected to evolve into a conflict over “agentic breaches” and “non-human identity sprawl”. AI agents can be poisoned via “indirect prompt injection” to become persistent, autonomous threats within corporate systems.
The response from the tech industry is likely to involve a move toward decentralized identity frameworks tied to hardware-based verification, such as mobile device passkeys. Gartner predicts that by late 2026, “standalone IDV” (identity verification) will be considered obsolete, replaced by continuous, multi-layered monitoring.
The central tension remains economic. The revelation that social media companies could earn 10% of their revenue from scams while only spending 0.15% on safety suggests that the “fraud economy” is an integral part of the modern digital landscape. Until regulatory fines and legal liabilities exceed the multi-billion-dollar profits generated by these ads, the incentive for platforms to maintain a “permissive” environment will persist. The introduction of the National Strategy for Combating Scams Act of 2025 represents the first significant attempt to realign these incentives, but its success will depend on the ability of federal agencies to keep pace with the industrialization of AI-driven deception.
Works cited
- How Meta’s Scam Ad Revenue Impacts PPC Advertisers – ClickGuard, accessed February 17, 2026, https://www.clickguard.com/blog/meta-scam-ad-revenue-impacts-ppc-advertisers/
- Report: Meta earns about $7 billion a year on scam ads | Mashable, accessed February 17, 2026, https://mashable.com/article/meta-7-billion-dollars-scam-ads
- Leaked Meta documents reveal $16 billion revenue from fraudulent ads and banned products – Nation Thailand, accessed February 17, 2026, https://www.nationthailand.com/news/world/40057906
- Escalating Threats of AI Deepfakes in 2026 – Reputation Management Company, accessed February 17, 2026, https://reputationace.co.uk/escalating-threats-of-ai-deepfakes-in-2026/escalating-threats-of-ai-deepfakes-in-2026/
- Report: Deepfake-as-a-service to fuel surge in corporate fraud | SC Media, accessed February 17, 2026, https://www.scworld.com/brief/report-deepfake-as-a-service-to-fuel-surge-in-corporate-fraud
- 2026 State of the Scamiverse: The Year Scams Went Linkless – McAfee, accessed February 17, 2026, https://www.mcafee.com/blogs/wp-content/uploads/2026/01/Scamiverse.pdf
- Brandon Gill | Congress.gov, accessed February 17, 2026, https://www.congress.gov/member/brandon-gill/G000603
- Gen Q4/2025 Threat Report, accessed February 17, 2026, https://www.gendigital.com/blog/insights/reports/threat-report-q4-2025
- Deepfake fraud goes industrial: How AI scams are scaling globally in 2026, accessed February 17, 2026, https://www.thenews.com.pk/latest/1391321-deepfake-fraud-goes-industrial-how-ai-scams-are-scaling-globally-in-2026
- AI Security Strategies for 2026 Fraud Surge – AI CERTs News, accessed February 17, 2026, https://www.aicerts.ai/news/ai-security-strategies-for-2026-fraud-surge/
- Deepfake Statistics & Trends 2026 | Key Data & Insights – Keepnet Labs, accessed February 17, 2026, https://keepnetlabs.com/blog/deepfake-statistics-and-trends
- Meta Reports Fourth Quarter and Full Year 2025 Results – Meta Investor Relations, accessed February 17, 2026, https://investor.atmeta.com/investor-news/press-release-details/2026/Meta-Reports-Fourth-Quarter-and-Full-Year-2025-Results/default.aspx
- Meta – Adgully.com: Latest Advertising, Marketing & Media News, accessed February 17, 2026, https://www.adgully.com/tag/30051
- Scams Are Bad for Business: Our Ongoing Efforts to Fight Fraud, accessed February 17, 2026, https://about.fb.com/news/2025/12/scams-are-bad-for-business-metas-efforts-to-fight-fraud/
- Meta highlights ongoing measures to tackle online scams – Adgully.com, accessed February 17, 2026, https://www.adgully.com/post/9777/meta-highlights-ongoing-measures-to-tackle-online-scams
- Meta removes 134 million scam ads in 2025 amid expanding fraud crisis – PPC Land, accessed February 17, 2026, https://ppc.land/meta-removes-134-million-scam-ads-in-2025-amid-expanding-fraud-crisis/
- Class Action Lawsuit Claims Meta Knowingly Facilitates Pump-And …, accessed February 17, 2026, https://www.classaction.org/news/class-action-lawsuit-claims-meta-knowingly-facilitates-pump-and-dump-scammers-with-ads-on-facebook-instagram
- Fraud forecast 2026: Experts share predictions to help protect what’s real in the year ahead, accessed February 17, 2026, https://www.miteksystems.com/blog/2026-fraud-forecast-what-to-do-now-to-protect-whats-real-in-the-year-ahead

