Online advertising of every kind is being flooded with AI-generated content that looks convincing but is misleading, confusing the public. In its annual ad safety report, Google said that bad actors are using generative AI to mass-produce false content. In response, the company has strengthened its Gemini-based automated defenses to block harmful material in real time before it reaches users; the block rate for policy-violating ads now stands at 99%.
Google says it shortened the time to detect policy-violating ads to milliseconds.
Last year, Google intercepted or removed more than 8.3 billion ads, including 602 million policy-violating ads closely tied to scams, up from 5.1 billion in 2024. It also suspended about 24.9 million advertiser accounts, more than 4 million of them for scam-related activity.
According to the ad safety report, Gemini has become the core of the defense system, intercepting more than 99% of policy-violating ads before they are served. Analyzing ad assets used to take seconds to minutes; it now takes milliseconds. That speed lets the system review most responsive search ads immediately, and allows Google to block scam ads in real time at the same scale at which bad actors mass-produce them with AI.
Sharma, Vice President and General Manager of Google Ads Privacy & Safety, said generative AI is part of Google's defense system, and that its progress has significantly improved the company's ability to combat problematic content.
Gemini can now analyze hundreds of billions of signals, including account age, behavioral clues, and campaign patterns, giving it a finer-grained read on the nuances of advertiser intent. This lets it judge whether an ad is legitimate or whether the advertiser's intent is likely malicious. Last year, the number of cases in which automation mistakenly paused legitimate business accounts also fell by 80%, suggesting the automated systems have struck a better balance between protecting trustworthy businesses and intercepting threats.
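The report does not describe how these signals are combined, but the idea of pre-serve screening can be illustrated with a minimal sketch. Everything below is hypothetical: the signal names (`account_age_days`, `abrupt_campaign_spike`, `prior_violations`), the weights, and the threshold are invented for illustration and do not come from Google's system.

```python
# Hypothetical sketch of signal-based pre-serve screening.
# Signal names, weights, and threshold are illustrative only,
# not taken from Google's ad safety report.
from dataclasses import dataclass


@dataclass
class AdSignals:
    account_age_days: int        # newer accounts treated as riskier
    abrupt_campaign_spike: bool  # e.g. many near-identical ads launched at once
    prior_violations: int        # policy strikes already on the account


def risk_score(s: AdSignals) -> float:
    """Combine account-level signals into a 0..1 risk score."""
    score = 0.0
    if s.account_age_days < 30:
        score += 0.4
    if s.abrupt_campaign_spike:
        score += 0.4
    score += min(s.prior_violations * 0.1, 0.3)
    return min(score, 1.0)


def should_block(s: AdSignals, threshold: float = 0.7) -> bool:
    """Block the ad before it is served if risk crosses the threshold."""
    return risk_score(s) >= threshold


# A brand-new account mass-producing ads with prior strikes is blocked;
# an established, clean account passes.
suspicious = AdSignals(account_age_days=3, abrupt_campaign_spike=True, prior_violations=2)
trusted = AdSignals(account_age_days=900, abrupt_campaign_spike=False, prior_violations=0)
print(should_block(suspicious), should_block(trusted))  # True False
```

The point of the sketch is the ordering: the score is computed from account-level signals before the ad is ever served, which is what makes millisecond-scale, pre-serve blocking possible.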
Experts say the future will be an AI-versus-AI war.
Emarketer analyst Nate Elliott said this is an old problem at a much larger scale: the biggest difference is that AI gives both legitimate and illegitimate actors an advantage in speed and scale.
The FBI’s Internet Crime Report shows that losses from AI-related scams topped $893 million last year. Matt Seitz, director of the University of Wisconsin–Madison’s AI Center, said the problem with generative AI is huge and cannot be solved by humans alone; what is unfolding now is a confrontation between AI and AI.
A Google spokesperson emphasized that whether an ad violates policy is not determined by how it was produced, since many legitimate businesses also use AI tools. That raises another question, however: can AI recognize what qualifies as a good-faith ad?
The article "Google's annual report: Gemini achieves millisecond interception, blocking 99% of scam ads" first appeared on Chain News ABMedia.