Data Poisoning Leads to Frequent Errors in AI Recommendations; China's State Administration for Market Regulation Takes Action

According to 1M AI News monitoring, the generative AI era has given rise to a multi-billion-yuan "GEO (Generative Engine Optimization)" gray industry chain. Black-hat GEO operators mass-produce fake rankings and inject low-quality content into large-model training data, causing mainstream AI models such as Doubao, DeepSeek, Wenxin Yiyan, and Kimi to frequently output incorrect recommendations. Testing shows that a fabricated, nonexistent health-product brand need only purchase black-hat services to be recommended first by multiple large models within half a day, evidence of a mature pay-to-rank loop in which bad money is driving out good across the industry.


China's generative AI users now exceed 515 million, and the GEO market is expected to reach 24 billion yuan by 2030. Platforms are stepping up countermeasures against "poisoning," but the interplay of evolving optimization techniques and model iteration has turned into a prolonged cat-and-mouse game. China's State Administration for Market Regulation has listed the rectification of AI-generated advertising among its 2026 work priorities; the China Academy of Information and Communications Technology has launched a trustworthiness evaluation of GEO services; and industry associations along with the AIIA Alliance have issued a series of self-discipline initiatives. Experts urge regulators to step in quickly and platforms to clarify their rules, so that GEO can move from unchecked growth to compliant, healthy development and further pollution of the AI ecosystem can be avoided.
