Experts: AI Large Model Data Poisoning is a New Form of Unfair Competition


The “March 15” Consumer Rights Gala reported on the problem of “poisoning” AI large models. Li Fumin, an expert at the Social Governance Intelligent Research Institute of Shandong University of Finance and Economics, said that using commercial practices such as GEO (generative engine optimization) to feed targeted training material to large models and steer AI toward recommending specific products or services is, in essence, a new form of unfair competition and consumer deception. It amounts to covert marketing and the fabrication of facts by technical means, leaving consumers unknowingly exposed to embedded marketing content. The harmfulness and illegality of this behavior deserve serious attention.

On one hand, such conduct violates consumers’ legally protected rights to information and to fair trading; on the other, it constitutes false or misleading commercial publicity carried out by technical means, distorting normal recommendation results and market competition, and therefore amounts to unfair competition.

Addressing these AI poisoning behaviors requires a multi-pronged approach. Regulatory authorities should include AI inducement marketing in key monitoring efforts and strengthen law enforcement; AI operators should enhance scrutiny of data sources and output filtering, and establish traceability mechanisms; consumers should improve their awareness of the commercial nature of AI-generated information and actively protect their rights through complaints and reports. (China News Service)
