Elon Musk’s xAI Sues Colorado Over AI Law as Fight Over State Regulation Intensifies

In brief

  • Elon Musk’s AI company filed a federal lawsuit seeking to block Colorado’s AI law before it takes effect on June 30.
  • The case reflects a broader conflict over whether states or the federal government should regulate artificial intelligence.
  • The company faces separate lawsuits and investigations tied to Grok’s image-generation tools.

Elon Musk’s artificial intelligence company, xAI, has filed a federal lawsuit seeking to block Colorado from enforcing a new law regulating high-risk AI systems. The lawsuit, filed on Thursday, targets Colorado Senate Bill 24-205, scheduled to take effect on June 30, which requires developers of AI systems to disclose risks and take steps to prevent algorithmic discrimination in areas such as employment, housing, healthcare, education, and financial services. In the complaint, the company argues the measure would force developers to modify how AI systems operate and could restrict how models generate responses.

“SB24-205 is decidedly not an anti-discrimination law. It is instead an effort to embed the State’s preferred views into the very fabric of AI systems,” attorneys for xAI wrote. “Its provisions prohibit developers of AI systems from producing speech that the State of Colorado dislikes, while compelling them to conform their speech to a State-enforced orthodoxy on controversial topics of great public concern.” The lawsuit asks a federal court to declare the law unconstitutional and block its enforcement, arguing it violates the First Amendment by forcing changes to Grok’s outputs to align with the state’s views on diversity and equity. The lawsuit also argues that SB24-205 improperly regulates activity beyond Colorado, is too vague to enforce fairly, and favors AI systems that promote “diversity” while penalizing those that do not. “By requiring ‘developers’ and ‘deployers’ to differentiate between discrimination that Colorado disfavors and discrimination that Colorado favors, SB24-205 compels Plaintiff xAI—a ‘developer’ under the law—to alter Grok, forcing Grok’s output on certain State-selected subjects to conform to a controversial, highly politicized viewpoint,” the lawsuit said. “But the State ‘may not compel [xAI] to speak its own preferred messages.’”

The legal challenge comes amid a growing conflict between technology companies and government officials over how artificial intelligence should be regulated. Several states, including Colorado, New York, and California, have introduced rules addressing risks posed by generative AI tools. At the same time, the Trump administration has moved to establish a national AI regulatory framework.

The lawsuit also arrives as scrutiny of xAI’s chatbot Grok continues to increase. Several lawsuits filed in 2026 accuse the company of allowing Grok to generate non-consensual deepfake images. In March, a class-action complaint filed by three Tennessee minors alleged that Grok produced explicit images depicting them without consent. The city of Baltimore also sued, claiming Grok generated up to 3 million sexualized images in a matter of days, including thousands depicting minors. xAI did not immediately respond to a request for comment by Decrypt.

