Alerts | 5.1.26

DOJ Intervenes in Lawsuit Challenging Colorado’s ‘Algorithmic Discrimination’ Law


Highlights
  • AI company xAI LLC filed suit in the U.S. District Court for the District of Colorado seeking to enjoin Colorado’s SB24-205 — a law which would impose duties on AI “developers” and “deployers” to prevent “algorithmic discrimination” — before its June 30, 2026 effective date.
  • The U.S. Department of Justice (DOJ) intervened in the lawsuit, filing its own complaint alleging that SB24-205 violates the Equal Protection Clause by compelling and authorizing discrimination based on race, sex, religion, and other protected characteristics.
  • The DOJ’s intervention is consistent with the administration’s broader posture toward state AI regulation, as reflected in the December 2025 Executive Order establishing an AI Litigation Task Force and the March 2026 National AI Legislative Framework.

On April 9, 2026, xAI LLC (xAI), the developer of the large language model Grok, filed a lawsuit in the U.S. District Court for the District of Colorado against Colorado Attorney General Philip J. Weiser, challenging the constitutionality of Colorado Senate Bill 24-205 (SB24-205), titled “Consumer Protections for Artificial Intelligence.” Two weeks later, the DOJ moved to intervene in the case, filing its own Complaint in Intervention alleging that SB24-205 violates the Equal Protection Clause of the Fourteenth Amendment. The court granted the intervention. SB24-205 is set to take effect on June 30, 2026, and the litigation seeks to enjoin enforcement of the law before that date.

Colorado’s SB24-205

SB24-205 seeks to impose duties on AI “developers” and “deployers” to prevent “algorithmic discrimination,” which the statute defines as “any condition in which the use of an artificial intelligence system results in an unlawful differential treatment or impact that disfavors an individual or group” on the basis of protected characteristics such as age, race, sex, disability, religion, and others. However, the statute expressly exempts from that definition the use of AI systems for “expanding an applicant, customer, or participant pool to increase diversity or redress historical discrimination” as well as testing procedures.

The law imposes a duty on developers of “high-risk” AI systems to “use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination.” The term “high-risk” is defined to include a system that “is a substantial factor in making” decisions regarding areas such as educational opportunities, employment opportunities, and healthcare services.

The statute also mandates extensive disclosure obligations, requiring developers to disclose to deployers, the public, and the Colorado Attorney General information about their practices for evaluating and mitigating algorithmic discrimination. Deployers face additional duties, including implementing risk management policies, completing annual impact assessments, and conducting annual reviews of AI deployments.

xAI’s Complaint

xAI challenges SB24-205 on multiple constitutional grounds and seeks declaratory and injunctive relief against enforcement.

  • First Amendment. xAI argues that SB24-205’s algorithmic discrimination provisions compel it to alter Grok’s expressive output to conform to Colorado’s preferred viewpoint on fairness and equity, amounting to content- and viewpoint-based discrimination, and that the law’s disclosure demands constitute compelled speech. According to xAI, compliance would require “redesigning, retraining, or constraining the Grok model” by recalibrating how it decides what information to include in responses, hard-coding additional guardrails, or re-weighting training datasets.
  • Dormant Commerce Clause. xAI argues that SB24-205 impermissibly regulates extraterritorial conduct because it applies to any AI system affecting even a single Colorado resident, regardless of where the system is developed, deployed, or used. xAI also contends that the law fails the Pike balancing test because the burdens it imposes on interstate commerce, including the potential need to retrain models nationwide, are “clearly excessive” relative to its speculative local benefits.
  • Due Process (Vagueness). xAI contends that SB24-205 is unconstitutionally vague because it fails to adequately define essential terms such as “algorithmic discrimination,” “high-risk artificial intelligence system,” “reasonable care,” and others. xAI argues that leaving developers without fair notice of their obligations and giving the Attorney General virtually unfettered enforcement discretion threatens its due process rights.
  • Equal Protection. xAI argues that the law’s carveout for AI systems that expand pools “to increase diversity or redress historical discrimination” codifies impermissible racial and characteristic-based classifications without a compelling justification, in violation of the Equal Protection Clause.

The DOJ’s Intervention

On April 24, 2026, the Department of Justice filed a Complaint in Intervention pursuant to the Civil Rights Act of 1964, 42 U.S.C. § 2000h-2, after the Acting Attorney General certified that the case is “of general public importance.” The DOJ’s complaint focuses exclusively on the Equal Protection Clause, raising two counts.

  • Compelled Discrimination. DOJ alleges that SB24-205 violates the Equal Protection Clause by effectively compelling AI developers and deployers to discriminate based on race, sex, religion, and other protected characteristics. The government argues that by imposing disparate-impact liability based on “statistics alone,” the statute forces regulated entities to engage in demographic-conscious engineering of their AI models, such as recalibrating algorithms to eliminate unintentional statistical disparities, which would, in “zero-sum” contexts such as hiring or admissions, allegedly entail intentional discrimination against members of other groups.
  • Authorized Discrimination. DOJ separately challenges SB24-205’s express exemption for AI systems used “to increase diversity or redress historical discrimination.” The government argues that this carveout authorizes intentional differential treatment based on protected classes without the constitutionally required justification.


DOJ’s intervention is consistent with the administration’s broader posture toward state AI regulation, which we previously covered. The December 11, 2025 Executive Order, “Ensuring a National Policy Framework for Artificial Intelligence,” specifically named “a new Colorado law” as a problematic state law, stating that it “may even force AI models to produce false results to avoid a ‘differential treatment or impact’ on protected groups.” That Executive Order also established an “AI Litigation Task Force . . . whose sole responsibility shall be to challenge State AI laws inconsistent with the policy set forth in [the] Executive Order.” DOJ’s complaint itself cites these policy pronouncements, embedding the intervention within the administration’s stated commitment that “United States AI companies must be free to innovate without cumbersome regulation.”

Takeaways

DOJ’s intervention signals that the federal government is prepared not only to challenge state AI regulations through litigation, but also to intervene as a party in private lawsuits that raise constitutional issues the administration views as aligned with its AI policy agenda. Companies and state legislators alike should take note of several implications:

  • State AI regulation faces heightened federal scrutiny. The combination of the December 2025 Executive Order, the AI Litigation Task Force, and this intervention demonstrates that the administration will actively deploy litigation resources against state AI laws it views as impeding innovation or embedding ideological mandates. States that have enacted or are considering AI regulations, and particularly those involving algorithmic fairness, bias auditing, or disparate-impact frameworks, should expect potential federal legal challenges.
  • Equal protection arguments may reshape AI bias regulation. DOJ’s complaint frames algorithmic fairness requirements as constitutionally compelled discrimination, arguing that any regime requiring correction of disparate impacts necessarily forces race- and sex-conscious decision-making in violation of the Equal Protection Clause. If successful, this theory could have significant implications for AI regulation beyond Colorado, potentially limiting the ability of states and even federal agencies to require demographic balancing in AI systems.
  • Compliance uncertainty persists. SB24-205 remains set to take effect on June 30, 2026, absent injunctive relief. Companies subject to SB24-205’s requirements should continue to monitor both the litigation and the legislative process closely while evaluating their compliance posture. Continued monitoring is particularly important because several of the challenges target the law’s broad reach, which extends to any AI system affecting even a single Colorado resident.

©2026 Barnes & Thornburg LLP. All Rights Reserved. This page, and all information on it, is proprietary and the property of Barnes & Thornburg. It may not be reproduced, in any form, without the express written consent of Barnes & Thornburg.

This Barnes & Thornburg publication should not be construed as legal advice or legal opinion on any specific facts or circumstances. The contents are intended for general informational purposes only, and you are urged to consult your own lawyer on any specific legal questions you may have concerning your situation. 
