Artificial intelligence is reshaping how governments operate, but a new federal court ruling warns that handing sensitive decisions to chatbots without oversight can cross serious legal lines. On Thursday, U.S. District Judge Colleen McMahon issued a blistering 143-page decision finding that the Department of Government Efficiency's cancellation of over $100 million in National Endowment for the Humanities grants was unconstitutional.
The ruling centers on a startling discovery: DOGE staffer Justin Fox used ChatGPT to scan grant descriptions and determine which projects deserved funding. According to court testimony, Fox submitted each grant summary to OpenAI's chatbot with a rigid prompt: "Does the following relate at all to DEI? Respond factually in less than 120 characters. Begin with 'Yes.' or 'No.' followed by a brief explanation." He never defined what "DEI" meant for the AI, and admitted he had no idea how the model interpreted the term.
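The testimony describes what is, in effect, a one-prompt-per-grant classification loop. The minimal sketch below illustrates that workflow; the function names and the response parsing are assumptions for illustration, not DOGE's actual code. Only the prompt text comes from the court record.

```python
# Hypothetical reconstruction of the prompt-per-grant workflow described
# in testimony. Function names and parsing logic are assumptions; only
# the prompt wording is from the court record.

PROMPT_TEMPLATE = (
    "Does the following relate at all to DEI? Respond factually in less than "
    "120 characters. Begin with 'Yes.' or 'No.' followed by a brief explanation."
)

def build_prompt(grant_summary: str) -> str:
    """Combine the fixed instruction with a single grant summary."""
    return f"{PROMPT_TEMPLATE}\n\n{grant_summary}"

def parse_verdict(response_text: str) -> bool:
    """Interpret the model's constrained reply: True means 'flagged as DEI'.
    The fragility is built in: 'DEI' is never defined, so the verdict
    depends entirely on how the model happens to interpret the term."""
    return response_text.strip().startswith("Yes.")
```

A loop like this reduces each funding decision to whether an undefined term begins the model's reply, which is precisely the lack of guardrails the ruling criticizes.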
The results were devastating for the humanities community. Fox and his colleague Nate Cavanaugh eliminated approximately 97 percent of NEH grants, applying what they called "Detection Codes" to flag projects containing words like BIPOC, Indigenous, LGBTQ, immigrant, and minority. Projects covering the Holocaust, civil rights history, and Indigenous climate knowledge were all swept into a "wasteful" category and defunded.
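The "Detection Codes" described above amount to matching grant descriptions against a word list, which helps explain why projects on the Holocaust or civil rights history were swept in. A hedged sketch follows; only the flagged terms come from the reporting, while the names and the matching logic are illustrative assumptions.

```python
# Hypothetical sketch of keyword-based "Detection Code" flagging.
# Only the terms in DETECTION_TERMS appear in the court record; the
# names and matching approach here are assumed for illustration.

DETECTION_TERMS = ["BIPOC", "Indigenous", "LGBTQ", "immigrant", "minority"]

def flag_grant(description: str) -> list[str]:
    """Return the detection terms found in a grant description.
    Case-insensitive substring matching shows the bluntness of the
    approach: the mere presence of a word triggers a flag, with no
    regard for context or the project's actual subject."""
    lowered = description.lower()
    return [term for term in DETECTION_TERMS if term.lower() in lowered]
```

Under this scheme, a grant on immigrant oral histories and a grant on Indigenous climate knowledge are flagged identically, regardless of scholarly merit.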
Judge McMahon did not mince words. She wrote that DOGE "used the mere presence of particular, protected characteristics to disqualify grants from continued funding," violating constitutional protections against discrimination based on race, national origin, religion, and sexuality. The ruling restores the canceled grants and sends a clear signal that algorithmic shortcuts cannot replace due process in federal decision-making.
Why it matters
This case exposes a dangerous frontier in government automation: the temptation to outsource complex value judgments to generative AI without guardrails. As federal agencies increasingly explore AI tools for efficiency, the decision establishes that constitutional protections apply with full force to algorithmic processes. For technology leaders and policymakers, the lesson is unmistakable: AI can assist government operations, but it cannot replace human accountability, legal standards, or ethical judgment in decisions that affect civil rights.