Civil Rights Standards Are Mandatory in Federal AI Policy
AI Governance Researcher – Policy Track
AI systems increasingly shape whether someone gets a mortgage, qualifies for housing assistance, or is denied a government benefit, and the evidence is clear: AI products, while holding genuine promise, have caused documented harm to individuals, communities, and workers, and those harms often fall on communities that have historically faced discrimination in exactly these domains.[1] The urgency around AI risk is compounded by a fractured policy landscape: Congress has not established a baseline federal framework, and 50 state legislatures continue to advance their own AI legislation. As the federal government moves to standardize how it governs AI, it can center its policy on civil rights, ensuring democratic access to the benefits of AI while effectively mitigating its harms. This is why the Responsible AI Lab (RAIL) led the National Fair Housing Alliance’s (NFHA) effort to submit two comment letters to federal agencies, one to the National Institute of Standards and Technology’s (NIST) Center for AI Standards and Innovation (CAISI) and the other to the General Services Administration (GSA). Together, the letters reflect a critical moment: federal AI policy is moving fast, and the frameworks drafted now will shape how AI systems are developed, evaluated, and deployed for years to come.
AI Evaluations Must Measure What Matters
Across the country, state agencies and federal programs are rapidly integrating AI into decisions that shape people’s lives:[2] assistance, public benefits, and public safety.[3] Yet many of these AI models are not ready for mass deployment, and the tests used to validate them are often flawed, undermining their technical credibility.[4] In light of the need for more standardized AI evaluation practices, NIST is establishing voluntary best practices specifically for the evaluation of language models, so that organizations assessing AI systems work from a common, scientifically grounded methodology. While NIST’s efforts here are greatly appreciated, it is concerning that the current guidance omits civil rights principles almost entirely.
Given the guidance’s shortcomings, NFHA filed a comment letter with a coalition of civil society organizations including the Center for Democracy and Technology (CDT), the Center for AI and Digital Policy (CAIDP), and UnidosUS. The letter calls for two core changes to the guidance: first, incorporate civil rights principles, including antidiscrimination measures such as disparate treatment and disparate impact testing, as explicit requirements in AI evaluation frameworks; and second, incentivize the development of domain-specific benchmarks, with particular attention to housing, lending, criminal justice, child welfare, education, and employment as priority domains.
Without civil rights principles embedded in the evaluation framework itself, standardized benchmarks will produce rigorous measurements of only the narrowest capabilities, giving deployers false confidence that a system is ready for public use when its discriminatory impacts have never been tested. A national AI evaluation standard that incorporates civil rights and invests in benchmarking datasets would increase justified confidence in AI system capabilities.
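To make concrete what disparate impact testing can look like inside an evaluation pipeline, here is a minimal illustrative sketch applying the familiar “four-fifths” (80%) rule, a screening heuristic long used in employment and fair lending analysis. The data, threshold, and function names are hypothetical choices for illustration only; this is not NFHA’s or NIST’s prescribed methodology.

```python
# Illustrative sketch only: a disparate impact screen using the
# "four-fifths" (80%) rule. All data and names here are hypothetical.

def selection_rates(outcomes):
    """outcomes: dict mapping group name -> list of 0/1 approval decisions."""
    return {group: sum(d) / len(d) for group, d in outcomes.items()}

def adverse_impact_ratios(outcomes):
    """Ratio of each group's selection rate to the highest-rate group."""
    rates = selection_rates(outcomes)
    reference = max(rates.values())
    return {group: rate / reference for group, rate in rates.items()}

# Hypothetical approval decisions from an AI screening tool
decisions = {
    "group_a": [1, 1, 0, 1, 1, 1, 0, 1],  # 6/8 approved = 0.75
    "group_b": [1, 0, 0, 1, 0, 1, 0, 0],  # 3/8 approved = 0.375
}

for group, ratio in adverse_impact_ratios(decisions).items():
    status = "potential disparate impact" if ratio < 0.8 else "passes 80% screen"
    print(f"{group}: impact ratio {ratio:.2f} ({status})")
```

A domain-specific benchmark would pair screens like this with realistic housing, lending, or benefits scenarios, which is precisely the kind of evaluation infrastructure the letter asks NIST to incentivize.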
Procurement Contracts Cannot Be a Civil Rights Loophole
NFHA was also joined by the ACLU, the Leadership Conference, and others in responding to the GSA’s draft AI procurement clause, “Basic Safeguarding of Artificial Intelligence Systems.” Procurement is one of the most consequential and underappreciated fronts in AI governance. The GSA clause would extend some of the government’s AI requirements to a broader set of government contracts, including an “any lawful government purpose” standard and a provision requiring that AI tools be free from what the clause characterizes as ideology, including Diversity, Equity, and Inclusion (DEI).
NFHA’s letter makes four requests: (i) replace the dangerously broad “any lawful government purpose” standard with explicitly enumerated prohibited uses and protected safety guardrails; (ii) drop the so-called “Woke AI” provision and replace it with a concrete civil rights compliance standard; (iii) strengthen the existing data segregation and training prohibitions with new protections for demographic data and mandatory bias auditing; and (iv) have the GSA assess the surveillance risk of government AI configurations spilling into private sector use, require contractor certification against repurposing for surveillance, and create public notice requirements for any AI monitoring of individuals.
These requests matter because the GSA clause, as currently drafted, is so broad that without explicit prohibitions, “lawful” becomes a low floor that offers little protection. Compounding this, the clause’s neutrality provision is technically unenforceable, and it could inadvertently produce AI models that violate the very antidiscrimination laws the federal government is obligated to uphold.
Conclusion
Taken together, these letters reflect NFHA’s core position: federal AI frameworks must not be written around civil rights; they must be built on them. NFHA and our coalition partners submitted these letters because we believe the federal government has a genuine opportunity to lead on AI policy. By placing civil rights at the core of AI evaluations and investing in domain-specific benchmarks, it can greatly improve the science of AI measurement and foster public trust in AI. The frameworks designed today will determine whether the promises of AI become a reality or are relegated to a hallucination.
[1] MIT AI Risk Repository, https://airisk.mit.edu/.
[2] The New York Times, “When DOGE Unleashed ChatGPT on the Humanities,” March 7, 2026, https://www.nytimes.com/2026/03/07/arts/humanities-endowment-doge-trump.html.
[4] The Guardian, “Experts Find Flaws in Hundreds of Tests That Check AI Safety and Effectiveness,” November 4, 2025.