Coalition Comment Letter on GSA’s Draft AI Procurement Clause: “Basic Safeguarding of
Artificial Intelligence Systems”
The comment letter, submitted April 3rd, responds to the General Services Administration's (GSA) draft AI procurement clause titled "Basic Safeguarding of Artificial Intelligence Systems." The updated GSA AI clause would extend certain Pentagon contracting requirements to a wider array of government contracts, including the "any lawful government purpose" standard, and would require "neutral" AI tools free from ideology such as DEI.
Accordingly, the coalition letter makes the following recommendations. On permitted uses, the current "any lawful government purpose" standard is too broad and should be replaced with explicitly enumerated prohibited use cases and protected safety guardrails that cannot be overridden. On bias and civil rights, the so-called "Woke AI" provision should be dropped in its current form and replaced with a concrete civil rights compliance standard. On data protection, existing data segregation and training prohibitions should be kept and strengthened, with new protections added for demographic data and mandatory bias auditing requirements. Finally, on surveillance, GSA should assess the risk of government AI configurations spilling over into private-sector use, require contractors to certify that their systems will not be repurposed for surveillance, and create public notice requirements for any AI monitoring.
Coalition Comment Letter on NIST’s Draft Guidance for AI Evaluations
The comment letter, submitted on March 31st and addressed to the Center for AI Standards and Innovation, responds to NIST's draft guidance on AI evaluations. Eighteen organizations signed on, including NCLC, the Center for AI and Digital Policy (CAIDP), the Center for Democracy and Technology (CDT), and the Electronic Privacy Information Center (EPIC).
The recommendations in the letter center on the following priorities:
1. Incorporate civil rights principles, including antidiscrimination measures such as disparate treatment and disparate impact testing, as explicit requirements in AI evaluation frameworks.
2. Encourage and incentivize the development of domain-specific benchmarks, with particular attention to housing, lending, criminal justice, child welfare, education, and employment as priority domains.
