Government-wide Policy on AI Issued by White House Budget Office
Vice President Kamala Harris announced a major policy last week that will govern how federal agencies can use artificial intelligence, as directed by President Joe Biden’s November executive order on AI.
The policy, issued by the White House Office of Management and Budget, establishes many security and safety requirements for federal applications of AI but also includes various exceptions. Notably, the policy does not apply to AI used to conduct basic or applied research unless the purpose of that research is to develop AI applications for use by the agency.
The policy defines a set of “minimum practices” that agencies are required to follow when using or developing rights-impacting or safety-impacting AI – that is, AI designed to inform decisions that affect the rights or safety of individuals.
These practices include real-world testing, independent evaluation, public documentation, impact assessments, and other basic software risk-mitigation standards. Agencies are also required to “proactively” share AI-related code they develop, and the policy specifically recommends doing so via the National AI Research Resource.
Agencies have until December 1 to bring any AI they are currently using into compliance with the policy or stop using it. Agencies can request waivers to continue using non-compliant AI. Among other provisions, agencies are required to update their inventories of AI use cases annually and to designate chief AI officers.
This news brief originally appeared in FYI’s newsletter for the week of April 1.