The Consolidated Appropriations Act of 2023 was signed into law in December 2022, providing more funding for artificial intelligence and machine learning initiatives across the federal government than ever before.
In the article below, Perkins Coie covers which agencies received funding.
The mammoth $1.7 trillion Consolidated Appropriations Act of 2023 (the Act) was signed into law by President Biden on December 29, 2022. It includes record levels of funding for a wide range of Artificial Intelligence (AI) and Machine Learning (ML) initiatives across the federal government. The Act also directs several federal agencies to help ensure the responsible use of AI technologies, including the prevention of algorithmic bias.
AI Research and Development Initiatives
The Act invests in AI innovation by allocating over $1 billion to AI research and development efforts across a wide variety of federal government agencies as follows:
- National Science Foundation (NSF). $686 million for AI-related grants and interdisciplinary research initiatives, including the ethical and safe development of AI and continued expansion of the National AI Research Institutes. As part of its funding, the Act directs NSF to continue its efforts in workforce development for AI and other emerging technologies, including education programs for non-computer science students, with focused outreach to minority-serving institutions.
- National Institutes of Health (NIH). $135 million to support building NIH’s ML capacity to accelerate the pace of biomedical innovation, including $50 million dedicated to general AI- and ML-focused investments and $85 million dedicated specifically to the Office of Data Science Strategy. The Act notes that the $70 million portion of this funding directed towards the Cures Acceleration Network will allow it to expand its existing efforts in AI- and ML-enabled chemistry for drug development. Further, the Act directs the Office of Portfolio Analysis to invest its additional $3 million in funding towards developing AI-based analytical tools to help NIH optimize investments in biomedical research by identifying emerging topics and potentially transformative breakthroughs.
- National Institute of Standards and Technology (NIST). $35 million for AI research and measurement science efforts, including developing “resources for government, corporate, and academic uses of AI to train and test systems, model AI behavior, and compare systems.” As part of its funding, the Act encourages NIST to continue to improve its Face Recognition Vendor Test.
- Department of Homeland Security (DHS). $10 million for Customs and Border Protection to develop AI- and ML-capable nonintrusive inspection systems and an additional $500,000 for the DHS Artificial Intelligence Technology Center. The Act also generally directs Customs and Border Protection to use part of its $252 million targeting allocation to acquire AI tools to improve its operations.
- National Oceanic and Atmospheric Administration (NOAA). $5 million to develop AI systems and optimize software that preprocess dense observation data sets, so that the most useful information can be extracted and included in data assimilation for model initialization.
- Department of Justice (DOJ). $5 million for the Office of Justice Programs to partner with universities to develop a training regimen with AI and virtual reality for local and state law enforcement.
Beyond these specific fund allocations, the Act also directs other agencies to use parts of their general funding towards AI-related research and development initiatives.
- Department of Health and Human Services (HHS). The Act directs the Administration for Strategic Preparedness and Response to provide a report within 120 days on the feasibility of creating an AI-enabled Pandemic Preparedness and Response Program to help protect against biothreats and develop capabilities for accelerated vaccines, rapid therapeutics, global biothreat surveillance, and rapid fielding.
- NASA. The Act directs NASA to use an unspecified amount of the $1.2 billion allocated to its Space Technology Mission Directorate towards the development of AI technologies.
- Department of Education (ED). The Act encourages ED to support higher education institutions in developing degree programs and other opportunities that increase student employability by developing their knowledge of and skills in AI technologies.
Responsible AI Initiatives
In addition to funding AI research and development, the Act supports the federal government’s continued development of approaches and principles for managing risks associated with AI and preventing algorithmic bias. These initiatives will supplement recent efforts by federal government agencies to develop risk management frameworks for the possible risks posed by AI technologies, including the White House’s Blueprint for an AI Bill of Rights and the U.S. Department of Defense’s published policy related to Responsible AI.
- National Institute of Standards and Technology (NIST). As part of its funding, the Act directs NIST to continue the multistakeholder process of developing its AI Risk Management Framework related to the reliability, robustness, and trustworthiness of AI systems and provide an update on its progress to the committees as soon as is practicable. The Act also encourages NIST to explore ways to understand, measure, and manage algorithmic bias in AI systems and provide technical guidance for how organizations might test algorithms against bias prior to adopting their use.
- National Science Foundation (NSF). The Act encourages NSF to partner with nongovernmental organizations, academic institutions, and other federal agencies to research algorithmic bias in AI systems and its effects on decisions related to employment, housing, and creditworthiness and to develop methods, tools, and programs for addressing algorithmic bias.
- Department of Health and Human Services (HHS). The Act directs HHS to conduct or support research related to the health and developmental effects of media and related technology on children, including the effects of AI.
Together with the 2023 National Defense Authorization Act, the 2023 Appropriations Act allocates more funds towards AI, and directs more government agencies to engage with AI, than ever before, building on the AI investments in the 2022 Appropriations Act.
The law also reflects the strongest commitment yet in an omnibus spending package to responsible AI initiatives. Stakeholders interested in federal funds for developing AI technologies or selling AI technologies to the federal government should anticipate an increased focus on the responsible development of automated systems and specifically take note of NIST’s AI Risk Management Framework and the White House Blueprint for an AI Bill of Rights.