Artificial intelligence (AI) is already having a profound impact on government and society. As AI grows even more pervasive in our lives, we will increasingly be asked to trust AI systems on the strength of their promised benefits alone.
But, how do we know AI is doing its job appropriately?
The rapid pace of AI development makes it necessary to establish a framework for independently verifying AI systems, even as the technology continues to advance. The global accountability community needs a toolkit to evaluate this ever-changing technology, and, more importantly, organizations that build, purchase, and deploy AI need a framework to understand how AI systems will be evaluated.
The U.S. Government Accountability Office (GAO), recognizing the urgent need for AI governance, recently published an AI accountability framework designed to help ensure accountability and responsible use of AI in government programs and processes.
“Artificial Intelligence: An Accountability Framework for Federal Agencies and Other Entities” is a first-of-its-kind framework that
- serves as a practical, actionable blueprint for evaluating and auditing AI systems;
- is flexible enough to adapt to evolving technologies;
- enables non-experts to ask the right questions about AI systems; and
- establishes a benchmark to assess safety, fairness, and effectiveness in government-deployed AI systems.
Developed through a collaborative approach that united experts across the federal government, industry, and the non-profit sector, the framework is organized around four complementary principles: (1) governance, (2) data, (3) performance, and (4) monitoring. Each principle incorporates key, real-world practices, including questions to ask, audit procedures, and types of evidence to collect.
The framework covers the entire lifecycle of AI systems: design, development, deployment, and continuous monitoring. Beyond areas such as data representativeness, bias, and security, the lifecycle issues it addresses include
- assessing AI system expansion or growth;
- porting AI systems from one application space to another;
- establishing procedures for human supervision;
- detecting model drift; and
- establishing performance metrics.
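The framework itself prescribes practices, audit questions, and evidence to collect rather than code. As an illustration of what "detecting model drift" can look like in practice, here is a minimal sketch of one common drift statistic, the population stability index (PSI), which compares the distribution of a feature (or model score) at deployment time against a baseline. The function name, bin count, and thresholds below are conventional choices for illustration, not taken from the GAO report.

```python
import math
import random

def population_stability_index(expected, actual, bins=10):
    """PSI between a baseline sample and a recent sample of one numeric feature.
    Rule of thumb: < 0.1 stable; 0.1-0.25 moderate drift; > 0.25 significant drift."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0

    def bucket_fractions(sample):
        counts = [0] * bins
        for x in sample:
            i = min(int((x - lo) / width), bins - 1)
            counts[i] += 1
        n = len(sample)
        # Floor each fraction so the log ratio below is always defined.
        return [max(c / n, 1e-6) for c in counts]

    e = bucket_fractions(expected)
    a = bucket_fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# Synthetic example: a stable feature vs. one whose mean has shifted.
random.seed(0)
baseline = [random.gauss(0.0, 1.0) for _ in range(5000)]
recent_ok = [random.gauss(0.0, 1.0) for _ in range(5000)]
recent_drift = [random.gauss(0.8, 1.0) for _ in range(5000)]

print(round(population_stability_index(baseline, recent_ok), 3))     # small: no drift
print(round(population_stability_index(baseline, recent_drift), 3))  # large: drift flagged
```

A monitoring process of the kind the framework calls for would run a check like this on a schedule and trigger human review when the statistic crosses a pre-agreed threshold.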
This framework is the first of many steps on the AI accountability journey. As GAO marks 100 years of service to the public this year, this robust oversight framework for the AI systems of today and tomorrow helps ensure GAO is ready for the next 100 years.
Access the full report here.
For more information about the AI Accountability Framework, contact:
- Taka Ariga, Chief Data Scientist and Director, Science, Technology Assessment, and Analytics, email@example.com
- Timothy M. Persons, PhD, Chief Scientist and Managing Director, Science, Technology Assessment, and Analytics, firstname.lastname@example.org
- Stephen Sanford, Managing Director, Strategic Planning and External Liaison, and Director, GAO Center for Strategic Foresight, email@example.com