Mitigation of AI/ML Bias in Context

Automated decision-making is appealing because artificial intelligence (AI)/machine learning (ML) systems can produce decisions that are more consistent, traceable, and repeatable than human ones; however, these systems carry risks that can result in discriminatory outcomes. For example, unmitigated bias in AI/ML systems that support automated credit underwriting decisions can produce unfair results, harming individual applicants and potentially rippling through society, eroding trust in AI-based technology and the institutions that rely on it.

This project will develop guidance and recommended practices that help promote fair and positive outcomes for users of AI/ML services.

NIST developed Special Publication (SP) 1270, Towards a Standard for Identifying and Managing Bias in Artificial Intelligence, as part of the AI Risk Management Framework. SP 1270 proposes a comprehensive socio-technical approach to mitigating bias in AI, articulates the importance of context in such endeavors, and provides the background for this project.
Status: Reviewing Comments

The public comment period has closed for the draft Project Description, Mitigating AI/ML Bias in Context: Establishing Practices for Testing, Evaluation, Verification, and Validation of AI Systems. Thank you to everyone who shared their feedback with us. We are currently reviewing the comments received as work continues on implementing the demonstration and developing the remaining sections of the publication.

Project Abstract

Managing bias in an AI system is critical to establishing and maintaining trust in its operation. Despite the importance of managing it, bias in AI systems remains endemic across many application domains and can lead to harmful impacts regardless of intent. Bias is also context-dependent.

To tackle this complex problem, this project adopts a comprehensive socio-technical approach to testing, evaluation, verification, and validation (TEVV) of AI systems in context. This approach will connect the technology to societal values in order to develop guidance on recommended practices for deploying automated decision-making supported by AI/ML systems in an industry sector. A small but novel part of the project will examine the interplay between bias and cybersecurity. The project will leverage existing commercial and open-source technology in conjunction with Dioptra, NIST's experimentation test platform for ML datasets and models.
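To make the TEVV idea concrete, the minimal sketch below shows the kind of group-fairness test such a pipeline might run: an equal-opportunity check comparing false negative rates across two applicant groups. All function names, data, and thresholds here are illustrative assumptions for this write-up; they are not drawn from Dioptra's API or from the project's eventual test suite.

```python
import numpy as np

def false_negative_rate(y_true, y_pred):
    """Fraction of truly creditworthy applicants the model rejects."""
    positives = y_true == 1
    return np.mean(y_pred[positives] == 0)

def equal_opportunity_gap(y_true, y_pred, group):
    """Difference in false negative rates between two groups.

    A large gap means qualified applicants in one group are denied
    more often; what counts as "large" is context-dependent.
    """
    rates = []
    for g in (0, 1):
        mask = group == g
        rates.append(false_negative_rate(y_true[mask], y_pred[mask]))
    return rates[1] - rates[0]

# Hypothetical labels (1 = creditworthy) and model decisions (1 = approve).
y_true = np.array([1, 1, 1, 0, 1, 1, 1, 0])
y_pred = np.array([1, 1, 0, 0, 1, 0, 0, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(equal_opportunity_gap(y_true, y_pred, group))  # 2/3 - 1/3 = 0.33
```

A test harness would run checks like this against a deployed model's outputs and flag gaps that exceed a context-appropriate threshold, rather than assuming a single universal cutoff.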

The initial phase of the project will focus on a proof-of-concept implementation for credit underwriting decisions in the financial services sector. We intend to consider other application use cases, such as hiring and school admissions, in the future.
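For the credit-underwriting use case specifically, one widely cited rule of thumb is the four-fifths (80%) rule, under which a selection rate for one group below 80% of the rate for the most favored group may warrant scrutiny. The sketch below computes that disparate impact ratio; the data are hypothetical, and the 0.8 threshold is a convention borrowed from employment-selection guidance, not a requirement stated by this project.

```python
import numpy as np

def disparate_impact_ratio(y_pred, group):
    """Ratio of approval rates: least favored group / most favored group.

    A ratio below 0.8 is often flagged under the four-fifths rule,
    though the appropriate threshold is ultimately context-dependent.
    """
    rates = [np.mean(y_pred[group == g]) for g in (0, 1)]
    return min(rates) / max(rates)

# Hypothetical approvals for two applicant groups (1 = approve).
approvals = np.array([1, 1, 0, 1, 0, 1, 0, 0, 0, 1])
groups    = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
print(f"{disparate_impact_ratio(approvals, groups):.2f}")  # 0.4/0.6 = 0.67
```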

This project will result in a freely available NIST SP 1800 series Practice Guide.
