Artificial Intelligence: Adversarial Machine Learning

Machine learning (ML), a field within artificial intelligence, focuses on the ability of computers to learn from provided data without being explicitly programmed for a particular task. Adversarial machine learning (AML) is the process of extracting information about the behavior and characteristics of an ML system and/or learning how to manipulate the inputs into an ML system in order to obtain a preferred outcome.
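To make the "manipulating the inputs" half of that definition concrete, the sketch below applies a fast-gradient-sign style perturbation to a toy logistic-regression classifier. Everything in it (the model, weights, and perturbation budget epsilon) is an illustrative assumption for exposition, not material drawn from NISTIR 8269.

```python
# Minimal evasion-attack sketch: perturb an input so a toy classifier's
# score moves toward the attacker's preferred class. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

# Toy "trained" binary classifier: p(y=1 | x) = sigmoid(w . x + b)
w = rng.normal(size=10)
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    return sigmoid(w @ x + b)

# A legitimate input. For this linear model the gradient of the class-1
# score with respect to x is proportional to w, so nudging x against
# sign(w) pushes the score toward class 0 (a fast-gradient-sign step).
x = rng.normal(size=10)
epsilon = 0.5                      # attacker's perturbation budget (assumed)
x_adv = x - epsilon * np.sign(w)

print("clean score:      ", predict(x))
print("adversarial score:", predict(x_adv))
```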

A taxonomy of concepts and terminology to help your organization secure applications of AI.

NISTIR 8269, A Taxonomy and Terminology of Adversarial Machine Learning, was developed as a step toward securing AI applications, and features a taxonomy of concepts and terminology specific to AML. This NISTIR can inform future standards and best practices for assessing and managing ML security by establishing a common language and understanding of the rapidly developing AML landscape.

Project Abstract

Although AI includes various knowledge-based systems, the data-driven approach of ML introduces additional security challenges in the training and testing (inference) phases of system operations. AML is concerned with designing ML algorithms that can resist security challenges, studying attacker capabilities, and understanding the consequences of attacks.
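The training-versus-testing distinction can be illustrated with a small sketch: a training-phase data-poisoning attack against a toy 1-nearest-neighbour classifier. The dataset, classifier, and poison budget below are assumptions made for illustration, not content from the NISTIR.

```python
# Minimal poisoning-attack sketch: injecting mislabeled training points
# degrades a toy 1-nearest-neighbour classifier's test-time accuracy.
import numpy as np

rng = np.random.default_rng(1)

def make_blobs(n):
    """Two well-separated Gaussian blobs, labels 0 and 1."""
    X = np.vstack([rng.normal(-2.0, 1.0, (n, 2)),
                   rng.normal(+2.0, 1.0, (n, 2))])
    y = np.array([0] * n + [1] * n)
    return X, y

def knn_accuracy(X_train, y_train, X_test, y_test):
    """Brute-force 1-nearest-neighbour accuracy."""
    d = np.linalg.norm(X_test[:, None, :] - X_train[None, :, :], axis=2)
    return float((y_train[d.argmin(axis=1)] == y_test).mean())

X_train, y_train = make_blobs(200)
X_test, y_test = make_blobs(200)

# Training-phase attack: the attacker injects points that sit in
# class 1's region of the input space but carry label 0.
X_poison = rng.normal(+2.0, 1.0, (100, 2))
X_bad = np.vstack([X_train, X_poison])
y_bad = np.concatenate([y_train, np.zeros(100, dtype=int)])

print("clean training set:   ", knn_accuracy(X_train, y_train, X_test, y_test))
print("poisoned training set:", knn_accuracy(X_bad, y_bad, X_test, y_test))
```

A testing-phase (evasion) attack, by contrast, leaves the training data untouched and manipulates inputs at inference time, as in the earlier perturbation sketch.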

This NCCoE guidance develops a taxonomy of concepts and defines terminology in the field of AML. The taxonomy builds on and integrates previous AML survey works, and is arranged in a conceptual hierarchy that includes key types of attacks, defenses, and consequences. The terminology defines key terms associated with ML component security in an AI system.  

Taken together, the terminology and taxonomy are intended to inform future standards and best practices for assessing and managing the security of ML components, by establishing a common language and understanding of the rapidly developing AML landscape. 