Last month, NIST released its Draft NISTIR 8269, A Taxonomy and Terminology of Adversarial Machine Learning.
The taxonomy aims to give researchers and practitioners a common lexicon for Adversarial Machine Learning, with the goal of setting standards and best practices for securing Artificial Intelligence (“AI”) systems against attackers.
Adversarial Machine Learning refers to the manipulation and exploitation of Machine Learning, defined by the document as “the components of an AI system [that] include the data, model, and processes for training, testing, and validation.” Researchers in this area study ways to design Machine Learning algorithms, or “models,” to resist security challenges and manage the potential consequences of intentional attacks.
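To make the idea concrete, the following is a minimal sketch of one well-known class of attack the taxonomy covers, an evasion attack in the style of the fast gradient sign method (FGSM). The toy linear model, its weights, and the epsilon value are all illustrative assumptions for this example, not drawn from the NISTIR itself.

```python
import numpy as np

# Illustrative toy model (not from the NISTIR): a linear classifier
# score = w . x + b, classifying positive when the score exceeds zero.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    """Return True if the toy model classifies input x as positive."""
    return float(w @ x + b) > 0

# A benign input that the model classifies as positive.
x = np.array([2.0, 0.5, 1.0])

# FGSM-style evasion: perturb the input against the sign of the
# gradient of the score with respect to the input. For a linear
# model that gradient is simply w, so the attack is a single step.
epsilon = 1.5  # attack budget; an assumed value for illustration
x_adv = x - epsilon * np.sign(w)

# The bounded perturbation flips the model's decision.
print(predict(x), predict(x_adv))  # True False
```

A real attack would target a trained neural network and compute the gradient by backpropagation, but the mechanism, a small input change crafted to flip the model's output, is the same manipulation the taxonomy classifies.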
NIST’s taxonomy is organized around three concepts that inform a risk assessment of AI systems: attacks, defenses, and consequences. Unlike previous surveys, the draft NISTIR treats “consequences” as a separate dimension of risk, because the consequences of an Adversarial Machine Learning attack depend on both the attack itself and the defenses in place, and may not match the attacker’s original intent.
Adversarial Machine Learning poses a significant and growing challenge to securing AI systems. With this taxonomy, NIST demonstrates its continued interest in playing a leading role in setting standards and best practices for managing AI security. The public comment period for the draft document is open through Monday, December 16, 2019, and comments may be submitted at https://www.nccoe.nist.gov/webform/comments-draft-nistir-8269.
Read more at: Lexology