Artificial Intelligence (AI)/Machine Learning (ML)-enabled technology is rapidly being adopted across nearly all industries. At the same time, new methods for exploiting weaknesses in AI/ML are emerging quickly, and research into appropriate mitigations is ongoing. Given this rapidly evolving landscape, it can be challenging to assess how resilient these technologies are. The National Cybersecurity Center of Excellence (NCCoE) has built Dioptra, an experimentation test platform, to begin to address the broader challenge of evaluating attacks and mitigations. The platform aims to facilitate evaluation of AI/ML algorithms under a diverse set of conditions. To that end, it has a modular design that enables researchers to easily swap in alternative datasets, models, evaluation methods, attacks, and defenses. Although the immediate focus has been on the security and resilience of AI/ML, measurement of other trustworthy characteristics can easily be incorporated. As a result, Dioptra provides a strong foundation for advancing the metrology needed to evaluate multiple trustworthy characteristics of AI/ML-enabled systems.
Join us on April 20 to learn about the NCCoE’s Dioptra: An AI/ML Test Platform.
- Harold Booth, Computer Scientist, Computer Security Division, NIST
- Paul Rowe, Cybersecurity Engineer, National Cybersecurity Center of Excellence
- Dioptra Documentation: https://nist.gov/publications/securing-ai-testbed-dioptra-documentation
- Dioptra Source Repository: https://github.com/usnistgov/dioptra
Recording Note: Portions of the event may be recorded, and audience Q&A or comments may be captured. The recorded event may be edited and rebroadcast or otherwise made publicly available by NIST. By registering for or attending this event, you acknowledge and consent to being recorded.