Artificial Intelligence/Machine Learning and Trust: A Proposed Methodology for Verifying AI/ML-Generated Data Streams

Mark Schanfein - Idaho National Laboratory
Amanda Rynes - Idaho National Laboratory, BEA

The implementation of artificial intelligence (AI) and machine learning (ML) continues to expand, and these techniques may play a growing role in the IAEA's evaluation of diversion and misuse scenarios. While these powerful algorithms play an important and expanding role in drawing safeguards conclusions, there are inherent risks arising from the volume and complexity of the remotely acquired data streams such AI/ML systems generate, including data originating from containment and surveillance and non-destructive assay (NDA) systems. Because the data are transmitted remotely, the stream is subject to cyber-attack and manipulation by an adversary, calling into question the trustworthiness of the data and the IAEA's ability to draw sound and defensible safeguards conclusions. This project applies a simple programming method to verify that an AI/ML system is functioning properly and providing trustworthy data by testing the algorithm's decision-making process, and in doing so offers designers and users a blueprint for validating the authenticity of the data over time.
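One way such a verification method could be sketched, under assumptions not drawn from the abstract itself, is a challenge-input check: the deployed model is periodically fed inputs whose correct decisions were recorded at deployment time, and the resulting decisions are fingerprinted so a remote reviewer can compare them against a trusted baseline. The `model`, the challenge values, and the helper names below are all hypothetical illustrations, not the authors' actual implementation.

```python
import hashlib
import json

# Hypothetical stand-in for a deployed AI/ML safeguards algorithm:
# a trivial threshold classifier. In practice this would be the real
# model whose decision-making process is being tested.
def model(reading: float) -> str:
    return "anomalous" if reading > 5.0 else "normal"

# Challenge set: inputs paired with the decisions recorded as correct
# at deployment time. The values are illustrative only.
CHALLENGES = [(1.2, "normal"), (4.9, "normal"), (7.3, "anomalous")]

def fingerprint(decisions) -> str:
    """Hash a list of decisions so the result can be transmitted and
    compared against a trusted baseline without sending raw data."""
    return hashlib.sha256(json.dumps(decisions).encode()).hexdigest()

def verify(model_fn, challenges) -> bool:
    """Return True if the model still reproduces every expected
    decision on the challenge inputs."""
    actual = [model_fn(x) for x, _ in challenges]
    expected = [y for _, y in challenges]
    return actual == expected

# Baseline recorded when the model was deployed and trusted.
TRUSTED_FINGERPRINT = fingerprint([y for _, y in CHALLENGES])

# Later, a remote check: re-run the challenges and compare.
current = fingerprint([model(x) for x, _ in CHALLENGES])
tampered = current != TRUSTED_FINGERPRINT
```

A drift in `current` away from `TRUSTED_FINGERPRINT` would signal that the algorithm's decision-making has changed, whether through manipulation, corruption, or model degradation, prompting closer inspection of the data stream.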