Evaluating The Cognitive Impacts Of Errors From Analytical Tools In The International Nuclear Safeguards Domain

Year
2020
Author(s)
Zoe N. Gastelum - Sandia National Laboratories
Laura E. Matzen - Sandia National Laboratories
Mallory C. Stites - Sandia National Laboratories
Michael P. Higgins - Sandia National Laboratories
Breannan C. Howell - Sandia National Laboratories
Abstract

In the field of international nuclear safeguards, the quantity of data that could support verification of a state's peaceful nuclear energy program is growing at a rate that makes it infeasible for analysts and inspectors to review all of the potentially relevant information without the aid of algorithms. As is the trend in many domains, international safeguards analysts, inspectors, and other practitioners are exploring the use of machine learning and deep learning (MLDL) to support their work. Current research trends in the field of MLDL for international safeguards include detection of safeguards-relevant objects in open-source images, anomaly detection from safeguards sensors deployed within facilities, and multisource data integration across text, images, video, and sensor data. Because of the international repercussions of safeguards verification conclusions, the International Atomic Energy Agency's Department of Safeguards has stated that it has zero tolerance for false negatives in the performance of some MLDL capabilities. Despite improved performance in many MLDL capabilities over the last decade, and especially over the last few years, MLDL model performance will never be perfect; nor will human analyst or inspector performance. As such, we must understand the impact of MLDL errors on human users so that the limitations of both humans and algorithms can be overcome and overall human-algorithm system performance improved. In this paper, we describe our current work to evaluate the cognitive impact of errors from computer algorithms on their human users. We describe recent relevant research on the implementation of MLDL algorithms and user trust, our multi-stage research approach to evaluating the cognitive impacts of MLDL errors on users across several international safeguards use cases, and our initial results.