Deep Learning Techniques To Increase Productivity Of Safeguards Surveillance Review

Year
2021
Author(s)
Maikael Thomas - International Atomic Energy Agency
Stefano Passerini - International Atomic Energy Agency
Yonggang Cui - Brookhaven National Laboratory
Joshua Rutkowski - Sandia National Laboratories
Shinjae Yoo - Brookhaven National Laboratory
Yuewei Lin - Brookhaven National Laboratory
Ji Hwan Park - Brookhaven National Laboratory
Michael Reed Smith - Sandia National Laboratories
Martin Moeslinger - International Atomic Energy Agency
Abstract
Nuclear safeguards inspectors expend significant effort reviewing video surveillance for safeguards-relevant events, especially in complex nuclear facilities with numerous cameras. In some facilities, inspectors spend, on average, multiple person-days reviewing one month of video surveillance. To increase efficiency and reduce the time burden on safeguards inspectors performing surveillance data review, we present a novel deep learning-based algorithmic approach. In collaboration with Brookhaven National Laboratory and Sandia National Laboratories in the United States, we use the YOLOv3 deep convolutional neural network to automatically detect, classify, and localize objects of safeguards relevance. As a proof of concept, we selected images of spent fuel casks in a variety of reactor and dry storage areas as the target object, and manually prepared and annotated a diverse training/test dataset of 2,384 images drawn from surveillance footage of 23 different cameras installed in five nuclear facilities. All models were trained and tested on a dedicated computer workstation utilizing the processing power of multiple NVIDIA GeForce RTX 2080 Ti GPUs. We use the mean average precision (mAP), defined as the area under the precision/recall curve, to evaluate model performance. Using a model trained on a dataset of roughly 1,900 images, we attained an mAP of 92.9% on a set of 475 randomly selected test images. We further found that models trained on smaller datasets achieved nearly comparable results. These initial results show promise for increasing inspector productivity in surveillance review by quickly and accurately identifying declared and undeclared safeguards-relevant objects in large volumes of video surveillance data. In the future, our aim is to integrate generalized models for object detection, classification, localization, and eventually tracking into the IAEA’s Next Generation Surveillance Review (NGSR) tool. Additionally, we have started exploratory research using an unsupervised deep convolutional recurrent neural network to flag unexpected behaviors, with promising early results.
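
For illustration only, the sketch below shows one common way to run a trained YOLOv3 detector over surveillance frames, using OpenCV's DNN module rather than the training framework used in the study; it is not the authors' implementation, and the file names (yolov3_casks.cfg, yolov3_casks.weights, surveillance.mp4) and thresholds are placeholders.

    # Minimal sketch: apply a trained YOLOv3 model to surveillance video frames.
    # Not the paper's implementation; file names and thresholds are placeholders.
    import cv2
    import numpy as np

    net = cv2.dnn.readNetFromDarknet("yolov3_casks.cfg", "yolov3_casks.weights")
    out_layers = net.getUnconnectedOutLayersNames()

    cap = cv2.VideoCapture("surveillance.mp4")
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        h, w = frame.shape[:2]
        # Normalize and resize the frame to the 416x416 input commonly used by YOLOv3.
        blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
        net.setInput(blob)
        boxes, scores = [], []
        for output in net.forward(out_layers):
            for det in output:  # det = [cx, cy, bw, bh, objectness, class scores...]
                conf = float(det[4] * det[5:].max())
                if conf < 0.5:  # placeholder confidence threshold
                    continue
                cx, cy, bw, bh = det[:4] * np.array([w, h, w, h])
                boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
                scores.append(conf)
        # Non-maximum suppression removes duplicate boxes around the same object.
        keep = cv2.dnn.NMSBoxes(boxes, scores, 0.5, 0.4)
        for i in np.array(keep).flatten():
            x, y, bw, bh = boxes[i]
            cv2.rectangle(frame, (x, y), (x + bw, y + bh), (0, 255, 0), 2)
    cap.release()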
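
The evaluation metric can be made concrete with a minimal sketch: assuming detections have already been matched to ground-truth boxes (for example, by an IoU threshold), the per-class average precision is the area under the precision/recall curve, and mAP is the mean over classes. The function names below are illustrative, and the step-wise integration is a simplification of the interpolated schemes used by standard benchmarks.

    # Minimal sketch of average precision as the area under the precision/recall curve.
    import numpy as np

    def average_precision(scores, matched, n_gt):
        """scores: detection confidences; matched: 1 if the detection matched a
        ground-truth box, else 0; n_gt: number of ground-truth objects of this class."""
        order = np.argsort(-np.asarray(scores, dtype=float))   # rank by confidence
        matched = np.asarray(matched, dtype=float)[order]
        tp = np.cumsum(matched)                                 # cumulative true positives
        fp = np.cumsum(1.0 - matched)                           # cumulative false positives
        recall = tp / n_gt
        precision = tp / (tp + fp)
        # Step-wise area under the precision/recall curve (non-interpolated AP).
        return float(np.sum(np.diff(recall, prepend=0.0) * precision))

    def mean_average_precision(per_class_ap):
        """mAP is the mean of the per-class average precisions."""
        return float(np.mean(per_class_ap))

    # Example: three detections of a single class, two matching ground truth,
    # with two annotated objects in total.
    ap = average_precision(scores=[0.95, 0.80, 0.40], matched=[1, 1, 0], n_gt=2)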