Semi-automatic Image Annotation Methodology for Training Computer Vision Models in Nuclear Safeguards

Year
2024
Author(s)
Asa V. Sudderth - Nevada National Security Sites (NLV Facility)
Abstract
Methods to efficiently annotate images are essential for using object detection neural networks in nuclear safeguards to localize target objects. The conventional manual annotation process is labor-intensive and time-consuming, hindering the rapid deployment of deep learning for many use cases. To address this shortcoming, a semi-automatic method for annotating images of nuclear material containers (NMCs) is proposed that uses LiDAR data alongside color and depth camera images to simplify and automate the annotation procedure. An operator specifies the shape of a new target object in code and manually marks the locations and orientations of the objects on a LiDAR-generated map. Geometric models of the containers, placed at these locations, are projected onto the images, thereby automatically creating annotations. This approach significantly reduces the manual effort and expertise required for image annotation, allowing deep-learning models to be trained on-site within a few hours. The presented work compares the performance of models trained on datasets annotated through three different methods: manual annotation, a commercial annotation service, and the semi-automatic approach. The evaluation demonstrates that the semi-automatic annotation method achieves comparable or superior results, with a mean average precision (mAP) above 0.9 in detecting objects with bounding boxes. Additionally, the paper explores the application of the proposed method to instance segmentation, achieving promising results in detecting NMCs in various formations.
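The core of the projection step described above can be sketched as follows: a geometric model of a container, placed at an operator-marked pose on the LiDAR map, is projected into a camera image, and the 2D box enclosing the projected model corners becomes the annotation. This is a minimal illustrative sketch, not the paper's implementation; the box model, camera extrinsics, and intrinsics below are hypothetical assumptions.

```python
# Hypothetical sketch of projecting a 3D container model into an image
# to generate a 2D bounding-box annotation automatically. All poses and
# camera parameters here are illustrative, not taken from the paper.

def box_corners(center, size):
    """8 corners of an axis-aligned box model (e.g., a container's extent)."""
    cx, cy, cz = center
    sx, sy, sz = size
    return [(cx + dx * sx / 2, cy + dy * sy / 2, cz + dz * sz / 2)
            for dx in (-1, 1) for dy in (-1, 1) for dz in (-1, 1)]

def world_to_camera(p, R, t):
    """Apply extrinsics: rotate by row-major 3x3 matrix R, then translate by t."""
    return tuple(sum(R[i][j] * p[j] for j in range(3)) + t[i] for i in range(3))

def project(p_cam, fx, fy, u0, v0):
    """Pinhole projection of a camera-frame point to pixel coordinates."""
    x, y, z = p_cam
    return (fx * x / z + u0, fy * y / z + v0)

def auto_bbox(center, size, R, t, fx, fy, u0, v0):
    """Automatic annotation: 2D box enclosing all projected model corners."""
    uvs = [project(world_to_camera(c, R, t), fx, fy, u0, v0)
           for c in box_corners(center, size)]
    us, vs = zip(*uvs)
    return (min(us), min(vs), max(us), max(vs))

# Example: camera at the world origin looking along +z (identity extrinsics),
# a 0.5 x 0.8 x 0.5 m container 4 m in front of the camera.
R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
t = [0.0, 0.0, 0.0]
bbox = auto_bbox((0.0, 0.0, 4.0), (0.5, 0.8, 0.5), R, t, 800, 800, 320, 240)
```

Repeating this for every container pose marked on the map, across all camera frames with known extrinsics, yields a labeled dataset without per-image manual drawing; a mask of the projected model surface would extend the same idea to instance segmentation.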