Year
1987
Abstract
A key issue in designing and evaluating nuclear safeguards systems is how to validate safeguards effectiveness against a spectrum of potential threats. Safeguards effectiveness is measured by a performance indicator such as the probability of defeating an adversary attempting a malevolent act. Effectiveness validation means a testing program that provides sufficient evidence that the performance indicator is at an acceptable level. Traditional statistical techniques are useful in designing a testing program when numerous independent system trials are possible. However, within the safeguards environment, many situations arise for which traditional statistical approaches may be neither feasible nor appropriate. Such situations can occur, for example, when there are obvious constraints on the number of possible tests due to operational impacts and testing costs. Furthermore, these tests are usually simulations (e.g., staged force-on-force exercises) rather than actual tests, and the system is often modified after each test. Under such circumstances, it is difficult to make and justify inferences about system performance by using traditional statistical techniques. In this paper, we discuss several alternative quantitative techniques for validating system effectiveness. The techniques include: (1) minimizing the number of required tests using sequential testing; (2) combining data from models, inspections, and exercises using Bayesian statistics to improve inferences about system performance; and (3) using reliability growth and scenario modeling to help specify which safeguards elements and scenarios to test.
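The Bayesian approach sketched in technique (2) can be illustrated with a standard conjugate model. The following is a minimal sketch, not taken from the paper: it assumes the performance indicator is a probability p of defeating the adversary, encodes prior belief (e.g., from models or inspections) as a Beta distribution, and updates that belief with a small number of hypothetical pass/fail exercise outcomes. All names, prior parameters, and outcome data here are illustrative assumptions.

```python
# Illustrative beta-binomial sketch (assumed model, not from the paper):
# p = probability of defeating an adversary; prior evidence from models
# and inspections is summarized as a Beta(alpha, beta) distribution, and
# each staged exercise contributes one pass/fail observation.

def beta_update(alpha, beta, success):
    """Conjugate Bayesian update of a Beta prior with one Bernoulli trial."""
    return (alpha + 1, beta) if success else (alpha, beta + 1)

def posterior_mean(alpha, beta):
    """Posterior point estimate of the defeat probability p."""
    return alpha / (alpha + beta)

# Hypothetical prior: modest initial confidence that p is near 0.8.
alpha, beta = 8.0, 2.0

# Hypothetical exercise outcomes: 1 = adversary defeated, 0 = not defeated.
outcomes = [1, 1, 0, 1, 1]
for y in outcomes:
    alpha, beta = beta_update(alpha, beta, success=bool(y))

print(round(posterior_mean(alpha, beta), 3))  # → 0.8
```

Because the posterior after each exercise serves as the prior for the next, this same machinery supports the sequential-testing idea in technique (1): testing can stop as soon as the posterior provides sufficient evidence that p is at an acceptable level, rather than after a fixed number of trials.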