Year
2023
Abstract
In the past decade, the International Atomic Energy Agency (IAEA) has recognized that artificial
intelligence (AI) can provide important benefits for international nuclear safeguards through improved
scope and performance, while reducing costs and manpower. However, because AI is increasingly deployed in domains already governed by existing regulations, its use for nuclear safeguards verification will also have to comply with applicable regulatory requirements. The IAEA has already engaged in activities using AI for safeguards verification purposes and has expressed interest in expanding its use of this technology. A better understanding of the regulatory landscape for AI, and of how it may affect nuclear safeguards verification, is therefore essential for determining whether these activities could damage the IAEA’s institutional interests and
those of its Member States. The open-source country research conducted for this paper shows that, in the majority of jurisdictions, AI legislation and regulation have only recently begun to develop and that, in several instances, AI applications may conflict with existing frameworks governing data protection and privacy, patent and copyright law, and anti-discrimination policy.
Accordingly, this paper has three objectives: 1) to provide an overview of the AI legislative and regulatory landscape in selected States; 2) to identify potential legal risks and challenges arising from the use of AI for IAEA safeguards applications; and 3) to offer a series of recommendations to help IAEA Member States address these challenges.