Secure Artificial Intelligence Act


A new Bill called the Secure Artificial Intelligence Act has been tabled in the US Senate. The Bill aims to address security vulnerabilities associated with artificial intelligence systems. It proposes to:

  • create a database of all confirmed or attempted security incidents involving significant AI systems,
  • create a “Security Center” at the National Security Agency (NSA) to conduct security research for AI systems, and
  • evaluate supply chain risks associated with AI models.

Create a database to track vulnerabilities

The Bill called for the National Institute of Standards and Technology (NIST) and the Cybersecurity and Infrastructure Security Agency (CISA) to create a “National Vulnerability Database”. This database would be a public repository of all artificial intelligence security vulnerabilities. It must allow private sector entities, public sector organizations, civil society groups, and academic researchers to report such incidents.

The database would contain all confirmed or suspected artificial intelligence security and safety incidents while maintaining the confidentiality of the affected party. Incidents would be classified in a manner that supports accessibility and the ability to prioritise responses to concerning models, especially those used in critical infrastructure, safety-critical systems, and large enterprises.

The Bill also proposed updating the “Common Vulnerabilities and Exposures Program”, the current reference guide and classification system for information security vulnerabilities, sponsored by the Cybersecurity and Infrastructure Security Agency.

Establish an Artificial Intelligence Security Centre

The Security Centre established by the NSA must make available a research test-bed and develop guidance on how to prevent “counter-artificial intelligence techniques”.

Evaluate consensus standards and supply chain risks

The Bill also acknowledged the need to update certain practices in light of AI. It called to “evaluate whether existing voluntary consensus standards for vulnerability reporting effectively accommodate artificial intelligence security vulnerabilities.” In other words, the Bill postulates that the widely accepted standards for reporting security vulnerabilities may need updating with the rise of artificial intelligence.

Further, it called for a reevaluation of best practices concerning supply chain risks associated with training and maintaining artificial intelligence models. These could include risks associated with:

  • reliance on a remote workforce and foreign labour for tasks like data collection, cleaning, and labelling
  • human feedback systems used to refine AI systems
  • inadequate documentation of training data and test data storage, as well as limited provenance of training data
  • the use of large-scale, open-source datasets by public and private sector developers in the United States
  • using proprietary datasets containing sensitive or personally identifiable information.

