The United Kingdom and the United States have jointly developed new global guidelines for AI security, endorsed by agencies from 18 countries. Titled ‘Guidelines for Secure AI System Development’, the guidelines were spearheaded by the UK’s National Cyber Security Centre (NCSC) in partnership with the US Cybersecurity and Infrastructure Security Agency (CISA), alongside industry experts and international agencies from around the world, including members of the G7 group of nations and the Global South. The guidelines aim to ensure the secure development and deployment of AI technology by raising cyber security standards and treating security as a fundamental requirement throughout the development process. They will be officially launched at an event hosted by the NCSC, bringing together key industry, government, and international partners for a panel discussion on securing AI.
Lindy Cameron, CEO of the NCSC, emphasized the importance of unified international efforts to keep pace with the rapid advancement of AI, and said the new guidelines represent a pivotal step towards a shared understanding of the cyber risks posed by AI and how to mitigate them. The publication of the guidelines also signals a collective commitment by governments worldwide to the secure development and deployment of AI capabilities.
These guidelines are the result of a collaborative effort to address the challenge of integrating security into AI systems, underscoring the necessity of building security in during development rather than treating it as an afterthought. This joint endeavour reflects a dedication to promoting transparency, accountability, and secure practices as AI technology progresses.
According to Science and Technology Secretary Michelle Donelan, the release of the new guidelines by the NCSC will place cyber security at the core of AI development and ensure protection against risks at every stage. This initiative comes shortly after the first international agreement on safe and responsible AI, reinforcing the UK’s position as an international advocate for the safe use of AI.
Secretary of Homeland Security Alejandro Mayorkas highlighted the historical significance of the jointly issued guidelines, describing them as a sensible approach to designing, developing, deploying, and operating AI with cyber security at its foundation. He underscored the importance of safeguarding consumers at every phase of a system’s design and development, and the potential for global action of this kind to lead the way in leveraging the benefits of AI technology while addressing its potential harms.
The guidelines cover four key areas: secure design, secure development, secure deployment, and secure operation and maintenance, each with recommended practices to enhance security. The full list of international signatories includes agencies from Australia, Canada, Chile, Czechia, Estonia, France, Germany, Israel, Italy, Japan, New Zealand, Nigeria, Norway, Poland, the Republic of Korea, Singapore, the UK, and the US. The guidelines can be accessed on the NCSC website, along with a blog post from key NCSC officials who contributed to the publication.
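The guidelines describe practices rather than code, but to make the ‘secure deployment’ theme concrete, the sketch below illustrates one kind of practice that guidance in this area typically covers: verifying the integrity of a model artifact against a known digest before loading it. The file name, manifest, and digest value here are hypothetical illustrations, not taken from the guidelines themselves.

```python
import hashlib
from pathlib import Path

# Hypothetical manifest of expected SHA-256 digests for model artifacts.
# In a real deployment this would come from a signed release manifest.
EXPECTED_DIGESTS = {
    "model.safetensors": "<expected sha256 hex digest>",  # placeholder value
}

def sha256_of(path: Path) -> str:
    """Stream the file through SHA-256 to avoid loading it all into memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(path: Path) -> None:
    """Refuse to proceed if the artifact's digest does not match the manifest."""
    expected = EXPECTED_DIGESTS.get(path.name)
    if expected is None:
        raise ValueError(f"No recorded digest for {path.name}")
    if sha256_of(path) != expected:
        raise ValueError(f"Digest mismatch for {path.name}: refusing to load")

if __name__ == "__main__":
    verify_artifact(Path("model.safetensors"))
    # Only load the model once verification has succeeded.
```

This is only a minimal sketch of one control; the published guidelines group many such recommendations under each of the four areas.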