New code of practice for the cyber security of AI development
Developers of AI systems are now encouraged to follow a voluntary Code of Practice (the Code) that sets baseline security requirements addressing the cyber security risks to AI. The Code will be used to create a global standard through the European Telecommunications Standards Institute (ETSI).
The voluntary Code of Practice is focused on AI systems, including systems that incorporate deep neural networks, such as generative AI. The Code is not intended for academics who create and test AI systems solely for research purposes, but it does apply to those who build AI systems, such as chatbots, on top of generative AI models.
The Code sets out cyber security requirements for the lifecycle of AI and identifies five development phases: design, development, deployment, maintenance and end of life. It addresses the stakeholders in these phases, namely developers, system operators, data custodians and end users. The phases are covered by a series of 13 principles, ranging from ‘raise awareness of AI security threats and risks’ at the beginning of the lifecycle, through ‘secure your supply chain’ and ‘conduct appropriate testing and evaluation’, to ‘ensure proper data and model disposal’ at the end.
Whilst voluntary, the Code aims to support the security and safety of AI systems designed, developed and deployed in the UK. It is expected that the Code will become more widely adopted if it forms part of the ETSI standard.
Full details of the Code can be found here: Code of Practice for the Cyber Security of AI - GOV.UK