The Importance of Establishing an AI Incident Reporting System in the UK

The think tank Centre for Long-Term Resilience (CLTR) has recommended that the United Kingdom implement a framework for documenting instances of misuse and malfunctions in artificial intelligence (AI). Without one, the CLTR warns, government ministers could remain unaware of concerning incidents involving the technology.

The think tank has proposed that the next government establish a system for logging AI-related incidents in public services and explore creating a central hub to collect AI-related events nationwide. The CLTR likened such a system to the Air Accidents Investigation Branch (AAIB), arguing that this kind of incident reporting is indispensable for the effective use of AI technology.

In its report, the CLTR cited a database compiled by the Organisation for Economic Co-operation and Development (OECD), which has recorded 10,000 AI “safety incidents” since 2014. As defined by the OECD, these incidents span a wide range of issues, from physical harm to economic, reputational, and psychological effects.

The think tank highlighted examples of AI safety incidents such as deepfakes, self-driving car collisions, and a chatbot’s influence on an individual who intended to harm a public figure. The report’s author, Tommy Shaffer Shane, underscored the transformative role that incident reporting has played in mitigating and managing risks in safety-critical industries such as aviation and medicine.

The CLTR has recommended that the UK government take inspiration from safety-critical industries like aviation and medicine and introduce a “well-functioning incident reporting regime” for AI incidents. It also noted that many AI incidents may not fall under the oversight of existing regulators, particularly those involving cutting-edge AI systems like chatbots and image generators.

The think tank has suggested that the Department for Science, Innovation and Technology (DSIT) prioritize establishing a system for reporting AI incidents in public services, identify gaps in AI incident reporting among UK regulators, and consider launching a pilot AI incident database.

In summary, the CLTR’s proposal for a UK system to record AI incidents is vital to ensuring the safe and responsible use of AI technology in public services. As AI becomes increasingly integrated across sectors, an incident reporting system would provide valuable insight into emerging risks, help coordinate responses to serious incidents, and aid in identifying early indicators of large-scale harm.