Regulating Artificial Intelligence: The Risks and Opportunities
  • The Prime Minister’s AI Safety Summit at Bletchley Park in early November is an opportunity for the UK to take the lead on AI regulation and signal its openness to industries and sectors developing the next generation of AI
  • While sci-fi narratives about the destructive potential of AI are popular, they are overblown
  • Ahead of the summit, a new report from the Centre for Policy Studies calls for the government to take a ‘grown-up and proportionate attitude’ towards AI regulation to maximise opportunity and minimise risk
  • ‘Regulating Artificial Intelligence: The Risks and Opportunities’ recommends the introduction of a safety charter and prediction markets as consumer-friendly solutions to improving AI safety and alleviating public concern

Far from being the ominous tool of destruction portrayed in science-fiction films, AI already exists in many parts of our everyday life – from weather forecasts to our social media feeds. It is welcome that the Prime Minister is seeking to position the UK as a key player in the future of AI – but a new report argues that his AI Safety Summit at Bletchley Park must focus on a small number of targeted interventions to support safe use, not strangle future development before it happens.

‘Regulating Artificial Intelligence: The Risks and Opportunities’, written by CPS Head of Tech and Innovation Matthew Feeney, sets out the current state of AI, highlighting current uses of the technology and dispelling some horror stories about its future.

The report also outlines a ‘Blueprint for Bletchley’, warning that over-regulating new and emerging uses for AI could stifle innovation and damage the Prime Minister’s ambition to make the UK an AI hub.

Instead, the report recommends three key interventions:

  • Making AI a cross-government issue – The report endorses the Government’s view that AI should be embedded within regulators across Whitehall, to provide more tailored and sector-sensitive solutions to this emerging technology, rather than establishing a central ‘AI super-regulator’
  • Introducing safety charters – It argues that the best way to manage the risks of AI is for regulators to define the harms they are seeking to prevent and assess the likelihood of such harms occurring, as well as establishing a set of safety standards
  • Establishing prediction markets – Given the welter of outlandish claims about AI, establishing a set of prediction markets, supported by government, would help consumers, researchers and investors to better understand the risks and opportunities of new AI tools

The report acknowledges the potential harms that AI can cause, and the need for safety regulation, but argues these should be balanced against the technology’s huge potential benefits. It argues the UK should lead international partners by taking an approach which ‘tackles the use of technology rather than the technology itself’ – for example, when dealing with issues such as deepfakes.

The report also recognises the disruptive power of AI for jobs, but argues that we need to invest in helping people through the transition rather than blocking the deployment of technologies that will simply end up being introduced elsewhere.

Centre for Policy Studies - Thursday, 12th October, 2023