
On 22 September 2021, the UK government’s Department for Digital, Culture, Media & Sport (DCMS) announced its long-awaited National AI Strategy. The strategy paper sets out the government’s intended 10-year agenda for making the UK a “global AI superpower” and acknowledges the need to introduce new legislation to regulate AI technologies.
The announcement comes only months after the European Union published its own bold and comprehensive proposals for an AI Regulation and in the same month that DCMS also outlined its suggested post-Brexit reforms of UK data protection law.
There are three ‘core pillars’ to the National AI Strategy. The paper sets out how the UK government intends to invest in the long-term needs of the AI ecosystem, how it can ensure that AI technology benefits all sectors and regions, and what steps can be taken to ensure effective AI governance.
The strategy’s section on AI governance sets out the objective of establishing a governance framework that addresses the unique challenges and opportunities of AI, while also emphasising the need for that framework to be sufficiently flexible and proportionate. The government intends to achieve this through a number of steps.
This more interventionist approach to regulation, and to the development of technical standards, contrasts with the government’s previous sector-led strategy, which placed the emphasis on particular regulators, such as the FCA, CMA, and ICO, to determine the relevant rules and guidelines that should apply in their domains. However, concerns about a lack of consistency, the overlapping nature of regulatory mandates, and the move towards developing global cross-sector standards in other jurisdictions have resulted in a change in philosophy.
What remains unclear from the strategy is the precise nature of the obligations that may be imposed on organisations that develop and use AI. Nonetheless, we can deduce from the government’s pro-innovation plan for Digital Regulation, and its recent proposals on reforming data protection law, that any future legislation will likely be principles-based, seek to avoid “box-ticking exercises”, and place the onus on companies to determine how they comply in practice. This position has also been supported by DCMS Minister Chris Philp, who has stated that the government intends to keep “regulatory intervention to a minimum” by using “existing regulatory structures” where possible.
These factors all indicate that there is the potential for significant divergence from the approach that is being taken in the EU.
While the government has not opened an official consultation on the proposals, the National AI Strategy provides a significant opportunity for organisations and individuals to engage with DCMS and seek to influence the UK’s future AI governance framework.
This article is part 5 of a series examining the existing and emerging legal challenges associated with AI and algorithmic decision-making. We will take a detailed look at key issues including algorithmic bias, privacy, consumer harms, explainability and cybersecurity. We will also explore the specific impacts in industries such as financial services and healthcare, considering how existing policy proposals may shape the future use of AI technologies.
Authored by Dan Whitehead.