Agile and flexible: The UK government’s approach to AI regulation
The UK Government has said it favours an agile approach to the regulation of AI in its response to a White Paper consultation on the subject.
It claims such an approach will allow regulators to balance risk and innovation, and will help the UK keep pace with other nations and lead in safe, responsible AI research.
Acknowledging the rapidly evolving nature of the technology, the government said it will not rush to legislate on AI, as “quick fix” rules would quickly become outdated.
Michelle Donelan, Secretary of State for Science, Innovation and Technology, said: “AI is moving fast, but we have shown that humans can move just as fast. By taking an agile, sector-specific approach, we have begun to grip the risks immediately, which in turn is paving the way for the UK to become one of the first countries in the world to reap the benefits of AI safely.”
As part of its drive for more adaptable regulation, the government has announced a multi-million pound cash injection to upskill regulators.
The £10m fund will allow regulators to develop cutting-edge research and tools to address the risks and opportunities in sectors like healthcare and education.
The announcement comes as various organisations have already begun adopting policies set out in the White Paper.
For instance, the Information Commissioner’s Office has updated guidance on how UK data protection laws apply to AI systems that process personal data.
Julian David, chief executive of techUK, said: “We now need to move forward at speed, delivering the additional funding for regulators and getting the central function up and running. Our next steps must also include bringing a range of expertise into government, identifying the gaps in our regulatory system, and assessing the immediate risks.
“If we achieve this the White Paper is well placed to provide the regulatory clarity needed to support innovation, and the adoption of AI technologies, that promise such vast potential for the UK.”
Looking to boost transparency and trust, the UK Government has also asked regulators, including Ofcom and the Competition and Markets Authority, to publish their approach to managing the technology by the end of April.
These documents will outline any AI-related risks in their areas, their current skillset and expertise to address them, and a plan for how they will regulate AI over the coming year.
The UK Government has also set out its initial thinking for future binding requirements to ensure developers building the most advanced AI systems are accountable for making these technologies sufficiently safe.
The response also comes as the UK Government allocated £100m to AI research to help the UK remain a global leader in the sector.
So, what did the consultation say?
Overall feedback on the White Paper was supportive, with over half of respondents agreeing that if “implemented effectively” it would tackle the “key risks” posed by AI.
However, a third of respondents argued that additional, targeted statutory measures would be necessary to implement the framework effectively, with some concerned principles would not be “sufficiently enforceable,” given the “lack of statutory backing”.
There was also a call for enhanced transparency and liability, with a suggestion for an additional regulatory framework to clarify legal responsibilities relating to AI.
Respondents also stressed the framework should be regularly monitored and evaluated, with details on how data would be collected and used to measure success.
Respondents called for third-party verification of AI models through bias audits, consumer labelling schemes, and external certification against technical standards.
And they emphasised the need for international agreements to establish effective routes to redress for AI-related harms that cross borders.
Around a third believed training and education would help organisations apply the White Paper to everyday activities.
A majority of respondents agreed that delivering the proposed functions centrally would benefit the AI regulation framework.
Respondents were divided on whether the paper balanced innovation and economic growth with risk mitigation.
...and what did the government say?
To tackle concerns over transparency, the UK Government said it would update its guide, Emerging processes for frontier AI safety, by the end of the year.
Also, following the successful pilot of the Algorithmic Transparency Recording Standard (ATRS) – which helps public sector organisations provide information on the algorithmic tools they use in decision-making – and the recently approved cross-government version, the government will now require all departments to follow the ATRS and plans to extend it across the broader public sector over time.
On accountability, as noted above, the government is considering introducing targeted binding requirements on developers of highly capable general-purpose AI systems.
It will also continue to consider new measures to distribute legal responsibility fairly, placing it with those in the AI life cycle best able to mitigate AI-related risks.
The UK Government believes a non-statutory approach offers “critical adaptability,” which will allow for assessments of the strategic approaches that the government has asked regulators to publish by April.
The government also said it would establish a steering committee by spring to support knowledge exchange and coordination on AI governance, and to prevent guidance from different regulators overlapping, duplicating or contradicting one another.
Reacting to concerns about the risks posed by the quick development of AI, the government revealed it would “conduct targeted engagement” on its cross-economy AI risk register later this year.
Finally, noting respondents’ concerns about the data, metrics and sources used to monitor the framework, the government said its proposed plan for assessing the framework would be subject to further consultation in spring.