
Regulation of Artificial Intelligence (AI) Under a Labour Government


Well, it’s polling day. By the end of today, we will know the outcome of the election, and it may mean a new Government with different views on various important societal matters. Whilst some critics of this year’s general election campaign might argue that the main parties are two sides of the same coin, the same cannot be said of the parties’ approaches to AI. During their time in government, Sunak’s Conservatives have taken a somewhat “hands-off” approach to the regulation of AI technologies. Any regulation to date has been sector-specific, with the government relying on the existing powers of regulators (e.g., the CMA, FCA, ICO and Ofcom) to address the harms of these technologies as they materialise. That regulation is supported and informed by the newly established AI Safety Institute. Comprising AI experts from leading academic institutions and technology developers, the Institute conducts safety assessments on current AI models voluntarily shared by developers, in order to better understand the technology and its risks and to enhance the country’s regulatory approach.

Supporters of the government’s framework argue that this flexible but informed approach to regulation allows British innovation to flourish within this new, dynamic and growing sector. However, others, such as Starmer’s Labour, argue that the newly realised harms accompanying the use of AI technology (e.g., privacy violations, misinformation and deepfakes) point towards a more pressing need for specific legislation to protect the public interest. With the party the odds-on favourite with bookies and pollsters alike to take office imminently, it is worth considering the future of AI regulation under Labour.

AI REGULATION 

The House of Commons Science, Innovation and Technology Committee recently encouraged the incoming government to:

‘be ready to introduce new AI-specific legislation, should the current approach based on regulators’ existing powers and voluntary commitments by leading developers prove insufficient to address current and potential future harms associated with the technology.’

Heeding the above warning, Labour have proposed a two-pronged approach to AI regulation in their manifesto:

AI-Specific Legislation

Whilst little is currently known about the detailed intricacies of their proposed regulations, Labour have undertaken in their manifesto to ‘ensure the safe development and use of AI models by introducing binding regulation on the handful of companies developing the most powerful AI models.’ In contrast to the Conservatives, Starmer’s Labour appear to be approaching the regulation of AI in a manner more akin to the UK’s neighbours in the EU and across the pond in the US. The EU’s AI Act (the world’s first comprehensive regulation of the technology) establishes binding safety rules and standards for developers depending on the risk level of their models. Whilst the Act prohibits models that pose the greatest danger to the public interest, models that may foreseeably affect the safety or fundamental rights of individuals (‘high-risk’ models) will be subject to safety assessments to ensure compliance.

Labour have previously signalled that they are similarly inclined to regulate in this manner. Regarding the aforementioned AI Safety Institute, Labour have suggested that they would seek to place the voluntary testing agreement – under which AI developers may choose to give the organisation access to their models for safety testing – on a statutory footing, obliging larger companies to share requested data with the Institute. This measure would address one of the main criticisms levelled at the expert body thus far, namely that the models it tests are released to the public before they have been properly examined. Should the Institute have the authority to request test data on models before deployment, regulators would become better informed of upcoming challenges in the pipeline and could ensure a suitable regulatory framework is in place before models launch.

The Regulatory Innovation Office

When devising regulation to meet these challenges, Labour propose to transfer regulatory planning to one central, newly created body: the Regulatory Innovation Office. Labour anticipate that the current reliance on individual regulators’ experience, powers and capacity will prove insufficient in the face of the novel harms of AI and other emerging technologies, and the new body will be charged with finding broad-based solutions to address such matters.

Recent history points to the need for such a body. Whether it is hate speech on social media, investment scams perpetrated through largely unregulated cryptoassets, or now the harms posed by AI, government action to protect the public from the more damaging aspects of emerging technologies has always felt behind the curve, unable to keep pace with rapid developments. Through the creation of a body focused closely on developing technological innovations, one hopes regulators will be better placed to identify and respond to these emerging issues, through improved coordination and faster approval times.

INNOVATION

Should the above measures indeed be implemented, the question undoubtedly remains whether such action can be taken whilst simultaneously championing growth in the sector. Labour seek to address this in their manifesto, promising support to the private sector by removing red tape that could prevent or delay the construction of data centres within the UK. The party also seek to establish a National Data Library, which will pool the nation’s existing research data to help scientists and developers advance technological innovation in the area. Labour anticipate these measures will not only benefit private companies and developers but will also give the government ample opportunity to offer its citizens effective “data-driven public services.”

By tomorrow morning, we will know whether the electorate has chosen Labour’s manifesto promises to form the agenda of the next government. However, should Starmer’s Labour indeed take office, expect the regulation of AI technology to be high on that agenda.


About the authors

Frankie Cusack

Trainee Solicitor

Commercial Real Estate

Loretta Maxfield

Partner

Data Protection & GDPR, Intellectual Property

For more information, contact Frankie Cusack or any member of the Commercial Real Estate team on +44 131 240 0719.