The UK risks losing out on hi-tech growth if it fails to regulate AI


Although a vital step, the white paper is not sufficient to set the United Kingdom on the route to taking full advantage of AI. White papers are used by the government to formulate future laws: they act as a platform for further consultation and discussion, allowing changes to be made before a bill is introduced to Parliament. But the country lags far behind its allies on this issue, and success is far from assured.

I will outline what urgently needs to be done to achieve the UK’s ambitions. AI regulation and economic growth are complementary, not at odds. The desire to ensure that AI is used safely and fairly, and the aim to stimulate innovation, productivity and the wider UK economy, are mutually reinforcing goals. Regulation matters not only for public safety but also for securing future economic investment in AI.

This is because regulation provides certainty for businesses – a framework to structure planning and investment. There are four stages to AI regulation: setting a direction, passing mandatory requirements, turning them into specific technical standards, and then implementing them. 

The white paper is a step in this process. It lists five principles to inform the responsible development of AI, including accountability and the ability to contest decisions. Yet the EU and the US are already at the third stage – setting the standards.

Waking up

UK corporations and universities are at the vanguard of AI research, and the country has smart and well-respected regulators. However, the United Kingdom has been relatively slow to act.

The EU was faster in reaching agreement between its 27 member states than the UK was between two of its own government departments. The US, despite a divided federal government, has also beaten the United Kingdom to market.

Prime Minister Rishi Sunak has woken up to the problem and is now moving matters forward. The next steps are to insert the UK into international AI standard-setting and to ensure that the country’s regulators are coordinated enough to underpin the growing market in AI assurance – testing and monitoring to ensure that AI systems comply with standards.

Several events in recent years illustrate why trust in technology is important. In the Netherlands, AI was used to detect welfare fraud, and penalties were applied on the basis of risk calculations made by algorithms.

The lives of thousands of families were disrupted as a result. In the UK, the A-level grading fiasco saw exam grades determined by an algorithm rather than by assessment.

About 40% of students received grades below those their teachers had predicted.

Standard-setting is progressing in the EU and the US. The EU is moving towards the passage of its AI Act.

It will set high-level requirements for all companies selling AI products deemed “high-risk” – those that could affect the health, safety or fundamental rights of an individual.  

This would cover AI used in recruitment, grading exams or healthcare. It would also cover AI in safety-critical settings such as policing or critical infrastructure. The EU has already started setting technical standards, a process due to be completed by January 2025. The US started a little later but has quickly developed a standardization process through the federal agency NIST (the National Institute of Standards and Technology).

The EU’s AI Act has a serious “stick” in the form of large penalties. The US version has more carrots: companies that want to sell to the federal government must follow NIST standards.

Setting the bar 

The EU and the US will set the bar for the rest of the world. The EU is a marketplace of 450 million prosperous consumers and a dominant “regulatory superpower”.

Other countries will probably copy the EU’s rules, and global companies will tend to comply with those rules everywhere because it is simpler and cheaper. The US is a market of 330 million even more affluent consumers and is home to all the big tech companies, so NIST standards will be influential as well.

The EU and the US each have incentives to reach a joint arrangement. This could be done via the EU-US Trade and Technology Council.

The major international option is the International Organization for Standardization (ISO). China is dominant there, so it may make sense for the US and the EU to align their standards and present them as compliant with ISO. If the UK doesn’t find a place in this process, it could become a rule-taker rather than a rule-maker.

Instead, the UK should try to broker an arrangement between the EU and the US, so that it is at the table when setting these standards. 

The UK’s recent International Technology Strategy has championed similar “interoperable” technology standards so that the nation’s allies can work together. This would also help UK firms export AI products and services to the US and the EU.

Standardization will create a new multi-billion pound market for “AI assurance”, of which the UK will want to capture as much as it can. But it has to move fast, as cities like Frankfurt and Zurich are already in pursuit.

Ducks in a row

The UK has world-leading consultants, auditors, lawyers and AI researchers to assess AI systems and ensure they are ethical and secure. Yet capitalizing on this business opportunity will require coordination.

Otherwise, different regulators may overlook certain issues, leaving gaps. The reverse can also occur, with several regulators overseeing the same issues – creating confusing overlap.

A strong central body should oversee this coordination, assess risk and assist regulators. Businesses will also need clarity on which regulators cover what, and on what timescale.

The UK has an opportunity to ensure that international AI regulations and standards benefit the nation, drive economic growth, protect families and create new markets. 

Although it was slow out of the starting gate, the country can still catch up with the pack and even take the lead.

About the author

Marta Lopez
