The UK will spend $124 million on a task force for safer AI


A UK government task force to develop safe and reliable artificial intelligence models for sectors such as healthcare and education will receive £100 million (US$124 million) in initial funding.

Foundation models are AI systems trained on large unlabeled data sets. They underpin large language models such as OpenAI’s GPT-4 and Google’s PaLM, which power generative AI applications like ChatGPT, and can be used for a wide range of tasks, from translating text to analyzing medical images.

Announcing the funding, the Department for Science, Innovation and Technology (DSIT) said AI is expected to increase global GDP by 7% over the coming decade, making its adoption a “significant opportunity” for UK growth. The news follows Chancellor Jeremy Hunt’s announcements in his Budget last month, including a new AI research award that will offer £1 million per year to the company that achieves the “most groundbreaking British AI research”.

In addition to an AI sandbox, or testing environment, the government has committed to working with the Intellectual Property Office to help innovators bring new products to market and to provide clarity on IP rules so generative AI companies can access the material they need.

DSIT said the task force aims to develop safe and reliable use of artificial intelligence to ensure the UK is globally competitive. The team, comprising government and industry experts, will work with industry, regulatory and civil society groups to develop safe and reliable foundation models at both the scientific and commercial levels.

“Developed responsibly, cutting-edge AI can radically impact nearly every industry. It could revolutionize how we develop new treatments, tackle climate change and improve our public services while growing and future-proofing our economy,” said Science, Innovation and Technology Secretary Michelle Donelan in comments released alongside the announcement. She said it is vital for the public and businesses to have the confidence to embrace AI technology and fully realize its benefits.

Managing artificial intelligence

Last month, the UK government released a white paper outlining its plans for regulating general-purpose AI.

The paper sets out a framework for “responsible use”, built on five principles that companies are expected to follow:

  • Safety, security and robustness
  • Appropriate transparency and explainability
  • Fairness
  • Accountability and governance
  • Contestability and redress

However, to “avoid heavy-handed legislation that could stifle innovation”, the government has opted to hand AI governance over to sectoral regulators, who will have to rely on existing powers without new legislation.

Meanwhile, ChatGPT is being investigated by a dedicated task force set up by the European Data Protection Board (EDPB), after several European privacy watchdogs raised concerns over whether the technology complies with the EU’s General Data Protection Regulation (GDPR).

On its website, the EDPB said the task force’s purpose is to “enhance cooperation and exchange information on potential enforcement actions undertaken by data protection authorities.”

About the author

Marta Lopez
