A UK government task force to develop safe and reliable artificial intelligence models for sectors such as healthcare and education will receive £100 million (US$124 million) in initial funding.
Foundation models are AI systems trained on large unlabeled data sets. They underpin large language models such as OpenAI’s GPT-4 and Google’s PaLM—used in generative AI applications like ChatGPT—and can be applied to a wide range of tasks, from translating text to analyzing medical images.
Announcing the funding, the Department for Science, Innovation and Technology (DSIT) said AI is expected to increase global GDP by 7% over the coming decade, making its adoption a “significant opportunity” for UK growth. The news follows Chancellor Jeremy Hunt’s announcements in last month’s Budget, including a new AI research award that will offer £1 million per year to the company that achieves the “most groundbreaking British AI research”.
In addition to an AI sandbox, or testing environment, the government has committed to working with the Intellectual Property Office to help innovators bring new products to market and to provide clarity on intellectual property rules so generative AI companies can access the material they need.
DSIT said the task force aims to develop safe and reliable use of artificial intelligence to ensure the UK remains globally competitive. The team, made up of government and industry experts, will work with industry, regulatory and civil society groups to develop safe and reliable foundation models at both the scientific and commercial levels.
“Developed responsibly, cutting-edge AI can radically impact nearly every industry. It could revolutionize how we develop new treatments, tackle climate change and improve our public services while growing and future-proofing our economy,” said Science, Innovation and Technology Secretary Michelle Donelan in comments released alongside the announcement. She said it is vital for the public and businesses to have the confidence to embrace AI technology and fully realize its benefits.
Managing artificial intelligence
Last month, the UK government released a white paper outlining its plans for regulating general-purpose AI.
The paper sets out a framework for “responsible use,” built on five principles that organizations developing and deploying AI should follow:

- Safety, security and robustness
- Appropriate transparency and explainability
- Fairness
- Accountability and governance
- Contestability and redress
However, to “avoid heavy-handed legislation that could stifle innovation”, the government has opted to hand AI governance over to sectoral regulators, who will have to rely on existing powers without new legislation.
Meanwhile, ChatGPT is to be investigated by a dedicated task force set up by the European Data Protection Board (EDPB), after several European privacy watchdogs raised concerns over whether the technology complies with the EU’s General Data Protection Regulation (GDPR).
On its website, the EDPB said the task force’s purpose was to “enhance cooperation and exchange information on potential enforcement actions undertaken by data protection authorities.”