The U.K. government will host the world’s first artificial intelligence safety summit at Bletchley Park, the wartime home of the codebreakers who cracked Nazi Germany’s Enigma code, work credited with helping to shorten World War II.
The government said in a Thursday press release that it would host the meeting — which will convene international governments, leading AI firms and experts in research — to discuss the “safe development and use of frontier AI technology.”
The event will take place on Nov. 1 and 2, the British government said, and will “consider the risks of AI, especially at the frontier of development, and discuss how they can be mitigated through internationally coordinated action.”
“To fully embrace the extraordinary opportunities of artificial intelligence, we must grip and tackle the risks to ensure it develops safely in the years ahead,” U.K. Prime Minister Rishi Sunak said in a statement on Thursday.
“With the combined strength of our international partners, thriving AI industry and expert academic community, we can secure the rapid international action we need for the safe and responsible development of AI around the world.”
It is not yet clear which companies, governments, or researchers will attend the event.
The frontrunners in the AI race are mainly large U.S. tech companies: Microsoft, a prominent backer of ChatGPT creator OpenAI; Google, which developed the generative AI chatbot Bard; and Meta, which is behind the Llama open-source large language model.
The U.K. is home to several leading companies involved in the research, development and commercial production of AI, including Google DeepMind, the London-based AI lab, and Synthesia, a digital media platform that lets users create AI-generated videos.
The renowned Bletchley Park estate was the home of the World War II codebreakers, who broke the secret Enigma code used by the German military to encrypt its wartime communications.
The operation, in which English mathematician, computer scientist and cryptographer Alan Turing played a leading role, decrypted messages detailing German military strategy. Turing is widely considered to be the father of theoretical computer science and artificial intelligence.
The U.K. tech sector has been flagging of late, following drops in venture capital investment. Overall capital flowing into the U.K.’s tech industry plummeted by 57% in the first half of 2023 to $7.4 billion, according to data from VC firm Atomico.
Britain has been positioning itself as a global leader in technology, launching initiatives to embrace innovations such as digital currencies, blockchain and so-called “Web3.”
AI is the latest technology that the country is targeting — and in which it is looking to set global standards. In June, Sunak pitched Britain as the “geographical home of global AI safety regulation.” But the U.K. has a steep hill to climb to compete with major players, such as the U.S. and China.
The U.S. is by far the world leader when it comes to AI, with massive firms ploughing resources into the technology. China has also been deepening its push into AI, with Alibaba, Tencent and Baidu launching their own generative AI chatbots, while Beijing has already set rules for governing these services.
“The U.K. is well placed to play this role as home to top AI talent and leading companies like Deepmind. We are AI optimists and believe that, with the right guardrails, this technology can be truly transformational for society,” Phelim Bradley, CEO of Prolific, a firm that offers paid surveys to help AI developers train and refine their systems, told CNBC via email.
“For this Summit to be successful, it’s important that enough focus is given to topics including the fair and ethical treatment of AI workers (such as data annotators), the sourcing and transparency of data used to build AI models, as well as the dangers of bias creeping into these systems due to the way in which they are being trained.”
AI is rapidly being applied in areas ranging from healthcare to financial services and cybersecurity. Generative AI algorithms, in particular, pose a number of risks to society, with experts warning of the potential for job displacement, misinformation and cyber breaches.