Topics at First-Ever Summit to Include AI Cybersecurity and Nation-State Threats
Cybersecurity and safety risks tied to frontier artificial intelligence will be a key focus of the U.K. government's first-ever global AI summit.
Unveiling the plan for its AI Safety Summit scheduled Nov. 1-2 at Bletchley Park, Buckinghamshire, the U.K. government said the event will focus on preventing the misuse of emerging AI capabilities that are deemed dangerous enough to pose “severe risks to public safety.”
“We have already seen the dangers AI can pose: teens hacking individuals’ bank details, terrorists targeting government systems, cybercriminals duping voters with deepfakes and bots, even states suppressing their peoples,” Deputy Prime Minister Oliver Dowden said last week. “Our efforts need to preempt all of these possibilities – and to come together to agree to a shared understanding of those risks.”
Organizers warned that the misuse of AI could help nation-state groups or other adversaries execute cyberattacks targeting critical infrastructure or develop bioweapons, causing “significant harm” or “loss of life.”
Noting that the capabilities of the technology “are very difficult to predict,” even for AI model developers, the government said in a statement that the summit will lead discussions on risks posed by “narrow AI” designed to perform a single task, such as models used for bioengineering, as well as by generative AI.
The summit represents a move not just to urgently address ways of mitigating AI risks but also to ensure that the government, British academics and businesses have a role in promoting global cooperation on AI development.
Although Prime Minister Rishi Sunak is eager to turn the U.K. into the next AI hub, British lawmakers have called out the government’s slow response to regulating AI. In a letter published last month, lawmakers on the U.K. Parliament’s Science, Innovation and Technology Committee said Britain’s interim AI strategy, published in March, could impede the country’s AI development because the government does not plan to introduce any new legislation in the near term. As a result, other jurisdictions, “principally the European Union and the United States,” may well be the ones “to set international standards,” they warned (see: Mitigating AI Risks: UK Calls for Robust Guardrails).
Lawmakers fear that unless Britain introduces its own legislation, AI standards, governance and enforcement may follow the same path as the EU General Data Protection Regulation: if the EU articulates its position first, the U.K. may find it “difficult to deviate” even if it favors a different approach.