New Turing Institute Report Urges Government to Create AI Crime Task Force

British law enforcement agencies are ill-equipped to tackle artificial intelligence-enabled cybercrime, a report by The Alan Turing Institute says, pointing to an “enormous gap” between police technical capabilities and the growing sophistication of threat actors.
The report, based on responses from 22 experts across government, academia and law enforcement, said the proliferation of large language models such as OpenAI’s ChatGPT and Google’s Gemini has fueled the growth of AI-enabled cybercrime.
Fraudsters are relying on synthetic video and audio content to improve the effectiveness of their attacks. Deepfake content, for example, helped scammers steal 20 million pounds from a Hong Kong-based British multinational company last year, and ransomware hackers are using AI for network reconnaissance and smart payload delivery, researchers said.
The easy availability of AI technology has created an “enormous gap between the technical capability of law enforcement in the U.K. and the nature of the problem,” the report found. Some participants expressed concerns about “the police’s ability to understand what is out there, deal with it and utilize AI itself.”
While many of these attacks are still in nascent stages, the emergence of non-Western open-source large language models, such as DeepSeek’s R1 and V3 models, could further widen the gap between criminals and defenders, said Ardi Janjeva, senior research associate at the Turing Institute’s Centre for Emerging Technology and Security.
“Western governments have very little leverage over the Chinese open-source ecosystem. There’s kind of far fewer routes in terms of talking to developers building to make quick fixes when certain vulnerabilities have been identified by the national security community. So, I think that’s one of the main concerns,” Janjeva told Information Security Media Group.
To effectively counter rising AI-enabled crimes, the report recommends that the U.K. National Crime Agency create an AI crime task force within its cybercrime unit.
The task force could collect data from U.K. agencies to identify tools used by criminals and respond to AI-enabled crimes swiftly. The report also recommends the U.K. government work closely with European and other law enforcement agencies to deter and prevent criminal adoption of the technology.
“The proposed AI crime taskforce should maintain a new central database for this purpose, working closely with international partners,” the report said.
The government should also cut down bureaucratic red tape that is preventing law enforcement from using AI tools to closely track and pursue serious online criminals, Janjeva said.
Turing Institute researchers shared the report findings with the National Crime Agency and other police agencies, and are working to help strengthen law enforcement’s AI capabilities.
The NCA did not immediately respond to a request for information from Information Security Media Group.