PM Starmer Calls Sexualized Deepfakes and Revenge Porn a ‘National Emergency’

The United Kingdom inched closer to requiring tech firms to scan all user-uploaded images, as Prime Minister Keir Starmer said sexualized deepfakes and so-called revenge porn constituted a “national emergency.”
Ofcom, enforcer of the Online Safety Act, said Wednesday that it would soon announce a decision on a proposed requirement for websites and apps to deploy “hash matching” technology to spot deepfakes and other intimate imagery that has been shared without consent.
The British communications regulator ran a consultation on the proposal last year, during which many responses warned of privacy risks. Ofcom had been expected to unveil its decision this fall but has now brought that announcement forward to May. “Given the urgent need for better online protections for women and girls – who are disproportionately affected by non-consensual intimate image abuse – we have decided to accelerate our timeline,” it said in a statement.
Starmer said in a Thursday article for The Guardian that his government was “designating creating or sharing these images as a priority offense under the Online Safety Act, so they are treated with the seriousness they deserve.”
“We are putting tech companies on notice: any non-consensual intimate image that is flagged must be taken down within 48 hours,” the prime minister wrote. “We must create a system where a victim reports once, and it’s removed everywhere, on multiple platforms and automatically deleted if it is reuploaded.”
The U.S.’s Take It Down Act, which became law last May, also requires non-consensual intimate imagery to be taken down within 48 hours of the victim’s request. Tech firms have until May this year to have the necessary reporting procedures in place. Another U.S. bill known as the Defiance Act, which would allow victims to sue the creators for civil damages, cleared the Senate in January.
The U.K.’s Online Safety Act comes with much higher penalties for platforms that allow the creation and sharing of non-consensual sexual images, potentially stretching to 10% of global annual revenues and even the blocking of platforms altogether. The law does not currently apply to artificial intelligence chatbot providers – which is why Ofcom’s ongoing probe targets X and not the standalone version of Grok – but Starmer said on Monday that this “loophole” would be closed.
His moves this week are the latest examples of the rapidly spiraling aftershocks of X’s decision to let users command its Grok AI tool to create fake nudes of adults and children. Although it claims to have since introduced safeguards, Elon Musk’s social network is now under investigation by criminal prosecutors in France and by online-content and data-protection regulators in the European Union and the U.K., while Spain’s government intends to make platform owners such as Musk personally liable for the spread of illegal content (see: Elon Musk’s AI Bot Snared in New Irish, European Probes).
At least 1.2 million children across 11 countries “disclosed having had their images manipulated into sexually explicit deepfakes in the past year,” UNICEF reported earlier this month following a survey that it conducted alongside Interpol and child safety campaigners.
The hash matching measures that Ofcom is considering would be useful for spotting images that have previously been flagged as illegal, to quickly stop people from reuploading them after an initial removal. The technique involves computing a string of characters – the “hash” – that identifies a unique image, then storing it in a database for proactive comparison with the hashes of subsequently uploaded images.
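At its simplest, that workflow can be expressed in a few lines. The sketch below is a hypothetical illustration rather than any platform’s actual pipeline: it uses an ordinary cryptographic hash (SHA-256), so it only catches byte-for-byte identical reuploads of a previously flagged image.

```python
# Hypothetical sketch of hash matching against a database of flagged images.
# A cryptographic hash only matches exact, byte-for-byte copies.
import hashlib

known_hashes: set[str] = set()  # in practice, a shared and vetted database


def flag_image(image_bytes: bytes) -> None:
    """Record the hash of an image confirmed as non-consensual or otherwise illegal."""
    known_hashes.add(hashlib.sha256(image_bytes).hexdigest())


def is_known(image_bytes: bytes) -> bool:
    """Check a newly uploaded image against previously flagged hashes."""
    return hashlib.sha256(image_bytes).hexdigest() in known_hashes
```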
The regulator reckons this is the right approach for spotting not only child sexual abuse material – it has been recommending it in this context for years – but also intimate image abuse content and terrorism content. It is specifically advocating so-called perceptual hash matching that can find similarities between uploaded images and known illegal images, as opposed to cryptographic hash matching, which only detects exact matches.
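To illustrate the difference in behavior, here is a minimal perceptual hash of the “difference hash” (dHash) family, again purely illustrative: deployed systems rely on algorithms such as PhotoDNA or PDQ, whose details differ. Unlike the cryptographic version above, two lightly edited copies of the same image produce hashes that differ in only a few bits, so a small Hamming distance between them can flag a near-duplicate.

```python
# Minimal difference-hash (dHash) sketch using Pillow; the threshold is illustrative.
from PIL import Image


def dhash(path: str, hash_size: int = 8) -> int:
    """Shrink to grayscale, then record whether each pixel is brighter than its right neighbor."""
    img = Image.open(path).convert("L").resize((hash_size + 1, hash_size))
    pixels = list(img.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | (left > right)
    return bits


def hamming(a: int, b: int) -> int:
    """Number of bits in which two hashes differ."""
    return bin(a ^ b).count("1")


# A re-saved or lightly cropped copy typically lands within a small distance
# of the original, which an exact-match hash would miss entirely.
# if hamming(dhash("upload.jpg"), flagged_hash) <= 10:
#     queue_for_review()
```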
Perceptual or “fuzzy” hash matching is by no means a new concept – it is the same approach employed by Microsoft’s widely-used PhotoDNA, Cloudflare’s CSAM scanning tool, Meta’s Pretty Darn Quick algorithm and Apple’s NeuralHash. Nonetheless, Ofcom’s consultation on the matter last year triggered skepticism from digital rights activists, tech industry lobbyists and data-protection regulators alike.
The U.K. Information Commissioner’s Office called on Ofcom to “clarify its evidence on the availability of proactive technologies that are accurate, effective and free from bias for all the harms in scope of the measures… to minimize the risk of services deploying inaccurate or ineffective proactive technologies that may be incompatible with the requirements of data protection law.”
“Automation carries inherent risks to the rights and freedoms of individuals, particularly when the processing is conducted at scale,” the ICO said.
Similarly, European Digital Rights, a coalition of civil and human rights organizations, warned that hash matching would “inevitably lead to false positives” due to a lack of reliability, and “broad monitoring would capture very little harmful content while exposing all users to privacy risks.” It also highlighted the fact that hash matching is not useful for spotting newly generated illegal images, and called on Ofcom to “fully and unambiguously exclude encrypted file sharing services from scanning duties.”
EDRi also said Ofcom’s proposal could force EU service providers operating in the U.K. to carry out “indiscriminate monitoring of content” that would clash with the EU’s prohibition on general monitoring. “In turn, EU operators may be faced with the need of closing their products to U.K. customers to avoid falling under the scope of [Online Safety Act] obligations, or be in breach of EU law should they offer services to EU customers while breaching their privacy rights through general monitoring,” it wrote.
The Computer and Communications Industry Association argued that smaller service providers would find it very difficult to comply with the proposed requirements because they lack access to databases of “confirmed violative content.”
The lobbying outfit claimed that, while there are long-established databases of known CSAM, there are no “equivalent centralized resources and reporting structures” for categories such as non-consensual images. “This absence of organized, vetted databases and standardized reporting mechanisms makes the development of robust detection technology impossible for many services,” it warned. “An assumption that services already using certain types of hash matching can easily expand to additional harm types risks a fundamental misunderstanding of content moderation technology and operational requirements.”
Meta, TikTok and X are already listed partners of StopNCII.org, which provides a hash matching tool based on the PhotoDNA and PDQ algorithms.
Security researchers found in late 2024 that perceptual hashing systems are vulnerable to so-called inversion attacks that allow people to reconstruct approximations of sensitive images from their stored hashes. As a result, they recommended high data-protection requirements for the relevant databases.
