Government Says Online Spaces Aren’t a Lawless Zone
LONDON — The UK government has issued a sharp warning to social media and tech companies: remove non-consensual intimate content — including deepfake pornography and revenge images — from your platforms within 48 hours, or face severe penalties.
Under proposed changes to the Crime and Policing Bill, tech firms that fail to act within that deadline could be fined up to 10% of their global revenue or see their services blocked in the United Kingdom. Victims would have to flag harmful content only once, with platforms then required to remove it across all their services and prevent future uploads.
A “National Emergency” on Online Abuse
Prime Minister Keir Starmer has framed the escalation of online image abuse as part of a broader crisis in violence against women and girls, calling the surge in harmful deepfake and revenge content a “national emergency.”
In an article published earlier this week, Starmer wrote that making the internet a safer space requires tech companies to take responsibility for the widespread circulation of these images, which often cause lasting trauma to victims.
Tech Firms Under Pressure — From X to AI Tools
The UK’s announcement follows criticism of tools such as Elon Musk’s Grok AI, which was found to generate “nudified” images of women and girls in response to user prompts.
Ofcom, the UK’s communications regulator, is also seeking expanded powers to enforce compliance with online safety laws and is considering measures such as automatic image detection to identify and block harmful material.
A Clear Message on Accountability
Government ministers and campaigners say tougher rules are essential because voluntary action by platforms has been inconsistent, leaving many victims to chase removal of explicit images one site at a time.
Officials at the Department for Science, Innovation and Technology have said a key aim of the new legislation is to treat non-consensual intimate images with the same seriousness as child sexual abuse material or extremist content.
What Happens Next?
The amendment is expected to move through Parliament soon, with enforcement measures set to take effect later this year. In parallel, policymakers are also debating broader online safety reforms — including possible further penalties for platforms that fail to implement proactive protections.
From: IRIB