
Digital Shield: Unpacking the UK's Online Safety Act and its Global Impact
Technology giants conducting business within the United Kingdom now shoulder greater responsibility for safeguarding minors against harmful online material. This shift follows the introduction of new safety regulations by Ofcom, the nation's media regulator. The move responds to growing concerns about the digital environment young people navigate daily. Research starkly illustrates the scale of the issue: studies found that a significant proportion of surveyed thirteen- to seventeen-year-olds had encountered potentially harmful material online within the previous month.
This statistic highlights the pervasive nature of online risks and the urgent need for robust protective measures. The government designed the Online Safety Act 2023 to create a safer online space for both children and adults, imposing fresh legal obligations upon businesses running social platforms and search services. Companies must now demonstrate greater accountability for user safety within their services, marking a significant change in the regulatory landscape for digital services accessible within the UK.
The Act's Core Mandate: Protecting Children
The Online Safety Act received Royal Assent on 26 October 2023, passing from bill into law. Its primary objective is to shield children from harmful online experiences. Ofcom, tasked with implementing and enforcing the legislation, has finalised crucial rules for protecting children. These rules target social media platforms, search engines, gaming applications, and websites, and become legally binding on 25 July 2025, a key date for compliance. The regulator confirms the rules aim specifically to stop minors seeing the worst types of harmful content.
This includes material concerning suicide, self-harm, and eating disorders, as well as pornographic content. Reports indicate the average age at which children first encounter online pornography is thirteen, with some seeing it much younger, underlining the necessity of these protections. Beyond this, the Act seeks to shield young people from content that is misogynistic, violent, hateful, or abusive. It also covers dangers such as cyberbullying and dangerous online challenges.
Ofcom's Phased Implementation Strategy
Ofcom adopted a phased approach to bring the Online Safety Act's various duties into effect, acknowledging the complexity of the new regime. Phase one concentrated on illegal harms. Ofcom published its illegal harms statement and codes of practice in late 2024. All regulated services had to complete an illegal content risk assessment early in 2025. Shortly after, Ofcom gained enforcement powers over these illegal content duties. Phase two focuses on child safety, pornography, and the protection of women and girls. Guidance on age assurance for pornography sites was issued early in 2025, with related duties becoming enforceable shortly after.
Guidance for children's access assessments also appeared early in 2025, giving services three months to complete these. Ofcom expects to publish its final Protection of Children codes and risk assessment guidance in spring 2025. Services likely accessed by children must then conduct a children's risk assessment by summer 2025. Child protection duties are anticipated to become enforceable around that same time. Phase three addresses duties for categorised services, requiring additional transparency and accountability measures from larger platforms. The government laid draft regulations for categorisation thresholds in late 2024, with Ofcom expecting to publish the register of categorised services in summer 2025. Full implementation of the legislation is expected during 2026.
Image Credit - Freepik
Concrete Requirements for Tech Companies
Businesses wishing to maintain their UK presence must implement more than 40 specific, practical measures outlined by Ofcom. A central requirement involves reconfiguring the algorithms that determine what appears in children's feeds, so that companies actively screen out harmful material rather than relying on content moderation alone. They must also introduce more rigorous age checks, known as age assurance, to establish whether users accessing potentially harmful material or services are under eighteen.
Methods Ofcom considers 'highly effective' include photo ID matching, facial age estimation, and checks using open banking or credit cards, all of which must be applied in a way that safeguards user privacy. Companies face a mandate to take down flagged harmful content more quickly than before, and must provide support for young people affected by seeing such content. Furthermore, each firm needs to appoint a specific, named individual responsible for protecting minors within its organisation. These designated persons will oversee annual reviews assessing how effectively the platform manages risks to its younger users.
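To make the shape of these duties more concrete, the sketch below shows one way an age-assurance result might gate restricted material and screen a child's feed. It is a minimal illustration in Python with hypothetical names and category labels, not a measure taken from Ofcom's codes or any platform's actual system.

```python
# Minimal illustration only: hypothetical data model and category labels,
# not an Ofcom-specified measure or a real platform API.
from dataclasses import dataclass

# Categories the rules treat as harmful to children.
HARMFUL_TO_CHILDREN = {"pornography", "suicide", "self-harm", "eating-disorder"}

@dataclass
class User:
    user_id: str
    age_assured: bool   # completed a "highly effective" check (photo ID match, facial estimation, etc.)
    is_adult: bool      # outcome of that check

@dataclass
class Item:
    item_id: str
    categories: frozenset   # labels assigned by the platform's own classifiers

def can_view(user: User, item: Item) -> bool:
    """Age-restricted categories are shown only to users confirmed as adults."""
    if item.categories & HARMFUL_TO_CHILDREN:
        return user.age_assured and user.is_adult
    return True

def build_feed(user: User, ranked_candidates: list) -> list:
    """Screen recommender output so harmful items never reach a minor's feed."""
    return [item for item in ranked_candidates if can_view(user, item)]

# Example: a user who has not completed age assurance sees only unrestricted items.
teen = User("u1", age_assured=False, is_adult=False)
feed = build_feed(teen, [
    Item("a", frozenset({"sport"})),
    Item("b", frozenset({"self-harm"})),
])
print([i.item_id for i in feed])   # ['a']
```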
Penalties for Non-Compliance
The Online Safety Act grants Ofcom significant enforcement powers to ensure companies adhere to their new duties. Non-adherence can result in substantial financial penalties. Fines can reach up to £18 million or ten percent of the company's global qualifying revenue, whichever figure is greater. This represents a serious financial deterrent, particularly for major international corporations. Beyond fines, the legislation introduces the possibility of criminal liability for senior managers.
Executives could face prosecution and potentially imprisonment if they impede Ofcom's information requests or obstruct investigations. In the most serious cases of non-compliance or ongoing risk to users, Ofcom possesses the authority to request judicial intervention. Such an order could prevent the offending website or application from being accessible within Britain, effectively blocking the service. This ultimate sanction underscores the gravity with which the UK government views online safety obligations. Ofcom has already established a dedicated taskforce to monitor 'small but risky' platforms, indicating a readiness to use its enforcement tools swiftly against non-compliant services, regardless of size.
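In concrete terms, the fine ceiling is simply the larger of the two figures described above. The short example below, using invented revenue numbers purely for illustration, shows the arithmetic:

```python
# Worked example of the maximum-fine rule: the greater of £18 million or
# 10% of qualifying worldwide revenue. Revenue figures below are invented.
def maximum_fine_gbp(qualifying_worldwide_revenue_gbp: float) -> float:
    return max(18_000_000, 0.10 * qualifying_worldwide_revenue_gbp)

print(maximum_fine_gbp(50_000_000))      # smaller firm: the £18m floor applies
print(maximum_fine_gbp(3_000_000_000))   # large firm: 10% of revenue (£300m) applies
```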
Addressing Illegal Content Across Platforms
A fundamental pillar of the Online Safety Act requires all covered companies to demonstrate a robust commitment to removing unlawful material from their services. This duty applies broadly, encompassing user-to-user platforms and search services alike. Platforms must implement systems and processes designed proactively to minimise the risks of their services facilitating illegal activities. They also need effective mechanisms to remove unlawful items swiftly once discovered. The legislation specifies a schedule of priority offences requiring proactive measures.
This group includes the most severe harms, such as child sexual abuse material (CSAM) and terrorism content. Other targeted offences cover controlling or coercive behaviour; extreme sexual violence; encouraging or assisting suicide or self-harm; and facilitating the sale of illegal drugs or weapons. Search services specifically have duties to reduce the likelihood that users encounter illegal items through their search results. Ofcom's illegal content codes of practice provide detailed measures companies can adopt to fulfil these duties.
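As a rough illustration of what "swift removal" plumbing can look like in practice, the sketch below triages incoming reports so that the most serious categories reach reviewers first. The structure and category names are hypothetical, not drawn from Ofcom's codes of practice.

```python
# Hypothetical triage sketch, not a measure from Ofcom's codes: reports about
# the most serious priority offences are surfaced to reviewers before others.
import heapq

SEVERITY = {"csam": 0, "terrorism": 0, "coercive-control": 1, "weapons-sale": 1}
DEFAULT_SEVERITY = 2   # any other flagged content

class TakedownQueue:
    def __init__(self) -> None:
        self._heap = []
        self._count = 0   # tiebreaker preserves report order within a severity band

    def report(self, item_id: str, category: str) -> None:
        rank = SEVERITY.get(category, DEFAULT_SEVERITY)
        heapq.heappush(self._heap, (rank, self._count, item_id, category))
        self._count += 1

    def next_for_review(self):
        """Return the most serious outstanding report, or None if the queue is empty."""
        return heapq.heappop(self._heap) if self._heap else None

# Example: a CSAM report filed later still jumps ahead of an earlier, lower-severity one.
q = TakedownQueue()
q.report("post-17", "harassment")
q.report("post-42", "csam")
print(q.next_for_review())   # (0, 1, 'post-42', 'csam')
```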
New Criminal Offences Introduced
The Online Safety Act significantly updated the legal landscape by creating several new criminal offences targeting specific online harms. One key offence is 'cyber-flashing'. This criminalises sending or giving someone a photograph or film of genitals intending to cause alarm, distress, or humiliation, or for sexual gratification while being reckless as to the harm caused. The maximum penalty is two years' imprisonment. Another new offence targets the sharing of 'deepfake' pornography.
It makes it illegal to share or threaten to share an intimate photograph or film (including digitally altered 'deepfake' images) without consent and with intent to cause distress. The government has also signalled intentions to criminalise the creation of sexually explicit deepfakes, further strengthening protections. Additionally, the legislation introduced offences covering false communications intended to cause non-trivial harm, threatening communications, sending flashing images electronically to harm people with epilepsy, and encouraging or assisting serious self-harm. These new offences came into force in early 2024.
Image Credit - Freepik
Epilepsy Trolling and Self-Harm Encouragement
The legislation specifically addresses the malicious sending of flashing images electronically, sometimes termed 'epilepsy trolling'. It creates two distinct offences. The first involves sending a communication containing flashing images electronically where it is reasonably foreseeable someone with epilepsy might view it, and the sender intends to cause harm (a seizure, alarm, or distress) or is reckless about causing harm. The second offence involves showing flashing images via an electronic device to someone the perpetrator knows or suspects has epilepsy, intending to cause harm.
These offences carry a maximum penalty of five years' imprisonment. Healthcare professionals acting in their professional capacity are exempt. Another significant addition is the offence of encouraging or assisting serious self-harm. This targets acts capable of encouraging or assisting another person's serious self-harm, including through threats or pressure. Internet service providers merely providing the means of communication are not liable under this section. These provisions reflect a growing recognition of specific, severe harms facilitated through online communication.
Criticism: Is the Act Strong Enough?
Despite its broad scope, the Online Safety Act faces criticism from campaigners who believe it does not go far enough. Several advocates argue for even stricter regulations on technology companies. Some propose banning social media entirely for individuals under sixteen. The Molly Rose Foundation, established following the tragic death of fourteen-year-old Molly Russell after exposure to harmful online content, voices significant concerns. Ian Russell, Molly's father and the foundation's chairman, expressed disappointment regarding what he saw as insufficient scope within Ofcom's implementation codes, questioning if they truly fulfil Parliament's original intent.
The foundation argues that Ofcom's 'checklist' approach might allow large platforms to reduce existing safety efforts while still meeting compliance, potentially weakening protections. They, along with others like the NSPCC, have called for the government to strengthen the legislation, particularly regarding enforcement and addressing systemic failures in content moderation by major platforms. Recent reports suggesting the law might be weakened for a US trade deal drew strong condemnation from these groups.
The Encryption Conundrum
A major point of contention revolves around private messaging services and end-to-end encryption (E2EE). The child protection group NSPCC maintains the legislation provides insufficient protection in this area. They argue that E2EE services, which prevent even the service provider from reading messages, represent a significant danger for minors. E2EE can effectively blind platforms to material depicting child sexual abuse (CSAM) and grooming activities shared within private chats. Data has highlighted a dramatic rise in online grooming offences, with certain platforms frequently cited.
Campaigners point to a perceived 'loophole' in Ofcom's codes, requiring action only where "technically feasible," potentially letting encrypted services avoid robust scanning. While the legislation gives Ofcom powers potentially to force platforms to use or develop technology to scan for CSAM even in encrypted environments, the feasibility and implications remain fiercely debated. Technology companies like WhatsApp have strongly resisted measures that could compromise E2EE, arguing it fundamentally undermines user privacy and security globally, potentially creating 'backdoors' exploitable by malicious actors. This pits child safety imperatives directly against privacy rights.
Privacy and Free Speech Concerns
From an alternative perspective, privacy campaigners and civil liberties groups express significant reservations about the Online Safety Act's impact on fundamental rights. Organisations such as Big Brother Watch argue the fresh regulations endanger individual liberties of expression and privacy. They contend the legislation grants excessive power to regulate online speech, potentially leading to censorship of lawful content and giving state backing to tech companies' own restrictive terms of service. Concerns persist despite the removal of the original controversial "legal but harmful" provisions for adults. The introduction of the "false communications" offence has drawn particular criticism, especially after initial arrests during recent public disturbances. Critics argue the police applied the threshold too broadly, potentially criminalising inaccurate statements rather than intentionally harmful falsehoods, thus chilling free speech. Other groups have also warned the law could fundamentally undermine human rights online. Finding the right balance between safety and freedom remains a central challenge.
Image Credit - Freepik
Age Verification: Intrusive and Effective?
The legislation's emphasis on more rigorous processes for confirming age also fuels criticism from privacy advocates. Mandating age checks for access to certain content or services raises concerns about data security and privacy intrusion. Critics argue that digital age check systems, whether using ID documents or biometric data like facial scans, carry inherent risks. Potential issues include security breaches exposing sensitive personal data, privacy violations through data collection and profiling, errors in age estimation technology, digital exclusion for those unable or unwilling to provide verification, and potential censorship if access is wrongly denied. Open Rights Group highlights that the requirement could extend broadly to many user-to-user services, not just adult content sites, forcing widespread data collection. While Ofcom's guidance suggests various methods and stresses privacy safeguarding, the effectiveness and proportionality of mandatory age verification remain questioned, with some arguing that robust parental controls and user education offer less intrusive alternatives.
Children's Online Habits and Exposure
Understanding children's online behaviour provides crucial context for the legislation. Research indicates young people aged eight to seventeen spend between two and five hours online each day. Mobile phone ownership is nearly universal among those over twelve, and video consumption on platforms like YouTube and TikTok dominates their online activity. While roughly half of those over twelve feel being online benefits their mental wellbeing, significant risks persist. The Children's Commissioner's findings paint a concerning picture. Half of surveyed thirteen-year-olds reported encountering "hardcore, misogynistic" pornography via social media platforms. They described content relating to suicide, self-harm, and eating disorders as "prolific," and violent material as "unavoidable." Recent crime surveys showed over a quarter of 10- to 15-year-olds had seen online content depicting youth violence or drug dealing in the past year. This highlights the environment the legislation aims to reform.
Parental Controls: Tools and Limitations
Existing parental controls offer some level of protection, and their use is encouraged. The NSPCC emphasises the importance of parents talking with their children about internet safety and engaging actively with their online lives. Data suggests a significant majority of parents (two out of three) use controls to filter or limit online content. Resources are available providing step-by-step guides for setting up controls. This guidance covers managing accounts on social networks, video sites like YouTube, and popular games such as Roblox or Fortnite.
Advice extends to controls for mobile phones and home internet connections. However, these tools are not foolproof. Statistics indicate roughly twenty percent of youngsters can circumvent safeguards set by parents, highlighting their limitations. Furthermore, parental awareness might be lacking; studies show only a minority of parents know the correct minimum age for most social media platforms. This underscores the need for platform-level safety measures mandated by the legislation, complementing parental efforts.
Digital Literacy: A Crucial Complement
Beyond technical controls and platform regulations, fostering digital literacy is increasingly recognised as essential for online safety. Both parents and children need the skills and knowledge to navigate the complexities of the online world safely and critically. Safer Internet Day initiatives, for instance, aim to raise awareness and promote safer online practices. Research linked to such initiatives found that while many children were aware of the day, awareness of the Online Safety Act itself was lower among both children and parents. Educational programmes in schools play a key role.
Reports show that most children found online safety lessons useful, and the proportion rose significantly among those receiving regular instruction. Equipping young people to identify risks, understand manipulative algorithms, discern misinformation, manage their digital footprint, and know where to seek help empowers them beyond what technical filters alone can achieve. The legislation acknowledges this indirectly by requiring transparency and reporting mechanisms, but ongoing educational efforts remain paramount.
Tech Industry Response and Preparation
The technology industry has been actively responding to the impending regulations. Since the Online Safety Act received Royal Assent, major platforms have been introducing new safety features and product updates, partly in anticipation of the formal requirements. Companies such as Meta, Google, and TikTok are investing in compliance measures, including refining content moderation systems, developing age assurance technologies, and adapting algorithms. Partnerships are forming between platforms and safety tech providers; for example, companies specialising in facial age estimation are working with social media giants.
The UK's safety tech sector is growing rapidly, developing innovative solutions. However, compliance presents challenges, particularly for smaller platforms that may lack the resources of tech giants. The transition period leading up to the initial enforcement dates was crucial for alignment. Industry bodies emphasise collaboration between government, regulators, and industry to ensure the legislation is implemented effectively and proportionately, allowing space for innovation while achieving safety goals.
Image Credit - Freepik
Global Context: OSA vs. EU's DSA
The UK's Online Safety Act operates within a global trend of increasing internet regulation. Comparing it with the European Union's Digital Services Act (DSA) reveals similarities and key differences. Both aim to create safer online environments and increase platform accountability. However, the DSA generally takes a broader approach, covering issues like illegal goods, dark patterns, and intellectual property, alongside illegal content. The OSA is more narrowly focused on specific illegal and harmful content types, particularly concerning child safety.
The OSA mandates more proactive monitoring and filtering duties, especially for illegal content and material harmful to children, whereas the DSA leans more towards reactive notice-and-takedown procedures for illegal content (though very large platforms have additional risk assessment duties). The OSA categorises services based on risk and user numbers, imposing tiered obligations, similar but distinct from the DSA's thresholds for Very Large Online Platforms and Search Engines. Enforcement also differs: Ofcom enforces the OSA domestically, while the DSA relies on national coordinators and the European Commission for oversight of the largest platforms.
The Path Forward: Challenges and Expectations
As the Online Safety Act moves into full implementation, significant challenges remain. Ensuring consistent and effective enforcement across diverse platforms, from global giants to smaller niche services, will test Ofcom's resources and capabilities. The technical complexities of measures like age verification and algorithmic filtering require ongoing development and refinement to be both effective and privacy-preserving. The inherent tension between protecting users (especially children) from harm and upholding rights to free expression and privacy will continue to demand careful balancing by Ofcom and the platforms themselves.
The debate over encryption and access to private messages for law enforcement purposes remains unresolved and highly contentious. Ultimately, the legislation's success will depend not just on regulatory action but also on continued technological innovation, industry cooperation, robust public education on digital literacy, and international collaboration to address the internet's borderless characteristics. The coming years will reveal whether this landmark legislation achieves its ambitious goal of making the UK significantly safer online.