
New report signals more work needed in child online safety and privacy protections

Thursday 14 May 2026
Image: young girl gazing at a tablet (credit: Unsplash)

Some major tech giants are increasingly relying on parental controls rather than building stronger child protections by default, according to a new report published by researchers at the Digital Futures for Children centre at LSE and the 5Rights Foundation.

Key findings from Impact of Regulation on Children’s Digital Lives: Phase 2 include:

  • Evidence of ‘by default’ protections across a range of services, including gaming platforms, AI chatbots and social media.
  • The four largest platforms – Meta, Google, TikTok and Snapchat – have shifted away from protective ‘by default’ design changes and towards end-user tools such as parental controls, reversing the trend identified in earlier research.
  • Age assurance is now widely used across platforms accessed by children, but implementation remains inconsistent and largely unaudited.
  • Enforcement activity by Ofcom, the Information Commissioner’s Office and the European Commission is intensifying in 2026, but there is not yet evidence this is driving broader strategic change.
  • Significant child safety risks, including compulsive and problematic engagement and ongoing content harms, continue across many platforms despite existing regulatory measures.
  • The report calls for a fundamental shift towards a product safety model for children’s digital experiences, including mandatory pre-launch child safety testing and stronger regulation of AI chatbots.

Impact of Regulation on Children’s Digital Lives: Phase 2 examines how UK and EU regulation shaped the design and governance of platforms used by children between 2024 and 2026, including social media services, gaming platforms and AI chatbots.

The research follows an earlier 2024 report from the Digital Futures for Children centre and 5Rights Foundation, which found that legislation and regulation had driven 128 child safety and privacy changes across Meta, Google, TikTok and Snapchat between 2017 and 2024, many of them focused on ‘by default’ protections such as private account settings for teenagers.

This second phase of the study analysed 108 child safety and privacy changes introduced across 70 online platforms between 2024 and 2026. Researchers gathered evidence by reviewing platform announcements and policies and by contacting companies directly for information about the child safety measures they had introduced.

The findings show that more needs to be done to enable regulation to drive safety and privacy by design across major platforms, despite the implementation of measures including the UK Online Safety Act, the Age Appropriate Design Code and the EU Digital Services Act.

Researchers found that, for Meta, Google, TikTok and Snapchat, while ‘by default’ protections were the largest category of change between 2017 and 2024, the biggest area of change during 2024–2026 was end-user tools such as parental controls. They noted that there is little independently audited evidence about the effectiveness or uptake of many of these tools.

The report also found that age assurance measures are now widespread, with researchers identifying 12 different forms of age assurance in use, including facial age estimation. Eleven platforms still relied on self-declaration alone, which the report noted is not regarded as an effective mechanism.

Beyond the largest social media platforms, the evidence suggests that regulation is likely driving new child safety and privacy measures across a wider range of services, including video streaming, gaming and AI platforms. The report warns, however, that regulatory gaps around AI chatbots mean companies are often setting their own standards in the absence of clear guidance.

Recommendations in the report include strengthening ‘safety by design’ obligations within the Online Safety Act, mandatory pre-launch child safety testing for high-risk digital services, greater transparency and independent auditing of safety measures, expansion of the Online Safety Act to explicitly cover AI chatbots, and stronger mechanisms for collective legal action and regulatory enforcement.

Steve Wood, founder of PrivacyX Consulting, ex-ICO Deputy Commissioner and author of the report commented: “The research shows us that regulation is continuing to have an impact, including in the use of ‘by default’ measures, particularly age assurance. Problematically, it reveals that, while significant risks persist online, some major platforms have shifted from providing ‘by default’ protections to providing end-user tools, which, in turn, shifts responsibility for online safety to the child. Current regulations retain significant potential to make a major improvement in children’s lives, provided the laws, codes and guidance are further clarified, and backed by strong enforcement.”

Read Impact of Regulation on Children’s Digital Lives: Phase 2