According to the media watchdog, every child interviewed had viewed violent material online
Research from the media watchdog has found that violent online content is now “unavoidable” for children in the UK, with many first exposed to it while still in primary school.
According to the Ofcom study, every British child interviewed had viewed violent material on the internet. This ranged from videos of local school and street fights shared in group chats to explicit and extremely graphic violence, including gang-related content.
Although children were aware of even more extreme material available in the deeper recesses of the web, they had not actively sought it out themselves, the report concluded.
These findings led the NSPCC to accuse tech platforms of neglecting their duty of care to young users.
Rani Govender, a senior policy officer for child safety online at the NSPCC, said: “It is deeply concerning that children are telling us that being unintentionally exposed to violent content has become a normal part of their online lives. It is unacceptable that algorithms are continuing to push out harmful content that we know can have devastating mental and emotional consequences for young people.”
The study, conducted by the research agency Family Kids and Youth, is part of Ofcom’s preparation for its new responsibilities under the Online Safety Act, enacted last year. The act grants the regulator the authority to take action against social networks that fail to protect their users, especially children.
Gill Whitehead, Ofcom’s online safety group director, remarked, “Children should not feel that seriously harmful content – including material depicting violence or promoting self-injury – is an inevitable or unavoidable part of their lives online. Today’s research sends a powerful message to tech firms that now is the time to act so they’re ready to meet their child protection duties under new online safety laws. Later this spring, we’ll consult on how we expect the industry to ensure that children can enjoy an age-appropriate, safer online experience.”
Almost every major tech firm was named by the children and young people interviewed for Ofcom, but Snapchat and Meta’s apps Instagram and WhatsApp came up most frequently.
The report states, “Children explained how there were private, often anonymous, accounts existing solely to share violent content – most commonly local school and street fights. Nearly all of the children from this research who had interacted with these accounts reported that they were found on either Instagram or Snapchat.”
One 11-year-old girl said: “There’s peer pressure to pretend it’s funny. You feel uncomfortable on the inside, but pretend it’s funny on the outside.” Another 12-year-old girl described feeling “slightly traumatized” after being shown a video of animal cruelty: “Everyone was joking about it.”
Many older children in the study “appeared to have become desensitized to the violent content they were encountering.” Professionals also expressed particular concern about violent content normalizing violence offline, noting that children tended to laugh and joke about serious violent incidents.
On certain social networks, exposure to graphic violence originates from the top. Recently, Twitter, now known as X following its acquisition by Elon Musk, removed a graphic clip depicting sexual mutilation and cannibalism in Haiti after it had circulated widely on the platform. Musk himself had reposted the clip, tweeting it at news channel NBC in response to a report by the channel that accused him and other right-wing influencers of spreading unverified claims about the situation in the country.
Other social platforms offer tools intended to help children avoid violent content, but these provided little help in practice. Children as young as eight told the researchers they knew they could report content they did not want to see, but they did not trust the reporting system to be effective.
In private chats, they were worried that reporting would label them as “snitches,” leading to embarrassment or punishment from peers. Additionally, they did not believe that platforms would enforce meaningful consequences for those who posted violent content.
Powerful algorithmic feeds, such as those on TikTok and Instagram, added a further complication: children commonly believed that engaging with violent content in any way, even by reporting it, would make similar content more likely to be recommended to them.
Professionals in the study expressed concern that violent content was impacting children’s mental health. In a separate report released on Thursday, the children’s commissioner for England revealed that over 250,000 children and young people were awaiting mental health support after being referred to NHS services. This means that one in every 50 children in England is on the waiting list. For those who accessed support, the average waiting time was 35 days. However, in the last year, nearly 40,000 children experienced a wait of more than two years.
A spokesperson for Snapchat stated, “There is absolutely no place for violent content or threatening behavior on Snapchat. When we find this type of content, we move quickly to remove it and take appropriate action on the offending account. We have easy-to-use, confidential, in-app reporting tools and work with the police to support their investigations. We support the aims of the Online Safety Act to help protect people from online harms and continue to engage constructively with Ofcom on the act’s implementation.”
Meta has been contacted for comment. X declined to comment.