The Online Safety Bill has been a work in progress for several years. In the most recent revision, Culture Secretary Michelle Donelan axed efforts to regulate legal but harmful content.
So what does this mean and why does it matter?
I’m not a legal expert, but I can see the dilemma in defining what is ‘harmful’. Sometimes what is harmful only becomes illegal over time, because those affected campaign for it to be the case. I’m thinking here of ‘harmful’ behaviours such as marital rape, which only became a crime in 1991, or ‘upskirting’, which became a crime in 2019. Until then, these acts were harmful to those impacted, but not illegal. And this is what worries me about the harm children and adults are exposed to online: harm which is not illegal is harmful nonetheless. As I go home on public transport, it is perfectly legal for a stranger to take a photograph of me, sit unpleasantly close to me or try to engage me in conversation despite my clear signal that I am uncomfortable. Is it harmful? I guess it depends on how it makes me feel. And critics would likely argue that I’m being overly sensitive and that this is just part of life.
Yet when this kind of behaviour moves online, people become far more disinhibited: they take greater risks and are more inclined to say things they wouldn’t say in person. One only has to look at a conversational thread on any major social media platform to see how quickly unpleasant exchanges escalate. Some will argue that this is ‘contributing to the debate’; I would argue that it creates echo chambers with very little debate. Factions and groupings emerge, gathering momentum to make a point, often with little interest in the perspectives of others. This is in contrast to debate in its true sense, which centres on discourse and allows new ideas and philosophies to emerge. Perhaps it is too early to know what new philosophies are emerging in this largely unmonitored, unregulated space.
My experience as a psychologist has shown me that children are indeed negatively affected by activity that takes place online. Children have perhaps always taunted each other in the playground or compared themselves to others, yet online this has escalated to unimaginable levels: trolling, body shaming, deepfakes, cyberbullying and exposure to content around self-harm and suicide have grabbed headlines over the years and deeply affected so many families. The harm children are exposed to can be incessant, even when they are in the safety of their physical homes. The NSPCC, drawing on Home Office data for England and Wales in 2021/22, estimated that more than 13,000 online child sex offences were recorded over the summer and that more than 100 grooming offences were being recorded daily. This hardly fosters a sense of online safety, and it raises the question of how the Online Safety Bill will require technology companies to safeguard children.
My involvement as an Advisory Board Member and Lead Psychologist with Hidden Strength stems from a desire to create a safe online space for children and young people. Technology can be used to create safe spaces and a sense of community, and to share credible, useful information that benefits many. Hidden Strength requires its users to complete a robust age verification process, a technology only recently embraced by Instagram, and a measure I hope becomes integral for all technology firms that seek to encourage children to use their platforms. In addition, data is of paramount importance in understanding harmful content and making decisions about it. Harmful content is not easy to define, I agree, yet with the right to open expression comes an even greater responsibility to consider the impact this freedom has upon the most vulnerable groups in our society. I’m not sure the Online Safety Bill goes far enough; only time will tell.