The recent racist abuse directed at black England football players in the aftermath of the Euro 2020 finals was completely unacceptable, and has rightly amplified a national outcry for more decisive action against online hate.
The episode points to a worrying social schism, which has steadily grown over the last decade, in line with populist politics, and has manifested itself as abhorrent racism. The recent tournament merely shone a spotlight on something foul that has been bubbling beneath the surface for years.
During the pandemic, Report Harmful Content saw a 225 per cent rise in reports involving hate speech, a trend that correlates with other NGOs' state-of-the-nation reports indicating that the shift to online life has, to some extent, exacerbated the polarisation and segregation of communities.
In response, and under immense public scrutiny, the government has promised its Online Safety Bill will strongly encourage social media companies to take action when these nasty, offensive and personally damaging incidents occur.
Whilst I hope this legislation will make the industry more accountable, it is by no means a quick fix; everyone has a responsibility to counter hate online. The Law Commission's review of hate crime laws will be instrumental in this, and once its proposals are released, I hope the government will look on them favourably.
There is always more that social media firms themselves can do in responding to harmful content, but it can also be easy to rush to blame them outright. It bears remembering that this is a behavioural problem, and there is obviously a much deeper cultural issue that needs to be addressed. To combat this horrible behaviour, platforms must adopt a zero-tolerance approach to this type of activity now.
It takes a combination of agencies, laws, and policies to effect change. For example, online hate against the English football players has sparked discussion around the use of a photo ID to set up any social media account.
On paper, this may seem like a good idea, as it would make everyone more accountable for what they say. In reality, though, it is unlikely to happen because it conflicts with accessibility and privacy rights: not everyone has, or can share, this type of ID, so it could leave thousands of people unable to hold accounts.
Another suggestion being made is lifelong bans. This is a good idea in principle, and the industry should be banning these people from their platforms, but it will not stop someone from using a VPN and creating a new fake account.
I think the key might be in AI detecting language before a post is even uploaded. Banning the use of racially offensive and derogatory words on any platform seems like a simple win. However, the conflict here is around freedom of expression and the subjective nature of harm online.
If this were to work, we would need to be sensitive to cases where certain terms have been re-appropriated by communities who want to use those words and have the right to do so. We must therefore tread carefully so we do not silence others in our zeal to deliver an overarching solution to this problem.
I worry not enough serious action is currently being taken to stamp out this regressive and revolting behaviour, and the lacklustre reaction to the Euro 2020 incident perfectly highlights this.
Looking to society’s future, what kind of message does this send to our impressionable younger generation, who spend more time than ever before online and are at risk of being heavily exposed to this harmful online content?
Although social media platforms employ AI to detect this kind of content, they also have to rely on users reporting content that breaches guidelines. This brings us back to the fact that this is a behavioural problem, not a technological one, and one that requires a human solution.
Monitoring for harmful content is something everyone can take responsibility for, but currently not enough people are aware that they can be part of the solution, what they should be looking for, or how to report harmful content when they find it.
It is everyone's responsibility to address racism, and there have been brilliant initiatives working on the bystander approach, in which the ethos of not tolerating hate is instilled in communities. This could be as simple as calling out racist comments presented as banter.
No one should have to tolerate hate crime, and, in the same way we would report this offline, I would encourage anyone who encounters hate online to report it to the platform and relevant law enforcement body.
Kathryn Tremlett is the harmful content manager at South West Grid for Learning
For more info on how to report hate crime see the Report Harmful Content website.