Anti-vaxxers ‘use carrot emoji’ on Facebook to avoid detection by moderators

File: Facebook app on a phone (PA Wire)

Anti-vaxxers are using the carrot emoji as a code on Facebook to avoid detection by moderators, according to a report.

Facebook groups in which anti-vaccine views are espoused are using the emoji instead of the word “vaccine” in an effort to beat Facebook’s moderation algorithms.
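As a toy illustration of why such a substitution can work (a simplified sketch, not Facebook’s actual moderation system, whose models and blocklists are not public), a naive keyword filter matches only the literal words it has been given:

```python
# Toy keyword filter -- a simplified stand-in, NOT Facebook's actual
# moderation system. The blocked terms here are hypothetical examples.
BLOCKED_TERMS = {"vaccine", "covid", "booster"}

def flags_post(text: str) -> bool:
    """Return True if the post contains any blocked keyword."""
    words = text.lower().split()
    return any(term in words for term in BLOCKED_TERMS)

print(flags_post("My aunt was fine until the vaccine"))  # True - caught
print(flags_post("My aunt was fine until the 🥕"))        # False - the emoji slips through
```

Because the carrot emoji appears nowhere in the filter’s term list, the second post passes unflagged even though a human reader understands the substitution immediately.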

According to a BBC investigation, these groups are being used to share unverified claims of people being either injured or killed by vaccines.

One of the groups had more than 250,000 members, who were instructed to use “code words for everything” and never to use the words “Covid”, “vaccine” or “booster”.

Marc Owen-Jones, a disinformation researcher and associate professor at Hamad Bin Khalifa University in Qatar, told the BBC he had been invited to join the group.

“It was people giving accounts of relatives who had died shortly after having the Covid-19 vaccine,” he said.

“But instead of using the words ‘Covid-19’ or ‘vaccine’, they were using emojis of carrots.

“Initially I was a little confused. And then it clicked: it was being used as a way of evading, or apparently evading, Facebook’s fake news detection algorithms.”

The NHS advises that vaccines are both safe and effective, and that side effects of vaccination are typically very mild and short-lived.

The Standard contacted Meta, Facebook’s parent company, for comment but did not receive a response.

But a spokesperson told the BBC: “We have removed this group for violating our harmful misinformation policies and will review any other similar content in line with this policy.

“We continue to work closely with public health experts and the UK government to further tackle Covid vaccine misinformation.”

Last year, research published by the Oxford Internet Institute found that algorithms on social media sites often cannot recognise abusive or harmful use of emojis.

“Despite having an impressive grasp of how language works, AI language models have seen very little emoji. They are trained on corpora of books, articles and websites, even the entirety of English Wikipedia, but these texts rarely feature emoji,” said researcher Hannah Rose Kirk.

“The lack of emoji in training datasets causes a model to err when faced with real-world social media data, either by missing hateful emoji content (false negatives) or by incorrectly flagging innocuous uses (false positives).”
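A minimal sketch of the gap Kirk describes, using a hypothetical emoji-free vocabulary rather than any real model’s: a system whose vocabulary was built from text that rarely contains emoji collapses them into a generic “unknown” token, discarding exactly the signal a moderation classifier would need.

```python
# Minimal sketch of the out-of-vocabulary problem Kirk describes.
# The vocabulary below is hypothetical; real models use far larger
# vocabularies and subword tokenizers, but the gap is the same in kind.
VOCAB = {"my", "aunt", "was", "fine", "until", "the", "vaccine", "<unk>"}

def tokenize(text: str) -> list[str]:
    """Map each word to itself if it is in the vocabulary, else to <unk>."""
    return [w if w in VOCAB else "<unk>" for w in text.lower().split()]

print(tokenize("my aunt was fine until the vaccine"))
# ['my', 'aunt', 'was', 'fine', 'until', 'the', 'vaccine']
print(tokenize("my aunt was fine until the 🥕"))
# ['my', 'aunt', 'was', 'fine', 'until', 'the', '<unk>'] - the emoji's meaning is lost
```

To a model trained this way, the carrot emoji is indistinguishable from any other unseen symbol, which is one reason emoji-based code words can evade automated detection.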