Facebook rule-breakers to receive additional warnings before being ‘thrown in Facebook jail’

Facebook CEO Mark Zuckerberg speaks during a press conference in Paris (Bertrand Guay / AFP via Getty Images)

Meta is changing how it penalises users who fall foul of Facebook’s community guidelines.

Under the new policy, users who break minor rules will receive more warnings before the company metes out a temporary suspension or, as Meta puts it, before they are thrown in “Facebook jail”.

Meta enforces its community guidelines, which determine what you can and can’t post on Facebook and Instagram, by removing content and issuing strikes to violators. While the company immediately pulls posts that infringe its rules, it also reserves the right to apply a strike to an account based on the severity of the content, the context in which it was shared, and when it was posted.

Those who break stricter rules prohibiting the sharing of child exploitation or terrorist content, among other severe material, will still see their accounts disabled.

Under the new policy, Meta will wait until a user has accrued seven strikes before restricting their account from creating content for one day, a ban that covers posting, commenting, and creating a Page. Previously, it would toss a user in “Facebook jail” for one day after just two strikes.

Meta will ramp up the restrictions for every additional strike after the first seven. Eight strikes will land you in Facebook jail for three days, up from three strikes previously. Nine strikes will prevent you from posting for seven days, compared to four strikes in the past. Finally, 10 or more strikes will see you suspended for 30 days, up from five or more strikes previously.

Before you reach the threshold for the time-based penalties, your first strike brings a warning, while strikes two to six lock certain features, such as posting in groups, for a limited period.
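The tiered system described above can be summarised as a simple lookup from strike count to penalty. The sketch below is purely illustrative, based on the thresholds reported in this article; the function name and penalty labels are hypothetical and are not Meta’s actual implementation.

```python
# Illustrative sketch of Facebook's new strike ladder, as described
# in this article. Thresholds come from Meta's announcement; the
# function and labels are hypothetical, not Meta's real code.

def penalty_for(strikes: int) -> str:
    """Return the penalty tier for a given strike count under the new policy."""
    if strikes <= 0:
        return "no action"
    if strikes == 1:
        return "warning"            # first strike: warning only
    if strikes <= 6:
        return "feature limits"     # e.g. posting in groups locked for a period
    if strikes == 7:
        return "1-day restriction"  # no posting, commenting, or Page creation
    if strikes == 8:
        return "3-day restriction"
    if strikes == 9:
        return "7-day restriction"
    return "30-day restriction"     # 10 or more strikes
```

Under the previous system, by contrast, the same 1-, 3-, 7-, and 30-day tiers kicked in at just two, three, four, and five strikes respectively.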

Meta updated its policy based on feedback from its Oversight Board, an independent body that oversees appeals to its content moderation decisions. The group, created and funded by Meta, is composed of legal experts, human-rights scholars, and former journalists and politicians.

In its most high-profile case, the Oversight Board previously upheld Meta’s ban on Donald Trump for inciting violence as part of the US Capitol riot in 2021. But its recommendations also forced Meta to change the former president’s punishment from an indefinite ban to a two-year suspension. Trump’s accounts were restored earlier this year after the restriction came to an end, and ahead of the 2024 US elections.

The changes arrive as Meta undergoes a major overhaul of its biggest apps in the glare of increased regulatory oversight of social media platforms. On Sunday (February 19), Meta CEO Mark Zuckerberg announced that the company would allow users to pay to be verified on Instagram and Facebook, starting in Australia and New Zealand. Twitter has introduced a similar subscription service under the stewardship of CEO Elon Musk.

Meta and other online platforms that host user-generated content also face stricter punishments as a result of a new bill currently making its way through Parliament. The long-gestating Online Safety Bill is designed to protect users from harmful content. Companies face large fines and, in the most extreme cases, executives could be jailed if they are found to have repeatedly breached the law.

Across the pond, the US Supreme Court is considering two legal challenges that could alter Section 230 of the Communications Decency Act, an almost 30-year-old law that protects internet platforms from liability for things people and third parties say or do on them.

Meta’s new policy follows a similar change introduced by its rival TikTok, with both companies insisting that educating rule-breakers can prevent repeat offences.

“Our analysis has found that nearly 80 per cent of users with a low number of strikes do not go on to violate our policies again in the next 60 days,” Monika Bickert, Meta’s vice president of content policy, said in a statement. “This means that most people respond well to a warning and explanation since they don’t want to violate our policies.”

Bickert also acknowledged that Meta’s original system may have been too heavy-handed, and contained some blind spots that missed the nuances and context of some posts deemed offensive.

“Our previous system resorted quickly to long penalties, such as a 30-day block on a person’s ability to create content,” she said. “These long blocks were frustrating for well-intentioned people who had made mistakes, and they did little to help those people understand our policies.”