Twitter will prompt users to reconsider sending offensive messages, as it tries to clean up conversations on the platform.
The company announced on Tuesday that it would be “running a limited experiment on iOS” in which users can revise a reply before it is published if the system detects that it contains language that could be harmful.
In an interview with Reuters, Sunita Saligram, Twitter’s global head of site policy for trust and safety, said: “We’re trying to encourage people to rethink their behavior and rethink their language before posting because they often are in the heat of the moment and they might say something they regret.”
Twitter’s policies do not allow users to target individuals with slurs, racist or sexist tropes, or degrading content, but the company has been criticised for allowing such content to remain on the platform.
The company took action against almost 396,000 accounts under its abuse policies and more than 584,000 accounts under its hateful conduct policies between January and June of last year, according to its transparency report.
“When things get heated, you may say things you don't mean,” Twitter said in a statement posted by its @TwitterSupport account on 5 May.
“To let you rethink a reply, we’re running a limited experiment on iOS with a prompt that gives you the option to revise your reply before it’s published if it uses language that could be harmful.”
Last month, the company banned tweets supporting 5G conspiracy theories that encourage vandalism.
Twitter said it would remove tweets linking 5G with the coronavirus pandemic if they "incite people to action".
It updated its policy to include tweets that could cause mass panic or unrest.
In March, the national sex abuse inquiry said that tech giants should be forced to screen every video and image before it is posted to block child abuse material appearing on their sites.
The Independent Inquiry into Child Sexual Abuse (IICSA) called on the Government to step in to make social media and search companies deploy automatic scanning software and databases that catch abuse images before they are uploaded.
The recommendation came as IICSA said its investigation into online abuse found tech firms' actions often appeared to be motivated more by protecting their own reputations than by keeping children safe from paedophiles.