Twitter tests a prompt that tells you to rethink offensive replies.
Twitter Inc. will test prompting users to reconsider “offensive or hurtful language” before they post replies.
This move is an effort by the social media platform to clean up vulgar conversations.
When users hit “send” on a reply, Twitter will alert them if the words in their tweet are similar to those in reported posts and ask them to revise it before posting.
For some time now, Twitter has struggled to control hateful and abusive content on its platform.
Twitter’s global head of site policy for trust and safety, Sunita Saligram, said, “We’re trying to encourage people to rethink their behavior and rethink their language before posting because they often are in the heat of the moment and they might say something they regret.”
Twitter’s policies prohibit targeting individuals with degrading content, slurs, or racist or sexist tropes.
According to Twitter’s transparency report, action was taken against almost 396,000 accounts under its abuse policies and over 584,000 accounts under its hateful conduct policies between January and June last year.
Twitter said the experiment, the first of its kind, will last only a few weeks. It will run worldwide, but only for English-language tweets.
Similarly, Instagram rolled out a test that would “nudge” users with a warning before they post a potentially offensive comment. The company later offered an update on the effort, saying, “Results have been promising, and we’ve found that these types of nudges can encourage people to reconsider their words when given a chance.”
Twitter, Inc. is an online social networking and microblogging service that lets users follow other users’ activity and read and post tweets. The company, founded in 2006, serves customers worldwide.