A new feature on the social media site will protect users from hate speech by filtering offensive words, phrases and emojis
Facebook’s Instagram has introduced a new tool to protect users from abusive messages on its platform.
When turned on, the feature filters direct messages (DMs) containing offensive or abusive words, phrases or emojis and hides them from the recipient.
The tool specifically targets DM requests from accounts a user does not follow, which, according to Instagram, is where the majority of abusive messages originate.
Recognising that different words are offensive to different people, Instagram will also allow users to create their own personalised list of the words, phrases and emojis they do not want to see in their inbox.
If a user decides to open a filtered message, its text remains covered until they tap to reveal it; they can then accept the request, or delete or report the message.
The roll-out is an extension of Instagram’s comment filters, launched in 2018, which hide offensive comments and give users control over which terms others can use when commenting.
“This new feature is designed to help protect you from potentially offensive or abusive DM requests, while also respecting your privacy,” the social media site said in a statement.
“All DM requests that contain these offensive words, phrases or emojis – whether from your custom list or the predefined list – will be automatically filtered into a separate hidden request folder.”
The new measures build on the social media platform’s decision to impose stricter penalties on users who send abusive messages.
In February, the platform introduced sanctions on users who send abusive DMs, banning them from sending messages for a period of time.
Repeat offenders will have their accounts disabled, and new accounts created to circumvent messaging restrictions will also be closed down.