The internet is home to many wonderful things: instant messaging, on-demand streaming, online communities and endless cat videos. The internet brought us social media and a new method of instant communication where users can connect with friends, family and others with similar interests in seconds.
However, some internet users have used social media to communicate negatively by cyberbullying. Introducing: the internet troll.
Internet trolls are anonymous internet users who deliberately provoke others online with offensive posts and bullying. They are experts at making other users angry, frustrated and upset. The potential anonymity of social networking sites lets trolls relentlessly bully others without facing any real-life consequences.
Internet trolls are everywhere and they have no shame. Trolls will happily create multiple accounts to ensure their unwarranted opinions are heard multiple times. In recent months, many celebrities and football stars have been targeted with foul online racism, most noticeably on Instagram. The platform was unable to prevent these hateful comments and was called out by victims and allies for not doing enough to protect its users.
The recently drafted Online Safety Bill imposes a duty of care on social media companies. The bill gives Ofcom the power to block access to sites and to fine companies that fail to protect users from harmful content. While the bill was created to protect users, many have been vocal about its implications for free speech, specifically around comments that may be legal but harmful. However, a hateful comment is still harmful even if it is legal; free speech doesn’t mean hate speech.
So what are social media platforms doing to protect users from these bullies?
What Twitter is doing to protect its users
“Leave This Conversation” button
Twitter’s latest interaction-management proposal is a new “Leave This Conversation” option which allows users to step away from negative discussions. Although currently in the proposal stage, the feature would allow users to untag themselves from a discussion and prevent themselves from being mentioned again within it. Users would not be given any further notifications about that specific thread.
Twitter is the home of cancel culture, so the platform is giving users the power to avoid pile-ons sparked by a single tweet and to mute discussions causing them distress.
Twitter is experimenting with an “Unmention yourself” option. Users will be given an “Unmention yourself from this conversation” option in a tweet’s drop-down menu. Clicking this option unlinks your Twitter handle from the chat and the original tweet; the text remains, but users will no longer be able to reach your account through the tweet.
A similar option is already available for images on Twitter, where users can “Remove tag from photo”, but unmentioning goes further, giving users the chance to distance themselves from direct association with specific Twitter discussions.
Furthermore, within the unmention yourself options, users can:
- Prevent those that don’t follow them from mentioning them in tweets
- Proactively control and customise who can mention them for a dedicated amount of time
- Control mass mentions by pausing mentions for a certain amount of time.
Muting words, hashtags and accounts
Twitter gives users the ability to mute words, phrases, hashtags and accounts on the platform. Muting removes these tweets from notifications, push notifications, SMS, email notifications, timelines and tweet replies.
Found within “Privacy and safety”, users can view and edit their “Muted” words and accounts. In addition, under the “Notifications” tab, users can apply advanced filters to mute notifications from those:
- You don’t follow
- Who don’t follow you
- With a new account
- Who have a default profile picture
- Who haven’t confirmed their email
- Who haven’t confirmed their phone number
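The advanced filters above amount to a checklist applied to each account that triggers a notification. A minimal sketch of how such filtering might work is below; all names (`Account`, `should_mute_notification`, the filter keys) are illustrative assumptions, not Twitter’s actual API.

```python
# Hypothetical model of Twitter-style advanced notification filters.
from dataclasses import dataclass


@dataclass
class Account:
    follows_me: bool
    i_follow: bool
    is_new: bool
    has_default_avatar: bool
    email_confirmed: bool
    phone_confirmed: bool


def should_mute_notification(sender: Account, enabled_filters: set) -> bool:
    """Return True if any enabled filter matches the sending account."""
    checks = {
        "you_dont_follow": not sender.i_follow,
        "doesnt_follow_you": not sender.follows_me,
        "new_account": sender.is_new,
        "default_avatar": sender.has_default_avatar,
        "unconfirmed_email": not sender.email_confirmed,
        "unconfirmed_phone": not sender.phone_confirmed,
    }
    return any(checks[f] for f in enabled_filters if f in checks)
```

Each filter is independent, so enabling several simply widens the net: a notification is muted as soon as one check matches.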
Twitter has recently rolled out an updated version of its potentially offensive prompts, which uses an improved detection algorithm. When a tweet or reply is potentially harmful, users are presented with a pop-up that urges them to review it before publishing.
The pop-up highlights the offensive words used and lets users tweet, edit or delete the post. The updated algorithm considers the relationship between author and replier and includes better detection of strong language. It also gives users the chance to offer direct feedback in case the platform got something wrong.
What Instagram is doing to protect its users
Instagram is a popular platform for internet trolls, who are often found in comment sections. To combat on-platform abuse, Instagram has introduced Limits, an option found within the Privacy settings that enables users to limit unwanted comments and messages from selected groups.
Instagram suggests groups of accounts you may want to limit based on detected activity. Interactions from these users are then hidden unless you manually choose to see them. You can limit interactions from accounts that don’t follow you or from new followers, which can help reduce the impact of trolls or users jumping on the “cancel culture” train.
Instagram’s hidden words feature directly combats hate speech on the platform. The new feature allows users to filter offensive words, phrases and emojis in comments and direct message requests. Users are given different options to filter hate speech.
The “Hide comments” option is turned on automatically and prevents generic offensive content being shown on your profile to you and your followers. Users can opt to switch on “Hide more comments” and “Hide message requests”, which moves potentially hateful messages to the hidden requests folder.
Users are also able to create a custom word list. They can add specific words, phrases and emojis to these lists and request that they are hidden from comments or messages. The personalisation of hidden words means users are able to protect themselves from harmful messages that don’t break Instagram’s speech rules.
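At its core, a custom word list like this is a match-and-hide rule: if a comment or message request contains any term from the user’s list, it goes to the hidden folder. A minimal sketch is below; the function name and case-insensitive substring matching are illustrative assumptions, not Instagram’s actual implementation.

```python
# Hypothetical custom hidden-words filter.
def is_hidden(text: str, hidden_terms: list) -> bool:
    """Return True if the comment or message contains any term
    (word, phrase or emoji) from the user's custom list."""
    lowered = text.lower()
    return any(term.lower() in lowered for term in hidden_terms)


# Matching content is routed to the hidden folder; everything else
# is shown as normal.
hidden_terms = ["you're a clown", "🤡"]
```

Because the list is user-defined, this catches messages that are hurtful to a specific person even when they don’t break the platform’s general speech rules.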
Instagram introduced a comment warning feature a couple of years ago. The feature presents users with a warning before posting a potentially hateful and harmful comment in the hopes of appealing to the trolls’ humanity and preventing the comment ever being posted.
If this warning fails, Instagram has a feature that automatically hides comments similar to previously reported content. Rather than removing the comments completely, they are moved into a folder that is accessible by clicking “View Hidden Comments”. Users can reinstate comments if needed, but the platform wanted to be transparent about the types of comments it hides.
In a bid to protect underage users and children on the platform, Instagram makes under 16s’ accounts private by default. This means only approved followers can see posts, like and comment. Under 16s with pre-existing accounts will be sent a notification highlighting the benefits of switching to a private account.
What Facebook is doing to protect its users
Hide posts and accounts
Facebook users can request to hide content from specific accounts, groups and general posts. Hiding posts stops all posts from that person or about a certain topic from appearing on a user’s timeline. The content will be hidden until the user decides to unhide the content.
Facebook has a strike system that monitors the number of violations an account holds for posting content that goes against the Facebook Community Standards. Depending on which policy the content goes against, previous violation history and the number of strikes an account has, accounts can be restricted or disabled to prevent further posting.
Posts that go against Community Standards will be removed by Facebook. The user who posted the content will be informed of the removal and given the reason why it was removed. Strikes depend on the severity of the content and the context it was posted under. However, all strikes expire after one year.
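The strike system described above can be thought of as a rolling count of recent violations. The sketch below assumes a fixed strike threshold for simplicity; in practice Facebook’s restrictions vary by policy, severity and context, and the function names here are hypothetical.

```python
# Hypothetical strike system with one-year expiry.
from datetime import datetime, timedelta

STRIKE_LIFETIME = timedelta(days=365)  # "all strikes expire after one year"


def active_strikes(strike_dates, now):
    """Keep only strikes issued within the last year."""
    return [d for d in strike_dates if now - d < STRIKE_LIFETIME]


def is_restricted(strike_dates, now, threshold=3):
    """Restrict posting once active strikes reach the threshold.

    The threshold of 3 is illustrative, not Facebook's actual rule.
    """
    return len(active_strikes(strike_dates, now)) >= threshold
```

The key point the expiry rule captures is that an account with a clean recent history is treated differently from one accumulating fresh violations, even if their lifetime totals are the same.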
In order to create a safer online environment, Facebook includes a warning screen over potentially sensitive content. This includes violent or graphic imagery, posts that contain descriptions of bullying or harassment if shared to raise awareness and posts related to suicide.
Facebook also issues a warning screen if a post shares false or misleading information.
Facebook has a strategy to remove, reduce and inform on misinformation on the platform. The platform does remove misinformation but in limited cases. These include when:
- Misinformation has the potential to cause imminent physical harm
- Misinformation has the potential to interfere with or suppress voting
- Videos are manipulated to mislead the average person to believe a video said words they did not say
What TikTok is doing to protect its users
TikTok has given creators the chance to “Filter All Comments”, where they can decide which comments will appear on their videos. When enabled, comments cannot be seen in the comment section unless the creator has approved them with the new comment management tool.
The filtered comments feature builds on the existing controls that allow creators to filter spam and offensive comments and specific keywords.
For users aged between 13 and 15, the comments section is limited to “Friends” or “No one”.
TikTok also shows a pop-up that urges users to reconsider before posting a comment containing words that may be inappropriate or unkind. The prompt includes a reminder about TikTok’s Community Guidelines and allows users to edit their comments before sharing.
Limited Stitch and Duets
TikTok has limited the Stitch and Duets feature for younger TikTok users. For those aged between 13 and 15, the Stitch and Duet feature is completely removed, which limits who the younger users can interact with on the platform.
For those aged between 16 and 17, the default Duet and Stitch setting is set to “Friends”.
Users aged between 13 and 15 are unable to make their videos downloadable. For those aged between 16 and 17, the default for downloads will be set to Off, but they can enable this if they choose to. A pop-up box will reconfirm their choice to make their videos downloadable, and remind users their videos could be shared to other platforms.
What YouTube is doing to protect its users
YouTube has a feature that encourages users to reconsider hateful and offensive remarks before posting. The feature appears as someone is about to post an offensive comment and warns the user to “Keep comments respectful”. The pop-up then urges users to edit their comment.
YouTube is testing giving creators the chance to hide offensive and hurtful comments that have been held for review.
YouTube Studio users can choose to auto-moderate inappropriate comments, which are held for manual review where creators can approve, hide or report them.
The video platform is currently developing an AI-powered system that should be able to detect offensive content based on content that is repeatedly flagged by users.