Instagram Reduces the Visibility of Potentially Harmful Content


Instagram has taken a new step to make potentially harmful content less visible on its app.

The Meta-owned social network said the ranking algorithm behind direct messages (DMs), the main feed, and Stories will no longer prioritize content that may contain bullying, hate speech, or incitement to violence.



While Instagram's rules already prohibit much of this kind of content, the change could affect borderline posts, or content that has not yet reached the app's moderators.


"To understand if something violates our rules, we will look at things, for example, descriptions that violate our rules," explained Instagram in its latest update announcement as reported by us from Engadget.


Until now, Instagram has tried to hide potentially objectionable content from publicly visible parts of the app, such as the Explore tab, but it hasn't changed how that content appears to users who follow the accounts posting it.



The latest change means that posts deemed "similar" to previously removed ones will be far less visible, even to followers.


A Meta spokesperson confirmed that potentially harmful posts will still be removed if they are found to violate the company's community guidelines.


This update follows a similar change in 2020, when Instagram began demoting accounts that shared misinformation debunked by fact-checkers. Unlike that change, however, Instagram says the new policy will only affect individual posts, not the account as a whole.


Additionally, Instagram says it will now take each user's reporting history into account when ranking their feed.


"If our system predicts you're likely to report a post based on your reporting content history, we'll show fewer posts in your feed," says Instagram.
