When Instagram first launched, it was known for the filters that users could apply to their photos before sharing them with other members. The company was acquired by Facebook in 2012 for a reported $1 billion. Instagram is not as well known as Facebook for inflaming subscribers' passions by spreading fake news and political propaganda. Still, a report from Poynter published today (via The Verge) says that steps are being taken to prevent the dissemination of false posts on the platform: Instagram is currently running tests with fact checkers.
Facebook currently partners with 52 firms to conduct fact checks. If content in a particular Facebook post is shown to be false by one of the fact checkers, the post's distribution through users' news feeds is reduced. Not surprisingly, Instagram's policy is basically the same. Stephanie Otway, a spokesperson for the app, says, "Our approach to misinformation is the same as Facebook’s — when we find misinfo, rather than remove it, we’ll reduce its distribution." That means flagged posts will be removed from the Explore tab and hashtag result pages, but they will stay up on the author's page. This limits the readership of such polarizing posts to those who have chosen to follow the accounts that disseminate them.
But there is a big difference between the two sites. The content on Instagram is not nearly as news-oriented as it is on Facebook, especially since the platform doesn't allow hyperlinks inside captions or member comments. Thus, the hysteria is more subdued than on Facebook, where a single lie can whip millions of members into a foaming-at-the-mouth frenzy. So unlike on Facebook, fake photos won't be labeled, and no warning will be shown to Instagram members who want to share these images.
Facebook started using fact checkers for its flagship site back in 2016.