Facebook currently partners with 52 firms to conduct fact checks. If a fact checker proves content in a Facebook post to be false, the post's distribution through users' news feeds is reduced. Not surprisingly, Instagram's policy is essentially the same. Stephanie Otway, a spokesperson for the app, says, "Our approach to misinformation is the same as Facebook’s — when we find misinfo, rather than remove it, we’ll reduce its distribution." That means flagged posts will be removed from the Explore tab and hashtag result pages, but they will stay up on the author's page. That limits the readership of these polarizing posts to those who have chosen to follow the accounts that disseminate such information.
But there is a big difference between the two sites. Content on Instagram is far less news-oriented than on Facebook, especially since the platform doesn't allow hyperlinks in captions or comments. The hysteria is thus more subdued than on Facebook, where a single lie can whip millions of members into a frenzy. So unlike on Facebook, fake photos won't be labeled, and Instagram members who want to share these images will see no warning.
"We all know any kind of images and pictures are a main driver of misinformation in any platform. Alerting those who share (false posts) like they do on Facebook would be best. But perhaps it is only the beginning of their actions there, I suppose. Even though there are plenty of problems regarding misinformation inside Facebook’s many platforms, they are still the ones who are taking the combat of misinformation more seriously."-Tai Nalon, Director Aos Fatos
Facebook began using fact checkers on its flagship site back in 2016.