Twitter first introduced the image cropping feature back in 2018, saying it was meant to highlight "salient" areas of an image by using neural network algorithms to predict where someone is likely to look. As it turns out, this feature can dish out "interesting" results: many users pointed out that the AI cropping tends to favor lighter-skinned people, regardless of the original framing. The scandal gained momentum, and users conducted many new "experiments," testing the algorithm with a wide variety of images, including cartoon characters and animals.
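The saliency-based cropping described above can be sketched roughly as follows. This is a toy illustration, not Twitter's actual model: the real system uses a trained neural network to score saliency, while this stand-in simply treats high-contrast pixels as salient and centers a fixed-size crop on the strongest one.

```python
import numpy as np

def saliency_map(image):
    """Toy saliency score: absolute deviation from the image mean.
    (Twitter's real model is a trained neural network; this heuristic
    merely highlights high-contrast regions for illustration.)"""
    return np.abs(image - image.mean())

def crop_to_salient(image, crop_h, crop_w):
    """Center a crop_h x crop_w window on the most salient pixel,
    clamping the window so it stays inside the image bounds."""
    sal = saliency_map(image)
    y, x = np.unravel_index(np.argmax(sal), sal.shape)
    top = min(max(y - crop_h // 2, 0), image.shape[0] - crop_h)
    left = min(max(x - crop_w // 2, 0), image.shape[1] - crop_w)
    return image[top:top + crop_h, left:left + crop_w]

# A mostly dark image with one bright patch: the crop follows the patch,
# not the geometric center of the frame.
img = np.zeros((100, 200))
img[20:30, 150:160] = 1.0
crop = crop_to_salient(img, 50, 50)
```

The controversy arises exactly because the crop follows whatever the saliency model scores highest, so any bias in that model directly decides who stays in the preview.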
Twitter acknowledged the issue in a new blog post, saying: "While our analyses to date haven’t shown racial or gender bias, we recognize that the way we automatically crop photos means there is a potential for harm. We should’ve done a better job of anticipating this possibility when we were first designing and building this product." The company is working on a fix that will bring "what you see is what you get" principles to the design, meaning quite simply: the photo you see in the Tweet composer is what it will look like in the Tweet.
There's no information on when these changes will arrive, but Twitter says it is committed to giving people more choices for image cropping and previewing in order to reduce the risk of harm.