On the internet, social media companies go to extreme lengths to silence automated bots, trolls, and other ugly byproducts of the digital era; otherwise, the quality of their online communities suffers. It’s relatively easy for companies like Twitter and Instagram to detect obviously spammy accounts and to automatically disable or lock the fakes. But what happens when social media giants quietly silence, or shadowban, certain accounts for no apparent reason?
Shadowbanning is, fittingly, a rather shadowy practice, the very existence of which has been debated for several years. In short, it refers to the idea of social media networks intentionally reducing the reach of specific users.
For instance, you might post a photo to Instagram using the same hashtags as always but see only a fraction of the engagement of prior posts. Maybe your images don’t show up at all when you — or other users — search for your hashtags, or only your current followers can see those posts, meaning you’re unable to reach potential new followers. Perhaps a search for your own username turns up blank.
What Is Shadowbanning?
At its best, shadowbanning would theoretically cut out bot-type accounts and users who violate terms of service, improving the quality of a platform’s communities. At its worst, it could be a nearly invisible form of censorship: a way to silence certain ideologies, or perhaps a nefarious means for companies to insert more sponsored content posts (read: ads) in place of posts from real people.
Twitter has denied any intentional swaying of its search features, and Instagram has denied shadowbanning as well. Generally, these companies issue statements referring to the algorithms that surface and distribute content to the masses: the algorithms automatically attempt to determine what information holds the most value for certain people and make it more or less visible in the community. They also point out that updates to these algorithms can cause a user’s reach to wax and wane.
Regardless, those algorithms are trade secrets, so it’s not in these companies’ best interest to reveal their inner workings.
What About Other Social Media Platforms?
In March, an investigation by The Intercept unearthed an internal document confirming that TikTok once “instructed moderators to suppress posts created by users deemed too ugly, poor, or disabled for the platform.” This is perhaps the most damning example of TikTok’s shadowbanning practices. But the platform told The Intercept that many of the guidelines outlined in the article “are either no longer in use, or in some cases appear to never have been in place,” and that they were a misguided effort to prevent bullying. Since then, TikTok has been sharing more insights into how content is circulated on the platform.