Meta Doesn’t Want To Remove Hateful Content, They Just Want You To See It Less

If You Don’t See It, Is It There?

With X on a path to self-destruction and TikTok on the verge of a total ban in the US, Meta is in a prime position to absolutely dominate the social media landscape once again. The bad news is that it’s no picnic on Meta’s platforms either.

GLAAD, the non-profit LGBTQ advocacy organization, previously reported that Meta, the parent company of Facebook, Instagram, and Threads, has done nearly nothing about anti-trans hate on its platforms, more often than not opting to take no action at all when content is reported as hateful, violent, or otherwise “restricted.” According to that report, the platform was frequently not responding to or addressing reported hate speech at all. The GLAAD report is thorough in documenting the kinds of content flourishing on the platform; if you haven’t read it, I encourage you to. Below are a few samples from the report of the types of content left up on Meta’s platforms. Be warned: they are cruel.

Meta, it would seem, is hiding behind its algorithmic feed curation as a solution to the problem: showing you only content you want to engage with, rather than the kinds of content that would upset you or drive you away. This may be why you aren’t exposed to this material as often, skewing your perception that the platform is safer. Meta isn’t removing hate speech; it’s just not showing it to you anymore. That matters because it lets the platform hold on to a user base of hateful bigots while funneling them into a feedback loop of ever-escalating rhetoric, driving up engagement “at any cost.” They don’t have a problem with the content existing, they just don’t want you to see it.

Meta Doesn’t Consider Transphobia Hate Speech

Meta has stated on multiple occasions that “hate speech has no place on our platforms,” a claim that doesn’t square with how it actually handles transphobic content. I think the more apt reading is that Meta simply doesn’t consider transphobia hate speech. Accounts that routinely use common racist terms or slurs eventually get suspended after enough reports or flags on identified offending content. Yet many accounts that consistently use transphobic language or terms keep their accounts and posting privileges.

There was a momentary blip in Meta’s content curation algorithm that affected its newest Twitter/X alternative, Threads. Users began seeing content far outside their curated niche; specifically, the algorithm was surfacing much more hateful content and showing it to the people most sensitive to it. Transgender content creators, for example, saw an increase in anti-trans content in their “For You” feed. The reverse was also true: users with anti-trans sentiment were shown more posts by trans users, making it easy for them to pile on at massive scale. The outcry was eventually addressed and the issue rectified. Most assumed this meant the offending users were being policed more consistently and their hateful content removed. It wasn’t. It’s just not shown to you anymore.

If you want more evidence of this stance, look at how Meta proposes to limit the algorithm’s boosting of what it defines as “political” content. In February, the company published a statement on its approach to political content:

If you decide to follow accounts that post political content, we don’t want to get between you and their posts, but we also don’t want to proactively recommend political content from accounts you don’t follow. So we’re extending our existing approach to how we treat political content – we won’t proactively recommend content about politics on recommendation surfaces across Instagram and Threads

They defined “political content” as anything related to things like laws, elections, or social topics. You know, social topics like the existence of transgender people or the erosion of their rights.

If political content – potentially related to things like laws, elections, or social topics – is posted by an account that is not eligible to be recommended

Meta is a large company, and if it truly wanted to fix this issue, it would. Unfortunately, it isn’t a priority for the company or its shareholders. The only thing on their minds right now is capitalizing on two power vacuums opening up in social media: the implosion of X under Elon Musk and the potential outright ban of TikTok by the US government. Both leave the door wide open for Meta to take center stage once again with Instagram Reels and Threads, its federated, text-based alternative to X.

The Russian Interference

If you are familiar with Meta’s track record on social media moderation, none of this should come as a surprise. We should all know by now how foreign adversaries exploited Facebook’s inconsistent moderation to boost divisive rhetoric and sow discord and unrest among the population. Even after Meta was brought before multiple congressional hearings over its platform’s involvement, moderation remains so poor that misinformation and rampant hate speech flourish to this day. The platform was one of the primary drivers that helped elect Donald Trump, boosting some of his most egregious conspiracy theories.

Numerous reports found that Russia used Facebook and other social media platforms to boost Donald Trump and get him elected. Its persistent election meddling went almost entirely unhindered by the platform, whose algorithm prioritizes engagement over authenticity.

Wrapping Up

Meta’s blatant disregard for how its platforms handle social issues and moderation is unconscionable. A concerted effort must be made to overhaul these systems, ensuring they are robust enough to handle the complexities of modern hate speech and misinformation. Until then, users are left navigating a digital landscape that is, at best, selectively curated and, at worst, dangerously unregulated.

The question remains: if you don’t see the hate, is it still there? Without substantial change, the answer is a resounding yes: hidden in plain sight, thriving in the shadows of algorithms designed to look the other way.
