A recent study has shown that players of the Call of Duty games are the most likely to use negative language when interacting with other members of the fanbase. The data, collected by Wordtips, tracked the number of negative words per 1,000 used by 100 independent Twitter users dedicated to each of several gaming fanbases, and Call of Duty topped the list. At a staggering 184 negative words per 1,000, roughly 18% of what these Twitter users were saying was flagged as negative or offensive.
These findings, of course, come as a surprise to absolutely no one; Call of Duty players have come under fire — pun intended — for years due to their reputation for toxicity. Bullying and harassment are commonplace in live chat rooms, drawing heavy criticism from others online. It doesn't help that Activision's morals are fundamentally shaky to start with; both the company and its fanbase have been heavily criticized for their lax attitude towards harassment in the gaming and corporate spaces. Call of Duty has become a laughingstock on social media as a result, with jokes and memes running rampant in all corners of the internet.
Related: The worst Call of Duty games, ranked
However, Call of Duty isn't the only community in hot water. The study named a few other fandoms notable for using negative language, including the Mortal Kombat and Sonic the Hedgehog fanbases. Bullying and harassment have become a glaring issue across gaming communities, with numerous companies cracking down on the use of negative language in chat rooms in an effort to protect their players. Words considered offensive or derogatory can be filtered out of text chat, while AI tools have been deployed to monitor live chat and report bullying or harassment in verbal conversations.
While these anti-harassment policies seem to be an earnest effort to help the community, they are a temporary fix for a larger issue: why people go out of their way to harass and berate others in the first place. Reporting, blocking, and banning are great tools for protecting those affected, but understanding what prompts people to use abusive language online is a problem that won't be solved by filtering out dirty words.