We’ve Had Enough: Verbal Abuse in Online Gaming and How to End It

by Jongwon Lee
#Informatics #Game
Social and Ethical Aspects of Big Data - Op-Ed Piece
Instructed by: Dr. Younei Soe

The online gaming environment keeps gaining features that make it resemble real life. On top of conventional text chat, voice chat has emerged as a new trend in online gaming. However, gamers being able to interact more closely is not an unqualified good: verbal abuse in online gaming is plaguing the community.

Verbal abuse in online gaming refers to negative, violent, and hateful language delivered in a game. A 2020 survey by the Anti-Defamation League (ADL) found that 81% of adult online gamers in the U.S. had experienced harassment in online games, and 68% reported severe harassment such as physical threats, identity-based discrimination, and stalking. Moreover, online verbal abuse spills over into personal lives beyond the game itself, and it is increasing each year [AL20]. Game companies, however, continue to ignore the seriousness of this problem.

While the situation is nowhere near tolerable, the response from major game companies in 2022 was quite disappointing. Writing for GameRiv, Soumyo Deb reported that Riot Games, one of the largest online gaming companies, decided to automatically mute a player for the rest of a game once that player types offensive chat [Deb22]. Although muting temporarily shields other players from offensive messages, it fails to tackle the essence of the issue: why do players get angry, and who ends up typing the offensive chat?

The ADL divides the root causes of verbal abuse in online gaming into four categories: in-game factors, out-of-game factors, limits of digital spaces, and other factors [AL20]. Among these, we need to focus on changing the in-game factors, such as game system design and communication methods. Broader causes, such as out-of-game factors and other factors like the population’s education level, are difficult to change, and the limits of digital spaces are inherent to the nature of online games. In Metro UK, Anugraha Sundaravelu notes that racism is prevalent in online gaming because individuals can hide behind anonymity and therefore face no consequences, unlike in real life [Sun22].

I believe game companies need to actively engage in developing solutions that eliminate the verbal abuse their own game system designs encourage. Solutions to this long-standing problem must be both innovative and achievable. Tony Xiao argues that gaming companies should step up by imposing penalties gamers actually care about [Xia19]. Although penalties can deter some abuse, they do not address the root cause, which I believe lies in players experiencing losing situations, typically blamed on teammates they perceive as performing worse than their opponents. Riot Games announced that it will begin manually monitoring high-ranked games reported to contain toxic players [Zai22]. Although this shows the company recognizes the problem and is putting in some effort, manually reviewing games is too costly to scale and thus will not solve the problem.

First, I believe game companies should develop fairness measures that aim to provide a gaming experience that is fair for as many players as possible. Specifically, when games assign roles to players and assemble teams, they should do so fairly. For this idea to be achievable, a measurable fairness scale must exist. Developing and adopting such fairness measures will ultimately reduce the situations in which players feel they were matched with bad teammates.
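To make the idea of a measurable fairness scale concrete, here is a minimal sketch, assuming a hypothetical rating-based matchmaker (the function, the 0-to-1 scale, and the 400-point normalization constant are all illustrative assumptions, not any real company’s system):

```python
# Illustrative sketch of a matchmaking fairness measure (hypothetical;
# not an actual game company's algorithm). The score is derived from
# the average-skill gap between two proposed teams: 1.0 means perfectly
# balanced, 0.0 means maximally unfair.

def fairness_score(team_a, team_b):
    """Return a 0..1 fairness score from two lists of player ratings."""
    avg_a = sum(team_a) / len(team_a)
    avg_b = sum(team_b) / len(team_b)
    gap = abs(avg_a - avg_b)
    # Assumed normalization: a gap of 400+ rating points counts as fully unfair.
    return max(0.0, 1.0 - gap / 400.0)

print(fairness_score([1500, 1520, 1480], [1510, 1490, 1500]))  # balanced teams -> 1.0
print(fairness_score([2000, 2000], [1500, 1500]))              # lopsided teams -> 0.0
```

A matchmaker could then refuse to start (or re-shuffle) any lobby whose score falls below a chosen threshold, giving the “fair for as many players as possible” goal a number to optimize.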

Moreover, game companies should adopt a data-driven player scoring system. This algorithm should reward gamers who accurately report verbally abusive and toxic players, and disregard reports filed without objectivity. In other words, the system should filter out reports from gamers who report random players, players they personally clashed with, or players who simply performed poorly. Players reported many times would be warned of their low scores and punished, while players who receive few reports and retain high scores would be rewarded. A similar idea appears in Uber’s predictive algorithms for identifying risky drivers: Uber monitors drivers’ behavioral and emotional state to score them and predict dangerous driving [Lin21]. Since it is impossible to observe gamers behind the monitor directly, we need players to file reports and use that data to score players.
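One way such report filtering could work is to weight each report by the reporter’s track record. The following is a hedged sketch under assumed names and thresholds (none of these functions come from any real game’s API):

```python
# Illustrative sketch of a reporter-credibility weighting scheme
# (hypothetical; function names and the neutral 0.5 prior are assumptions).
# Reports from historically accurate reporters count more; reports from
# reporters whose past reports were rarely confirmed count for little.

def reporter_weight(confirmed, total):
    """Credibility of a reporter: the share of past reports later confirmed."""
    if total == 0:
        return 0.5  # no history yet: assign a neutral weight
    return confirmed / total

def toxicity_score(reports):
    """Sum the credibility-weighted reports filed against one player.

    `reports` is a list of (confirmed, total) histories, one per reporter.
    """
    return sum(reporter_weight(c, t) for c, t in reports)

# Two reliable reporters outweigh five reporters whose reports never check out.
reliable = toxicity_score([(9, 10), (8, 10)])  # two trusted reporters
spam = toxicity_score([(0, 10)] * 5)           # five habitual false reporters
```

Under this scheme, a player mass-reported by gamers who routinely file baseless reports accumulates almost no toxicity score, while a handful of reports from trustworthy players carries real weight, which is exactly the objectivity filter the proposal calls for.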

Some may believe data-driven solutions are not achievable for technical reasons. Yet Ian King of Bloomberg notes that gaming companies are not applying abuse-filtering technologies even though researchers have shown them to be feasible [Kin22]. E. M. Rogers’ diffusion of innovations (DOI) theory explains how an idea or product gains popularity and diffuses through society over time; adoption does not happen simultaneously for everyone [Rog62]. Likewise, data-driven solutions such as fairness scales and reporting systems may not change games right away, but they can succeed over time. The major innovators in the gaming industry should commit budget and resources to these data-driven methods, and if verbal abuse declines, the approach will become a convention for other game companies as well.

Game companies need to innovate to end the verbal abuse rampant in the worlds they created and govern. Data-driven approaches such as fairness scales and player scoring systems address the root cause. Online games should exist to help people relieve the stress of reality, not to damage their lives. Contrary to what some gamers and developers assume, the consequences of verbal abuse are real and damaging, and we need to actively engage in catching the abusers.

References 
[AL20] Fair Play Alliance and Anti-Defamation League. Disruption and harms in online gaming framework. 2020.
[Deb22] S. Deb. League of Legends now mutes toxic teammates automatically in patch 12.20. GameRiv, 2022.
[Kin22] I. King. Why does technology still fail to root out abuse in gaming? Bloomberg, 2022.
[Lin21] B. Lin. Uber patents reveal experiments with predictive algorithms to identify risky drivers. The Intercept, 2021.
[Rog62] E. M. Rogers. Diffusion of innovations. Free Press of Glencoe, 1962.
[Sun22] A. Sundaravelu. Black gamers are coming together online to tackle racism in gaming. Metro UK, 2022.
[Xia19] T. Xiao. Confronting toxicity in gaming: Going beyond “mute”. New York Times, 2019.
[Zai22] S. Zaim. Riot begins high-MMR moderation trials in League of Legends NA server. GameRiv, 2022.