Rundown:
- A recently published NVIDIA Corporation patent describes a system that automatically detects and handles inappropriate language in online video games by monitoring and analysing gameplay session audio data.
- The system aims to enhance user experience and create safer video game environments by censoring toxic behaviour and offensive language in real-time.
- Inappropriate language is classified into categories such as profanity, abuse, taunting, derogatory language, or harassment, and corresponding actions are taken, such as editing or modifying the offending audio portions and automatically generating reports for video game developers or hosts.
- The system utilises language models trained with video game-specific content, allowing it to identify inappropriate language not only in English but also in other languages.
- While the patent offers a promising solution to combat toxic behaviour, it is uncertain whether or how video game companies will incorporate this system into their video game franchises.
Earlier today, we came across a recently published patent titled “AUTOMATIC CLASSIFICATION AND REPORTING OF INAPPROPRIATE LANGUAGE IN ONLINE APPLICATIONS,” filed in October 2022 under the name of NVIDIA Corporation. The patent, published earlier this month, describes a system and method for automatically detecting and handling inappropriate language in online applications, with a specific focus on online video games.
“In various examples, game session audio data – e.g., representing speech of users participating in the game – may be monitored and/or analyzed to determine whether inappropriate language is being used. Where inappropriate language is identified, the portions of the audio corresponding to the inappropriate language may be edited or modified such that other users do not hear the inappropriate language,” reads the abstract for the patent.
“As a result, toxic behavior or language within instances of gameplay may be censored – thereby enhancing the user experience and making online gaming environments safer for more vulnerable populations. In some embodiments, the inappropriate language may be reported – e.g., automatically – to the game developer or game application host in order to suspend, ban, or otherwise manage users of the system that have a proclivity for toxic behavior.”
Traditionally, players who experience offensive language or abusive behaviour in online video games have to manually report each instance to the video game platform or developer. This process can be time-consuming and discouraging for players, leading to underreporting of such incidents.
Additionally, players may have to endure toxic behaviour before they can generate a report. The proposed system aims to overcome these challenges by automatically monitoring and analysing audio data from gameplay sessions, specifically the speech of participating players. The goal is to create a safer and more enjoyable video game environment by identifying and addressing toxic behaviour and offensive language.
The audio data is processed in real-time or near real-time to determine if inappropriate language is being used. If inappropriate language is detected, the system takes various actions to mitigate its impact. One action is editing or modifying the portions of the audio containing inappropriate language so that other players do not hear it.
According to the claims made by NVIDIA, the system involves determining the sequence of characters corresponding to audio data, classifying a portion of the character sequence as inappropriate, identifying the corresponding portion of the audio data, and executing actions on that portion of the audio data.
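To make those steps concrete, here is a minimal, self-contained sketch in Python. It assumes the speech recogniser has already produced per-word timestamps, and the word list, timestamps, sample rate, and toy dictionary are all invented for illustration; none of it is taken from NVIDIA’s implementation.

```python
SAMPLE_RATE = 16_000  # audio samples per second (assumed for this example)

# Step 1: assume speech recognition has already turned the session audio into
# a character sequence with per-word timestamps (the words here are invented).
transcript = [
    {"text": "nice", "start_s": 0.0, "end_s": 0.4},
    {"text": "shot", "start_s": 0.4, "end_s": 0.8},
    {"text": "noob", "start_s": 0.8, "end_s": 1.2},  # flagged by the toy dictionary below
]

# Step 2: a toy dictionary lookup stands in for the patent's trained language model.
FLAGGED_TERMS = {"noob": "taunting"}

def flag_spans(words):
    """Steps 2-3: classify each word and map flagged ones back to sample ranges."""
    spans = []
    for word in words:
        category = FLAGGED_TERMS.get(word["text"].lower())
        if category is None:
            continue
        spans.append({
            "category": category,
            "start_sample": int(word["start_s"] * SAMPLE_RATE),
            "end_sample": int(word["end_s"] * SAMPLE_RATE),
        })
    return spans

# Step 4 (acting on those sample ranges) is sketched after the timestamp discussion below.
print(flag_spans(transcript))
```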
Additionally, the classification models are associated with the application’s files and may even be stored among them, for example as files included in the game’s directory, hidden from view, or protected from deletion. The system’s implementation may therefore vary from one video game to another. The system can also generate reports automatically, notifying the video game developer or host about the presence of inappropriate language.
This allows for the suspension, banning, or management of players who engage in toxic behaviour. Other actions, such as editing the audio, annotating the textual representation, or sending data to a host device, can also be taken. The classification of inappropriate language is based on categories like profanity, abuse, taunting, derogatory language, or harassment.
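As an illustration, an automatic report of this kind might carry a payload along the following lines. This is only a sketch: the field names, identifiers, and JSON layout are assumptions, and only the category names come from the patent’s description.

```python
from dataclasses import dataclass, asdict
from enum import Enum
import json

class Category(Enum):            # categories named in the patent's description
    PROFANITY = "profanity"
    ABUSE = "abuse"
    TAUNTING = "taunting"
    DEROGATORY = "derogatory"
    HARASSMENT = "harassment"

@dataclass
class IncidentReport:            # hypothetical report shape, not NVIDIA's schema
    session_id: str              # the gameplay session the audio came from
    speaker_id: str              # the player who spoke the flagged words
    category: Category
    transcript_excerpt: str      # the flagged portion of the character sequence
    start_s: float               # where the flagged audio begins, in seconds
    end_s: float                 # where it ends
    action_taken: str            # e.g. "muted", "annotated", "sent_to_host"

report = IncidentReport("session-42", "player-7", Category.TAUNTING,
                        "noob", 0.8, 1.2, "muted")
payload = {**asdict(report), "category": report.category.value}
print(json.dumps(payload, indent=2))  # what a developer or host might receive
```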
Moreover, the patent emphasises that the classification models incorporate a language model trained on content relevant to the specific video game, which means the system can identify inappropriate language not only in English but also in other languages. The training data for these models includes a designated dictionary of words marked as inappropriate for the particular video game, giving developers the authority to decide what counts as inappropriate in their title.
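A minimal sketch of that idea follows, assuming each game ships its word lists as one JSON file per language in its own directory. The file layout and helper names are hypothetical.

```python
import json
from pathlib import Path

def load_game_dictionaries(game_dir: Path) -> dict[str, set[str]]:
    """Load {language code: disallowed terms} from files shipped with the game,
    e.g. moderation/en.json and moderation/de.json (assumed layout)."""
    dictionaries: dict[str, set[str]] = {}
    for path in sorted(game_dir.glob("moderation/*.json")):
        terms = json.loads(path.read_text(encoding="utf-8"))
        dictionaries[path.stem] = {term.lower() for term in terms}
    return dictionaries

def is_flagged(word: str, language: str, dictionaries: dict[str, set[str]]) -> bool:
    """True if the developer marked this word as inappropriate for this language."""
    return word.lower() in dictionaries.get(language, set())

# Usage, assuming such files exist for a given title:
# dictionaries = load_game_dictionaries(Path("/games/example_title"))
# is_flagged("noob", "en", dictionaries)
```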
The system identifies characters (letters, symbols, numbers) spoken by players and examines them to determine if they correspond to offensive or inappropriate words or phrases. By analysing the characters, the system can pinpoint the exact timestamps when the offensive language is spoken. Based on these timestamps, the system can take various actions with the audio data, such as generating a report or muting the offensive word during playback.
In severe cases, it can even remove the offensive portion of the audio entirely. Because these actions happen in real time, offensive language is filtered out before it ever reaches other players, helping to protect them from toxic behaviour and creating a more positive video game experience.
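At the sample level, those two outcomes, muting the word during playback and removing the portion entirely, could look roughly like this. The helper names and the toy audio buffer are assumptions for illustration, continuing the earlier sketch.

```python
def mute_span(samples: list[float], start: int, end: int) -> list[float]:
    """Silence the flagged samples in place so other players hear a gap instead."""
    samples[start:end] = [0.0] * (end - start)
    return samples

def remove_span(samples: list[float], start: int, end: int) -> list[float]:
    """Cut the flagged samples out entirely, shortening the stream."""
    return samples[:start] + samples[end:]

# Toy usage with the span found earlier (samples 12800-19200 at 16 kHz):
chat_audio = [0.1] * 19_200             # stand-in for 1.2 s of voice chat
muted = mute_span(list(chat_audio), 12_800, 19_200)
trimmed = remove_span(chat_audio, 12_800, 19_200)
print(len(muted), len(trimmed))         # 19200 samples vs. 12800 samples
```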
NVIDIA Corporation’s patent offers a promising method to tackle toxic behaviour in online video games through automated detection, classification, and handling of inappropriate language. However, it is essential to remember that this is currently just a patent and does not guarantee its implementation. It remains uncertain how or if video game companies will choose to incorporate the proposed system into their existing and upcoming video game franchises, and only time will reveal their intentions.
In the past few months, the impact of artificial intelligence has been evident across various industries, including the transformative influence of generative artificial intelligence systems like ChatGPT. The video game industry has also witnessed this revolution firsthand. NVIDIA Corporation, renowned for its achievements in areas such as cloud gaming and self-driving cars, appears to be at the forefront of the artificial intelligence race.
What do you think about this? Do tell us your opinions in the comments below!
Similar Reads: Honda Working On AR Games In Vehicles To Entertain Passengers
From writing short stories in his room to finding true enthusiasm for video game and computer hardware journalism, Huzaifa plays video games and writes all the latest and greatest news about them. Currently pursuing a Bachelor of Science degree in Data Science, he dives deep into the news, authenticating every tiny detail to serve his audience. When he’s not breaking news, he becomes a master storyteller, conjuring up captivating tales from the depths of his imagination. With a wealth of experience as a video game journalist, he has also worked with publishers like eXputer, The Nerd Mag, and Gamesual, making him an expert in the gaming news industry.