- Why Riot and Ubisoft?
- A powerful additional tool
- The secret is in the Blueprints
- The video game as a spearhead
Although we are immersed in the console wars, as evidenced by the clash between Sony and Microsoft over the latter's purchase of Activision Blizzard, the truth is that video game studios collaborate a great deal. It must be understood that workers tend to change companies after each project, and that many of these studios share graphics engines and development assets.
However, there are times when collaborations go beyond matters tied to particular video games. One example is Zero Harm in Comms, an initiative created by Riot Games and Ubisoft. The developers of League of Legends and Assassin's Creed will make a joint effort to build an artificial intelligence that limits insults and toxicity in their video games.
At MGG Spain we have been able to interview Wesley Kerr, director of Technology Research at Riot Games, and Yves Jacquier, executive director of Ubisoft La Forge. Thanks to their answers, we have learned more about a project that, without changing the gameplay of their titles, can improve the experience for users.
Why Riot and Ubisoft?
The first question was almost obligatory. There are plenty of big companies in the video game industry, but that one based in France and Canada would team up with another from California is unusual, even in an age when telecommunications make such collaborations easy. And, of course, it all started with conversations between the two protagonists of our interview.
Back in the spring of 2022, Yves and Wesley had some conversations about how each of their companies detects toxic behavior in online games. Discussing how complex the process is, both reached the same conclusion: to develop an efficient AI for this task, they would have to join forces.
It should be said that both companies belong to the Fair Play Alliance, an initiative that brings together many development studios seeking to end verbal abuse in video games. It would be fair to say that this breeding ground, of which both Riot and Ubisoft are part, was the real starting point of a project that opened its doors last July.
According to Kerr, Ubisoft was an obvious partner for Riot Games. The reason is that it has a player base diverse enough, and different enough from that of the creators of LoL and Valorant, to greatly enrich the artificial intelligence. Kerr also states that both companies place similar value on issues such as the privacy of players' data, a key factor in forming this partnership.
A powerful additional tool
Even without training in big data, artificial intelligence, or other technical matters, we can perfectly understand what Riot and Ubisoft are looking for with Zero Harm in Comms. It is a project that aims to train an AI on a large volume of toxic messages so that it can later detect them in live games.
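Neither studio has published its model or data, but the core idea described here, learning from labeled toxic messages and then scoring new ones, can be illustrated with a minimal sketch. Everything below (the sample messages, the naive Bayes scoring) is invented for the example and is far simpler than whatever the studios actually use.

```python
from collections import Counter
import math

# Hypothetical labeled chat lines (1 = toxic, 0 = fine); illustrative only.
TRAIN = [
    ("gg well played everyone", 0),
    ("nice shot keep it up", 0),
    ("uninstall the game you are trash", 1),
    ("you are useless leave", 1),
]

def tokenize(msg):
    return msg.lower().split()

# Count word frequencies per class.
counts = {0: Counter(), 1: Counter()}
totals = {0: 0, 1: 0}
for msg, label in TRAIN:
    for tok in tokenize(msg):
        counts[label][tok] += 1
        totals[label] += 1

def toxicity_score(msg):
    """Log-odds that a message is toxic under a naive Bayes model."""
    vocab = set(counts[0]) | set(counts[1])
    score = 0.0
    for tok in tokenize(msg):
        p_tox = (counts[1][tok] + 1) / (totals[1] + len(vocab))
        p_ok = (counts[0][tok] + 1) / (totals[0] + len(vocab))
        score += math.log(p_tox / p_ok)
    return score

print(toxicity_score("you are trash"))   # positive: leans toxic
print(toxicity_score("gg well played"))  # negative: leans fine
```

A positive score means the message looks more like the toxic training examples than the benign ones; a live system would compare this score against a tuned threshold before taking any action.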
The question, then, is what will happen to users who verbally abuse others. Jacquier tells me that the project is an advance, but that not all the work is done: "We can see this project as one more tool in a toolbox. The most important thing for Riot and for Ubisoft is the safety of the players, and with it how to provide a positive experience. That is what we focus on with this initiative: creating better detection tools, reliable and fast. We do not believe there is a single solution."
Kerr also tells us about those other tools, some of which already exist. He comments that League of Legends, for example, has a reward system for good behavior called Honor. The role of this specific initiative, however, is to detect behaviors that are not easy to identify: toxic players keep finding new ways to humiliate their peers and rivals that slip past simple insult filters.
The secret is in the Blueprints
A small curiosity I had when talking about this project was the amount of information the AI is capable of ingesting, that is, what volume of messages the two studios behind Zero Harm in Comms are handling. Jacquier steers me to another point and tells us that there comes a moment when the number of messages is no longer a variable worth expanding.
Instead, it is much more interesting to give the artificial intelligence a wide variety of messages, organized into what they call blueprints. This step is very complex, because if these types of messages are not defined correctly, the project can fail. But perhaps more importantly, they need to be defined without sharing concrete message data, so that the most rigorous confidentiality controls are met.
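The interview does not detail what a blueprint looks like, but the idea of two studios agreeing on shared categories while exchanging only aggregate information, never raw chat logs, can be sketched as follows. The category names and the sample data are hypothetical.

```python
# Hypothetical "blueprint" category definitions; the real taxonomy is not public.
BLUEPRINTS = {
    "insult": "direct attack on another player's skill or person",
    "threat": "statement of intent to harm outside the game",
    "discrimination": "attack on a player's identity",
}

def share_training_stats(labeled_messages):
    """Report per-category counts to a partner studio without exposing raw text."""
    stats = {name: 0 for name in BLUEPRINTS}
    for _text, category in labeled_messages:
        if category in stats:
            stats[category] += 1
    return stats

# Each studio keeps its own messages local and shares only the counts.
local_data = [("<redacted>", "insult"), ("<redacted>", "insult"), ("<redacted>", "threat")]
print(share_training_stats(local_data))
```

Agreeing on the category definitions is the hard part Jacquier describes: if the blueprints are drawn badly, both studios label inconsistently and the shared statistics become useless.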
It goes without saying that these messages are written ones, but I wanted to ask about the possible future application of this kind of technology to voice chats, such as Valorant's. Kerr states that what they learn from text messages can surely be applied to voice chat, but that for now the project is limited to the former.
Jacquier offers an example from Rainbow Six, Ubisoft's competitive shooter. He says that not only the grammar and the words used matter when establishing that a message is abusive; the context must also be taken into account. Telling an opponent that you will end them can be part of the fantasy the game offers and have a place within its created world.
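Jacquier's Rainbow Six example boils down to one point: the same words can be fair trash talk or abuse depending on context. The sketch below invents two context signals (match phase and target of the message) purely to illustrate that distinction; the interview gives no implementation details.

```python
# Illustrative only: phrase list and context fields are invented for this sketch.
THREAT_PHRASES = ("end you", "destroy you", "hunt you down")

def is_abusive(message: str, context: dict) -> bool:
    """Judge the same words differently depending on game context."""
    msg = message.lower()
    if not any(phrase in msg for phrase in THREAT_PHRASES):
        return False
    # Trash talk aimed at a live opponent mid-match can be part of the fantasy;
    # the same line sent after the match, or to a teammate, reads as abuse.
    if context.get("phase") == "in_match" and context.get("target") == "opponent":
        return False
    return True

print(is_abusive("I will end you", {"phase": "in_match", "target": "opponent"}))    # False
print(is_abusive("I will end you", {"phase": "post_match", "target": "teammate"}))  # True
```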
The video game as a spearhead
They do not know what the future holds, and they do not want to venture a guess as to whether more companies will join the initiative, but they do tell us it is something they fantasized about when they started it. There are problems, such as sharing information between companies without violating the data protection agreements made with players, but the arrival of new companies could be interesting for the creators.
And perhaps most relevant, at least for this writer, is how tools built with these latest-generation algorithms could be used in matters beyond video games, such as social networks. Although they do not want to commit themselves, since their area of expertise is video games, they do acknowledge something particularly interesting: several of the ideas adopted in this project were born from recent studies on toxicity in social networks.
In fact, they believe this filtering task is easier to perform in video games. On social networks it is very difficult to establish the context behind a message; in video games, there is a match, and players react to what has just happened. So, although it is a titanic task, it is one that Riot and Ubisoft are holding on to in order to improve our games.