Ubisoft and Riot Games have joined forces for a new research project in the pursuit of making online video game spaces safer. Removing ‘harm’ (a complex word with many tendrils of meaning) from video game chats is crucial, ongoing work, and their project, Zero Harm in Comms, aims to ultimately develop a more nuanced and robust framework than anything else available to do just that. It’s an important goal, and one both publishers recognize can only be achieved as a team.
“We cannot solve it alone,” says Yves Jacquier, Executive Director for Ubisoft’s La Forge R&D Department. “We want to build the framework for this, share the results with the community, see how it goes, and bring in more people.”
So, How Does It Work?
At its core, the Zero Harm in Comms research project aims to create a shared database of anonymized data, used to train Ubisoft’s and Riot’s systems to detect and mitigate disruptive behavior. The idea was conceived between Jacquier and Wesley Kerr, Head of Tech Research at Riot Games, who are both interested in AI and deep learning, and in the technical innovations within that space. While bonding over shared interests, and specifically shared challenges, it became clear that harmful content was one of the biggest challenges of all, and that neither was satisfied with the solutions currently available.
“We agreed that the solutions that we can use today are not sufficient for the kind of player safety we have in mind for our players,” says Jacquier.
“We really recognized that this is a bigger problem than one company can solve,” says Kerr. “And so how do we come together and start getting a good handhold on the problem we’re trying to solve? How can we go after those problems, and then further push the entire industry forward?”
Gathered from various chat logs across Ubisoft’s and Riot’s portfolios of games, the data (strings of text) are scrubbed clean of Personally Identifiable Information (PII) and other personal details. They are then labeled by behavior (is this message totally neutral, for example, or is it racism, or sexism?) and used to train automated systems to better recognize and parse such harmful behavior from the jump.
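Neither company has published implementation details for this step, but as a rough, hypothetical sketch of what anonymizing a chat line might look like (the patterns, placeholder tokens, and label below are illustrative assumptions, not Ubisoft’s or Riot’s actual rules):

```python
import re

# Hypothetical scrubbing rules for illustration only; the real pipeline's
# PII definitions have not been made public.
PII_PATTERNS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"), "<EMAIL>"),       # email addresses
    (re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"), "<PHONE>"),  # phone numbers
    (re.compile(r"@\w+"), "<HANDLE>"),                              # player handles
]

def scrub(message: str) -> str:
    """Replace personally identifiable information with placeholder tokens."""
    for pattern, token in PII_PATTERNS:
        message = pattern.sub(token, message)
    return message

# Each scrubbed line would then be labeled by behavior category, e.g.:
# {"text": scrub("gg @player99, add me at me@mail.com"), "label": "neutral"}
```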
The key to the project lies in the sheer volume of data the duo is attempting to gather. With more data, these systems can theoretically gain an understanding of nuance and context beyond keywords.
“There are keywords that can be immediately recognized as bad,” elaborates Jacquier. “However, it’s often much trickier to parse. For example, if you see ‘I’m going to take you out’ in a chat, what does that mean? Is it part of the fantasy? If you’re playing a competitive shooter, it might not be a problem, but if it’s another type of game, the context might be totally different.”
Of course, it’s impossible to teach AI every possible harmful scenario, but the duo explains that the goal is to fine-tune their systems to look for these specific examples and detect them with high probability.
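The interview doesn’t name any specific models or tooling, but here is a minimal sketch of what such fine-tuning could look like, assuming a labeled dataset like the one above and the open-source Hugging Face transformers and datasets libraries (the model choice, context tags, and label set are all assumptions for illustration):

```python
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

LABELS = ["neutral", "harassment", "racism", "sexism"]  # illustrative categories

# Prepending a game-context tag lets the model condition on context, so the
# same phrase can score differently in a shooter than in, say, a social game.
examples = [
    {"text": "[shooter] I'm going to take you out", "label": 0},  # in-game banter
    {"text": "[social] I'm going to take you out", "label": 1},   # possible threat
]  # real training data would be far larger and labeled by human annotators

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length",
                     max_length=64)

dataset = Dataset.from_list(examples).map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=len(LABELS))

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="chat-safety-model", num_train_epochs=1),
    train_dataset=dataset,
)
trainer.train()  # the tuned model can then score new chat lines per label
```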
Building a Better Industry
It’s a start, and one that both companies want to make extremely visible to their players to encourage a more welcoming gaming experience. “We want players to know we are taking action on this,” says Kerr.
“That visibility and that communication with the player is going to be critical for them to understand that this is happening in the background. They may not care how it's happening; they just want to know that things are improving, and things are getting better.”
Jacquier and Kerr have been working on the Zero Harm in Comms project for roughly six months, and plan to share their learnings and potential next steps with the broader industry next year. Both agree that creating a safer online environment is crucial in an age when everyone is online, and both reiterate the hope that more publishers will come aboard to move beyond the keyword model that has proven insufficient for so long.
“It's 2022,” says Jacquier. “Everyone is online, and everyone should feel safe, period.”
“This is a great first step and a very large task,” says Kerr. “We don't want to go at it alone.”