Riot Games and Ubisoft team up on machine learning to detect harmful game chat

Riot Games and Ubisoft are sharing machine-learning data so they can detect harmful chat in multiplayer games.

The “Zero Harm in Comms” research project is aimed at building better AI systems that can detect toxic behavior in games, said Yves Jacquier, executive director of Ubisoft La Forge, and Wesley Kerr, director of software engineering at Riot Games. The project also aims to build cross-industry alliances to study harm detection, and it is the first to come out of that cross-industry research initiative. Both companies operate deep-learning systems that use AI to automatically flag toxic behavior in player chat.

“We cannot solve it alone,” Jacquier says. “We want to build the framework for this, share the results with the community, see how it goes, and bring in more people.” The key to the project lies in the sheer volume of data the two companies are attempting to gather. With more data, these systems can theoretically gain an understanding of nuance and context beyond key words.
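To see why more labeled data beats a keyword list, consider a toy sketch. This is not either company's system; the classifier, data, and names below are invented for illustration. A fixed blocklist only matches exact words, while even a simple Naive Bayes model trained on labeled chat lines can score unseen phrasings by the words they share with known-toxic examples, and it improves as more labeled lines are pooled.

```python
# Toy illustration (all data invented): a word-level Naive Bayes
# classifier over labeled chat lines. Not Riot's or Ubisoft's model.
import math
from collections import Counter

# Invented labeled chat lines: 1 = toxic, 0 = benign.
TRAIN = [
    ("uninstall the game you are useless", 1),
    ("you are trash stop playing", 1),
    ("nobody wants you here leave", 1),
    ("nice shot well played", 0),
    ("good game everyone thanks", 0),
    ("lets push mid together", 0),
]

class NaiveBayes:
    def __init__(self, data):
        self.counts = {0: Counter(), 1: Counter()}  # word counts per class
        self.totals = {0: 0, 1: 0}                  # total words per class
        self.docs = {0: 0, 1: 0}                    # lines per class
        for text, label in data:
            self.docs[label] += 1
            for word in text.split():
                self.counts[label][word] += 1
                self.totals[label] += 1
        self.vocab = set(self.counts[0]) | set(self.counts[1])

    def score(self, text, label):
        # Log prior plus summed log likelihoods with add-one smoothing,
        # so unseen words don't zero out the score.
        s = math.log(self.docs[label] / sum(self.docs.values()))
        for word in text.split():
            p = (self.counts[label][word] + 1) / (self.totals[label] + len(self.vocab))
            s += math.log(p)
        return s

    def is_toxic(self, text):
        return self.score(text, 1) > self.score(text, 0)
```

Note that a line like "you are useless" is flagged even though "useless" alone might not sit on a blocklist: the model has learned that its words co-occur with toxic examples. Pooling non-private chat data from two large publishers simply gives such models far more examples to learn these patterns from.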

In the research project, both companies will share non-private player comments with each other to improve the quality of their neural networks and thereby arrive at more sophisticated AI faster. Other companies are working on this problem, including ActiveFence, Spectrum Labs, Roblox, Microsoft's Two Hat, and GGWP, and the Fair Play Alliance brings together game companies that want to solve the problem of toxicity. But this is the first case of big game companies sharing ML data with each other.

This is not the first time Ubisoft has used AI to detect toxic text; it has been doing so for years, but detection rates still need to rise.
