Neuroscientist Jeffrey Lin wants to dramatically reduce people’s toxic behavior in online gaming communities, and he’s using artificial intelligence to do it.
When players experience persistent abuse or toxic behavior in a game, they are on average 320% more likely to leave that game and never come back. Toxic behavior isn’t just a conspicuous PR problem for the gaming companies; it costs them real money.
I’m not a gamer and I don’t generally write about gaming, but it’s clear to me that what Lin and his colleagues at Riot Games are doing deserves attention. They aren’t just shaping the future of gaming and online communities, they are demonstrating how artificial intelligence may one day be used to modify human behavior on very large scales.
Crowdsourcing the Judges
Last year, Riot took the Tribunal offline to revamp it. Lin recently outlined some of the reasons for that decision, the most interesting of which was that the Tribunal’s feedback was simply too slow. Decisions were taking a week or more, which proved too long a gap between infraction and consequence for reported players to even clearly remember what they’d done, let alone meaningfully change their behavior.
The new system would need to be large-scale and provide immediate feedback — both things that machines do quite well.
League of Legends Judges Become Teachers
Over this last year, Riot has used some 100 million votes cast in Tribunal judgements to build an artificial intelligence system that automates responses to toxicity in League of Legends. Think of it as an artificial immune response system.
Riot has used machine learning techniques to extract patterns from the Tribunal and other massive datasets and teach an artificial intelligence system how to emulate the collective wisdom of its community. As a result, Tribunal volunteers effectively changed roles from judges to teachers. Their past judgements now form the basis of a massive-scale, real-time judgement engine, grounded in the very human values of the League of Legends community.
This new system is coming online in phases, the first being an “instant feedback system” designed to notify both reporter and reported of the system’s rulings. It’s not just some crude filter for profanity and offensive keywords; it’s capable of understanding phrases and, by drawing on Tribunal voting history, emulating a nuanced understanding of what is and is not considered toxic behavior within the League of Legends community.
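To make that concrete, here is a minimal sketch of the underlying idea, written in Python with scikit-learn. This is not Riot’s actual system; the chat lines, verdicts, and model choice below are hypothetical stand-ins for Tribunal data, and a production system would train on millions of judgements rather than four.

```python
# A minimal sketch, NOT Riot's actual system: train a text classifier on
# crowd-labeled chat lines (hypothetical stand-ins for Tribunal votes),
# then use it to issue instant feedback on new messages.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: chat lines and the majority verdict that
# volunteer judges reached on each (1 = toxic, 0 = acceptable).
chat_lines = [
    "gg wp everyone, close game",
    "uninstall the game you worthless feeder",
    "nice ward coverage this round",
    "you are trash, go die",
]
verdicts = [0, 1, 0, 1]

# Word bigrams let the model weigh short phrases, not just lone keywords.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(),
)
model.fit(chat_lines, verdicts)

def instant_feedback(line: str) -> str:
    """Return a canned ruling, emulating the instant feedback system."""
    if model.predict([line])[0] == 1:
        return ("Your peers judged your behavior to be far below the "
                "standards of the League of Legends community.")
    return "No violation found."

print(instant_feedback("you are trash, go die"))
```

The phrase-level features are what separate this from a crude keyword filter; whether a word is toxic often depends on the words around it.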
The new instant feedback system has only been up for a few months, but the results are already impressive. Lin recently noted that it is only generating about one mistake for every five thousand decisions: a 0.02% error rate.
Imagine, just for a moment, receiving this message after having lost your cool in an online game:
“Your peers judged your behavior to be far below the standards of the League of Legends community. Think through the conversation and reflect on your words. League is an intense, competitive game, but every player deserves respect.”
Remember, this judgement was carried out by an artificial intelligence – not a human. Can you sense that odd feeling? It’s the future knocking.
Crowdsourced Artificial Intelligence
What Riot Games is building is a prime example of an important new trend: “crowdsourced machine learning.”
Crowdsourced machine learning requires both scale and feedback loops. It’s no coincidence that most of today’s leaders in machine learning are giants like Google, Facebook, and Baidu: companies whose Internet platforms engage hundreds of millions of users in powerful feedback loops.
League of Legends starts with game behavior data, adds a feedback loop from player-judges, and uses it to map community values. Google starts with third-party websites and creates a feedback loop from user clickthrough behavior on search results; the result is a Knowledge Graph. Facebook starts with end user posts and creates a feedback loop through likes and other forms of engagement; the result is an Interest Graph.
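All three loops share the same shape: judge at scale, gather human reactions, fold them back into the model. The toy Python sketch below renders that cycle with a deliberately crude, hypothetical token-counting “model”; the real systems use far richer learners, but the loop itself is the point.

```python
# A toy rendering of the crowdsourced feedback loop: deploy a model,
# collect crowd judgements of its cases, retrain, repeat. Everything here
# is a hypothetical stand-in; only the shape of the loop matters.
from collections import Counter

class TokenVoteModel:
    """Flags a message as toxic if it contains any crowd-flagged token."""

    def __init__(self):
        self.bad_tokens = set()

    def predict(self, text):
        return any(tok in self.bad_tokens for tok in text.lower().split())

    def retrain(self, texts, labels):
        votes = Counter()
        for text, is_toxic in zip(texts, labels):
            if is_toxic:
                votes.update(text.lower().split())
        # Adopt tokens the crowd repeatedly associates with toxicity.
        self.bad_tokens |= {tok for tok, n in votes.items() if n >= 2}

def feedback_round(model, messages, crowd_labels):
    """One deploy -> judge -> retrain cycle of the loop."""
    rulings = [model.predict(m) for m in messages]  # judge at scale
    model.retrain(messages, crowd_labels)           # fold human signal back in
    return rulings

model = TokenVoteModel()
feedback_round(
    model,
    ["you are garbage", "gg wp everyone", "garbage team, garbage player"],
    [True, False, True],
)
print(model.predict("what a garbage play"))  # True: the loop has learned
```

Each pass through the loop sharpens the model against fresh human judgement, which is exactly what gives platforms with hundreds of millions of users their advantage.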
Influencing Employee Behavior
It just takes a little creativity to see where all this might go. Imagine a coffee shop where baristas’ interactions with customers are recorded and transcribed, and the transcripts matched to feedback from customer evaluations. Retail evaluation systems like this wouldn’t just catch toxic employee interactions with customers; they would evaluate cash register rings against employee communications, body language, conflict resolution, speed, and other variables. Think of it as artificial intelligence extending a company’s best practices, policies and business rules.
There is real potential for these tools to introduce a frightening new “AI-driven Taylorism” into the workplace. If you think I’m overplaying the desire for this kind of control over employee behavior, just consider how call center employees are evaluated today.
This call may be recorded for quality assurance.
Changing Human Culture with Machines
Rather than end on that dystopian note, I think it’s important to highlight what is actually happening with Riot’s application of this kind of crowdsourced artificial intelligence.
The company has rooted its system in a bottom-up feedback loop designed to emulate its stakeholders’ values, and there’s something very admirable about that. Sure, toxic behavior generates customer churn and that costs Riot money, but listening to Lin, these efforts also seem grounded in a bigger goal: healing the culture of online gaming.
The first time I heard about this project, I thought about “broken windows theory.” It’s the idea that small symbols of urban disorder, like broken windows, can create an atmosphere of perceived lawlessness where crime is to be expected. In this case, the goal is shifting the culture so that we no longer simply expect toxicity as a given in online gaming. In this sense, we’re talking about a very pragmatic, tractable approach to shifting human culture, and I think it’s worth studying.
Since Riot turned on the new artificial intelligence in League of Legends a few months ago, something dramatic really has happened. The culture is shifting:
“As a result of these governance systems changing online cultural norms, incidences of homophobia, sexism and racism in League of Legends have fallen to a combined 2 percent of all games. Verbal abuse has dropped by more than 40 percent, and 91.6 percent of negative players change their act and never commit another offense after just one reported penalty.”
Anyone who’s ever suffered online harassment or toxicity will immediately understand just how important this work is, and that alone is one reason it matters.
This is groundbreaking work – a pragmatic demonstration of how to build values alignment into an intelligent system. Yes, it raises a number of difficult questions, but it should also give us hope that we may just figure out a way to seed the next intelligence on the planet with the echoes of our better angels.