Twitch builds toward a ‘layered’ safety approach with new moderator tools


Moderating an online community is hard, often thankless work — and it’s even harder when it happens in a silo.

On Twitch, interconnected channels already informally share information about users they’d prefer to keep out. The company is now formalizing that ad hoc practice with a new tool that lets channels swap ban lists, inviting communities to collaborate on locking out serial harassers and other disruptive users before they can cause problems.

In a conversation with TechCrunch, Twitch Product VP Alison Huffman explained that the company ultimately wants to empower community moderators by giving them as much information as possible. Huffman says that Twitch has conducted “extensive” interviews with mods to figure out what they need to feel more effective and to make their communities safer.

Moderators need to make a ton of small decisions on the fly, and the biggest one is generally figuring out which users are acting in good faith — not intentionally causing problems — and which ones aren’t.

“If it’s somebody that you see, and you say ‘Oh, this is a slightly off color message, I wonder if they’re just new here or if they are bad faith’ — if they’ve been banned in one of your friend’s channels, it is easier for you to go, ‘yeah, no, this is probably not the right person for this community,’ and you can make that decision easier,” Huffman said.

“That reduces the mental overhead for moderators, as well as more efficiently gets someone who’s not right for the community out of your community.”

Within the creator dashboard, creators and channel mods can request to trade lists of banned users with other channels. The tool is bi-directional, so any channel that requests another streamer’s list will share its own in return. A channel can accept all requests to share ban lists or only allow requests from Twitch Affiliates, Partners and mutually followed channels. Every channel can swap ban lists with up to 30 other channels, making it possible to build a fairly robust list of users to keep out, and a channel can stop sharing its list at any time.
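To make those mechanics concrete, here is a minimal sketch, in Python, of how a bi-directional swap with a 30-channel cap and a per-channel acceptance policy might be modeled. It is purely illustrative: the Channel class, the accepts_request method and the policy strings are assumptions, not Twitch’s actual system.

```python
# Illustrative sketch only -- not Twitch's implementation.
from __future__ import annotations
from dataclasses import dataclass, field

MAX_SHARED_CHANNELS = 30  # per the article, a channel can share with up to 30 others


@dataclass
class Channel:
    name: str
    is_affiliate_or_partner: bool = False
    banned_users: set[str] = field(default_factory=set)
    sharing_with: set[str] = field(default_factory=set)   # names of partner channels
    policy: str = "affiliates_partners_and_mutuals"       # or "accept_all" (assumed values)

    def accepts_request(self, requester: Channel, mutual_follow: bool) -> bool:
        """Would this channel accept a ban-list swap request from `requester`?"""
        if len(self.sharing_with) >= MAX_SHARED_CHANNELS:
            return False
        if self.policy == "accept_all":
            return True
        return requester.is_affiliate_or_partner or mutual_follow


def share_ban_lists(a: Channel, b: Channel, mutual_follow: bool = False) -> bool:
    """Bi-directional: if b accepts a's request, both channels see each other's list."""
    if len(a.sharing_with) >= MAX_SHARED_CHANNELS or not b.accepts_request(a, mutual_follow):
        return False
    a.sharing_with.add(b.name)
    b.sharing_with.add(a.name)
    return True
```

In this toy model a swap only succeeds if neither side has hit the 30-channel cap and the receiving channel’s policy admits the requester; either side could later drop the other from sharing_with to stop sharing.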

[Image: Twitch shared ban list]

Channels can choose to either automatically monitor or restrict any account they learn about through these shared lists; accounts are restricted by default. Users who are “monitored” can still chat, but they’ll be flagged so their behavior can be watched closely, and their first message will be highlighted with a red box that also displays where else they’ve been banned. From there, a channel can ban them outright or give them the all-clear and switch them to “trusted” status.
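A rough sketch of that flow, again in Python and again purely illustrative: the status names mirror the article, but the functions, the field names and the assumption that a restricted user’s messages are held for review are mine, not Twitch’s.

```python
# Hedged sketch of the monitored/restricted/trusted flow described above.
from enum import Enum


class Status(Enum):
    RESTRICTED = "restricted"  # default for users found on a shared ban list
    MONITORED = "monitored"    # can still chat, but their behavior is watched
    TRUSTED = "trusted"        # cleared by the channel
    BANNED = "banned"          # banned outright


def present_first_message(status: Status, banned_elsewhere: list[str]) -> str:
    """Decide how a flagged user's first chat message is surfaced to moderators."""
    if status is Status.RESTRICTED:
        # Assumption: restricted users' messages are held for moderator review.
        return "held for review: user appears on a shared ban list"
    if status is Status.MONITORED:
        # Per the article: highlighted in a red box, showing where else they're banned.
        return "highlighted; also banned in: " + ", ".join(banned_elsewhere)
    return "shown normally"


def moderator_decision(ban_outright: bool) -> Status:
    """A mod can escalate to a ban or clear the user as trusted."""
    return Status.BANNED if ban_outright else Status.TRUSTED
```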

Twitch’s newest moderation tools are an interesting way for channels to enforce their own rules against users who might prove disruptive while stopping short of breaking the company’s broader guidelines against overt bad behavior. It’s not hard to imagine a scenario, particularly for marginalized communities, where someone with bad intentions harasses a channel without explicitly running afoul of Twitch’s rules against hate and harassment.

[Image: Twitch ban evasion and shared ban list]

Twitch acknowledges that harassment has “many manifestations,” but for the purposes of getting suspended from Twitch that behavior is defined as “stalking, personal attacks, promotion of physical harm, hostile raids, and malicious false report brigading.” There’s a gray zone of behavior outside of that definition that’s more difficult to capture, but the shared ban tool is a step in that direction. Still, if a user is breaking Twitch’s platform rules — and not just a channel’s local rules — Twitch encourages a channel to report them.

“We think that this will help with things that violate our community guidelines as well,” Huffman said. “Hopefully, those are also being reported to Twitch so we can take action. But we do think that it will help with the targeted harassment that we see impacting, in particular, marginalized communities.”

Last November, Twitch added a new way for moderators to detect users trying to skirt channel bans. That tool, which the company calls “Ban Evasion Detection,” uses machine learning to automatically flag anyone in a channel who is likely to be evading a ban, allowing moderators to monitor that user and intercept their chat messages.
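As a rough illustration of how such a signal could feed into a moderation queue, here is a toy stand-in in Python. The likelihood score, the threshold and the triage behavior are all assumptions; Twitch has not published how Ban Evasion Detection works internally.

```python
# Toy stand-in for an ML-driven ban-evasion flag feeding a moderation queue.
from dataclasses import dataclass

EVASION_THRESHOLD = 0.8  # assumed cutoff, not a real Twitch parameter


@dataclass
class ChatUser:
    name: str
    evasion_likelihood: float  # assumed output of a classifier, 0.0 to 1.0


def triage(user: ChatUser) -> str:
    """Flag likely ban evaders so moderators can watch them and intercept chat."""
    if user.evasion_likelihood >= EVASION_THRESHOLD:
        return "flag for monitoring; hold messages for moderator review"
    return "allow"
```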

The new features fit into Twitch’s vision for “layered” safety on its platform, where creators stream live, sometimes to hundreds of thousands of users, and moderation decisions must be made in real-time at every level.

“We think that this is a powerful combination of tools to help deter chat-based harassment proactively [and] one of the things that I love about this is that it’s another combination of humans and technology,” Huffman said. “With ban evasion detection, we are using machine learning to help find users that we think are suspicious. With this, we are leaning on the human relationships and the trusted creators and communities that they have already established to help provide that signal.”

Twitch’s content moderation challenge is a crucible of sorts, where dangerous streams can reach an audience and cause harm as they unfold in real-time. Most other platforms focus on after-the-fact content detection — something is posted, scanned by automated systems or reported, and that content either stays up, comes down or gets tagged with a user or platform-facing warning of some kind.

The company is evolving its approach to safety and listening to its community, paying particular attention to the needs of marginalized groups like the Black and LGBTQ streamers who have long struggled to carve out a safe space or a visible presence on the platform.

Black creators are sick of being targeted & harassed on @Twitch.

COC has been working with Black streamers to put an end to the harassment, and #Twitch still won’t listen.

We demand #TwitchDoBetter because Black people deserve to be safe off & online. https://t.co/enjqNYFaEw pic.twitter.com/d5bYi7FqTQ

— ColorOfChange (@ColorOfChange) March 24, 2022

In March, Color of Change called on the company to step up its efforts to protect Black creators with a campaign called #TwitchDoBetter. The trans and broader LGBTQ communities have also pressured the company to do more to end hate raids — where malicious users flood a streamer’s channel with targeted harassment. Twitch sued two users late last year for coordinating automated hate campaigns, in part to deter future bad actors.

Ultimately, smart policies that are evenly enforced and improvements to the toolkit that moderators have at their disposal are likely to have more of a day-to-day impact than lawsuits, but more layers of defense can’t hurt.

“For a problem like targeted harassment, that is not solved anywhere on the internet,” Huffman said. “And, like it is in the non-internet world, it is a forever problem — and it’s not one that has a singular solution.

“What we’re trying to do here is just build out a really robust set of tools that are highly customizable, and then put them in the hands of the people who know their needs best, which are the creators and their moderators, and just allow them to tailor that suite of tools to meet their particular needs.”
