December 1, 2021
Subject tag: Algorithmic systems | Child safety | Privacy and data protection | Terrorism and violent extremism
Twitch is a live-streaming service. The vast majority of the content that appears on Twitch is gone the moment it is created and seen. That fact requires us to think about safety and community health differently than services built primarily on pre-recorded, uploaded content. Content moderation solutions that work for uploaded, video-based services do not work, or work differently, on Twitch. Through experimentation and investment, we have learned that user safety on Twitch is best protected, and most scalable, when we employ a range of tools and processes, and when we partner with, and empower, our community members.
The result is a layered approach to safety – one that combines the efforts of both Twitch (through tooling and staffing) and members of the community, working together. It starts with our Community Guidelines, which balance user expression with community safety and set expectations for the behavior we want on Twitch. Creators are expected to uphold these service-wide standards in their channels, and are invited to raise the bar if they choose. We provide creators with tools to set, communicate, and enforce the standards of behavior in their channels. We also provide viewer-level controls that let viewers manage the content they see. At the same time, Twitch applies various technologies to proactively detect and remove certain kinds of harmful content before users ever encounter it. Finally, we empower users to report harmful or inappropriate behavior to Twitch. These reports are reviewed and acted on by a team of skilled, trained professionals who can apply service-wide enforcement actions.
[This entry was sourced with minor edits from the Carnegie Endowment’s Partnership for Countering Influence Operations and its baseline datasets initiative. You can find more information here: https://ceip.knack.com/pcio-baseline-datasets]