March 19, 2021
Subject tag: Data Access | Platform policies
Transparency mandates should be flexible enough to accommodate widely varying platform practices and policies. Any de facto push toward standardization should be limited to the most essential data.
The most important categories of data are probably the main ones listed in the DSA: number of takedowns, number of appeals, number of successful appeals. But as my list demonstrates, those all can become complicated in practice.
It’s worth taking the time to get legal transparency mandates right. That may mean delegating exact transparency rules to regulatory agencies in some countries, or conducting studies prior to lawmaking in others.
Once rules are set, lawmakers should be very reluctant to move the goalposts. If a platform (especially a smaller one) invests in rebuilding its content moderation tools to track certain categories of data, it should not have to overhaul those tools again shortly afterward because the legal requirements changed.
We should insist on precise data in some cases, and tolerate more imprecision in others (based on the importance of the issue, platform capacity, etc.). And we should take the time to figure out which is which.
Numbers aren’t everything. Aggregate data in transparency reports ultimately just tell us what platforms themselves think is going on. To understand what mistakes they make, or what biases they may exhibit, independent researchers need to see the actual content involved in takedown decisions. (This in turn raises a slew of issues about storing potentially unlawful content, user privacy and data protection, and more.)
[This entry was sourced with minor edits from the Carnegie Endowment’s Partnership for Countering Influence Operations and its baseline datasets initiative. You can find more information here: https://ceip.knack.com/pcio-baseline-datasets]