A Consumer Protection Approach to Platform Content Moderation

Type: Submission/proposal/advocacy/recommendation

Website/link: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3408459

Published date: June 26, 2019

Author: Mark MacCarthy

Subject tag: Algorithmic systems | Child safety | Terrorism and violent extremism

Proposes a consumer protection approach to platform content moderation under which legislation would, among other things, “require platforms to have a content moderation program in place that contains content rules, enforcement procedures and due process protections including disclosure, mechanisms to ask for reinstatement and an internal appeals process, but it would not mandate the substance of the platform’s content rules. It would respond to strict First Amendment scrutiny as a narrowly crafted requirement that burdens speech no more than necessary to achieve the compelling government purpose of preventing an unfair trade practice. In addition, or alternatively, the FTC might be authorized to use its deception authority to require platforms to say what they do and do what they say in connection with content moderation programs. The FTC would treat failure to disclose key elements of a content moderation program as a material omission, and the failure to act in accordance with its program as a deceptive or misleading practice.”
[This entry was sourced, with minor edits, from the Carnegie Endowment’s Partnership for Countering Influence Operations and its baseline datasets initiative. More information: https://ceip.knack.com/pcio-baseline-datasets]