June 1, 2021
United States of America (USA)
Subject tag: Algorithmic systems | Terrorism and violent extremism
On January 6, 2021, an armed mob stormed the US Capitol to prevent the certification of what they claimed was a “fraudulent election.” Many Americans were shocked, but they needn’t have been. The January 6 insurrection was the culmination of months of online mis- and disinformation directed toward eroding American faith in the 2020 election.
US elections are decentralized: almost 10,000 state and local election offices are primarily responsible for the operation of elections. Dozens of federal agencies support this effort, including the Cybersecurity and Infrastructure Security Agency (CISA) within the Department of Homeland Security, the United States Election Assistance Commission (EAC), the FBI, the Department of Justice, and the Department of Defense. However, none of these federal agencies has a focus on, or authority regarding, election misinformation originating from domestic sources. This limited federal role reveals a critical gap for non-governmental entities to fill. Increasingly pervasive mis- and disinformation, both foreign and domestic, creates an urgent need for collaboration across government, civil society, media, and social media platforms.
The Election Integrity Partnership, comprising organizations that specialize in understanding those information dynamics, aimed to create a model for whole-of-society collaboration and facilitate cooperation among partners dedicated to a free and fair election. With the narrow aim of defending the 2020 election against voting-related mis- and disinformation, it bridged the gap between government and civil society, helped to strengthen platform standards for combating election-related misinformation, and shared its findings with its stakeholders, media, and the American public. This report details our process and findings, and provides recommendations for future actions.
The Stanford Internet Observatory is a cross-disciplinary program of research, teaching, and policy engagement for the study of abuse in current information technologies, with a focus on social media. Under[...]
The Digital Forensic Research Lab (DFRLab) at the Atlantic Council is a first-of-its-kind organization with technical and policy expertise on disinformation, connective technologies, democracy, and the future[...]
Graphika provides companies with tools to analyze online conversations and detect disinformation campaigns. Its lab division partners with academic institutions to advance its social network analysis, machine learning, and artificial[...]