Create long-lasting protections for people or intervene to protect them from digital threats

Reduce the power of private platforms and combat incivility and misinformation online by:

Improving Content Moderation. Calls for new regulatory policies around content moderation at large intermediaries acknowledge that moderation remains an opaque and difficult practice and is not, on its own, a fix-all solution. Current policies at the largest intermediaries attempt to balance stakeholder expectations (including those of users, consumers, advertisers, shareholders, and the general public), commercial business goals, and jurisdictional norms and legal demands (generally governed by liberal-democratic (US) notions of “free speech”); goals related to inclusive and participatory democracy are not included.

The most common ‘workable solution’ proposed for content moderation is a process that combines technical and social (human) responses. Advances in semi- or fully automated systems, including deep learning, show increasing promise in identifying inappropriate content and drastically reducing the number of messages human moderators then need to review. Researchers note, however, that neither automated nor manual classification systems can ever be “neutral” or free from human bias, and that human and/or automated moderation alone is unlikely to achieve “civil discourse,” a “sanitised” internet, or other speech and engagement goals. Even so, the combination of automated classification and deletion systems with human effort remains the most effective content moderation strategy currently on offer. In the few places where they exist, government regulations on private intermediaries’ moderation practices have not been empirically tested for their effectiveness.
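To make the hybrid arrangement concrete, the sketch below shows one common way such pipelines are structured: an automated classifier scores each comment, high-confidence cases are handled automatically, and only ambiguous cases are routed to a human review queue. This is a minimal illustration under assumed names and thresholds (the `moderate` function, the toy classifier, and the cut-off values are hypothetical), not a description of any platform’s actual system.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Comment:
    comment_id: str
    text: str

@dataclass
class Decision:
    comment_id: str
    action: str   # "publish", "remove", or "human_review"
    score: float  # model-estimated probability that the comment violates policy

def moderate(
    comments: List[Comment],
    classifier: Callable[[str], float],   # returns P(violation) in [0, 1]
    remove_threshold: float = 0.95,       # auto-remove only high-confidence violations
    publish_threshold: float = 0.10,      # auto-publish only clearly benign content
) -> List[Decision]:
    """Route each comment: confident cases are handled automatically,
    ambiguous cases are escalated to human moderators."""
    decisions = []
    for c in comments:
        score = classifier(c.text)
        if score >= remove_threshold:
            action = "remove"
        elif score <= publish_threshold:
            action = "publish"
        else:
            action = "human_review"
        decisions.append(Decision(c.comment_id, action, score))
    return decisions

if __name__ == "__main__":
    # Toy keyword-based stand-in for a trained deep-learning classifier.
    def toy_classifier(text: str) -> float:
        return 0.99 if "slur" in text.lower() else 0.05

    sample = [Comment("1", "Nice article!"), Comment("2", "this contains a slur")]
    for decision in moderate(sample, toy_classifier):
        print(decision)
```

The two thresholds determine how much content ever reaches human reviewers; this is the mechanism by which automated screening reduces the moderation workload, while the middle band preserves human judgement for contested cases.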

Combat fake news by:

Adopting multi-stakeholder content moderation. This approach combines human and technical intervention; however, it remains a proposed but untested solution.

Reduce hate speech/trolling by:

Using identity verification systems. Sites that do not allow anonymisation and require pre-registration have been shown to solicit qualitatively better, but quantitatively fewer, user comments because of the extra effort required to engage in discussion. Empirical research has also found that abusive comments are minimised when anonymous commenting is prohibited.
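As a simple illustration of the pre-registration gate described above, the sketch below rejects submissions from anonymous or unverified accounts before a comment is stored. The `User` fields, the verification flag, and the `CommentGate` class are assumptions made for illustration; real identity verification (email, phone, or document checks) would sit behind the `identity_verified` flag.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional, Tuple

@dataclass
class User:
    user_id: str
    display_name: Optional[str]   # None indicates an anonymous visitor
    identity_verified: bool       # e.g. confirmed via email, phone, or ID check

class CommentGate:
    """Accepts comments only from registered, identity-verified accounts."""

    def __init__(self) -> None:
        self._comments: Dict[str, List[Tuple[str, str]]] = {}

    def submit(self, user: User, article_id: str, text: str) -> bool:
        # Anonymous or unverified users cannot comment at all.
        if user.display_name is None or not user.identity_verified:
            return False
        self._comments.setdefault(article_id, []).append((user.user_id, text))
        return True

if __name__ == "__main__":
    gate = CommentGate()
    print(gate.submit(User("u1", None, False), "a1", "anonymous drive-by"))   # False
    print(gate.submit(User("u2", "Jane Doe", True), "a1", "signed comment"))  # True
```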