Delphi findings

Policing Abusive Social Media: Enforcement and Non-Enforcement Responses to Legal, Illegal and ‘Grey’ Communications

The third work package of the Digital Wildfires project recruited key informants to examine the challenges of reducing the harms associated with abusive social media communications, particularly amongst adolescents, whilst also seeking to protect the positive freedoms of speech enabled by this new technology. Given this tension, key informants from the police, criminal legal practice, education and social media platforms were asked to participate in a ‘policy Delphi’ to discuss the political and technical feasibility of policing abusive social media communications. The policy Delphi is a deliberative method in social science that seeks to identify key points of agreement and disagreement about a problem through iterative rounds of debate and dialogue. It is an especially useful method for investigating problems that are surrounded by great uncertainty and are evolving rapidly, as when innovations in digital technologies such as social media ‘disrupt’ established ways of thinking, such as the enforcement of criminal law as a remedy for abusive behaviour.

· The first round of the Digital Wildfires policy Delphi asked respondents open-ended questions about the main characteristics of this problem, what challenges, if any, it raises for policing and public protection, whether abusive communications ought to be policed at all and, if so, how.

· The second round asked respondents to rate their agreement or disagreement with the technical and political feasibility of the various policing strategies that had been identified in the first round.

· The third and final round asked respondents to forecast which scenarios for policing abusive social media communications they thought most likely, given the views expressed in the second round about the technical and political feasibility of different policing strategies.

The key points of agreement were that:

· Criminal prosecution is highly unlikely to be the principal objective of policing abusive social media communications given its expense and limited capacity to respond to the volume and speed of these communications;

· Cultivating self-regulation through educational programmes, especially for school pupils and their families, is likely to be the principal objective of public protection against abusive social media communications given that users and their immediate social circles are in the best position to respond to the volume and velocity of these communications;

· Accommodating abusive communications is likely to be the de facto outcome of policing strategies given the highly variable capacity for self-regulation particularly amongst adolescents who are most vulnerable to these communications.

The key points of disagreement concerned whether:

· Disrupting abusive communications through algorithms that automatically censor them is both politically and technically feasible;

· Disrupting abusive communications through human censors, employed by social media platforms to ‘edit’ these communications, is both technically and politically feasible;

· Vulnerability to abusive communications can be reduced by altering the ‘choice architecture’ of the technologies used to access the internet (limiting a device’s access to the internet entirely, or to certain sites, during particular periods of the day or night) and/or by altering the settings of online services (limiting communications to private networks of known ‘friends’).