
DIGITAL WILDFIRE: (MIS)INFORMATION FLOWS, PROPAGATION AND RESPONSIBLE GOVERNANCE

DELPHI PANEL ROUND 2 REPORT

 

INTRODUCTION

Thank you once again for completing Round 2 of the Digital Wildfire Delphi. This questionnaire was sent to everyone who completed Round 1 of the Delphi. It was designed to provide panellists with the opportunity to record the strength of their agreement or disagreement with views on the usefulness of ‘digital wildfires’ as a concept and with views on who, if anyone, ought to be responsible for governing social media communications. It also gave respondents an opportunity to rate the technical and political feasibility of alternative methods for governing these communications. This report briefly summarises the responses we received. As with Round 1, we received a range of diverse, and often strongly expressed, opinions, but certain patterns and preferences also emerged.

SUMMARY FEEDBACK ON RESPONSES TO EACH QUESTION IN ROUND 2

Q2.1. The usefulness of the concept of ‘digital wildfire’

Opinion was divided over the usefulness of the concept of digital wildfires. In the free text responses, those in favour of the use of the term described how ‘digital wildfire’ can engage and enlighten various audiences. Those against highlighted its ambiguity: it can relate to different things and be interpreted differently by different audiences, which limits its usefulness, for instance in policy debates. A slight majority was in favour of rejecting the concept. There was also a stronger consensus in favour of replacing the concept with a focus on specific offences (defamation, libel, incitement, obscenity, etc.), but this was balanced against opinion that cautioned against reducing the problem of harmful social media communications to issues of criminal law enforcement. Free text comments pointed out that ‘harm’ is also an ambiguous term and that not all forms of harm are (or should be) illegal.

For example:

All terms are ambiguous. The relevant question is whether a term enlightens more than confuses or vice versa. In my judgment, the notion that moral panics and twitter mobs might be compared to a wildfire is an enlightening metaphor. Care should be taken not to think it is much more than a helpful metaphor--at least not without much more argument/analysis

The term 'digital wildfire' is extremely unhelpful for the purposes of regulation as it does not say anything about the content itself. Moreover, content may be harmful yet perfectly legal

Existing offences may not capture the range of offences that could arise - for example, stalking has only recently been officially recognised as an offence, similarly domestic violence (which admittedly is not a 'social media' offence - but could possibly become one ...)

"Harmful" is hopelessly ambiguous--far more worthless than the metaphor of a digital wildfire. Not all "harm" is actionable. Communities should be free to establish their own norms, and in these contexts, "harm" will have to be nailed down more precisely so everyone knows where they stand

Q2.2. The responsibility for regulating social media communications

Responses to this section showed greater clarity in the distribution of opinion. A strong consensus emerged against limiting responsibility for regulation to law enforcement, with free text comments arguing that 1) law enforcement should not be the only form of regulation relating to social media/digital wildfires and 2) alternative forms of regulation already exist. A strong consensus also emerged in favour of social media platforms having a responsibility for regulation. An even stronger consensus emerged over the primary responsibility of users themselves for reducing harmful communications. Opinion appeared split over the responsibility of institutions (especially schools and workplaces). Views opposing a statutory obligation on institutions to promote the responsible use of social media held that such a provision would be unfavourable/unethical and impractical; views in favour emphasised the value such a provision could have.

For example:

I would argue against most "civil and criminal laws" with respect to speech, but one may "regulate" communications at her own party by asking rude guests to leave, etc.

There is a role for social media providers to regulate the use of their sites, if it is done openly and transparently

Self-regulation for the users, and co-regulation for the Social Media Platforms. That is the best way forward

Some self-regulation is required as making only those with law enforcement responsibilities the only actors is not scalable to the problem.

How can one assess whether families have met their 'statutory' obligations? What does the 'responsible use' of social media look like? Is organising a protest group (regardless of the subject of the protest) irresponsible use? It is easier to measure whether schools etc. have promoted responsible use (but 'responsible' can be very subjective, and often political).

[Institutional responsibility] is essential. With the world now fully engaged on social media it has become an important part of life that must be part of education

Q2.3. The technical and political feasibility of regulating social media communications

We conducted statistical analyses of the responses to this section, in which the 17 panellists who responded to the Round 2 questionnaire rated potential methods of social media regulation in terms of how technically and politically feasible they viewed them to be (see Figures 2.3.1 and 2.3.2). Only two of the methods listed were regarded by a majority of the panel as both technically and politically feasible:

· triggering the self-regulation of users through educational programmes and public campaigns; and

· use of report buttons to trigger censorship of harmful communications by platforms.

Overall, the weight of opinion across the panel was to regard the governance and regulation of social media through criminal prosecution, cautioning and administrative penalties, or the disruption of social media communications as both technically and politically unfeasible.

KEY: Technical/Political feasibility ratings: 1 = definitely feasible; 2 = probably feasible; 3 = may or may not be feasible; 4 = probably unfeasible; 5 = definitely unfeasible. For the full wording of the questions, refer to the Round 2 questionnaire sent to panellists.

These panel-wide views were reflected in the free text responses in which some panellists explained the reasoning behind their ratings of the technical and political feasibility of regulating social media communications. Respondents frequently highlighted the difficulties of regulating social media content due to:

1) the international status of internet communications;

2) the often anonymous nature of these communications; and

3) the large volume of posts made.

Concerns were also expressed about the need to limit state control and to avoid curtailing freedom of speech. Respondents often pointed out that, even if a measure is technically or politically feasible, it is not necessarily the ‘right’ thing to do and might not be effective in practice. Particular obstacles cited as limiting the use of some of the measures listed in this section were:

1) the ambiguous/subjective nature of harm as a concept;

2) lack of an established understanding of what is meant by duty of care; and

3) the absence of ‘neutral’ judges who might adjudicate instances of harm.

For example:

It would not be possible to expect [social media platform] providers to stop wildfires as they developed given that they would have no means of taking a view on the truth or otherwise of a rumour

…strong arm or overly excessive attempts to legislate [social media communications] could impact innovation of social platforms, and public backlash on over intrusive government

Licensing social media platforms is highly likely to lead to strong opposition from free speech groups and social media platforms.

I'm not convinced efforts to pressure platforms to conform to any one conception of "care to the users" is ever appropriate.