
Policymaking for Online Safety
Online Safety Research by The Alan Turing Institute’s public policy programme
We are facilitating the translation of research into practice.
OBJECTIVES FOR POLICYMAKING FOR ONLINE SAFETY
Our research on policymaking for online harms aims to achieve three objectives.

ROUTES
Exploring routes available for ensuring online safety, including regulation and bottom-up initiatives.

OBSTACLES
Understanding the challenges and obstacles in policymaking and regulation for ensuring online safety.

SUPPORT
Supporting policymakers and regulators in their work to keep people safe online.
REPORTS & POLICY SUBMISSIONS
This report aims to contribute to our understanding of online hate in the context of the requirements of the revised Audiovisual Media Services Directive (AVMSD) for Video Sharing Platforms (VSPs) to protect the general public from incitement to hatred or violence. However, online hate is complex and can only be fully understood by considering issues beyond the specific focus of these regulations. Hence, we draw on recent social and computational research to consider a range of issues beyond VSP regulation, such as the nature, dynamics and impact of online hate. For similar reasons, we have considered expressions of hate across a range of online spaces, spanning VSPs and other online platforms. In particular, we have closely examined how online hate is currently addressed by industry, identifying key and emerging issues in content moderation practices. Our analyses will be relevant to a range of experts and stakeholders working to address online hate, including researchers, platforms, regulators and civil society organisations. Full report
Health-related misinformation risks exacerbating the COVID-19 public health crisis if it leads the public to refuse treatment when needed, to ignore official guidance such as social distancing and mask-wearing policies, or even to use harmful ‘miracle’ cures. If left unchecked, misinformation could seriously undermine the vaccine rollout and heighten people’s anxiety and mistrust during this time of national stress. Several large-scale research projects have started during the crisis with the aim of understanding the nature, prevalence and spread of health-related misinformation online. However, relatively little is known about who is vulnerable to believing false information and why. This is crucial for developing more targeted and effective interventions which tackle the root causes of misinformation rather than just its symptoms. To address this gap, researchers from The Alan Turing Institute’s public policy programme have conducted original research using a survey and assessments to understand (1) which individuals are most vulnerable to believing health-related falsehoods and (2) the role played by the content that individuals are exposed to. Full report
Online hate is a ‘wicked problem’ in the truest sense: it is difficult to define, knowledge is incomplete and contradictory, solutions are not straightforwardly ‘good’ or ‘bad’, and it is interconnected with many other problems in society. Good scientific research can help to address these wicked problems but, for too long, those on the frontlines in the fight against online hate (including civil society advocates, policy makers, regulators and politicians) have not fully benefited from academic research. This situation urgently needs to be rectified so that academic expertise is leveraged to better inform how online hate is tackled, its effects minimised and support provided to victims. Through interviews and discussions with a range of stakeholders, as well as events and workshops, literature surveys and new empirical research, researchers at The Alan Turing Institute’s Public Policy Programme have developed a six-point research agenda. This is intended as one step towards achieving the goal of policy-oriented and problem-driven academic research into online hate.
- Online hate has serious and long-lasting impact on victims, their communities and societies at large. More research is needed into its effects.
- Research into online hate often does not engage with the needs of society. It needs to be solution-driven and informed by the concerns and priorities of stakeholders.
- Research into online hate needs to be flexible and responsive, balancing long-term studies with insights that have immediate impact.
- Online hate will always be a contentious area of research – definitions should be stated clearly, and all assumptions made explicit.
- Data-intensive technologies are not a silver bullet. If they are to be used, they must be used responsibly.
- A positive vision of the Internet must be articulated and defended.
These agenda points lead us to three recommendations which we believe will foster the kind of high-impact, solution-oriented research that is needed to address the growing problem of hate speech.
Summary of the Public Policy Programme’s submission to the Online Harms White Paper consultation
The Public Policy Programme’s response addresses 8 of the 18 Consultation Questions as well as 7 additional issues. We are open to engaging further in the development of a regulatory framework for online harms and welcome any questions regarding our response.
Overall, the White Paper marks an important step forward in achieving better regulation of the Internet and shows the UK’s commitment to being at the forefront of responsible Internet governance. The broad message is commendable: “We cannot allow these harmful behaviours and content to undermine the significant benefits that the digital revolution can offer […] If we surrender our online spaces to those who spread hate, abuse, fear and vitriolic content, then we will all lose.” (p.3) However, several issues are left unresolved, of which two are particularly important:
- The White Paper advocates creating a new independent regulator. However, existing regulators have already accumulated much of the expertise in dealing with data-intensive digital platforms that regulating online harms will require. We recommend that a new unit with a specific remit for online harms be established within one of the existing regulators, such as Ofcom or the ICO.
- The discussion of ‘harms’ in the White Paper requires additional nuance and clarity. It should include a high-level explanation of what constitutes a harm, how differing harms will be prioritised, and how their impact will be assessed. This will help the regulatory unit to act in a targeted and proportionate manner and provide more certainty to stakeholders.
Our response also includes discussions of key social issues raised in the White Paper, such as protecting freedom of expression, safeguarding worker welfare, determining what is ‘true’ online, and the need for a joined-up approach which considers how harmful content moves between platforms.
Online abuse, which includes both interpersonal attacks, such as harassment and bullying, and verbal attacks against groups (usually called ‘hate speech’), is receiving increasing attention in the UK (HM Government 2019; SELMA 2019; The Law Commission 2018). It poses myriad problems: it inflicts harm on the victims who are targeted, creates a sense of fear and exclusion amongst their communities, erodes trust in the host platforms, toxifies public discourse and motivates other forms of extremist and hateful behaviour through a cycle of ‘cumulative extremism’ (Eatwell 2006).

Understanding the prevalence of online abuse is crucial for addressing more complex and nuanced questions, such as its causes, when and where it manifests, its impact on society and how it can be challenged. The Home Secretary and Communities Secretary captured this point: ‘Hate crime is a complex issue […]. In order to tackle it, we need to understand the scale and nature of the problem, as well as the evidence about what works in tackling it.’ (Home Office 2016). At a time when the UK Government is considering greater regulation of online harms, building an appropriate evidence base is key. However, to date relatively little attention has been paid to the fundamental question: how much online abuse is there?

Part of the challenge is that the data, tools, processes and systems needed to monitor online abuse effectively and accurately are not yet fully available, and the field is beset with terminological, methodological, legal and theoretical challenges (Brown 2018; Davidson et al. 2019; Vidgen et al. 2019). And, despite the hype around computational tools for the automated monitoring of online behaviour, algorithms alone will not resolve the challenge of how best to detect and measure online abuse (Ofcom 2019c). As Facebook CEO Mark Zuckerberg said during the 2018 US Senate hearings on disinformation, ‘Hate speech – I am optimistic that, over a 5 to 10-year period, we will have AI tools that can get into some of the nuances […] But, today, we’re just not there.’ (The Washington Post 2018)

In this policy briefing paper from The Alan Turing Institute’s Hate Speech: Measures and counter-measures project, we estimate the prevalence of online abuse within the UK by reviewing evidence from five sources: (i) UK Government figures, (ii) reports from civil society groups, (iii) transparency reports from platforms, (iv) measurement studies, primarily from academics and think tanks, and (v) survey data. We also present previously unpublished results from the Oxford Internet Survey (OxIS) 2019. In some cases UK-specific evidence cannot be obtained, and evidence from other countries or from global reports is used instead; this is flagged where relevant. Full report
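The briefing’s caution about automated monitoring is easy to illustrate. The sketch below is a minimal, hypothetical example, not drawn from the project’s actual tooling: the term list, function name and sample posts are all placeholders. It shows how a naive keyword matcher both misses obfuscated abuse and wrongly flags counter-speech that quotes abusive terms, the two failure modes that motivate nuance-aware models and human review.

```python
import re

# Hypothetical placeholder lexicon; real systems use curated term lists and trained models.
ABUSIVE_TERMS = {"idiot", "vermin"}

def naive_flag(text: str) -> bool:
    """Flag a post if any lexicon term appears as a whole word."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return any(token in ABUSIVE_TERMS for token in tokens)

posts = [
    "You are an idiot and should leave",                # abuse: flagged correctly
    "You are an id.iot and should leave",               # obfuscated abuse: missed (false negative)
    "Calling newcomers 'vermin' is hateful and wrong",  # counter-speech: flagged (false positive)
]

for post in posts:
    print(f"{naive_flag(post)!s:5} | {post}")
```

Even this toy example surfaces both obfuscation (the matcher tokenises ‘id.iot’ into harmless fragments) and missing context (it cannot tell condemnation from abuse), which is consistent with the briefing’s point that algorithms alone will not resolve the measurement challenge.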