
The Online Harms Observatory
Online Safety Research by The Alan Turing Institute’s public policy programme
We are bridging the gap between methodological robustness and practical relevance.
OBJECTIVES FOR THE ONLINE HARMS OBSERVATORY
The Online Harms Observatory pursues four objectives in support of online safety.

SCOPE
Understanding the scope and prevalence of content and activity that could inflict harm on high-priority groups.

IMPACT
Understanding how harmful online content and activity impact victims, their communities, and society.

MOTIVATION
Understanding the motivations of people who create and share content that could inflict harm on others online.

INTERVENTION
Creating, implementing and evaluating interventions for tackling content that could inflict harm on people online.
RECENT UPDATES
The first tracker in the Observatory focuses on the abuse faced by football players in the men’s Premier League. Following this first set of dashboards, Ofcom commissioned an in-depth report. Key findings from the 2021/2022 season include:
- A small proportion of players received the majority of abuse: 12 players accounted for 50% of all Abusive tweets, with Cristiano Ronaldo and Harry Maguire receiving the largest numbers.
- The majority of players received abuse at least once. 68% of players received at least one Abusive tweet during the period (418/618). One in 14 (7%) received abuse every day.
Areas being explored for our next trackers include abuse directed at MPs, female journalists and women playing in the Premier League.
WHAT IS THE ONLINE HARMS OBSERVATORY?
The Online Harms Observatory is a new platform which will provide real-time insight into the scope, prevalence and dynamics of harmful online content. It will be powered by a mix of large-scale data analysis, cutting-edge AI and survey data.
This exciting new resource will leverage our innovative research to help policymakers, regulators, security services and other stakeholders better understand the landscape of online harms. It will focus on online hate, personal abuse, extremism and misinformation.

WHY DO WE NEED AN ONLINE HARMS OBSERVATORY?
Online spaces are increasingly vulnerable to hazardous online activity. Misinformation risks exacerbating the effects of COVID-19 through anti-vaxx movements; marginalised, subordinated and otherwise vulnerable groups are increasingly being harassed; and conspiracy theorists are being given oxygen to spread divisive and harmful messages.
To effectively challenge and counter the harmful impact of toxic content, whilst still ensuring fundamental rights like freedom of expression remain protected, we need to understand it. This is not an easy task. In one of our previous reports we described online hate as a “wicked problem”, and the same can be said of nearly all online harms.
The Observatory will house the methodologies, data, analytical tools and social insights needed to create a step change in intelligence gathering for harmful online content. This is increasingly important as regulatory, legal and civic pressure to tackle online hazards accelerates.
METHODOLOGY
The Online Harms Observatory is powered by our custom-built Artificial Intelligence (AI) trained to automatically detect online abuse. Our Methodology Note explains how we use AI to detect abusive language and details the models that underpin the Online Harms Observatory, along with their known weaknesses.
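The Observatory's actual models are documented in its Methodology Note; as a rough illustration of the general approach, abusive-language detection can be framed as supervised text classification. The sketch below is a minimal, hypothetical stand-in (invented toy data, a simple scikit-learn pipeline), not the Observatory's implementation.

```python
# Illustrative only: a minimal abuse-detection classifier sketch.
# The labelled examples and pipeline are hypothetical stand-ins, not
# the Observatory's actual models or training data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labelled examples (1 = abusive, 0 = not abusive), invented for illustration.
texts = [
    "you are a disgrace, get out of the club",
    "what a terrible useless player",
    "great goal today, well played",
    "congratulations on the win",
]
labels = [1, 1, 0, 0]

# Character n-grams tend to cope better with misspellings and obfuscated
# insults than word tokens alone, a common choice in abuse detection.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(),
)
model.fit(texts, labels)

# Predict on a new, unseen message (output is a 0/1 label).
print(model.predict(["what a fantastic performance"]))
```

Real systems face much harder conditions than this sketch suggests: class imbalance, adversarial spelling, sarcasm, and context-dependence, which is why the Methodology Note also documents the models' known weaknesses.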
DASHBOARD DEMO
Could the Online Harms Observatory benefit your work? Email us at onlinesafety@turing.ac.uk for more information.
