The Online Harms Observatory
Online Safety Research by The Alan Turing Institute’s public policy programme
We are bridging the gap between methodological robustness and practical relevance.
OBJECTIVES FOR THE ONLINE HARMS OBSERVATORY
The Online Harms Observatory pursues four objectives in support of online safety.
Understanding the scope and prevalence of content and activity that could inflict harm on high-priority groups.
Understanding how harmful online content and activity impact victims, their communities and society.
Understanding the motivations of people who create and share content that could inflict harm on others online.
Creating, implementing and evaluating interventions for tackling content that could inflict harm on people online.
WHAT IS THE ONLINE HARMS OBSERVATORY?
The Online Harms Observatory is a new platform which will provide real-time insight into the scope, prevalence and dynamics of harmful online content. It will be powered by a mix of large-scale data analysis, cutting-edge AI and survey data.
This exciting new resource will leverage our innovative research to help policymakers, regulators, security services and other stakeholders better understand the landscape of online harms. It will focus on online hate, personal abuse, extremism and misinformation.
WHY DO WE NEED AN ONLINE HARMS OBSERVATORY?
Online spaces are increasingly vulnerable to hazardous activity. Misinformation risks exacerbating the effects of COVID-19 through anti-vaxx movements; marginalised, subordinated and otherwise vulnerable groups are increasingly being harassed; and conspiracy theorists are being given oxygen to spread divisive and harmful messages.
To effectively challenge and counter the harmful impact of toxic content, whilst still ensuring fundamental rights like freedom of expression remain protected, we need to understand that content. This is not an easy task. In one of our previous reports we described online hate as a “wicked problem”, and the same can be said of nearly all online harms.
The Observatory will house the methodologies, data, analytical tools and social insights needed to create a step change in intelligence gathering for harmful online content. This is increasingly important as regulatory, legal and civic pressure to tackle online hazards accelerates.
The Online Harms Observatory is powered by our custom-built Artificial Intelligence (AI), trained to automatically detect online abuse. Our Methodology Note explains how we use AI to detect abusive language and details the models that underpin the Online Harms Observatory, along with their known weaknesses.
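To give a flavour of what supervised abusive-language detection involves, the toy sketch below trains a bag-of-words Naive Bayes classifier on a handful of invented example sentences. This is purely illustrative: it is not the Observatory's actual model, and the training data, labels and word lists are all made up for the example. The real systems, as described in the Methodology Note, are considerably more sophisticated.

```python
# Illustrative toy example of supervised abusive-language classification.
# NOT the Observatory's model: data and labels below are invented.
from collections import Counter
import math

# Tiny hand-labelled corpus: 0 = non-abusive, 1 = abusive.
TRAIN = [
    ("you are wonderful and kind", 0),
    ("what a helpful thoughtful reply", 0),
    ("thanks for sharing this insight", 0),
    ("you are a worthless idiot", 1),
    ("shut up you pathetic fool", 1),
    ("nobody wants you here idiot", 1),
]

def train(examples):
    """Count word frequencies per class, plus class priors and a shared vocabulary."""
    counts = {0: Counter(), 1: Counter()}
    priors = Counter()
    for text, label in examples:
        priors[label] += 1
        counts[label].update(text.split())
    vocab = {word for counter in counts.values() for word in counter}
    return counts, priors, vocab

def predict(text, counts, priors, vocab):
    """Return the more probable class (1 = abusive) under Naive Bayes
    with add-one smoothing."""
    total_docs = sum(priors.values())
    scores = {}
    for label in (0, 1):
        score = math.log(priors[label] / total_docs)
        denom = sum(counts[label].values()) + len(vocab)
        for word in text.split():
            score += math.log((counts[label][word] + 1) / denom)
        scores[label] = score
    return max(scores, key=scores.get)

counts, priors, vocab = train(TRAIN)
print(predict("you pathetic idiot", counts, priors, vocab))          # 1
print(predict("a thoughtful helpful reply", counts, priors, vocab))  # 0
```

Even this crude sketch shows why detection is hard: the classifier only recognises words seen in training, and real abusive language is far more varied and context-dependent, which is why the Methodology Note also documents the models' known weaknesses.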
Could the Online Harms Observatory benefit your work? Write to us at firstname.lastname@example.org for more information.