
People
Researchers on the Turing’s Online Safety Team
Meet our team of researchers
Our team sits within the public policy programme and is headed by Dr Bertie Vidgen and Professor Helen Margetts.
Organisers

Professor Helen Margetts – Director of the Public Policy Programme
Helen Margetts is a Turing Fellow and Director of the public policy programme at The Alan Turing Institute, Professor of Society and the Internet at the University of Oxford, and a Professorial Fellow of Mansfield College. From 2011 to 2018 she was Director of the Oxford Internet Institute, a multidisciplinary department of the University of Oxford dedicated to understanding the relationship between the Internet and society. Before that, she was UCL’s first Professor of Political Science and Director of the School of Public Policy (1999–2004).

Dr Jonathan Bright – Head of Online Safety
Dr Jonathan Bright is joint head of Online Safety and AI for Public Services at The Alan Turing Institute. Before joining the Turing, he was an Associate Professor at the Oxford Internet Institute.
Researchers (A-Z)

Angus Redlarski Williams – Senior Data Scientist
Angus is a Senior Data Scientist in the Online Safety Team at The Alan Turing Institute, working on developing the Online Harms Observatory and contributing to research on Data-Centric AI for Online Safety. He completed his Master’s degree in Computer Science at the University of Bristol.

Dr Florence Enock – Research Associate
Florence is a Research Associate in the Online Safety Team at The Alan Turing Institute. Her research background is in social cognition, with a particular focus on intergroup biases. Before joining the Turing, Florence was a postdoctoral researcher at the University of York, where she studied the role of dehumanisation in social attitudes and behaviours. Florence completed her DPhil in Experimental Psychology at the University of Oxford, which examined attentional prioritisation of information relating to the self and social ingroups.

Francesca Stevens – Research Assistant
Francesca is a Research Assistant in the Online Safety Team at The Alan Turing Institute. Francesca holds a Master’s degree in Criminology and Criminal Justice from City, University of London, where she received a Distinction, and is currently a PhD candidate at City, where she is exploring online harms in the workplace. Previously she spent two years as a Research Assistant at UCL, where she examined technology-facilitated abuse within domestic abuse and intimate partner relationships.

Hannah Kirk – Research Assistant
Hannah is a Research Assistant in the Online Safety Team at The Alan Turing Institute and a doctoral candidate in Social Data Science at the Oxford Internet Institute, University of Oxford. Her research focuses on data-centric techniques, such as active and adversarial learning, to optimise the development of AI models for detecting online harms while maintaining a duty of care to users and moderators. She completed Master’s degrees at Peking University and the University of Oxford, and holds a Bachelor’s degree in Economics from the University of Cambridge.

Liam Burke – Data Scientist
Liam is a Data Scientist in the Online Safety Team at The Alan Turing Institute, where he works on building the Online Harms Observatory. Before joining the Turing, Liam worked at a startup developing privacy-enhancing technologies for unstructured data. He completed his Bachelor’s degree in Computer Science at the University of Warwick.

Pica Johansson – Research Assistant
Pica works on policymaking for online harms and is helping to build the Online Harms Observatory. Pica holds a Master’s degree in Media, Data and Society from the London School of Economics and Political Science, where she graduated with an excellence award for achieving the highest mark in her programme cohort. Her research at the LSE focused on white supremacist discourse, with a particular interest in how its language changes over time. Prior to joining the Turing, Pica worked as a communications analyst at the UN Refugee Agency.

Dr Scott Hale – Research Fellow
Dr Scott A. Hale is a Turing Fellow whose cross-disciplinary research develops and applies new techniques from the computational sciences to social science questions, and puts the results into practice with industry and policy partners. He is particularly interested in multilingual natural language processing, computational sociolinguistics, mobilisation and collective action, agenda setting, and misinformation, and he has a strong track record of building tools and teaching programmes that widen access to new methods and forms of data.

Dr Yi-Ling Chung – Research Associate
Yi-Ling is a Research Associate at The Alan Turing Institute and a PhD candidate in Information and Communication Technologies at the University of Trento and Fondazione Bruno Kessler. Her doctoral research focused on hate speech mitigation through data curation and knowledge-grounded counter-narrative generation in multilingual settings, drawing on external knowledge resources and information retrieval. She is interested in counter-argument generation, computational social science, and applying natural language processing for social good.