About
Hi! I’m Eddie, and I use he/they pronouns. I hold a Bachelor’s in Linguistics and a Master’s in Psychological Sciences, and I will soon be graduating with a PhD in Natural Language Processing (NLP) from the School of Informatics at the University of Edinburgh.
I developed an interest in algorithmic fairness during my three years as a digital marketing strategist. I was able to pursue this interest at postgraduate level thanks to the Centre for Doctoral Training in NLP at the University of Edinburgh, where I was supervised by Björn Ross, Vaishak Belle and Zachary Horne.
My work addresses the public response to biased NLP technologies: only by understanding how those impacted by these technologies behave can we develop suitable bias mitigation methods. I am also interested in developing less biased models and in the robustness of bias measurement methods. My thesis is a manifesto for a human-centric approach to studying bias, or rather, studying harms. On this blog you will find posts spanning my range of interests.
You can read selected works here:
🩵 Le$bean or lesbian? A survey of marginalised users’ motivations for obfuscation on TikTok.
🩷 Amplifying Trans and Nonbinary Voices: A Community-Centred Harm Taxonomy for LLMs.
🤍 “Till I can get my satisfaction”: Open Questions in the Public Desire to Punish AI.
🩷 The Only Way is Ethics: A Guide to Ethical Research with Large Language Models. A peer-reviewed companion guide to an ongoing Ethics Whitepaper, available here and welcoming input here.
🩵 Experiences of Censorship on TikTok Across Marginalised Identities, to appear at ICWSM ‘25.
❤️ Just Because We Camp, Doesn’t Mean We Should: The Ethics of Modelling Queer Voices. Read my TL;DR here.
🧡 Typology of Risks of Generative Text-to-Image Models.
💛 Stereotypes and Smut: The Misrepresentation of Non-cisgender Identities by Text-to-Image Models.
💚 This Prompt is Measuring MASK: Evaluating Bias Evaluation in Language Models. Read my TL;DR here.
💙 Potential Pitfalls With Automatic Sentiment Analysis: The Example of Queerphobic Bias. Read my TL;DR here.
💜 A Robust Bias Mitigation Procedure Based on the Stereotype Content Model. Read my TL;DR here.
You can also read about my work here:
5 things a trans scientist wants you to know about AI in the Washington Blade (available online at News is Out)
The lost data: how AI systems censor LGBTQ+ content in the name of safety in Nature Computer Science
A blog post on bias in sentiment analysis, which draws on my paper
An interview with Queer in AI about transphobia in text-to-image models
And listen to me talk here:
Panel discussion at the Scottish AI Summit on queer identity and technology
Presentation at the Controversies in the Data Society Seminar Series, where I argue that measuring bias in the abstract is pointless (also shared on the University of Edinburgh’s Open Educational Resources page)
Presentation at the Queer in AI workshop on my paper about transphobia in text-to-image (TTI) models
I have previously been invited to give talks at the Loch Lomond Future’s Group ‘AI and the National Park’ event on 22nd April 2025, and at the University of Edinburgh Sex and Gender research meeting on 21st May 2025.
I have also spoken at the British Council 90 Youth Voices Project, the Controversies in the Data Society Seminar Series, and the Social Data Science Hub Seminar Series.
If you find my work interesting, consider tipping me here: