Sexual violence and algospeak, or my case against grape

tw: sexual violence

I recently published a paper examining people’s motivations for using obfuscations like “yt” for “white”, “nip nops” for “nipples”, and “grape” for “rape” on TikTok, which you can read here. In the paper we build on some fantastic linguistics research into this phenomenon1 to survey and statistically model people’s motivations. To summarise, this behaviour seems to be driven mostly by the type of content people are posting: it is common for content that users believe the platform moderates. For certain demographics, however, particularly Black users, it also represents an opportunity to be creative with language.

I have no wish to tell people to stop being creative with language. However, my personal opinion is that using obfuscation when discussing sexual violence is very harmful, and I am concerned for a number of reasons. First, obfuscations make this content harder both to find, for those seeking solidarity, and to hide, for those looking to avoid triggers. Second, the self-censorship reinforces the taboo around discussing these issues. Finally, to me this language play risks trivialising a serious issue. I personally find it incredibly disconcerting to hear journalists, influencers or other people discuss sexual violence using terms like “grape” or “mascara”.

I would much rather accurate language were used, allowing me to choose what to engage with. It is plausible that social media platforms do algorithmically suppress content on these topics that uses these words, and I acknowledge that this may hide content intended to educate, empower and support. However, I do not think we should “give in” to this by self-censoring. Instead, I would like to see robust discussion between social media platform providers and sexual abuse advocacy groups.

  1. https://scholarworks.iu.edu/journals/index.php/li/article/view/37371 
