Language is a living thing. Though we might not think about it very much in our daily lives, the languages we speak change with us every single day. At its core, language is just a tool to facilitate communication, and as human life changes—from early agricultural societies to cities with complex economies to today's digital age of rapid technological advancement—so too must our language.
If you want proof, look at some text written in Old English. Technically, it’s still English, but the language has changed so profoundly over hundreds of years that Old English is impossible to read without training or translation into modern English.
Even on a smaller scale, however, English is still evolving and shifting to meet our needs and help us communicate with each other more effectively. Words are constantly being invented or repurposed to better serve our increasingly technological society. If you’ve been on social media at all, you’ve probably encountered a major example of current linguistic change without even realizing it—it’s called algospeak.
What Is Algospeak?
Algospeak is a word so new that as I type this in Microsoft Word, my computer is underlining every instance in red, thinking I must have made some error. It comes from the word “algorithm,” you know, that nebulous thing to which all content creators on social media platforms cater. The algorithm decides which posts get promoted and shown to users and which get buried, all according to a complicated set of rules and criteria. Part of those criteria involves screening posts that contain sensitive content or language, and to do that, social media platforms use AI to flag and censor certain words.
Algospeak refers to the adjustments in language that content creators use to circumvent these attempts at censorship. For example, words like “dead” or “killed” are often flagged by social media AI because of their association with violence. To get around this, content creators have started using words like “unalive” or “unalived” to convey the same meaning without having their content censored. After the overturning of Roe v. Wade, Internet users began to refer to abortions as “camping” to avoid censorship or legal repercussions.
The onset of the pandemic also gave rise to new terms in algospeak, with creators on TikTok and similar platforms calling it the “Panini” or “Panda Express” to avoid being targeted by AI that downranked videos mentioning the pandemic in an effort to curb the spread of misinformation.
There are no real rules with algospeak; the replacement words sometimes have phonetic or thematic similarities to the original, as with “panini” and “pandemic,” but others, such as sex workers referring to themselves as “accountants,” have less obvious connections. In a way, it harkens back to Thieves’ Cant or Cockney rhyming slang: technically still English, but with a modified vocabulary that helps disguise one’s true meaning from uninitiated or unfriendly ears.
So…What’s the Issue?
Aside from being a fascinating case study for language nerds like myself, the existence of algospeak highlights issues related to censorship and demonstrates how we’re all still trying to figure out how to balance freedom and safety online as the internet becomes more and more integral to our daily lives.
On one hand, some of the algorithmic AI censorship on social media can keep vulnerable or impressionable internet users, like children, from being unnecessarily exposed to potentially disturbing content. On the other hand, such censorship can make it more difficult to have important conversations about sensitive topics because of outright bans on certain words.
The fact that I’m writing this article in the first place means that the people in charge of social media platforms are probably already working on a response to algospeak, and maybe words like “unalive” will be added to the list of “bad words,” only for new iterations and workarounds to spring up. One thing is clear: there won’t be an easy solution to such a nuanced problem.
It’s an incredibly complex issue, and I won’t pretend to have all the answers, but it’s a fascinating look into how we as a society are still figuring out what life on the digital frontier looks like, and where we will place certain boundaries for ourselves. Only time will tell what kinds of changes this will bring to our languages. And who knows? Maybe someday Trusted Translations will have to start offering translation services into and out of algospeak!
Photo by Дмитрий Хрусталев-Григорьев on Unsplash