
Is Social Media Censorship Unaliving Language?
Season 5 Episode 7 | 7m 21s | Video has Closed Captions
Is TikTok ruining language?
TikTok users are altering their speech to evade algorithmic filters--but will this harm language in the long run?
- Panini, palm-colored, pew pews, do you ever wish we could just unalive all this algospeak and say what we really mean?
Since TikTok hit the US in 2018, linguists and casual video fans alike have noticed that the app has had an unsettling effect on language.
Video creators who cover serious topics like death, politics, and race started talking about them using words that sounded comparatively silly.
- Global Panera Bread or Panda Express.
- Why?
Well, the app's community guidelines state that any content it finds unsuitable for young users, like videos that contain sexual themes, violence, misinformation, or even express sadness, is ineligible for promotion via its recommendation algorithms.
So in order to keep their videos reaching new viewers, creators started subbing out words that might get flagged by these filters.
The COVID-19 pandemic became the panini.
Guns became pew pews.
Stories about mental health hospitalizations became grippy sock vacations in reference to the common hospital garb.
According to creators, this newer, cutesier terminology sounds closer on the surface to TikTok's mission to inspire creativity and bring joy.
But some worry that all this language being tailored to algorithms may make it harder for us to get our meaning across to actual humans.
Is all this self-censorship cause for panic?
I'm Dr. Erica Brozovsky, and this is "Otherwords."
(upbeat music) - [Announcer] Otherwords.
- Even though algospeak, the lingo that social media creators use to evade automated moderation, feels like a recent invention, self-censorship has had a place in language for bleeping ever.
Like we covered in our episodes on swearing and death, every language has taboo words and phrases.
Curse words and words that refer to death, sex, and bodily functions are often considered impolite to say directly.
So we use euphemisms or more indirect words and phrases in their place.
Whenever you ask for the restroom instead of the toilet, or say "Grandma's no longer with us" instead of "Grandma's dead," you're using linguistic self-censorship.
The social purpose is politeness.
Euphemisms save us from the discomfort of talking about difficult topics head-on, or at least from thinking about what someone's actually doing when they visit the water closet or the powder room. But not all linguistic self-censorship is about being very demure and very mindful.
In the context of media, it can instead help us get away with being a little bit devious.
Way before social platforms came up with community guidelines, the US created a precursor to content moderation algorithms with the Federal Radio Commission in 1927.
By then, radio broadcasters had started quickly playing music over live radio actors whenever they went a little off script. But the Radio Commission, and later the FCC, had the power to fine TV and radio broadcasters for allowing obscene or profane speech onto their airwaves, making it all the more important to avoid taboos.
So those broadcasters got creative.
Live radio and later TV broadcasts started using that familiar (beeping) censor to cover up unintended profanity.
And soon artists figured out that instead of rewriting their songs or scripts to be more tame, they could just include the bleep on purpose for comedic effect or to imply taboo language without racking up obscenity fines.
- So you can say P, you can't... You can't say (beep) though, I don't think.
(audience laughing) - Similarly, musicians have embraced record scratches and sampled sound effects to cover taboo lyrics and keep getting radio play.
♪ Straight outta Compton, ♪ ♪ Crazy (record scratching) named Ice Cube ♪
- Comic strips adopted a similar tactic with grawlixes, those long strings of characters like asterisks and pound signs that suggest the character is swearing, and that's a tradition that continues with algospeak.
Creators aren't necessarily trying to be polite when they self-censor.
Many TikTokers report that they're trying to evade suppression or demonetization of their content, kind of like whispering behind the algorithm's back.
But with every new medium comes new ways of using language.
Linguistic anthropologist Kendra Calhoun and linguist Alexia Fawcett analyzed algospeak and the processes that creators use to self-censor.
We're not just talking bleeps and euphemisms here.
Popular tactics include replacing a word or letter in captions with symbols or emoji.
For example, LGBTQ+ creators have substituted a dollar sign for the S in lesbian, put diacritical marks over the A in gay or simply used emojis like rainbow hearts or the nail painting hand to allude to their identities.
Creators also replace letters in written words that have phonetic similarities when said out loud.
The popular substitute seggs for the word sex just subs the voiced consonant sound "g" in for the unvoiced "k."
In addition to individual sounds, algospeakers will replace whole words with rhyming words or homophones.
That's why you find creators who believe that TikTok suppresses links to other platforms telling viewers to click a link in bio or even a blink in lio.
The linguists described one of the more creative forms of self-censorship as prosodic templating, where users substitute words with similar rhythm or prosody.
That's where we get panini in place of pandemic, or the playful Tumblr-originated name for actor Bumblebee Cabbage Patch, sorry, Benedict Cumberbatch.
The practice of self-censorship is also a moving target. Because TikTok does not publish an official banned-words list, creators make guesses and share folk knowledge about what they think will be suppressed by algorithmic moderation, leading people to self-censor even when they don't really need to.
And as censors update, the language used to evade them is constantly updated too, leading to a culture of linguistic creativity.
Like how the earlier example of dropping a dollar sign into the word lesbian became le dollar bean from how text-to-speech tools read that self-censored form, or how creators extended pandemic sound-alikes like panoramic or pandemonium to even sillier pop-culture-laced replacements like pon de replay or pandemilovato.
In this way, algospeak is a lot like queer argots, the coded language that marginalized groups like LGBTQ people have historically used to identify members of their own community without the rest of the listening public catching on.
The purpose of these argots isn't to obscure information from speaker to listener, but to dodge certain people or bots in order to get information to the listener who is actually meant to hear it.
But for folks who worry that algospeak is harming our ability to communicate, you can take a deep breath because self-censorship, like all language, evolves.
Social media users will keep using and innovating algospeak as long as it helps them break through and have necessary conversations about sensitive topics.
But like TV bleeps, they may become less common as cultural taboos and platform regulations change.
If algospeak stops serving a communicative purpose, we'll keep on scrolling.