By Ariadna Matamoros-Fernández and Aleesha Rodriguez
We live in deeply unequal societies where certain groups, such as racial and sexual minorities, continue to experience structural oppression. Humour targeted at these groups can cause individual harm through its cumulative effects, and contribute to broader social harms too.
However, a more challenging problem is the conduct of users who aren’t necessarily trying to harm others, but still participate online in ways that can do so. For example, TikTok users have participated in viral parody challenges that trivialise police brutality, domestic violence and even the Holocaust.
The COVID-19 health crisis pushed digital platforms to curb the spread of misinformation, but it seems they did less to minimise anti-Asian content – despite signs the pandemic was being “racialised”.
In our research, we investigated how the “humorous” racist stereotyping of people of Asian descent emerged on TikTok during the pandemic, and how such behaviour should be addressed.
TikTok and racial humour
TikTok has become hugely popular across generations. Its “use this sound” feature allows users to remix audio from other videos, making it a unique platform to study racist stereotyping.
For our research, we collected TikTok videos posted between January and June 2020 with the hashtag #coronavirus, alongside other hashtags relevant to our research (such as #asian and #funny).
We also included videos tagged with keywords related to China (#china, #chinacoronavirus, #wuhan) and with #Australia, to potentially collect examples from within the country (which has a history of anti-Asian racism).
Once we removed duplicates, unavailable videos, and videos in a language other than English, we obtained a dataset of 639 TikTok videos. After closely analysing these, we found 93 videos displayed examples of racist humour.
‘Yellow peril’ memes
Among the videos were “yellow peril” memes. These were about people or objects being “contaminated” with coronavirus by virtue of their connection to China or other Asian countries. The “yellow peril” trope dehumanises people from Asian countries by framing them as a threat to Western countries.
We identified three types of “yellow peril” memes in our sample:
- memes targeting people of Asian descent as being the cause of coronavirus spreading
- memes where people react in horror or disgust when they receive packages or goods from China
- memes that blame the coronavirus on practices such as eating wild animals.
“Digital yellowface” parodies
We also found a form of “digital yellowface”. In these videos, users applied the “use this sound” feature to parody Asian accents in English, or to say “Asian-sounding words” by speaking gibberish or uttering words like “Subaru” (the Japanese car brand) in an exaggerated way.
Some users dramatised their face to further embody the offensive caricature they were trying to portray.
Scholars researching racist stereotyping online have warned that “certain dialects, vocal ranges, and vernacular are deemed noisy, improper, or hyperemotional by association with blackness”.
During COVID-19, non-Asian users appropriated “Asian sounds” on TikTok in a similar way. They portrayed people of Asian descent as irrational or overly emotional, reducing an entire racial group to a mere caricature.
What has TikTok done?
TikTok has enabled users to willingly or unwillingly contribute to racist discourse that dehumanised Chinese people, and other Asian people, over the course of the pandemic.
We are not claiming a direct causal link between this racist stereotyping and real-world violence. But research has shown that attaching an illness to a historically marginalised group has immediate and longer-term negative social effects.
Although TikTok joined the European Commission’s Code of Conduct on Countering Illegal Hate Speech Online in 2020, its policies still do not provide a detailed explanation of when humour has the capacity to harm.
To improve the moderation of harmful humour, TikTok could modify its community guidelines and reporting processes to acknowledge the way humour targeted at historically marginalised groups can have severe consequences.
This would be similar to Facebook’s expansion of its hate speech policy in 2020 to include harmful stereotypes (which came after the platform consulted with advocacy groups and experts).
TikTok’s moderation of racialised harmful humour doesn’t necessarily have to entail takedowns and user bans. There are several other remedies available. The platform could:
- educate users by tagging or labelling dubious or potentially harmful content
- reduce the visibility of content through algorithmic demotion
- restrict engagement functionalities on “humorous” content that’s likely to cause harm.
One thing’s for sure: we can no longer excuse racism under the guise of humour. Beyond individuals, social media platforms have a responsibility to make sure they address racist humour, since it can and does cause real harm.
Ariadna Matamoros-Fernández, Senior Lecturer in Digital Media at the School of Communication, Queensland University of Technology and Aleesha Rodriguez, Research Fellow at Australian Research Council Centre of Excellence for the Digital Child, Queensland University of Technology