The Threat, Specifically
In the digital age, cybersecurity threats have evolved beyond traditional hacking and data breaches. A new form of cyber threat has emerged, one that leverages algorithmic content curation to potentially harm users. This form of threat is particularly prevalent on social media platforms, where algorithms curate content based on user behavior and preferences.
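To make the mechanism concrete, recommendation systems of this kind typically score candidate content by weighting engagement signals such as pauses and likes, and then feed those scores back into ranking. The sketch below is a deliberately simplified, hypothetical illustration; the function names, topic labels, and fixed weights are my own assumptions, not TikTok's actual algorithm, which relies on far more complex learned models.

```python
# Hypothetical sketch of engagement-weighted content curation.
# Weights and signal names are illustrative only; real platforms
# use learned models, not hand-set constants like these.

def score_candidate(video_topics, user_engagement):
    """Score a candidate video by summing the user's accumulated
    engagement with each of the video's topics."""
    return sum(user_engagement.get(topic, 0.0) for topic in video_topics)

def update_engagement(user_engagement, video_topics, paused=False, liked=False):
    """Boost topic affinity when the user pauses on or likes a video.
    This feedback loop is what can rapidly amplify a sensitive topic."""
    boost = 1.0 * paused + 2.0 * liked  # illustrative weights
    for topic in video_topics:
        user_engagement[topic] = user_engagement.get(topic, 0.0) + boost
    return user_engagement

# A user who briefly pauses on and likes body-image content...
profile = {}
update_engagement(profile, ["body image"], paused=True, liked=True)

# ...is immediately scored as more receptive to similar content.
candidates = {"cooking clip": ["food"], "body-image clip": ["body image"]}
ranked = sorted(candidates,
                key=lambda v: score_candidate(candidates[v], profile),
                reverse=True)
# ranked now places "body-image clip" first
```

Even this toy loop shows the dynamic at issue: a few seconds of engagement is enough to change what gets ranked first, with no human review in between.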
The TikTok Case
A recent study by the non-profit Center for Countering Digital Hate (CCDH) revealed that TikTok, a popular social media platform, may surface potentially harmful content related to suicide and eating disorders to teenagers within minutes of them creating an account. The researchers set up eight new accounts in the United States, the United Kingdom, Canada, and Australia at TikTok’s minimum user age of 13. These accounts briefly paused on and liked content about body image and mental health. The CCDH found that, within a 30-minute period, the app recommended videos about body image and mental health roughly every 39 seconds.
Cybersecurity Implications
This phenomenon raises significant cybersecurity concerns. While not a cybersecurity threat in the traditional sense, this form of algorithmic content curation can have detrimental effects on users’ mental health, exposing young users to harmful content and potentially leading to real-world harm. It belongs in the cybersecurity conversation because it concerns the safety of users interacting with networked systems: technology is being used, however unintentionally, to cause them harm.
Moreover, this issue highlights the broader implications of algorithmic bias and the potential for algorithms to perpetuate harmful behaviors or attitudes. In this case, the algorithm may inadvertently promote harmful content, contributing to a toxic digital environment for young users.
A New Form of Cyber Warfare?
This issue also raises the question of whether such practices could constitute a new form of cyber warfare or next-generation warfare. If a malicious actor were to deliberately manipulate these algorithms to promote harmful content, the same recommendation machinery could serve as a vehicle for psychological warfare.
The Response from TikTok
In response to the study, a TikTok spokesperson argued that the study does not reflect genuine behavior or viewing experiences of real people. They highlighted that TikTok regularly consults with health experts, removes violations of its policies, and provides access to supportive resources for anyone in need. They also noted that the platform continues to roll out new safeguards for its users.
While TikTok’s response is a step in the right direction, this issue underscores the need for greater transparency and accountability in how social media platforms curate and recommend content. It also highlights the importance of robust cybersecurity measures that go beyond protecting against data breaches and hacking attempts – they must also consider the potential harm caused by algorithmic content curation.