Have you ever stopped to think about how YouTube, Facebook, or really, every social media platform you use, always seems to know exactly what to show you? I mean, if you’re on YouTube watching “Cats doing crazy things” videos, or enjoying a lecture about the Constitution, the minute you’re done, Bingo, you’ve got more crazy pet vids and insightful commentaries to choose from. In fact, if anyone has done a video about James Madison and his cat, that would probably be ready for you to watch, too.
It’s akin to magic.
But it’s all built into the platform’s programming. It’s an algorithm. Each platform has a similar algorithmic setup that watches every move you make and quickly computes what you like and what to offer you. They’re all slightly different. Each one ranks certain elements as having higher or lower priorities.
So, Facebook (Meta) will put relationships and other Facebook connections at the top of the priority heap and feed you your uncle Larry’s cat videos first. Twitter is more concerned with user interactions on your favorite topics or the people you follow most. And so on.
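To make that "priority heap" idea concrete, here's a toy sketch in Python. Every signal name and weight below is made up for illustration (the real ranking systems are proprietary and vastly more complex), but the basic shape, a weighted sum of engagement signals where each platform tunes the weights differently, is the idea:

```python
# Toy sketch of engagement-weighted feed ranking.
# All signal names and weights are hypothetical; real platform
# algorithms are proprietary and far more complex.

def score(video, weights):
    """Weighted sum of this video's engagement signals."""
    return sum(weights[signal] * video.get(signal, 0.0) for signal in weights)

def rank_feed(videos, weights):
    """Order candidate videos by descending score."""
    return sorted(videos, key=lambda v: score(v, weights), reverse=True)

# Two candidate videos with made-up signals.
candidates = [
    {"id": "uncle_larry_cats", "friend_posted": 1.0, "topic_match": 0.4},
    {"id": "constitution_talk", "friend_posted": 0.0, "topic_match": 0.9},
]

# A Facebook-style weighting that favors personal connections...
relationship_weights = {"friend_posted": 2.0, "topic_match": 1.0}
# ...versus a Twitter-style weighting that favors topic interest.
topic_weights = {"friend_posted": 0.5, "topic_match": 2.0}

print([v["id"] for v in rank_feed(candidates, relationship_weights)])
# → ['uncle_larry_cats', 'constitution_talk']
print([v["id"] for v in rank_feed(candidates, topic_weights)])
# → ['constitution_talk', 'uncle_larry_cats']
```

Same two videos, different weights, different feed: the relationship-heavy weighting surfaces uncle Larry's cats first, while the topic-heavy weighting surfaces the lecture.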
Anyway, it’s an amazing set of formulas all designed to keep you coming back for more or to simply keep your backside glued right where it is as the hours slip by like seconds.
However, it’s beginning to become clear that some social media algorithms are much more problematic than others. TikTok, for instance, an app that sports more than 1 billion active users monthly, might be far less cool than its ever-growing pool of users think.
A new study from the Center for Countering Digital Hate (CCDH) found that within as few as 2.6 minutes, TikTok’s algorithm can push suicidal content at kids. The report also pointed to the fact that eating disorder content was recommended to teens within “as few as 8 minutes.”
How did they come up with that? Well, the CCDH hired researchers to set up TikTok accounts posing as 13-year-old users (the minimum age for an account). Those “users” then showed interest in content about body image and mental health. In fact, to run the algorithm through its paces, each researcher set up two accounts as a 13-year-old—one as an average or “standard” teen and one that could be considered a “vulnerable” teen. One account was given a girl’s username. The other, a username that indicated a certain concern about body image and included the phrase “loseweight.”
After that, all of the account users paused briefly on videos about body image and mental health, and “liked” them. The TikTok algorithm then quickly sent potentially harmful videos to all the users. But the researchers found that those “loseweight” accounts were served three times more overall harmful content, and 12 times more self-harm and suicide-specific videos, than the standard accounts.
Let me be clear: the pushed videos weren’t simply anti-suicide vids, but videos that promoted the idea, such as one video labeled “Making everyone think your [sic] fine so that you can attempt in private.” And again, that kind of content was doled out within just a few minutes of the user signing onto the app.
“TikTok is able to recognize user vulnerability and seeks to exploit it,” said Imran Ahmed, the CEO of CCDH, in a CBS News report. “It’s part of what makes TikTok’s algorithms so insidious; the app is constantly testing the psychology of our children and adapting to keep them online.”
CCDH’s study went even further than the 2.6-minute number I quoted above, stating that its findings suggested that TikTok pushes “potentially harmful content to users every 39 seconds.”
Let that sink in for a moment. That’s the kind of statistic that should give every user, and every teen’s guardian, a bit of pause. And if it doesn’t, then maybe it’s time for us all to check our own internal algorithms.