
Down the Rabbit Hole: How to Beat YouTube’s Algorithm

When I think of social media platforms and the algorithms they employ, YouTube isn’t the first one that comes to mind. Because it’s primarily used for watching videos, sometimes I forget that it is, in fact, a social media service. I can scroll through videos or even put it on autoplay without ever looking at another user’s comments. And that makes it a lot easier to believe that it’s somehow better or less problematic than other forms of social media.

However, a survey by Pew Research Center shows that 95% of teens (ages 13 to 17) have used YouTube, and 77% use it at least once a day. (19% report using it “almost constantly.”) So while users may not necessarily be obsessing over comments or needlessly applying filters to their pictures, something about YouTube is certainly drawing their attention.

Enter the algorithm.

Nowadays, YouTube’s algorithm focuses more on overall audience satisfaction than, say, overall view time. It suggests videos based on how users with viewing patterns similar to your own reacted to them. It pays attention to whether you “like” or “dislike” a video. And it personalizes its recommendations based on channels to which you’ve already subscribed. No wonder it’s so addictive.
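To make that a bit more concrete, here’s a minimal, purely illustrative sketch (in Python) of how signals like those might be blended into a single recommendation score. It is not YouTube’s actual system: the class names, weights and scoring formula below are assumptions invented for this example.

```python
# Toy recommender sketch -- NOT YouTube's real algorithm.
# Every name, weight, and formula here is an invented illustration.
from dataclasses import dataclass, field

@dataclass
class Video:
    title: str
    channel: str
    likes: int = 0
    dislikes: int = 0

@dataclass
class Viewer:
    watched: set = field(default_factory=set)        # titles already watched
    subscriptions: set = field(default_factory=set)  # channels followed

def similarity(a: Viewer, b: Viewer) -> float:
    """Overlap of watch histories: 'users with similar viewing patterns.'"""
    if not a.watched or not b.watched:
        return 0.0
    return len(a.watched & b.watched) / len(a.watched | b.watched)

def score(video: Video, viewer: Viewer, others: list) -> float:
    """Blend three hypothetical signals into one recommendation score."""
    # 1. Satisfaction proxy: like ratio rather than raw view time.
    votes = video.likes + video.dislikes
    like_ratio = video.likes / votes if votes else 0.5

    # 2. Collaborative signal: do viewers similar to you watch this video?
    peer_signal = sum(similarity(viewer, o) for o in others
                      if video.title in o.watched)

    # 3. Personalization: boost channels you already subscribe to.
    boost = 1.5 if video.channel in viewer.subscriptions else 1.0

    return (0.6 * like_ratio + 0.4 * peer_signal) * boost

# Example usage: the video watched by a similar viewer, on a subscribed channel,
# scores highest.
me = Viewer(watched={"Campaign rally"}, subscriptions={"NewsChannel"})
others = [Viewer(watched={"Campaign rally", "Debate highlights"})]
candidates = [Video("Debate highlights", "NewsChannel", likes=900, dislikes=100),
              Video("Cooking basics", "FoodChannel", likes=500, dislikes=50)]
print(max(candidates, key=lambda v: score(v, me, others)).title)  # Debate highlights
```

Even in a toy like this, you can see the feedback loop the rest of this article describes: whatever you and viewers like you have already watched gets weighted more heavily, so each click nudges the next suggestion further in the same direction.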

In 2018, YouTube’s algorithm came under fire when sociologist Zeynep Tufekci hypothesized that it was “unwittingly radicalizing some of its viewers through the videos that it automatically recommends that they watch next,” according to Conor Friedersdorf of The Atlantic.

Essentially, Tufekci observed that while she was watching rallies for a conservative politician, autoplay suggested videos of “white supremacist rants, Holocaust denials and other disturbing content.” Similarly, when she switched directions and watched videos of liberal politicians, the autoplay suggestions took on a “leftish conspiratorial cast,” containing “arguments about the existence of secret government agencies and allegations that the United States government was behind the attacks of Sept. 11.”

And further examination found that the pattern held across other topics:

“Videos about vegetarianism led to videos about veganism. Videos about jogging led to videos about running ultramarathons. … It promotes, recommends, and disseminates videos in a manner that appears to constantly up the stakes. Given its billion or so users, YouTube may be one of the most powerful radicalizing instruments of the 21st century.”

Tufekci’s conjectures about this “rabbit hole” sparked fear in many. And soon, we were hearing personal accounts of folks who had followed the path set by YouTube’s algorithm and eventually found themselves convinced by extremist rhetoric.

Kaitlyn Tiffany writes for The Atlantic that “a person who was not necessarily looking for extreme content could end up watching it because the algorithm noticed a whisper of something in their previous choices. It could exacerbate a person’s worst impulses and take them to a place they wouldn’t have chosen, but would have trouble getting out of.”

But that’s not where the story ended.

Although YouTube denied the existence of the rabbit hole, it also took action in 2019 to reduce the promotion of “harmful misinformation” and “borderline content.” And it blocked ad-revenue sharing for creators who violated its hate speech policies.

Seemingly, these measures have helped to curb (though not cure) the rabbit-hole effect. A study published in Science Advances (“Social media, extremism, and radicalization”) found that in a group of 1,181 people, only 6% wound up watching extremist political content over a period of several months. And it wasn’t all the work of the algorithm. “The majority had deliberately subscribed to at least one extremist channel,” Kaitlyn Tiffany writes. And many were finding the videos through external links, not from within YouTube. Study author Aaron Shaw noted,

“In other words, the social media platforms and society are far from helpless in the face of an upsurge of hateful and uncivil content but, instead, are increasingly well equipped to identify it and minimize its reach.”  

So what does that mean for parents who don’t want their teens to wind up like Alice in Wonderland?

Well, the study by Science Advances couldn’t account for radicalization that may have occurred before YouTube’s 2019 updates. So, as Tiffany writes, “there’s a chicken-and-egg dilemma: Which came first, the extremist who watches videos on YouTube, or the YouTuber who encounters extremist content there?”

That said, there is hope.

As the study itself states, we aren’t helpless. Even though problematic content still exists, the majority of users don’t engage with it. Kaitlyn Tiffany’s article notes that many previous studies of YouTube’s rabbit hole employed bots to “simulate the experience of navigating YouTube’s recommendations.” So instead of making the thoughtful decisions a human being would, they quite literally clicked mindlessly on whatever was suggested next, following the path down the rabbit hole.

So teach your teens how to recognize harmful content. If they bring an inflammatory video to you, don’t just shut them down; really try to understand how the video made them feel. Encourage them to conduct research of their own if they hear conflicting information. And if you really want to play it safe, show them how to limit YouTube’s ability to suggest videos, too. Here’s how:

  • Head to the My Google Activity page (myactivity.google.com).
  • Click on YouTube History.
  • When prompted, “Pause” your YouTube History.

As Alice discovered in her own curious adventures, it can be hard to crawl out of the rabbit hole once you’re in. But with a little foresight, you can keep from falling into it to begin with.

Emily Tsiao

Emily studied film and writing when she was in college. And when she isn’t being way too competitive while playing board games, she enjoys food, sleep, and geeking out with her husband as they indulge in their “nerdoms,” the collective fan cultures of everything they love, such as Star Wars, Star Trek, Stargate and Lord of the Rings.

2 Responses

  1. As a left-wing user of YouTube, I find YouTube more regularly tries to push far-right propaganda to me in my feed regardless of my attempts to swing the algorithm the other way. This may be due to the fact that I (as a Canadian) have been tracking the coverage of right-wing extremism in Canada. It seems (in my experience) that YouTube pushes far-right content to the viewer no matter which political affiliation their viewing history swings toward. Extremely concerning if you ask me. Regardless of political affiliation, we all need to find common ground and chill. Rage-baiting algorithms are doing nothing to help that, and furthermore are contributing to the polarization of society and the erosion of democracy. Who benefits? The corporations receiving ad revenue from the amount of content people consume. Capitalists seem to be using the media as a tool of corporate profit at the expense of democracy; we should probably rein that in a bit.

  2. Is it the algorithm or is it that extreme videos tend to receive more attention?

    I usually use public Wi-Fi with private browsing in Firefox, and it happens to an extent within a single browsing session. It seems to skew this way for everything, not just politics, and it also tends to skew toward monetized channels.