16 January 2019 (updated 26 July 2021, 9:38am)

Is YouTube’s Bird Box Challenge crackdown the beginning of a new era for viral content?

The Bird Box Challenge is part of a wider trend towards dangerous internet challenges. YouTube is, belatedly, taking steps to prevent them.

By Sarah Manavis

Last week, a teenager in Utah drove through a busy intersection with her hat over her eyes. She ran straight into another vehicle.

The girl had been inspired to do this by a meme. The “Bird Box Challenge”, named after Netflix’s new film Bird Box, gets people to blindfold themselves, like the characters in the movie, and then attempt to carry out various activities that typically rely on sight. The challenge was popularised by Netflix itself, which tweeted about it when only a handful of people had done it. Within a matter of hours, it was a viral trend, and YouTubers and social media stars as big as Jake Paul and Elliot Giles were conducting their own versions of the challenge: walking around shopping malls, standing on train tracks, and, yes, driving with blindfolds on.

Though it initially seemed this was going to become 2019’s equivalent of the Tide-pod-eating trend, the challenge may already have met its end. Yesterday, YouTube announced that it would be banning not only the Bird Box Challenge, but all pranks and challenges it deems harmful or dangerous, as well as videos that cause severe emotional distress.

According to its updated guidelines, following a two-month grace period to take in these new terms of use, channels will be removed from the platform after just three violations. If videos that violate these new policies are posted in the meantime, the channels won’t be removed, but the videos will be.

Most of my writing on tech – and frankly, everyone’s writing on tech – is unrelentingly about how tech companies are knowingly permitting dangerous and harmful behaviour to continue untouched on their platforms. Social media companies see bad things happening online, try to ignore them, and make at best lukewarm efforts to address them. This is, seemingly, an unbending rule. 


Which is why I am incredibly hesitant in noting that this move from YouTube seems… good? It’s a rare example of a platform swiftly, actively and holistically curbing some of the worst content posted on its site. The company has seen a problem, recognised that it is not a one-off but a trend, and done something sweeping to make sure this kind of thing can’t happen again. 

Of course, this comes with a massive caveat: it’ll take time to see how well YouTube sticks to its own guidelines. There are many examples of social media platforms (Twitter, Instagram) which have policies intended to block dangerous content, but, when users flag that content, tend to reply that it isn’t, in fact, harmful at all.

It also raises the question: what does YouTube see as “harmful”? Sure, eating Tide pods or driving on the motorway with a blindfold on are objectively dangerous activities. But what about conspiracy theories? Neo-Nazism? Racism and misogyny? The things that, while not actively showing a dangerous situation, can and do lead to violence?


Despite these concerns, though, YouTube is already making headway on what its guidelines promise. Videos of the Bird Box Challenge are being removed almost instantly, and accounts that have violated the new guidelines are already being hit with restrictions for uploading this dangerous content. While it will likely take at least the two-month grace period to discover whether YouTube is taking its new guidelines seriously, we can be cautiously optimistic that this crackdown is the beginning of a new era – and that, at the very least, the Bird Box Challenge on YouTube has died a timely, definitive death. 
