
Is there any truth in the rumours of a YouTube “paedophile ring”?

Talk among high-profile YouTube users of paedophilic activity on the video-sharing site began spreading late last week.

“Hi, internet friends. There is a paedophile ring on YouTube.”

So starts a five-minute-long video by YouTuber ReallyGraceful, a romance author who makes tri-weekly videos “diving down the rabbit hole of truth”. ReallyGraceful created her video after a Reddit post claimed that child pornography could be found on YouTube if a user searched the words “Webcam video from”.

Since then, many big-name YouTubers have followed suit in trying to expose an alleged paedophile ring on the site. Last week, Pyrocynical, a British YouTuber with over one and a half million subscribers, created a video called “Child Exploitation on YouTube”, discussing “Webcam video from” videos of children twerking or filmed from sexually suggestive angles, many of which had accumulated millions of views and hundreds of predatory comments. He notes that none of the videos contained actual nudity.

“This is essentially softcore child porn,” he says in the video, before later adding: “YouTube has the ability to crack down on this shit but they choose not to.”

If you search “Webcam video from” on YouTube today, no such videos will be found. YouTube relies on a system of users and “trusted flaggers” to highlight videos that violate its policies, and it appears that, after the videos were exposed by top YouTubers, the content has been removed.

“YouTube has a zero-tolerance policy for sexual content involving minors,” a YouTube spokesperson says. “Engaging in any type of activity that sexualises minors – including leaving inappropriate comments – will immediately result in an account termination. We encourage users to flag videos or comments for our review.”

Although it is apparent that some sexually suggestive content was hosted on YouTube, and that predators also aggregated innocent videos of children, is there any truth to rumours that “Webcam video from” is a secret code for paedophiles, and that a ring – which is to say, a group of people acting together to find, upload, and share the content – is operating on the site?

YouTube’s “webcam capture” feature was discontinued at the beginning of 2016, but it previously allowed users to upload content directly from their webcams which would then be titled “Webcam video from” followed by the date and time. Most of these videos were innocent, though it is apparent from comments posted on such videos – with the “Webcam video from” title – that predators used the search term to find content of children. Accusations that paedophiles downloaded, reuploaded and monetised these videos are hard to prove or disprove, though it is possible, considering how long such videos were left up. Most of these videos were – before they were removed – a few years old, and the trend seems to be an obsolete one that was only discovered recently.

Comments from “Webcam video from” videos, via Imgur

This wouldn't be the first time that paedophilic activity has been discovered on YouTube. Last April, a spate of “mummy vloggers” stopped filming their children after discovering that their videos were embedded into paedophilic playlists on the video-sharing site.

“Before the internet, someone with a sexual interest in children had to take lots of risks,” Karl Hopwood, a member of the UK Council for Child Internet Safety, told me at the time. “They needed to loiter near schools, go to the beach or park. Now, they can browse huge amounts of content from the privacy of their own homes, and no one knows they have done it.”

It is clear, then, that predatory users can abuse YouTube to find, aggregate, and share content of children, but the term “paedophile ring” muddies the story slightly. The phrase implies some sort of organisation or central power, and ReallyGraceful connected it to “Pizzagate”, the conspiracy theory, favoured by some Donald Trump supporters, that a pizza shop in Washington DC is a front for a Democratic paedophile ring visited by Hillary Clinton.

“You can say all day that this has nothing to do with Pizzagate but clearly it has everything to do with Pizzagate because there is a paedophile ring out in the freaking open on YouTube,” she said in her video.

ReallyGraceful also uses her channel to spread stories about “#TwitterGate”, an alleged paedophile ring on Twitter. “The story that broke this morning involves the very platform that was trying to suppress Pizzagate,” she says in her video on the topic. In her video about YouTube’s “paedophile ring” she says: “The second one of us uploads a Pizzagate video to YouTube, we get flagged for some ridiculous reason.” ReallyGraceful voted for Trump and has previously created videos questioning Barack Obama’s birth certificate.

The rhetoric of “paedophile rings” has been seized upon as a political tool by some US right-wingers to argue for their cause, as well as to attack their enemies and generate hysteria about the need to “drain the swamp”. This new-found trend of “exposing” paedophile rings to bolster one’s own political beliefs can obscure legitimate concerns about children’s online safety. While predators may in the past have used YouTube to prey on children, the sensationalism of a handful of professional YouTubers in telling the story has overshadowed a real and important issue.

If you identify troubling content on YouTube, click the flag underneath the video or the three dots next to the comment in question. A staff of specialists monitors all reports 24/7 and will take action to remove any offending content. If you are concerned about a child’s online safety, you can find advice or make a report to the Child Exploitation and Online Protection Centre at: ceop.police.uk.

Amelia Tait is a technology and digital culture writer at the New Statesman.


Forget “digital detoxes”. Spring clean your online life instead

Step one: remove the app on your phone which takes up the most time. 

In 2006, news broke that broke me. The British Heart Foundation unveiled a poster of a blonde girl guzzling a gallon of cooking oil. “What goes into crisps goes into you,” it read, as the charity declared that eating one packet of crisps a day equated to drinking five litres of oil a year.

I gave up crisps that Lent (an admirable act that was somewhat mitigated by devouring a six-pack of McCoy’s on Easter Sunday). Still, despite my continuing pack-a-day habit, the BHF’s statistic has never left me: 365 packets of salt and vinegar crisps are equal to five bottles of Filippo Berio. But other bad habits are harder to comprehend. Last week, I “liked” 36 things on Facebook, wrote ten tweets, and posted five Instagram pictures (two of which were selfies). What effect, if any, has this had on my mental and physical health? How much metaphorical cooking oil am I pouring into my body?

“You really don’t need to worry about the volume of your own social media interactions, based on the average digital user,” the founder of the digital detox specialists Time To Log Off, Tanya Goodin, told me. Goodin says that we “tap, click and swipe” our devices over 2,617 times a day and that the average person will post 25,000 selfies in their life.

Though these statistics seem shocking, what do they mean? What does swiping thousands of times a day do to our minds – or, for that matter, our thumbs? The experts are divided. In 2015, national newspapers spread stories suggesting that using an iPad would damage a toddler’s brain but the research didn’t mention the term “brain damage” once. In fact, as the Guardian pointed out in its debunking, studies produce mixed results: some say iPads help improve child literacy, others say they are distracting.

The studies about adults’ screentime are similarly hard to decipher. Heavy Facebook usage has been linked to depression but there isn’t any apparent cause and effect. Do depressed people use Facebook more, or does Facebook make us depressed? “Internet addiction disorder” (IAD) was a term originally coined as a hoax, but many now see it as a real and treatable problem. Yet it does not feature in the Diagnostic and Statistical Manual of Mental Disorders, and experts still struggle to set diagnostic criteria for it. How much internet is too much?

These academic ambiguities haven’t stopped the idea of the “digital detox” taking off. Detoxers refrain from using any electronics for a period of time in the hope that this will improve their mental health and real-world relationships. At the time of writing, if you search for “digital detox” on Instagram, you’ll find 25,945 people talking about their personal attempts. There are pictures of bike rides, sunsets and children playing, each posted – apparently without irony – to extol the virtues of getting off social media and turning off your phone.

Digital detoxing is also big business. Goodin runs workshops, retreats and camps where no electronics are allowed and the daily schedule consists of yoga, walking, swimming and drinking smoothies. The next one, in Italy, costs from £870 per head for a week. A multitude of such camps exist, as well as books, websites and guides on how to detox by yourself. To connect, man, you have to disconnect, you know?

All of this has made me a digital detoxing cynic. I don’t believe I need to switch off my phone to “live” better, because I believe my phone itself contains life. On Reddit, I can speak to strangers living thousands of miles away about their lives. On Twitter, I can keep up to date – in real time – with news and events. If I want to learn yoga or make a smoothie, where else will I go to find my local gym or the correct strawberry-to-spinach ratio? Technology can even inspire us to “get out more”. Last summer, the gaming app Pokémon Go spurred people to walk 2,000 more steps a day, and I’m willing to bet that brunch sales figures have skyrocketed since the invention of Instagram.

Digital detoxing relies on the vague idea that tech is somehow toxic. Even without scientific studies to back this up, most of us know from our own, anecdotal evidence how spending too much time on our phones can make us feel. We get down if our latest status doesn’t have enough likes, or our eyes hurt after the sixth “EXTREME PIMPLE POPPING” YouTube video in a row. So, at core, digital detoxing isn’t “wrong”: it is merely misguided. Instead of trying to cut out all technology for a week, we should be curbing our existing habits; rather than a digital detox, we should have a digital spring clean.

Delete – or hide – anyone on your Facebook friends list that you wouldn’t talk to in real life. Remove your work email from your phone (or ask your boss for a separate work phone if you absolutely need access). Delete the app that takes up most of your time – be it Facebook, Twitter or YouTube – so that you are forced to get to it manually, through your browser, and therefore become instantly more aware of how many times a day you open it up. Tanya Goodin also advises me to use my phone less at night. Essentially: go mild turkey. If this is too much and you believe you are addicted to your smartphone or laptop, then, of course, you should seek help (speak to your doctor or call the Samaritans on 116 123).

But most of us just need to get smarter about our internet use. Even if scientists proved that technology was damaging our brains, a week-long detox wouldn’t be the cure. Rather, we should focus on our bad personal habits and try to curb them. Do you get into too many arguments online? Do you ignore your partner because you’re staring at a screen? Do you post opinions you regret because you don’t think them through first? These behaviours are problematic – the internet itself isn’t. To control our lives, we shouldn’t switch off: we should get more switched on.


This article first appeared in the 06 April 2017 issue of the New Statesman, Spring Double Issue
