
The disquieting rise of “search and shame”

The psychology behind – and consequences of – unearthing people’s old tweets. 

It’s happened twice in the last 24 hours – and three times in the last week. On Tuesday, grime artist Stormzy made headlines for using the words “faggot”, “fag”, and “proper gay” in tweets sent between 2011 and 2014. Also on Tuesday, YouTuber Jack Maynard left the reality TV programme I’m a Celebrity… Get Me Out Of Here! after The Sun unearthed tweets from 2011 to 2013 in which he’d used the words “n*ggas”, “retarded”, and “faggot”. Just over a week ago, Britain’s most prominent vlogger Zoella was criticised for tweets (sent between 2010 and 2012) in which she mocked “fat chavs”, “tramps” and gay men.

Each of these figures was exposed in the same way. Twitter users who search a person’s handle alongside an offensive word can instantaneously see whether that person has ever said it. “In a way it’s not too different from investigative journalism,” says Dr Aaron Balick, author of The Psychodynamics of Social Networking. Balick explains that a decade ago, the motivation to shame someone for their past would have to be combined with the effort of hiring a private detective or looking through their rubbish. “Technology and social media lower the bar for everything. They mean that somebody could’ve made a slip ten years ago that could be found by just about anybody with an internet connection.”
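The mechanics could hardly be simpler: Twitter’s standard search box supports a from: operator, so a query of the form below (the handle and term here are placeholders, not a real example) surfaces every surviving tweet in which that account used the word:

```text
from:some_handle offensive_word
```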

More often than not, these mini-investigations produce results. The fact that many people have used social media since adolescence, combined with the fact that social consciousness has developed greatly since the Noughties, means most people have posted at least one thing in the past that wouldn’t look good today (as a test, go to your Facebook Activity Log and search for a taboo word or phrase).

I lied in my first sentence. Four people have actually been disgraced because of their old tweets in the last week. On the same day Zoella apologised for her offensive posts, the newly-appointed Gay Times editor Josh Rivers was suspended for tweets he sent between 2010 and 2015. In them, he expressed disdain for “chavs” and homeless people, and made multiple antisemitic remarks. Similarly, Labour MP and Women and Equalities Committee member Jared O’Mara was suspended last month for homophobic and misogynistic comments he had posted online between 2002 and 2004. Searching through people’s online histories has therefore played an important role in exposing prominent figures who are not suited for their professional roles.

Yet just because this phenomenon – which I will tentatively call the “search and shame” – has value does not mean it can’t be troublesome. Balick explains that traditionally shame ensures people adhere to social conventions, but social media shaming is different. “Often the first consequence is to get the story out there for one’s own edification… If you know you’re going to get lots of likes and retweets, the fact that the person [being shamed] might find out and feel shame from that becomes secondary.” When someone “deserves” it, public shaming doesn’t seem too troublesome – but does someone who was offensive online ten years ago deserve to be shamed now?

Arguably, this epidemic of headline-grabbing public shaming makes no room for context, sentiment, or personal growth. Often it does not distinguish between a man expressing clearly hate-filled opinions about an entire group of people (“The creepiest gay men are short, old asian men with long nails,” – Josh Rivers, 7 January 2011) and a boy using slurs to argue with one person online (“YOU RETARDED FAGGOT,” – Jack Maynard, 28 December 2012).

That is not to say that Jack Maynard wasn’t wrong. His use of multiple slurs (faggot, retard, and n*gga) is clearly disgusting, racist, shocking, and morally indefensible. Had he said those things today, he would undeniably be deserving of outcry and shame. But should he be punished for things he said when he was 16? As Stormzy said in his apology posted to (yep) Twitter: “I said some foul and offensive things whilst tweeting years ago at a time when I was young and proudly ignorant. Very hurtful and discriminative views that I’ve unlearned as I’ve grown up and become a man.” Is it right to shame people for things they did in their teens, ignoring any subsequent personal growth? Each of these stars has ostensibly changed (there is no evidence any of them have used these slurs in the last five years). 

Zoella was 21 when she called an X Factor contestant “that fat chav” – should she have known better? Those who think age can exonerate foul tweeters probably don’t extend the excuse to 21-year-olds, even though scientists say the rational part of the brain isn’t fully developed until age 25. Yet placing an arbitrary age limit on when it’s acceptable to be offensive (be it 16, 21, or 25) also leaves no room for context.

The “search and shame” phenomenon punishes individuals for our past collective evils. Zoella is punished for being fatphobic despite the fact she was writing at a time when the British media was consistently demonising fat people in programmes such as Supersize vs. Superskinny and Fat Families. Stormzy has now become a scapegoat for the fact that we, as a society, once used the word “gay” to mean “lame”. Disgusting examples of ableism, classism and blackface were still being aired by the BBC, in the form of Little Britain, at the time when Jack Maynard wrote his tweets. None of this exonerates these individuals for their tweets – but it does raise the question of whether it is really that useful to punish them, as individuals, for the past wrongdoings of society as a whole.

The very progress which allows us to see, collectively, that the “comedy of contempt” rife in the Noughties was wrong is now being used to demonise those who aren’t savvy enough to delete their social media history. These individuals become scapegoats for society’s wrongs. It makes it all the more dystopian that everything we’ve ever posted can be recorded for all time, because the people who end up punished aren’t the worst transgressors, but simply the least digitally savvy.

Many may feel that shaming is the least vile tweeters deserve. Yet bringing up age and context is not an attempt to excuse or justify this behaviour, which is clearly wrong and has often been damaging to many; it is an attempt to question whether the punishment fits the crime. Even without economic or professional repercussions, Balick explains, public shaming can still be incredibly damaging.

“It’s a huge deal. Human beings are very, very sensitive to shame and humiliation,” he says. “It’s like one of our emotional trapdoors. You can be very highly psychologically evolved and somebody can send a withering tweet and it can have you down for days.”

This article isn’t really about Stormzy and Zoella and whether they deserve to be punished for the things they did in the past. It is about today’s children, who are growing up in a world where they could be held accountable for anything and everything they’ve ever said online. Balick says it is important to educate youngsters, but this is often difficult because of the way adolescent brains are wired. “If you’re talking to a 12- or 13-year-old, they’ve got another 10 years before [their brain has fully developed so that] they can withhold an impulsive statement,” he says. He says the answer may lie in changing social networks themselves.

“What we can't change is our basic psychological make-up, the will to shame and be shamed is always going to be there. But we've created social networks that really enable this in a pretty hardcore way, so developers might want to work with psychologists to see how they can develop their social networks to be more amenable to a psychological environment, which I think they're not.”

As for “search and shame”, it seems unlikely the phenomenon has hit its peak. It is the easiest and fastest way to take down a public figure in perhaps the whole of human history, and as such it will remain irresistible for many. 

Amelia Tait is a technology and digital culture writer at the New Statesman.


How to identify if an online video is fake

As “deep fakes” raise concerns, everyone needs to equip themselves with the knowledge to spot a fraudulent video.

In January 2018, a desktop application called FakeApp was released. While images have been manipulated online for years, FakeApp uses deep learning to allow anyone to create realistic face-swap videos – often for sexually explicit purposes. Headlines were made when celebrities’ faces were superimposed onto porn stars’ bodies, while many on social media laughed at safe-for-work edits of Nicolas Cage’s face onto other actors’ bodies.

This month, Reddit and PornHub banned these videos – known as “deep fakes” – and when Motherboard first broke the story, writer Samantha Cole was clear about the potential consequences of the tech.

“It isn’t difficult to imagine an amateur programmer running their own algorithm to create a sex tape of someone they want to harass,” she wrote. But Hany Farid, a digital forensics and image analysis expert from Dartmouth College, warns there could be even greater consequences.

“What I’m particularly concerned about is the ability to create fake content of world leaders,” he tells me, giving the example of Donald Trump declaring nuclear war. “You can imagine that creating an international crisis very quickly.”

Nicolas Cage as Lois Lane in a Superman movie

Despite the bans, people aren’t going to stop using this technology, which means every internet user has a responsibility to get smarter about what they share. Right now, deep fakes themselves aren’t too hard to identify. A combination of common sense (did noted feminist and multimillionaire Emma Watson really film a video of herself in the shower for Reddit?) and looking for tell-tale signs (does the video flicker? Is it a very short clip? Does something seem off?) will suffice.

Yet as the technology improves and other techniques (such as the artificial intelligence used by researchers at the University of Washington to create a fake speech by Barack Obama) become more accessible, everyone will need to be equipped to spot fake videos.

“Ten, 20 years from now, ideally we will have software online where you can upload a video and it can tell you if it’s real or not, but we’re not there, and we’re not even close,” says Farid, who warns that ordinary people shouldn’t attempt to do complicated forensic analysis of videos and pictures. “A low-tech solution can solve a lot of the problems: common sense.”

When Farid wants to verify a picture or a video, he has many tools – including his own knowledge – at his disposal. When he wanted to see if a viral video of an eagle snatching a baby was real, he took a single frame and analysed the shadows. He found them inconsistent with the sun in the video. When he wants to see if a person in a video is CGI or real, he uses software that identifies subtle colour changes in people’s faces.

Farid's analysis of shadows in the eagle video
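The shadow check rests on simple geometry: under a single light source, the line drawn from a point on a shadow through the point on the object that casts it must pass through the (projected) position of the light. If the lines from different shadows don’t meet near a common point, part of the scene has likely been composited in. The sketch below is my own minimal illustration of that consistency test, not Farid’s software, and the image coordinates in the demo are synthetic:

```python
import math

def estimate_light_source(pairs):
    """Given (shadow_point, object_point) pairs in image coordinates,
    find the 2D point closest (in least squares) to all shadow-to-object
    rays. Returns (light_xy, rms_distance); a small residual means the
    shadows are mutually consistent."""
    # Normal equations for the point minimising summed squared distance
    # to the lines: (sum P_i) x = sum P_i p_i, with P_i = I - d_i d_i^T.
    A = [[0.0, 0.0], [0.0, 0.0]]
    b = [0.0, 0.0]
    for (sx, sy), (ox, oy) in pairs:
        dx, dy = ox - sx, oy - sy
        norm = math.hypot(dx, dy)
        dx, dy = dx / norm, dy / norm
        p00, p01, p11 = 1 - dx * dx, -dx * dy, 1 - dy * dy
        A[0][0] += p00; A[0][1] += p01
        A[1][0] += p01; A[1][1] += p11
        b[0] += p00 * sx + p01 * sy
        b[1] += p01 * sx + p11 * sy
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    x = (b[0] * A[1][1] - A[0][1] * b[1]) / det
    y = (A[0][0] * b[1] - A[1][0] * b[0]) / det
    # Perpendicular distance from the solution to each line
    dists = []
    for (sx, sy), (ox, oy) in pairs:
        dx, dy = ox - sx, oy - sy
        norm = math.hypot(dx, dy)
        dists.append(abs(dy * (x - sx) - dx * (y - sy)) / norm)
    rms = math.sqrt(sum(d * d for d in dists) / len(dists))
    return (x, y), rms

# Synthetic demo: three objects whose shadows are all cast consistently
# from a light at (400, 50); shadow = object extended away from the light.
light = (400.0, 50.0)
objects = [(100.0, 300.0), (250.0, 280.0), (350.0, 320.0)]
pairs = [((ox + 0.5 * (ox - light[0]), oy + 0.5 * (oy - light[1])), (ox, oy))
         for ox, oy in objects]
(lx, ly), rms = estimate_light_source(pairs)
print(round(lx), round(ly), rms < 1e-6)  # -> 400 50 True
```

With hand-marked shadow–object pairs from a real photo, a residual that is large relative to the image size would flag inconsistent lighting in exactly the way Farid describes.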

“When you’re on camera and your heart is beating, blood comes in and out of your face,” he explains. “Nobody notices this, you can’t see it – but the colour of your face changes ever so slightly, it goes from slightly redder to slightly greener.” So far, this same technique appears to work for face-swapped deep fake videos, and digital forensic experts can use the colour changes to determine a person’s pulse, which in turn helps determine whether the video is real.

“Now I’ve told you this, guess what’s going to happen?” says Farid. “Some asshole Redditor is going to put the goddamn pulse in.”

Farid's analysis of a real person vs. CGI person's pulse, based on the red and green in their faces
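The pulse trick Farid describes can be sketched as a toy signal-processing routine: average the green channel over the face in each frame, strip out the mean, and look for the dominant frequency in the plausible heart-rate band. This is a bare-bones illustration of the underlying idea (estimating pulse from video colour changes), not the forensic tool itself, and the per-frame brightness values below are synthetic:

```python
import math

def estimate_pulse_bpm(green_means, fps):
    """Estimate heart rate from the per-frame mean green intensity of a
    face region, via a naive DFT scan of the 0.7-4 Hz band (42-240 bpm)."""
    n = len(green_means)
    mean = sum(green_means) / n
    signal = [g - mean for g in green_means]  # remove the DC component

    best_freq, best_power = 0.0, 0.0
    k_lo = max(1, int(0.7 * n / fps))          # lowest plausible pulse bin
    k_hi = min(n // 2, int(4.0 * n / fps))     # highest plausible pulse bin
    for k in range(k_lo, k_hi + 1):
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(signal))
        im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(signal))
        power = re * re + im * im
        if power > best_power:
            best_power, best_freq = power, k * fps / n
    return best_freq * 60  # Hz -> beats per minute

# Synthetic "face video": 10 s at 30 fps with a faint 1.2 Hz (72 bpm)
# oscillation in brightness, mimicking the subtle red/green pulse signal
fps, seconds = 30, 10
frames = [128 + 0.5 * math.sin(2 * math.pi * 1.2 * t / fps) for t in range(fps * seconds)]
print(round(estimate_pulse_bpm(frames, fps)))  # -> 72
```

A face-swapped video pastes in pixels that lack this physiological rhythm, which is why the absence of a plausible pulse is (for now) a tell.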

Farid calls this an “information war” and an “arms race”. As video-faking technology becomes both better and more accessible, photo forensics experts like him have to work harder. This is why he doesn’t advise you go around trying to determine the amount of green or red in someone’s face when you think you spot a fake video online.

“You can’t do the forensic analysis. We shouldn’t ask people to do that, that’s absurd,” he says. He warns that the consequences of this could be people claiming real videos of politicians behaving badly are actually faked. His advice is simple: think more.

“Before you hit the like, share, retweet etcetera, just think for a minute. Where did this come from? Do I know what the source is? Does this actually make sense?

“Forget about the forensics, just think.”

Kim LaCapria works for one of the internet’s oldest fact-checking web resources, Snopes. Deep fakes aren’t her biggest concern, as lower-tech fakes have dominated the internet for decades.

Most successful video fakery in recent years involved conditions that allowed for misrepresentation, she says. Examples include videos in a foreign language which aren’t subtitled but are instead captioned misleadingly on social media (such as this video of a Chinese man creating wax cabbages, which Facebook users claimed were being shipped to America and sold as real veg) and old videos repurposed to fit a current agenda.

In the past, low-tech editors have successfully spread lies about world leaders. In 2014, a video appeared to show Barack Obama saying “ordinary men and women are too small minded to govern their own affairs” and must “surrender their rights to an all-powerful sovereign”. It gained nearly one and a half million views on YouTube.

The faked Obama video

“This video goes back perhaps four years or more, and wasn’t exceptionally technologically advanced,” says LaCapria. “It involved the careful splicing of audio portions of a speech given by President Obama abroad. It spread successfully not just due to its smooth edits, but also because the speech was one not many Americans witnessed.”

Concern about the political ramifications of deep fakes may be overhyped when we already live in a world where a faked Trump tweet about “Dow Joans” can be shared by over 28,000 people, and a video of an academic talking about “highly intelligent people who gravitate to the alt-right” can be taken out of context to publicly shame him. Yet LaCapria hopes that deep fakes will now inspire people to be hypervigilant online.

“Readers should really be cautious to find the source of the video and its context before sharing it,” she advises. When it comes to speeches by world leaders, you can Google for a transcript or simply search to see when a statement was first made. For example, a participant on an Australian ABC TV programme claimed London Mayor Sadiq Khan said terror attacks are “part and parcel of living in a big city” after the 2017 Westminster attack. In fact, he had said a version of this in 2016.

“People should definitely be wary if they can’t themselves authenticate a translation, and as always, they should be aware if it’s a ‘too interesting to be true’ situation,” says LaCapria. “If a famous person is depicted doing, saying, or engaging in newsworthy activity, it’s likely that would be something entertainment and news websites would report if it were even remotely legitimate.”

Sometimes, things that are too-good-to-be-true are true. This month, a video of Trump’s hair blowing in the wind exposed the president’s strange scalp. Snopes were able to verify it by comparing the popular video on social media with footage from other sources – such as Fox News, The Guardian, and Getty Images. This is arguably something everyone can and should do. When a video confirms your political biases, just double check.

Trump's hair blowing in the wind

Deep fakes, then, are just the latest iteration of a problem as old as the internet. While it’s worrying that FakeApp has democratised the process of creating fake videos, the deeper problems remain in how we receive and scrutinise information online.

“We have to start afresh,” says Farid, arguing people need to be taught how to properly handle online information. He himself has been asked to run public talks where he explains this.

“Do I really have to tell people to use common sense?” he muses, with a hint of frustration. “Apparently, here we are – so OK, fine. I’ll do that.”


Amelia Tait is a technology and digital culture writer at the New Statesman.