
Want to sell a bad book? Tap into Twitter's network of "influencers"

How did a 19-year-old self-published author's debut novel become a viral sensation?

Ashton is undeniably handsome. His striking blue eyes stare at you from under bushy, dark eyebrows in his Twitter profile picture, and he has finished off the whole look by cheekily biting his bottom lip. He is a teenage heartthrob, followed mostly by young girls on the social network, but like most teenage boys he only has one thing on his mind.

All Ashton cares about is promoting the digital sales of the young adult romance novel Just Friends by Billy Taylor.

“GIRLS U NEED TO READ THIS,” wrote Ashton on 25 Jan, followed by a crying emoji and a heart emoji. Two days earlier, on 23 Jan, he said the same. The thought also appeared to be on his mind on 22, 20, 18, and 16 Jan. For months, Ashton has been posting the same tweet – and getting nearly a thousand retweets each time – about the self-published book of a 19-year-old author from Sheffield.


You don’t need to have been on the internet for a long time to realise that Ashton isn’t real. His profile was designed by marketers to promote various products, and the tweets are then retweeted by prominent Twitter accounts in order to appear popular. “It makes the tweet looks more natural and from an 'actual' person,” explains Jason Wong, a 19-year-old internet entrepreneur, who used this account and these methods – known as “influencer marketing” – to promote his book The Holy Méme Bible. Despite being fundamentally fake, however, Ashton’s tweets – and those of similar accounts – reach millions of real people.

***

“I bought ‘Just Friends’ based on an excerpt of the book that gained a lot of attention within the Twitter community,” Sofia Aguilar, a 17-year-old student from the United States, tells me. “Instead of reading the beautiful story that I had been promised, I instead read a book that was poorly written, unedited, and lacking in any complexity in the character, plot, and dialogue aspects.

“The excerpt that had first attracted me to ‘Just Friends’ may have been the only gem of the story, and as such, I felt very cheated out of my money and my time.”

Aguilar isn’t alone. The most recent reviews of the book on Goodreads paint a picture of hundreds of misled teen girls. “I was expecting a bit more from this book. It's a pretty big deal on Twitter,” writes Catherine, who rated it two stars; “Very disappointed with this book. After noticing a lot of people talk about this book on social media I decided to purchase it for my summer holiday,” says Amy (one star); Sage, who gave the book two stars, writes: “I am writing this because I had been blasted with advertisements regarding this book and I thought alright fine let me give it a go… I had high hopes but there were so many mistakes in grammar … just warning you as someone who never writes reviews and has read hella books - it is not worth the money.”

***

It is not just the book that is extraordinarily popular on Twitter. Pictures of the author Billy Taylor holding the book and his puppy garner thousands of shares on the site. While the influencer accounts create an artificial reaction, there are many real people who go on to share the posts once they appear on their timeline. Taylor undoubtedly has thousands of real fans (the book was once featured on the Apple iBooks homepage and an audit of his account proves that almost none of his followers are fake), but that doesn’t change the fact that many teenagers feel duped.

I reached out to Taylor multiple times over a few weeks, and when he finally replied he said he was not interested in speaking with me. It is not apparent whether he himself arranged for his book to be promoted by influencer accounts (such as @BeFitWorld, which has 308,000 followers, @DeepxSnaps, which has 302,000, or @TheLifeDiaries, which has 3.54 million) or whether someone else was at work. Even if he did, it is also not apparent whether he paid for this service. Many of these accounts have since unretweeted their shares of these promotional posts, but the fact that they are mentioned in the replies reveals that they initially shared them. Wong, who used many similar accounts to promote his own book, sheds light on how it works.

“Influencer marketing is essentially making something appear more popular than what it already is,” he says. “[Taylor] probably won't want to give out too much info due to the nature of the business.” Wong explains that he reached out to influencers one by one via his own marketing firm and offered them “compensation” to share his tweets. He would compose a tweet, send it to influencers, and pay them based on the results.

“I framed all the posts in a way that seemed organic and easy to share. That way, the tweets can reach a broader audience without paying for it.”

Wong had great success with these methods and claims that he earned $200,000 (£160,600) in three weeks selling his "meme bible". There seem to be no complaints about the book online. But when people end up disappointed, as with Taylor’s book, this type of marketing does raise ethical questions. There is also another, more pressing question. Is all of this legal?

***

Perhaps the most well-known social media marketing company is Social Chain. It had nothing to do with Taylor’s book, but it manages hundreds of popular Twitter accounts which it uses to force things to “Trend” on the site. According to BuzzFeed, the company charges brands up to £280,000 to advertise via its influencers.

“With a vast network of social communities and a reach of around 305 million, we work with huge global brands such as Apple Music and Disney to deliver creative campaigns and can get any topic trending in under 17 minutes,” Michael Martin, head of the Social Chain-owned Twitter account Student Problems, tells the New Statesman.

Since 2014, when it ruled that YouTubers had to be “up front and clear” when advertising a product, the Advertising Standards Authority (ASA) has cracked down on internet influencers. In practical terms, this means hashtags like #ad (advertisement) and #spon (sponsored) are now used across social media. A spokesperson for the ASA tells me: “If a social media influencer is paid to promote a product or service and the advertiser has control over the message then it should be clearly labelled as an ad.”

The use of “#ad” can therefore stop teenagers being misled by influencer marketing. Things get more complicated, however, if a brand or individual isn’t paying influencers to write a tweet, but simply to retweet one.

“The act of paying someone to retweet but having no control over the message means that it’s unlikely to be classed as advertising under our rules,” says the ASA spokesperson, “[But] under consumer protection legislation and a requirement of the Competition and Markets Authority (CMA) who have undertaken work in this area, it should still probably be labelled as being a paid promotion.”

The ASA has worked with Social Chain to help it understand and enforce these rules, and the CMA ruled in August that it – and other similar networks – must clearly identify their paid-for advertisements.

“It is our view that everyone involved in online endorsements is responsible for ensuring that paid promotions are clearly labelled or identified,” the CMA wrote in their “open letter to marketing departments” last year. “This content is read by consumers, who may rely on the information to inform their purchasing decisions. If it is not correctly labelled or identified, consumers may be misled into thinking it represents the author’s genuine opinion when a business has in fact paid to influence the content.

“Misleading readers or viewers falls foul of consumer protection law and could result in enforcement by either the CMA or Trading Standards Services, which may lead to civil and/or criminal action.”

***

It is not apparent whether Billy Taylor was involved in the promotion of his book via social influencer networks, or whether any laws were breached in these promotions, which may not have been paid for. It is clear, however, that Twitter influencers have misled teens across the social network by making the book seem artificially popular. A recent study by Stanford University revealed that 82 percent of students couldn't distinguish between a sponsored post and an actual news article. It seems vastly unfair that when teenagers log into their social media accounts, they have to navigate an online world where they will be tricked out of what little money they have. 

Amelia Tait is a technology and digital culture writer at the New Statesman.


Extremist ads and LGBT videos: do we want YouTube to be a censor, or not?

Is the video-sharing platform a morally irresponsible slacker for putting ads next to extremist content - or an evil, tyrannical censor for restricting access to LGBT videos?

YouTube is having a bad week. The Google-owned video-sharing platform has hit the headlines twice over complaints that it 1) is not censoring things enough, and 2) is censoring things too much.

On the one hand, big brands including Marks & Spencer, HSBC, and RBS have suspended their advertisements from the site after a Times investigation found ads from leading companies – and even the UK government – were shown alongside extremist videos. On the other, YouTubers are tweeting #YouTubeIsOverParty after it emerged that YouTube’s “restricted mode” (an opt-in setting that filters out “potentially objectionable content”) removes content with LGBT themes.

This isn’t the first time we’ve seen a social media giant criticised for being both a lax, morally irresponsible slacker and an evil, tyrannical censor in the same week. Last month, Facebook was criticised both for failing to remove a group called “hot xxxx schoolgirls” and for removing a nude oil painting by an acclaimed artist.

That is not to say these things are equivalent. Quite obviously child abuse imagery is more troubling than a nude oil painting, and videos entitled “Jewish People Admit Organising White Genocide” are endlessly more problematic than those called “GAY flag and me petting my cat” (a highly important piece of content). I am not trying to claim that ~everything is relative~ and ~everyone deserves a voice~. Content that breaks the law must be removed and LGBT content must not. Yet these conflicting stories highlight the same underlying problem: it is a very bad idea to trust a large multibillion pound company to be the arbiter of what is or isn’t acceptable.

This isn’t because YouTube has some strange agenda whereby it can’t get enough of extremists and hates the LGBT community. In reality, the company’s “restricted mode” also affects Paul Joseph Watson, a controversial YouTuber whose pro-Trump conspiracy theory content includes videos titled “Islam is NOT a Religion of Peace” and “A Vote For Hillary is a Vote For World War 3”, as well as an interview entitled “Chuck Johnson: Muslim Migrants Will Cause Collapse of Europe”. The issue is that if YouTube did have this agenda, it would have complete control over what it wanted the world to see – and not only are we willingly handing it this power, we are begging it to use it.

Moral panics are the most common justification for extreme censorship and surveillance methods. “Catching terrorists” and “stopping child abusers” are two of the greatest arguments for the dystopian surveillance measures in Theresa May’s Investigatory Powers Act and Digital Economy Bill. Yet in reality, last month the FBI let a child pornographer go free because it didn’t want to reveal to a court the surveillance methods it used to catch him. This raises the question: what is the surveillance really for? The same is true of censorship. When we insist that YouTube stop this and that, we are asking it to take complete control – why do we trust that this will reflect our own moral sensibilities? Why do we think it won't use this for its own benefit?

Obviously extremist content needs to be removed from YouTube, but why should YouTube be the one to do it? If a book publisher released A Very Racist Book For Racists, we wouldn’t trust it to pull the book off the shelves itself. We have laws (such as the Racial and Religious Hatred Act) that ban hate speech, and we have law enforcement bodies to enforce them. On the whole, we don’t trust giant commercial companies to rule over what is and isn’t acceptable to say, because oh, hello, yes, dystopia.

In the past, public speech was made up of hundreds of book publishers, TV stations, film-makers, and pamphleteers, and no one person or company had the power to censor everything. A book that didn’t fly at one publisher could go to another, and a documentary that the BBC didn’t like could find a home on Channel 4. Why are we happy for essentially two companies – Facebook and Google – to take this power? Why are we demanding that they use it? Why are we giving them justification to use it more, and more, and more?

In response to last week’s criticism about extremist videos on YouTube, Google UK managing director Ronan Harris said that in 2016 Google removed nearly 2 billion ads, banned over 100,000 publishers, and prevented ads from showing on over 300 million YouTube videos. We are supposed to consider this a good thing. Why? We don't know what these adverts were for. We don't know if they were actually offensive. We don't know why they were banned.

As it happens, YouTube has responded well to the criticism. In a statement yesterday, Google's EMEA President, Matt Brittin, apologised to advertisers and promised improvements, and in a blog this morning, Google said it is already "ramping up changes". A YouTube spokesperson also tweeted that the platform is "looking into" concerns about LGBT content being restricted. But people want more. The Guardian reported that Brittin declined three times to answer whether Google would go beyond allowing users to flag offensive material. Setting aside Brexit, wouldn't you rather it was up to us as a collective to flag offensive content and come together to make these decisions? Why is it preferable that one company takes a job that was previously trusted to the government? 

Editor’s Note, 22 March: This article has been updated to clarify Paul Joseph Watson’s YouTube content.

Amelia Tait is a technology and digital culture writer at the New Statesman.