An Apple iPad with Twitter's native app. Photo: Peter Macdiarmid/Getty Images

Twitter's taking away your control over what tweets you choose to see

A subtle change in how Twitter's feed works will make some people very angry, but most people probably won't even notice.

Twitter users will this week notice a strange new thing happening to their timelines - it's not theirs any more. Tweets from people they don't follow, and who the people they follow haven't chosen to retweet, are now appearing in timelines under the guise of being "relevant and interesting".

Here's how Twitter is now describing itself, on its "what's a Twitter timeline?" about page:

When you sign in to Twitter, you'll land on your home timeline.

  • Your home timeline displays a stream of Tweets from accounts you have chosen to follow on Twitter.
  • Additionally, when we identify a Tweet, an account to follow, or other content that's popular or relevant, we may add it to your timeline. This means you will sometimes see Tweets from accounts you don't follow. We select each Tweet using a variety of signals, including how popular it is and how people in your network are interacting with it. Our goal is to make your home timeline even more relevant and interesting.

It doesn't take long to find users who hate this change.

It's pretty obvious why this is so annoying - favourites function in a very different way to retweets. Here's a dumb Buzzfeed list of "17 Types of Twitter Fave" - ignore some of the sillier ones like the accidental self-fave, and it's still clear there's a lot a favourite can mean. I use it for bookmarking stuff for later that I then might want to retweet if I think it's worth it, but there are plenty of other times users won't want a favourite to be automatically pumped into their followers' timelines. They might fave a job advert, for example, or a tweet critical of someone they know to remind themselves of it even if they disagree. Now, a pseudo-private clearing house for public activity is itself also public.

However, the reason for the change is simple: Twitter will make more money if it gets more people tweeting, and people are more likely to tweet if they see stuff they can tweet about.

At the moment there's a clear difference between the types of service that Twitter and Facebook offer: the former's is comprehensive, while the latter's is curatorial. Facebook's news feed did, once upon a time, list nothing more than the activity of a user's friends - things like wall posts, shared links, adding new friends - but very quickly began using algorithmic guesses to insert extra stuff that it thought was relevant. The news feed these days is less like a place to get updates from friends, and more a streak of vomit - you know that there are probably some quite nice things that were ingested in the beginning, but the recommendations that came back up were not particularly welcome.

A good illustration of this is Mat Honan's Wired piece where he liked everything he saw on Facebook for two days. The result was that it not only quickly became completely unusable, pumping out links to far-right political sites and clickbait listicles that crowded out any of his friends' activities, but he also ruined Facebook for everyone who was friends with him - the algorithms, after all, assume that word-of-mouth is the best recommendation engine that exists, and so treat the things your friends like as things you will likely also like. Your ability to control what Facebook shows you is negligible.

Twitter, by contrast, has kept this kind of manipulation to a minimum. Following someone on Twitter means that every tweet they make will appear in the timeline, as it happens. The people who really love Twitter tend to also dislike Facebook for this reason. Trying to follow world events in real-time is easier with a platform that treats every voice the same, and which doesn't let the actions of one user influence the timeline of another.

Except, of course, it does. The retweet function - where a user can republish a tweet for all of their followers as if they themselves follow the retweeted account - does allow some cross-contamination, and was introduced in 2009 as a more "natural" version of the manual method which had organically emerged when Twitter first launched. People hated it at first, too, for allowing "strangers in my stream". Then there are promoted tweets - anyone can pay to have their tweet show up in the timelines of strangers. People hated those as well (they still do), but, since it was obvious Twitter would have to find a way to make money to support its free service, they have come to be seen as a necessary evil.

There's a problem that every social network has to struggle with, and Twitter is no exception: how much to poke users into doing stuff they otherwise might not. Most people who use Twitter - we're talking millions of users - sign up, follow a few friends and relatives and a couple of celebrities, and then don't particularly get involved any more than that. This is the effect of respecting the user's ability to curate their own timeline. They act like bubbles, floating in isolation past each other while never mixing.

That's not good enough for a business like Twitter, which has been struggling to match the growth in users and revenues that it predicted in its IPO in November last year. Between December and July its stock fell in value by 47 per cent, before rebounding after an encouraging uptick in user growth and a reduction in losses. In large part this new confidence from investors is based on the idea that somehow, in the future, Twitter will crack a way of making money - just as Facebook has. That's why Twitter keeps experimenting, from making it easier to embed tweets in other websites to introducing all kinds of themed content for big events like the World Cup (remember the flags?).

And, fundamentally, that's why it makes business sense to turn the favourite function on Twitter into a kind of "I'm Feeling Lucky" retweet, or to let users see popular tweets from the people who the people they follow follow. It needs to keep its investors happy by converting those millions of registered users into active users, defined as those who log on at least once a month. Currently growth in that number is around six per cent, which isn't fast enough. More promising is to figure out how to convince the non-active users to become more "engaged". 85 per cent of those who stop using Twitter say it's because they had fewer than 30 followers, and 76 per cent say they found Twitter's lack of filtering and sorting functions offputting. Those are the kinds of figures that demand changes to a platform's functionality.

Users who check Twitter through third-party apps or clients like Tweetdeck or Tweetbot won't see this change - and it's notable that promoted tweets don't appear in those apps either. (There's no word yet from Twitter on whether the new favourite/retweet hybrid will appear in every iteration of the timeline, or if power users will be able to opt out indefinitely this way.) It's tempting, then, to dismiss the most vociferous critics of the change as those who are merely annoyed by any change at all - it worked just fine before, after all - and that's not an unfair criticism. But it's a policy change that is arguably more important symbolically, for taking away some user choice, than it is functionally.

However, Twitter's growing pains aren't limited to its timelines. The question of what content should be permissible in tweets has always been an issue, and it is becoming increasingly worrisome as it becomes clear that online harassment and bullying are depressingly suited to the medium. Twitter's introduction of auto-previewed images to timelines was roundly criticised for making shocking and disturbing images harder to avoid, and the process for reporting abusive behaviour is notoriously long-winded and complex - much more so than for reporting spammers. In the light of today's news that an American photojournalist, James Foley, has been murdered by Isis militants, the Twitter CEO Dick Costolo tweeted that any accounts actively sharing videos or pictures of "this graphic imagery" would be banned, yet enthusiastic crackdowns like this are often applied inconsistently.

The overall impression is that Twitter wants to be a space where users feel they can trust the links they see selling stuff, and know that they won't get a virus from clicking on them. Or, it's an online space where abusive and shocking behaviour is only dealt with when it affects a prominent celebrity or public figure whose public exit from Twitter might affect user trust - as with Robin Williams' daughter Zelda, who was driven from Twitter by behaviour which thousands of other women experience daily. This doesn't make it any less abhorrent, but it is disheartening that it takes an example so impossible to ignore for something to be done about the problem. In that sense, it's perhaps wise to be wary of yet more changes to Twitter which make it harder, not easier, for users to define what they experience online.

Ian Steadman is a staff science and technology writer at the New Statesman. He is on Twitter as @iansteadman.


“Stinking Googles should be killed”: why 4chan is using a search engine as a racist slur

Users of the anonymous forum are targeting Google after the company introduced a programme for censoring abusive language.

Contains examples of racist language and memes.

“You were born a Google, and you are going to die a Google.”

Despite the lack of obscenity and profanity in this sentence, you have probably realised it was intended to be offensive. It is just one of hundreds of similar messages posted by the users of 4chan's /pol/ board - an anonymous forum where people go to be politically incorrect. But they haven't suddenly seen the error of their ways about using the n-word to demean their fellow human beings - instead they are trying to make the word "Google" itself become a racist slur.

In an undertaking known as "Operation Google", some 4chan users are resisting Google's latest artificial intelligence program, Conversation AI, by swapping smears for the names of Google products. Conversation AI aims to spot and flag offensive language online, with the eventual possibility that it could automatically delete abusive comments. The famously outspoken forum 4chan, and the similar website 8chan, didn't like this, and began a campaign in which they refer to "Jews" as "Skypes", Muslims as "Skittles", and black people as "Googles".

If it weren’t for the utterly abhorrent racism – which includes users conflating Google’s chat tool “Hangouts” with pictures of lynched African-Americans – it would be a genius idea. The group aims to force Google to censor its own name, making its AI redundant. Yet some have acknowledged this might not ultimately work – as the AI will be able to use contextual clues to filter out when “Google” is used positively or pejoratively – and their ultimate aim is now simply to make “Google” a racist slur as revenge.
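The substitution trick exploits the weakness of filters that look only at words, not intent. A minimal sketch of the kind of naive keyword blocklist such a swap defeats (the terms here are placeholders, not 4chan's actual vocabulary, and this is not how Conversation AI itself works):

```python
# Toy illustration of why word substitution evades a keyword filter.
# "slur1" stands in for an actual abusive term on the blocklist.
BLOCKLIST = {"slur1", "slur2"}

def naive_filter(message: str) -> bool:
    """Return True if the message should be flagged as abusive."""
    words = {w.strip(".,!?").lower() for w in message.split()}
    return not BLOCKLIST.isdisjoint(words)

# The original abusive phrasing is caught...
assert naive_filter("You were born a slur1.") is True
# ...but swapping in a product name sails straight through, even
# though a human reader infers the same intent from context.
assert naive_filter("You were born a Google.") is False
```

A context-aware model, by contrast, scores how a word is used rather than which word it is - which is why some 4chan users concede the evasion may not work for long.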


Posters from 4chan

“If you're posting anything on social media, just casually replace n****rs/blacks with googles. Act as if it's already a thing,” wrote one anonymous user. “Ignore the company, just focus on the word. Casually is the important word here – don't force it. In a month or two, Google will find themselves running a company which is effectively called ‘n****r’. And their entire brand is built on that name, so they can't just change it.”

There is no doubt that Conversation AI is questionable to anyone who values free speech. Although most people desire a nicer internet, it is hard to agree that this should be achieved by blocking out large swathes of people, and putting the power to do so in the hands of one company. Additionally, algorithms can’t yet accurately detect sarcasm and humour, so false-positives are highly likely when a bot tries to identify whether something is offensive. Indeed, Wired journalist Andy Greenberg tested Conversation AI out and discovered it gave “I shit you not” 98 out of 100 on its personal attack scale.

Yet these 4chan users have made it impossible to agree with their fight against Google by combining it with their racism. Google scores the word “moron” 99 out of 100 on its offensiveness scale. Had protestors decided to replace this – or possibly even more offensive words like “bitch” or “motherfucker” – with “Google”, pretty much everyone would be on board.

Some 4chan users are aware of this - and indeed it is important not to treat the site as a monolithic entity. "You're just making yourselves look like idiots and ruining any legitimate effort to actually do this properly," wrote one user, while some discussed their concerns that "normies" - i.e. normal people - would never join in. Other 4chan users are against Operation Google because they see it as self-censorship, or simply stupid.


Memes from 4chan

But anyone who dismisses these efforts as the work of morons (or should that be Bings?) clearly does not understand the power of 4chan. The site brought down Microsoft's AI Tay in a single day, brought the Unicode swastika (卐) to the top of Google's trends list in 2008, hacked Sarah Palin's email account, and leaked a large number of celebrity nudes in 2014. If the Ten Commandments were rewritten for the modern age and Moses took to Mount Sinai to wave two 16GB tablets in the air, then the number one rule would be short and sweet: Thou shalt not mess with 4chan.

It is unclear yet how Google will respond to the attack, and whether this will ultimately affect the AI. Yet despite what ten years of Disney conditioning taught us as children, the world isn’t split into goodies and baddies. While 4chan’s methods are deplorable, their aim of questioning whether one company should have the power to censor the internet is not.

Google also hit headlines this week for its new "YouTube Heroes" program, a system that sees YouTube users rewarded with points when they flag offensive videos. It's not hard to see how this kind of crowdsourced censorship is undesirable, particularly as, once again, the chance of things being incorrectly flagged is huge. A few weeks ago, popular YouTubers also hit back at censorship that saw them lose their advertising money from the site, leading #YouTubeIsOverParty to trend on Twitter. Perhaps ultimately, 4chan didn't need to go on a campaign to damage Google's name. It might already have been doing a good enough job of that itself.

Google has been contacted for comment.

Amelia Tait is a technology and digital culture writer at the New Statesman.