An Apple iPad with Twitter's native app. Photo: Peter Macdiarmid/Getty Images

Twitter's taking away your control over what tweets you choose to see

A subtle change in how Twitter's feed works will make some people very angry, but most people probably won't even notice.

Twitter users will this week notice a strange new thing happening to their timelines - they're not theirs any more. Tweets from people they don't follow, and which the people they follow haven't chosen to retweet, are now appearing in timelines under the guise of being "relevant and interesting".

Here's how Twitter is now describing itself, on its "what's a Twitter timeline?" about page:

When you sign in to Twitter, you'll land on your home timeline.

  • Your home timeline displays a stream of Tweets from accounts you have chosen to follow on Twitter.
  • Additionally, when we identify a Tweet, an account to follow, or other content that's popular or relevant, we may add it to your timeline. This means you will sometimes see Tweets from accounts you don't follow. We select each Tweet using a variety of signals, including how popular it is and how people in your network are interacting with it. Our goal is to make your home timeline even more relevant and interesting.

It doesn't take long to find out how much users hate this change.

It's pretty obvious why this is so annoying - favourites function in a very different way to retweets. Here's a dumb Buzzfeed list of "17 Types of Twitter Fave" - ignore some of the sillier ones, like the accidental self-fave, and it's still clear that a favourite can mean a lot of different things. I use favourites to bookmark things I might want to retweet later, if I decide they're worth it, but there are plenty of other occasions when users won't want a favourite automatically pumped into their followers' timelines. They might fave a job advert, for example, or a tweet critical of someone they know, to remind themselves of it even if they disagree. Now, a pseudo-private clearing house for public activity is itself also public.

However, the reason for the change is simple: Twitter will make more money if it gets more people tweeting, and people are more likely to tweet if they see stuff they can tweet about.

At the moment there's a clear difference between the types of service that Twitter and Facebook offer: the former's is comprehensive, while the latter's is curatorial. Facebook's news feed did, once upon a time, list nothing more than the activity of a user's friends - wall posts, shared links, new friendships, that sort of thing - but it very quickly began using algorithmic guesses to insert extra stuff that it thought was relevant. The news feed these days is less like a place to get updates from friends, and more like a streak of vomit - you know that some quite nice things were probably ingested in the beginning, but the recommendations that come back up are not particularly welcome.

A good illustration of this is Mat Honan's Wired piece, in which he liked everything he saw on Facebook for two days. The result was that his feed not only quickly became completely unusable, pumping out links to far-right political sites and clickbait listicles that crowded out any of his friends' activities, but he also ruined Facebook for everyone who was friends with him - the algorithms, after all, assume that word-of-mouth is the best recommendation engine there is, and so treat the things your friends like as things you will probably also like. Your ability to control what Facebook shows you is negligible.

Twitter, by contrast, has kept this kind of manipulation to a minimum. Following someone on Twitter means that every tweet they post will appear in your timeline, as it happens. The people who really love Twitter tend to dislike Facebook for this very reason. Trying to follow world events in real time is easier with a platform that treats every voice the same, and which doesn't let the actions of one user influence the timeline of another.

Except, of course, it does. The retweet function - where a user can republish a tweet to all of their followers, as if those followers followed the retweeted account themselves - does allow some cross-contamination. It was introduced in 2009 as a more "natural" version of the manual method that had organically emerged when Twitter first launched, and people hated it at first, too, for allowing "strangers in my stream". Then there are promoted tweets - anyone can pay to have their tweet show up in the timelines of strangers. People hated them as well (they still do), but, since it was obvious Twitter would have to find a way to make money to support its free service, they have come to be seen as a necessary evil.

There's a problem that every social network has to struggle with, and Twitter is no exception: how much to poke users into doing things they otherwise might not. Most people who use Twitter - we're talking millions of users - sign up, follow a few friends and relatives and a couple of celebrities, and then don't particularly get involved beyond that. This is the effect of respecting the user's ability to curate their own timeline. These users act like bubbles, floating in isolation past each other without ever mixing.

That's not good enough for a business like Twitter, which has been struggling to match the growth in users and revenues that it predicted at its IPO in November last year. Between December and July its stock fell in value by 47 per cent, before rebounding after an encouraging uptick in user growth and a reduction in losses. In large part this new confidence from investors is based on the idea that somehow, in the future, Twitter will crack a way of making money - just as Facebook has. That's why Twitter keeps experimenting, from making it easier to embed tweets in other websites to introducing all kinds of themed content for big events like the World Cup (remember the flags?).

And, fundamentally, that's why it makes business sense to turn the favourite function on Twitter into a kind of "I'm Feeling Lucky" retweet, or to let users see popular tweets from the people whom the people they follow follow. It needs to keep its investors happy by converting those millions of registered users into active users, defined as those who log on at least once a month. Currently growth in that number is around six per cent, which isn't fast enough. More promising is to figure out how to convince the non-active users to become more "engaged". Some 85 per cent of those who stop using Twitter say it's because they had fewer than 30 followers, and 76 per cent say they found Twitter's lack of filtering and sorting functions off-putting. Those are the kinds of figures that demand changes to a platform's functionality.

Users who check Twitter through third-party apps or clients like Tweetdeck or Tweetbot won't see this change - and it's notable that promoted tweets don't appear in those apps either. (There's no word yet from Twitter on whether the new favourite/retweet hybrid will appear in every iteration of the timeline, or whether power users will be able to opt out indefinitely this way.) It's tempting, then, to dismiss the most vociferous critics of the change as those who are merely annoyed by any change at all - it worked just fine before, after all - and that's not an unfair criticism. But this is a change that is arguably more important symbolically, for taking away some user choice, than it is functionally.

However, Twitter's growing pains aren't limited to its timelines. The question of what content should be permissible in tweets has always been an issue, and it is becoming increasingly worrisome as it becomes clear that online harassment and bullying are depressingly suited to the medium. Twitter's introduction of auto-previewed images to timelines was roundly criticised for making shocking and disturbing images harder to avoid, and the process for reporting abusive behaviour is notoriously long-winded and complex - much more so than for reporting spammers. In the light of today's news that an American photojournalist, James Foley, has been murdered by Isis militants, the Twitter CEO Dick Costolo tweeted that any accounts actively sharing videos or pictures of "this graphic imagery" would be banned, yet enthusiastic crackdowns like this are often applied inconsistently.

The overall impression is that Twitter wants to be a space where users feel they can trust the links they see selling stuff, and know that they won't get a virus from clicking on them. Or, to put it another way, it's an online space where abusive and shocking behaviour is only dealt with when it affects a prominent celebrity or public figure whose public egress from Twitter might affect user trust - as with Robin Williams' daughter Zelda, who was driven from Twitter by behaviour that thousands of other women experience daily. That doesn't make the abuse any less abhorrent, but it is disheartening that it takes an example so impossible to ignore for something to be done about the problem. In that sense, it's perhaps wise to be wary of yet more changes to Twitter which make it harder, not easier, for users to define what they experience online.

Ian Steadman is a staff science and technology writer at the New Statesman. He is on Twitter as @iansteadman.


Move objects with your mind – telekinesis is coming to a human brain near you

If a user puts on the Neurable headset, they can move virtual objects with their thoughts. 

On 30 July, a blog post on Medium by Michael Thompson, the vice-president of Boston-based start-up Neurable, said his company had perfected a kind of technology which would be “redrawing the boundaries of human experience”. 

Neurable had just fulfilled the pipe dreams of science fiction enthusiasts and video game fanboys, according to Thompson – it had created a telekinetic EEG strap. In plain English, if a user puts on the Neurable headset and plays a specially designed virtual reality video game, they can move virtual objects with their thoughts. 

Madrid-based gaming company eStudioFuture collaborated with Neurable to create the game, Awakening. In it, the user breaks out of a government lab, battles robots and interacts with objects around them, all hands-free with Neurable's headset. Awakening debuted at SIGGRAPH, a computer graphics conference in Boston, where it was well received by consumers and investors alike.

The strap (or peripheral, as it’s referred to) works as an add-on to the industry-standard headset of oversized goggles. Neurable's addition has a comb-like structure that reaches past your hair to make contact with the scalp, where electroencephalogram (EEG) sensors detect specific kinds of neural signals from brain activity. Thanks to a combination of machine-learning software and eye-tracking technology, all the user of the headset has to do is think the word “grab”, and the object they are looking at will move – for example, throwing a box at the robot trying to stop you from breaking out of a government lab. 

The current conversation around virtual reality, and technologies like it, lurches between optimism and cynicism. Critics have highlighted the narrow range of uses that the current technology is aimed at (think fun facial filters on Snapchat). But after the debut of the virtual reality headsets Oculus Rift and HTC Vive at 2016’s Game Developers Conference, entrepreneurs are increasingly taking notice of virtual reality's potential to make everyday life more convenient.

Tech giants such as Microsoft, Facebook and Google have all been in on the game since as far back as 2014, when Facebook bought Oculus (the maker of the Oculus Rift). Then, in 2016, Nintendo and Niantic (an offshoot of Google) launched Pokémon Go. One of Microsoft’s leading technical fellows, Alex Kipman, told Polygon that distinctions between virtual reality, augmented reality and mixed reality were arbitrary: "At the end of the day, it’s all on a continuum." 

Oculus’s Jason Rubin has emphasised the potential that VR has to make human life that much more interesting or efficient. Say that you're undergoing a home renovation – potentially, with VR technology, you could pop on your headset and see a hologram of your living room. You could move your virtual furniture around with minimal effort, and then do exactly the same in reality – in half the time and effort. IKEA already offers a similar service in store – imagine being able to do it yourself.

Any kind of experience that is in part virtual reality – from video games to online tours of holiday destinations to interactive displays at museums – will become much more immersive.

Microsoft’s HoloLens is already being trialled at University College London Hospital, where students can study detailed holograms of organs, and patients can get an in-depth look at their insides projected in front of them (HoloLens won’t be commercially available for a while). Neurable's ambitions go beyond video games – its headset was designed by neuroscientists who have spent years working in neurotechnology, and it offers the potential for important scientific and technological breakthroughs in areas such as prosthetic limbs. 

Whether because of a childhood obsession with Star Wars or out of sheer laziness, as a society we remain fascinated by the thought of being able to move objects with our minds. But in actual reality, VR and similar technologies bring with them a set of prickly questions.

Will students at well-funded schools be able to get a more in-depth look at topography in a geography lesson through VR headsets than their counterparts elsewhere? Will companies be able to maintain a grip on what people do in virtual reality, or will people eventually start to make their own (there are already plenty of DIY tutorials on the internet)? Will governments be able to regulate and monitor the use of insidious technologies like augmented reality and mixed reality, and make sure they don't harm minors or infringe on privacy rights? 

Worldwide spending on items such as virtual reality headsets and games is forecast to double every year until 2021, according to recent figures. Industry experts and innovators tend to agree that it remains extremely unlikely you’ll walk into someone examining a hologram on the street. All the same, VR technology like Neurable’s is slowly creeping into the fabric of our lived environment.