Twitter doesn't like you avoiding ads

The social network has announced tough new restrictions on how third parties can build apps.

Twitter has announced, in a post titled "Delivering a consistent Twitter experience", that developers producing third-party Twitter apps need to start including all the major features of the branded Twitter apps and website. Michael Sippey writes:

We’re building tools for publishers and investing more and more in our own apps to ensure that you have a great experience everywhere you experience Twitter, no matter what device you’re using. You need to be able to see expanded Tweets and other features that make Twitter more engaging and easier to use. These are the features that bring people closer to the things they care about. These are the features that make Twitter Twitter. We're looking forward to working with you to make Twitter even better.

The proximate cause of the news is the launch of a new feature on Twitter, expanded tweets, which lets publishers show previews of what a tweet is linking to directly in the interface.

Yet really, the news goes to the heart of Twitter's strategy as a company. Like most companies of its pedigree, it makes money through advertising. It sells tweets, trends, and promotion in the "who to follow" box. But if you use a third-party Twitter app – that is, any app not made by Twitter, like Tweetbot for iPhone, Hootsuite on the web, or UberSocial on Android – you don't see those.

That is bad enough for the company, but up to now the users of those apps have been a minority on the service. The vast majority of people on Twitter use the website itself, or one of the official clients on mobile devices. So why should they care that nerds are going to be forced to do what they do normally?

Because Twitter aren't just trying to monetise the users they currently miss out on. They also want to – at the risk of being alarmist – block the exits.

In April 2010, the company acquired the developers of Tweetie, the then most popular independent app (this was at a time, hard as it is to believe, when Twitter didn't have an official app of its own), and rebranded it as the official app. Less than a year later, it introduced a feature known as the "quickbar". In terms of usability, it was one of the most obnoxious features added to the service since its inception – an always-on view of the trending topics at the top of the screen which took up valuable space on a small phone.

The quickbar was such a failure that Twitter pulled it from the app, for fear of sparking an exodus to other clients. But even as it backtracked, the company made its first ominous pronouncement on the future of third-party developers, warning them not to:

Build client apps that mimic or reproduce the mainstream Twitter consumer client experience.

This is, of course, what most apps do – they replace, rather than add to, what the official client can do – but for the last year, Twitter has stayed quiet on its threats. Until now. Next time Twitter introduces something similar to the quickbar, there will be nowhere to run.

They can take Tweetbot from our phones, but they'll never take it from our hearts. They'll just disable the API so it can't access the site.


Alex Hern is a technology reporter for the Guardian. He was formerly a staff writer at the New Statesman. You should follow Alex on Twitter.


Should Facebook face the heat for the Cleveland shooting video?

On Easter Sunday, a man now dubbed the “Facebook killer” shot and killed a grandfather before uploading footage of the murder to the social network. 

A murder suspect has killed himself after shooting a grandfather dead, seemingly at random, last Sunday. Steve Stephens, 37, had been hunted by police on suspicion of killing Robert Godwin, 74, in Cleveland, Ohio.

The story has made international headlines not because of the murder in itself – in America, there are 12,000 gun homicides a year – but because a video of the shooting was uploaded to Facebook by the suspected killer, along with, moments later, a live-streamed confession.

After it emerged that Facebook took two hours to remove the footage of the shooting, the social network has come under fire and has promised to “do better” to make the site a “safe environment”. The site has launched a review of how it deals with violent content.

It’s hard to poke holes in Facebook’s official response – written by Justin Osofsky, its vice president of global operations – which at once acknowledges how difficult it would have been to do more and promises to do more anyway. In a timeline of events, Osofsky notes that the shooting video was not reported to Facebook until one hour and 45 minutes after it had been uploaded. A further 23 minutes after this, the suspect’s profile was disabled and the videos were no longer visible.

Despite this, the site has been condemned by many, with Reuters calling its response “bungled” and the two-hour response time prompting multiple headlines. Yet solutions are not as readily offered. Currently, the social network largely relies on its users to report offensive content, which is then reviewed and removed by a team of humans – at present, artificial intelligence generates only around a third of the reports that reach this team. The network is constantly working on new algorithms and artificially intelligent tools to uphold its community standards, but there is simply no existing AI that can comb through the content posted by Facebook’s one billion active users and immediately identify and remove a video of a murder.

The only solution, then, would be for Facebook to watch every second of every video – 100 million hours of which are watched every day on the site – before it goes live, a task daunting not only for its team, but for anyone concerned about global censorship. Of course Facebook should act as quickly as possible to remove harmful content (and of course Facebook shouldn’t call murder videos “content” in the first place) but does the site really deserve this much blame for the Cleveland killer?

To remove the blame from Facebook is not to deny that it is incredibly psychologically damaging to watch an auto-playing video of a murder. Nor should we lose sight of the fact that the act, as well as the name “Facebook killer” itself, could arguably inspire copycats. But we have to acknowledge the limits on what technology can do. Even if Facebook removed the video in three seconds, it is apparent that for thousands of users, the first impulse is to download and re-upload upsetting content rather than report it. This is evident in the fact that the victim’s grandson, Ryan, took to a different social network – Twitter – to ask people to stop sharing the video. It took nearly two hours for anyone to report the video to Facebook – it took seconds for people to download a copy for themselves and share it on.

When we ignore these realities and beg Facebook to act, we embolden the moral crusade of surveillance. The UK government has a pattern of using tragedy to justify invasions into our privacy and security, most recently when home secretary Amber Rudd suggested that WhatsApp should remove its encryption after it emerged the Westminster attacker used the service. We cannot at once bemoan Facebook’s power in the world and simultaneously beg it to take total control. When you ask Facebook to review all of the content of all of its billions of users, you are asking for a God.

This is particularly undesirable in light of the good that shocking Facebook videos can do – however gruesome. Invaluable evidence is often provided in these clips, be they filmed by criminals themselves or their victims. When Philando Castile’s girlfriend Facebook live-streamed the aftermath of his shooting by a police officer during a traffic stop, it shed international light on police brutality in America and aided the charging of the officer in question. This clip would never have been seen if Facebook had total control of the videos uploaded to its site.  

We need to stop blaming Facebook for things it can’t yet change, when we should focus on things it can. In 2016, the site was criticised for: allowing racial discrimination via its targeted advertising; invading privacy with its facial scanning; banning breast cancer-awareness videos; avoiding billions of dollars in tax; and tracking non-users’ activity across the web. Facebook should be under scrutiny for its repeated violations of its users’ privacy, not for hosting violent content – a criticism that will just give the site an excuse to violate people’s privacy even further.

No one blames cars for the recent spate of vehicular terrorist attacks in Europe, and no one should blame Facebook for the Cleveland killer. Ultimately, we should accept that the social network is just a vehicle. The one to blame is the person driving.

If you have accidentally viewed upsetting and/or violent footage on social media that has affected you, call the Samaritans helpline on 116 123 or email jo@samaritans.org

Amelia Tait is a technology and digital culture writer at the New Statesman.
