Twitter doesn't like you avoiding ads

The social network has announced tough new restrictions on how third parties can build apps.

Twitter has announced, in a post titled "Delivering a consistent Twitter experience", that developers producing third-party Twitter apps need to start including all the major features of the branded Twitter apps and website. Michael Sippey writes:

We’re building tools for publishers and investing more and more in our own apps to ensure that you have a great experience everywhere you experience Twitter, no matter what device you’re using. You need to be able to see expanded Tweets and other features that make Twitter more engaging and easier to use. These are the features that bring people closer to the things they care about. These are the features that make Twitter Twitter. We're looking forward to working with you to make Twitter even better.

The proximate cause of the news is the launch of a new Twitter feature, expanded tweets, which lets publishers show previews of what a tweet links to directly in the interface.

Yet really, the news goes to the heart of Twitter's strategy as a company. Like most companies of its pedigree, it makes money through advertising. It sells promoted tweets, promoted trends, and promoted accounts in the "who to follow" box. But if you use a third-party Twitter app – that is, any app not made by Twitter, like Tweetbot on the iPhone, Hootsuite on the web, or UberSocial on Android – you don't see any of those.

That is bad enough for the company, but up to now the users of those apps have been a minority on the service. The vast majority of Twitter users stick to the website itself, or to one of the official clients on mobile devices. So why should they care that the nerds are going to be forced to use Twitter the way everyone else already does?

Because Twitter aren't just trying to monetise the users they currently miss out on. They also want to – at the risk of being alarmist – block the exits.

In April 2010, the company acquired the developers of Tweetie, then the most popular independent app (this was at a time, hard as it is to believe, when they didn't have an official app), and rebranded it as the official app. Less than a year later, they introduced a feature known as the "quickbar". In terms of usability, it was one of the most obnoxious features added to the service since its inception – an always-on view of the trending topics at the top of the screen, which took up valuable space on a small phone display.

The quickbar was such a failure that Twitter pulled it from the app, for fear of sparking an exodus to other clients. But at the same time as backtracking, the company made its first ominous pronouncement on the future of third-party developers, warning them not to:

Build client apps that mimic or reproduce the mainstream Twitter consumer client experience.

This is, of course, what most apps do – they replace, rather than add to, what the official client can do – but for the last year Twitter has stayed quiet on its threats. Until now. Next time Twitter introduces something like the quickbar, there will be nowhere to run.

They can take Tweetbot from our phones, but they'll never take it from our hearts. They'll just disable the API so it can't access the site.

The Twitter logo, manipulated.

Alex Hern is a technology reporter for the Guardian. He was formerly staff writer at the New Statesman. You should follow Alex on Twitter.


A quote-by-quote analysis of how little Jeremy Hunt understands technology

Can social media giants really implement the health secretary’s sexting suggestions? 

In today’s “Did we do something wrong? No, it was social media” news, Health Secretary Jeremy Hunt has argued that technology companies need to do more to prevent sexting and cyber-bullying.

Hunt, whose job it is to help reduce the teenage suicide rate, argued that the onus for reducing the teenage suicide rate should fall on social media companies such as Facebook and Twitter.

Giving evidence to the Commons Health Committee on suicide prevention, Hunt said: “I think social media companies need to step up to the plate and show us how they can be the solution to the issue of mental ill health amongst teenagers, and not the cause of the problem.”

Pause for screaming and/or tearing out of hair.

Don’t worry, though; Hunt wasn’t simply trying to pass the buck, despite the committee suggesting he direct more resources to suicide prevention. He offered extremely well-thought-out technological solutions that are in no way inferior to providing better sex education for children. Here’s a quote-by-quote analysis of just how technologically savvy Hunt is.

***

“I just ask myself the simple question as to why it is that you can’t prevent the texting of sexually explicit images by people under the age of 18…”

Here’s Hunt asking himself a question that he should be asking the actual experts, which is in no way a waste of anybody’s time at all.

“… If that’s a lock that parents choose to put on a mobile phone contract…”

A lock! But of course. But what should we lock, Jeremy? Should teenagers’ phones come with a ban on all social media apps, and, for good measure, a block on the use of the camera app itself? It’s hard to see how this would lead to the use of dubious applications that have significantly less security than giants such as Facebook and Snapchat. Well done.

“Because there is technology that can identify sexually explicit pictures and prevent it being transmitted.”

Erm, is there? Image recognition technology does exist, but it’s incredibly complex and expensive, and companies often rely on other information (such as URLs, tags, and hashes) to filter out and identify explicit images. In addition, social media sites like Facebook rely on their users to click the button that reports an image as a breach of their guidelines, and then have a human team look through the reported images. The technology is simply unable to identify individual and unique images that teenagers take of their own bodies, and the idea of a human team tackling the job is preposterous.
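To make that concrete, here is a deliberately crude sketch of the hash-lookup approach platforms lean on (the function, the blocklist, and its contents are all hypothetical; real systems use more forgiving "perceptual" hashes, but the limitation is the same): it can only catch files that have already been seen and flagged, which is precisely why a brand-new photo a teenager has just taken is invisible to it.

```python
import hashlib

# Hypothetical blocklist: digests of images the platform has already seen and
# flagged as explicit. The value below is a placeholder, not real data.
KNOWN_EXPLICIT_HASHES = {
    "d2a84f4b8b650937ec8f73cd8be2c74add5a911ba64df27458ed8229da804a26",
}

def is_known_explicit(image_bytes: bytes) -> bool:
    """Return True only if this exact file has been seen and flagged before."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest in KNOWN_EXPLICIT_HASHES

# A never-before-seen image produces a digest that no blocklist has recorded,
# so a lookup like this can never flag it.
print(is_known_explicit(b"a brand-new, never-before-seen photo"))  # False
```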

But suppose the technology did exist that could flawlessly scan a picture for fleshy bits and bobs? As a tool to prevent sexting, this is still extremely flawed. What if two teens were trying to message one another Titian’s Venus for art or history class? In September, Facebook itself was forced to U-turn after removing the historical “napalm girl” photo from the site.

As for the second part of Jezza’s suggestion, if you can’t identify it, you can’t block it. Facebook Messenger already blocks you from sending pornographic links, but this again relies on analysis of the URLs rather than the content behind them. Other messaging services, such as WhatsApp, offer end-to-end encryption (E2EE), meaning – most likely to Hunt’s chagrin – that the messages sent on them are neither stored by the company nor easily accessible to the government.
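A rough sketch of what URL-level blocking amounts to (the domain names and blocklist here are made up for illustration): the check inspects the link itself, never the content the page actually serves, so the same material re-hosted on an unlisted domain – or sent inside an end-to-end encrypted chat the platform cannot read at all – passes straight through.

```python
from urllib.parse import urlparse

# Hypothetical domain blocklist; a real one would be vastly larger.
BLOCKED_DOMAINS = {"known-porn-site.example"}

def link_is_blocked(url: str) -> bool:
    """Inspect only the URL string, not whatever the page behind it contains."""
    host = (urlparse(url).hostname or "").lower()
    return host in BLOCKED_DOMAINS

print(link_is_blocked("https://known-porn-site.example/pic.jpg"))            # True
print(link_is_blocked("https://freshly-registered-mirror.example/pic.jpg"))  # False
```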

“I ask myself why we can’t identify cyberbullying when it happens on social media platforms by word pattern recognition, and then prevent it happening.”

Jeremy, Jeremy, Jeremy, Jeremy, can’t you spot your problem yet? You’ve got to stop asking yourself!

There is simply no algorithm yet intelligent enough to reliably identify bullying language. Why? Because we call our best mate “dickhead” and our worst enemy “pal”. Human language and meaning are infinitely complex, and scanning for certain words would almost certainly lead to false positives. As Labour MP Thangam Debbonaire famously learned this year, even humans can’t always tell whether language is offensive, so what chance does an algorithm stand?
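As a toy illustration (the word list and messages are invented, not anyone's real moderation system), here is the kind of word-pattern filter Hunt seems to have in mind, flagging the friendly wind-up and waving through the genuine menace:

```python
# Hypothetical word-pattern filter: flag any message containing a listed word.
ABUSIVE_WORDS = {"dickhead", "loser"}

def looks_like_bullying(message: str) -> bool:
    """Naively flag a message if it contains any word on the list."""
    return any(word in message.lower() for word in ABUSIVE_WORDS)

# False positive: affectionate banter between best mates gets flagged...
print(looks_like_bullying("Alright dickhead, fancy the cinema later?"))  # True
# ...while a genuinely menacing message with no listed words sails through.
print(looks_like_bullying("Everyone at school hates you, pal."))         # False
```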

(Side note: It is also amusing to imagine that Hunt could even begin to keep up with teenage slang in this scenario.)

Many also argue that because social media sites can remove copyrighted files efficiently, they should get better at removing abusive language. This is a flawed argument: it is easy to search for a specific file – copyright holders will often send social media giants hashed files, which the sites can then search for in their databases – whereas, for the reasons outlined above, it is exceptionally difficult for an algorithm to accurately identify the true meaning of language.

“I think there are a lot of things where social media companies could put options in their software that could reduce the risks associated with social media, and I do think that is something which they should actively pursue in a way that hasn’t happened to date.”

Leaving aside the fact that social media companies constantly come up with solutions for these problems, Hunt has left us with the burning question of whether any of this is even desirable at all.

Why should he prevent under-18s from sexting when the age of consent in the UK is 16? Where has this sudden moral panic about pornography come from? Are the government laying the ground for mass censorship? If two consenting teenagers want to send each other the aubergine emoji a couple of times a week, why should we stop them? Is it not up to parents, rather than the government, to monitor and supervise their children’s online activities? Would education, with all of this in mind, not be the better option? Won't somebody please think of the children?

“There is a lot of evidence that the technology industry, if they put their mind to it, can do really smart things.”

Alas, if only we could say the same for you, Mr Hunt.

Amelia Tait is a technology and digital culture writer at the New Statesman.