Twitter's founders launch two new services. What are they, and do they have a hope?

Medium and Branch could be third (and fourth) time lucky for Stone and Williams.

Ev Williams and Biz Stone, the co-founders of Blogger (now owned by Google) and Twitter, have launched not one, but two follow-up projects, Medium and Branch.

The two men will be staying on as directors of Twitter, which poses a commercial problem for them – how do they use their expertise to carry on the string of hits without cannibalising their previous business? One of Twitter's co-founders, Jack Dorsey, decided to abandon the social media sector entirely, instead attacking two monopolies at once with Square, a platform which allows anyone to accept credit card payments with just an iPhone or iPad.

And with their audacious move to launch two start-ups at the same time, Stone and Williams have that problem doubled. Will people really use Twitter, Medium and Branch at the same time? That's the plan.

What are they?

First things first: what exactly are the new platforms? It's always going to be tricky to describe these things until usage patterns have built up around them organically. You inevitably find yourself resorting to analogies with existing services, which can be far from perfect: I remember, in 2007, attempting to describe Twitter to someone as "like Facebook status updates but without the rest of Facebook". Needless to say, I did not convince them to sign up.

Stone and Williams seem to have a firmer idea of what a mature Branch and Medium will look like than they did with Twitter, however. Twitter, famously, was heavily driven by its users, with conventions like hashtags, retweets and @-mentions invented on the fly and only later incorporated into the architecture of the site. The way people use it today bears little resemblance to the way they used it five years ago.

Medium is a very image-centric platform for content grouped around specific themes. The idea is that users create "collections", each built around a theme. Some collections are closed, while others are open to extra contributions. Williams explains (on Medium, of course):

Collections give people context and structure to publish their own stories, photos, and ideas. By default, the highest-rated posts show up at the top, helping people get the most out of their time in this world of infinite information.

Together, the contributions of many add up to create compelling and useful experiences. You may be inspired to post one time or several times a day—either way is okay. If you’re more ambitious, you might create a collection of your own.

Collections exist on topics like editorials, things people have made, nostalgic photos and crazy stories, while the site has a voting function which, ideally, ensures that interesting contributions to those collections float to the top.

Although the design is focused around images, and reminiscent of Pinterest in its gridded layout, posts can be all text, and can indeed be quite weighty. In terms of the (small-m) medium, Medium looks to be encouraging a similar approach to Tumblr (although with much more high-brow content, ideally). Lots of images, some text, and a few links out. The idea is that the individual posts become something more when the group as a whole takes over.

Branch is far more about the conversation as a whole. At its heart lies a question and answer format similar to Quora, another Silicon Valley darling. Users start conversations with an opening post, and can then invite others to join in. The chats are readable by anyone, but only invited users can contribute – importantly, though, anyone can click on any post to "branch" it into its own thread.

Topics being discussed at the moment include today's changes to Twitter's platform, TEDx, an offshoot from the popular TED conferences, and Obama's re-election prospects.

It's easier to describe than Medium, but that's partially because it's a far simpler service. It knows what it wants to be, but that leaves far less room for users to discover new ways of using it.

How do they work with Twitter?

If it wasn't clear before that these sites need to work with Twitter, rather than against it, the company today announced changes to the way it deals with third-party apps and services – changes which appear to be a precursor to banning many of them from the network entirely.

Branch is most explicit about how it would mesh with Twitter. It sees itself as a way to take those long, unwieldy five- or six-participant conversations off-site, to somewhere where arguments can be developed at a bit more length. As seen in this discussion, it even encourages you to embed tweets to begin the chat.

Medium targets itself at a different sector. It still links to Twitter – right now, the only way to sign up for an account is to use your Twitter account, for instance – but there are few explicit connections between the two services. Its target is different, lying somewhere between Tumblr and Pinterest. The most interesting claim the founders make is that it will not require massive engagement to get noticed – a problem with both those sites. If everything works as stated, then a first post could become the most "interesting" one on the most-read board. In this, as with its voting mechanic, Medium actually bears more than a passing resemblance to Reddit. Submit cool things, get up-votes, and be read by the crowd, all of which is fragmented over boards which anyone can create.

Reddit, of course, co-exists admirably with Twitter, so there should be no problem there.

When I wrote on Twitter's API changes, I argued that even worse than the ill-thought-out rules being strictly applied would be if they aren't strictly applied – if, as there are indications, Twitter gives "good" sites an easier ride.

Sadly, Branch just adds to that notion. While the site will doubtless play well with Twitter, it breaks several of the company's design guidelines (soon to become design requirements). Tweets are displayed without retweet, reply, or favourite buttons, names are displayed without the username next to them, and the Twitter logo is not always displayed in the top right corner. Despite this, something tells me it will not have its API access revoked.

Do they have a hope?

The real question, of course, is whether these things can grow beyond the initial hype. Are they filling niches that need to be filled? Can they encourage users to switch from competing services? And will they work as they scale?

Of the two, Branch is the one which has the more obvious chance of success. It is easy to imagine people saying "let's take this to Branch" when a conversation on Twitter gets out of hand, and the integration between the two services makes that something even the least technologically-minded user can do. Obviously the "featured branches" view of the site would gradually fade into the background as it grew, just as you can't get a whole-site feed for Twitter anymore, but this is to be expected; as Dalton Caldwell argued, the global feed is useful for avoiding anti-network effects (where a site gets less useful the more people are on it; compare, for example, Yahoo! Answers and Quora) in a growing site, but useless once something reaches critical mass.

Medium is a different beast entirely. Its problem is getting people to use it. Is it a Tumblr replacement? Pinterest? How should you get content into, and out of, it? Is it for ephemeral posts, or will it have a working archive?

Yet if it does work out – if people do start sharing wonderful things, and telling each other "nice work!" (the equivalent of an up-vote, to use the Reddit analogy) – then Medium has a chance of being, not just a useful addendum to other social networks, but a hub in its own right. Reddit has 35 million users, and an incredibly engaged community. Who wouldn't want a piece of that?


Alex Hern is a technology reporter for the Guardian. He was formerly staff writer at the New Statesman. You should follow Alex on Twitter.


The rise of the racist robots

Advances in artificial intelligence are at risk of being held back by human ignorance.

As far as dystopian visions of the future go, you can’t get much worse than the words “Nazi robots”. But while films, TV and games might conjure up visions of ten-foot titanium automatons with swastikas for eyes, the real racist robots – much like their human counterparts – are often much more subtle.

Last night, Stephen Hawking warned us all about artificial intelligence. Speaking at the opening of the Leverhulme Centre for the Future of Intelligence (LCFI) at Cambridge University, Hawking labelled AI “either the best, or the worst thing, ever to happen to humanity.” It’s an intelligent warning – many experts are already worried that AI will destroy the world of work – but it homogenises humans. What is the “best thing” for some may be the “worst thing” for others, and nowhere is this clearer than the issue of race.

It started with the Nikon Coolpix S630. In 2009, Joz Wang, a Taiwanese-American, bought the camera for her mother, and was shocked when a message on the screen asked “Did someone blink?” after she took a picture of herself. In July 2015, Google Photos came under fire after its image recognition software tagged Jacky Alciné and his friend, both of whom are black, as “Gorillas”. In September of the same year, a video showing an automatic soap dispenser refusing to respond to a black hand went viral. You might dismiss these examples as harmless bugs or honest mistakes, but they still tell us a lot about the way the technology industry tests its products – and therefore, which customers it values.

But then it got worse. This year alone, the first beauty contest judged by AI had only one dark-skinned winner out of 44, Princeton academics discovered that a popular language-processing algorithm found "black" names unpleasant, and American software used to predict future criminals rated black people as higher risk. And who can forget Microsoft's ill-fated chatbot Tay? The bot – which was taught to converse by mimicking other Twitter users' speech – was taken offline after 16 hours because it began spouting sexist and racist messages.

We could sit here and debate whether an AI can truly be considered racist, but it wouldn't change the outcome of events. Even though these algorithms and machines aren't explicitly programmed to be racist – and their designers usually aren't prejudiced themselves – it doesn't change the consequences of their use. The more dominant AI becomes in our world, the more problematic this will be. Imagine the consequences of racial bias in AI job-screening tools, dating sites, mortgage advisers, insurance companies, and so on.

“Bias in AI systems is a vital issue,” says Calum Chace, the best-selling author of Surviving AI and a speaker on how the technology will affect our future. “We humans are deplorably biased – even the best of us. AIs can do better, and we need them to, but we have to ensure their inputs are unbiased.”

To do this, Chace explains, we need to figure out the root of the "racism". Pretty much no one is deliberately designing their AI to be racist – Google's chief social architect, Yonatan Zunger, responded quickly to the "Gorillas" incident, tweeting "This is 100% Not OK." But the fact that only two per cent of Google employees are black is perceived as part of the problem, as in many of these instances the technology was designed with white people in mind. "The chief technology officer of the company that ran the beauty contest explained that its database had a lot more white people than Indian people and that it was 'possible' that because of that their algorithm was biased," says Chace.

There are also technical solutions. Chace explains that machine learning systems work best when they are fed huge quantities of data. “It is likely that the system was trained on too few images – a mere 6,000, compared with the many millions of images used in more successful machine learning systems. As a senior Googler said, machine learning becomes ‘unreasonably effective’ when trained on huge quantities of data. Six thousand images are probably just not enough.”

Now more than ever, it is important to straighten out these issues before AI becomes even more powerful and prevalent in our world. It is one thing for intelligent machines to drive our cars and provide our insurance quotes, and another thing for them to decide who lives and dies when it comes to war. "Lethal autonomous weapons systems (LAWS) will increasingly be deployed in war, and in other forms of conflict," says Chace. "This will happen whether we like it or not, because machines can make decisions better and faster than humans. A drone which has to continually 'phone home' before engaging its opponent will quickly be destroyed by a fully autonomous rival. If we get it right, these deadly machines will reduce the collateral damage imposed by conflict. But there is plenty of work to be done before we get to that enviable position."

Whether or not this vision of the future comes to fruition in 10, 20, or 100 years, it is important to prepare for it – and other possibilities. Despite constant advances in technology, there is still no such thing as a "conscious" robot that thinks and feels as humans do. In itself, an AI cannot be racist. To solve this problem, then, it is humans that need fixing.

Amelia Tait is a technology and digital culture writer at the New Statesman.