
Sex work apps are about more than advertising – they can keep workers safe

Ugly Mugs, a new safety app aimed at sex workers, shows how technology can step in where law enforcement fails.

Matt Haworth was paying a visit to a sex worker charity in Manchester when a brightly coloured bulletin board in the corner caught his eye. It was covered with descriptions of bad punters – those who were abusive with sex workers, or didn’t pay up. “One that really stuck with me was a man who drove around in a Vauxhall, throwing hardboiled eggs at sex workers,” Haworth tells me over the phone, several years after the event. “It preyed on my mind for years. Why did he hardboil them?” 

There are around 80,000 sex workers in the UK, and they are statistically more likely to be attacked or raped at work than most other groups. Because of their precarious legal footing in a country where sex work itself isn’t criminalised but many related activities, such as soliciting on the street or running a brothel, are, sex workers are also unlikely to trust the police – and police can be reluctant to help, or keener to clamp down on the profession than to protect its workers.

The board Haworth saw in Manchester was an analogue version of National Ugly Mugs (NUM), a service run by the UK Network of Sex Work Projects. Today, NUM protects sex workers from rogue customers via a network of text and email alerts tailored to specific regions. The service gave Haworth, who owns a technology company, an idea: what if sex workers could receive these alerts directly through an app, and also use the app to report back on their own safety?

With his team, Haworth developed the NUM app based on the charity's body of knowledge and feedback from sex workers themselves. Spreading alerts as quickly as possible is a vital part of the app's offering; as Haworth tells me, the need for speed is aptly demonstrated by the case of Thomas Hall, who attacked four sex workers in the course of a single evening in Manchester in 2013. The alerts' location targeting was inspired by dating apps like Tinder and Grindr. “We wanted to use the same location technology for a very different end,” Haworth tells me.

The app checks incoming numbers against its database of rogue punters, and also features a kind of panic button, which workers can press if they feel unsafe. Again, detail is key: the button screen uses a black background, so the phone doesn’t light up sex workers' faces and attract attention. The button can be used to report bad clients, call the police, or log that the worker felt unsafe so NUM can check in with them later to offer services and support. The app has been tested in Manchester to a positive response, and is currently undergoing a bigger pilot in London. Haworth tells me that the police themselves are supportive of the scheme.
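The article doesn't describe the NUM app's internals, but the number-checking feature is easy to picture. Here is a minimal, purely hypothetical sketch (placeholder numbers and function names) of how an incoming caller might be compared against a locally synced list of reported numbers:

```python
# Minimal, hypothetical sketch -- not the NUM app's actual code.
# Assumes a local set of phone numbers drawn from regional NUM alerts.

REPORTED_NUMBERS = {
    "+447700900001",  # placeholder entries; a real app would sync these per region
    "+447700900002",
}

def normalise(number: str) -> str:
    """Strip spaces and dashes so the same number always matches."""
    return number.replace(" ", "").replace("-", "")

def check_incoming_call(number: str) -> str:
    """Return a simple message describing whether the caller has been reported."""
    if normalise(number) in REPORTED_NUMBERS:
        return "Warning: this number has been reported to National Ugly Mugs"
    return "No reports found for this number"

print(check_incoming_call("+44 7700 900001"))
```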

This would all be moot, of course, if smartphones weren't already part of sex workers' lives, but Haworth found in focus groups that “many said that the internet and technology were paramount in their work”. Reason Digital, Haworth's company, carried out what he believes is the first dedicated research into sex workers’ smartphone use, and found that somewhere between 30 and 40 per cent of sex workers in Manchester use one. Anecdotally, Haworth found that escorts and “indoor workers” who don’t walk the streets are more likely to use smartphones, partly because “they get bored – there’s lots of waiting around”.

In fact, over the past few years, there has been a rise in technology services marketed specifically to sex workers. German site Peppr was billed earlier this year as the “Tinder for sex work”: workers can advertise their services, and punters can contact them through the app.

Unlike Ugly Mugs, it’s purely for advertising, and isn’t particularly concerned with workers' safety. I asked a customer service representative whether the app acts on reports of violence, and was told that the company reserves the right to block any user, but has only done so once – for a no-show. “It’s amazing how effective linking people to their address and payment card is,” the representative told me.


The rise of apps aimed at sex workers isn’t surprising when you consider that sex workers have used online advertising for about as long as the internet has existed. Margaret Corvid, a New Statesman blogger who works as a dominatrix in Plymouth, tells me that she does all her advertising online on sites like Adult Work (she also receives NUM email alerts and reads them “religiously”).

In the early days of the internet, sex workers used directories from the AltaVista era to list their services. Some of these have even survived the rise of Google and are still used by some workers, Corvid tells me, “especially in kink”. Many sex workers advertise, or have advertised, on sites like Craigslist or even Facebook, but these companies have become stricter in shutting down sex work advertising.

Craigslist originally ran an "Adult" listings section, but closed it in 2010 under pressure from the public. Yet Corvid argues that the ability to advertise and receive payments online actually makes sex work much safer: clients email her, then she “insists on a phone call with every client” and takes a security deposit via online payment.

In the US, where sex work is still criminalised, major credit card companies are pulling their services from sex work sites – and in doing so, putting sex workers at risk. The ability to advertise and take payment online is precisely what lets workers act alone. “You don’t need a manager or a pimp, and you can set your own prices and choose your own clients,” Corvid says.

Apps like Peppr, which automate the transaction, could arguably make this process less safe, however. Their click-and-go business model doesn’t encourage the kind of screening processes Corvid uses, and the app doesn’t pre-screen clients either.

Online booking and advertising also leave a digital paper trail, which, depending on your jurisdiction, can be a good or a bad thing. In the US, where the law is harsher on sex work, a digital footprint can be a risk for workers and punters alike. In the UK, it may actually make the work safer. “Right now it's a good thing there's a paper trail, because even though it's almost impossible to get the cops to deal with issues of assault and violence against sex workers, there would be at least some records of the punter through the app system which could be obtainable by authorities,” Corvid says.

Apps and websites, whether they are for safety or advertising, also offer other, less obvious, benefits for sex workers. “Sex work is a historically isolating occupation,” Corvid tells me, “and technology has really changed that.” Technology allows workers to organise politically when needed, or just swap tips – “like ‘Where do I get this specific type of stocking my client asked for?’”

This was one aspect of sex workers' use of technology that surprised Haworth and his team while they were developing the NUM app. At one meeting, Haworth tells me, a male sex worker in his teens asked quietly: “Are you only going to send out bad news? What about good news?” As a result, the team are including news of new support groups and successful convictions of rogue punters in their updates.

Overall, both old-school listings sites and apps aimed specifically at sex workers are empowering a group traditionally maligned by society, the police, and even, on occasion, its own clients. As Haworth tells me, the NUM app is radical because it’s “decentralised – it lets sex workers look out for each other”. Until our more traditional institutions get their act together in their dealings with sex workers, this will remain incredibly important.

Barbara Speed is comment editor at the i, and was technology and digital culture writer at the New Statesman, and a staff writer at CityMetric.


The rise of the racist robots

Advances in artificial intelligence are at risk of being held back by human ignorance.

As far as dystopian visions of the future go, you can’t get much worse than the words “Nazi robots”. But while films, TV and games might conjure up visions of ten-foot titanium automatons with swastikas for eyes, the real racist robots – much like their human counterparts – are often much more subtle.

Last night, Stephen Hawking warned us all about artificial intelligence. Speaking at the opening of the Leverhulme Centre for the Future of Intelligence (LCFI) at Cambridge University, Hawking labelled AI “either the best, or the worst thing, ever to happen to humanity”. It’s an intelligent warning – many experts are already worried that AI will destroy the world of work – but it homogenises humans. What is the “best thing” for some may be the “worst thing” for others, and nowhere is this clearer than on the issue of race.

It started with the Nikon Coolpix S630. In 2009, Joz Wang, a Taiwanese-American, bought the camera for her mother, and was shocked when a message on the screen asked “Did someone blink?” after she took a picture of herself. In July 2015, Google Photos came under fire after its image recognition software tagged Jacky Alciné and his friend, both of whom are black, as “Gorillas”. In September of the same year, a video showing an automatic soap dispenser refusing to respond to a black hand went viral. You might dismiss these examples as harmless bugs or honest mistakes, but they still tell us a lot about the way the technology industry tests its products – and therefore, which customers it values.

But then it got worse. This year alone, the first beauty contest judged by AI had only one dark-skinned winner out of 44, Princeton academics discovered that a popular language-processing algorithm found “black” names unpleasant, and American software used to predict future criminals rated black people as higher risk. And who can forget Microsoft’s ill-fated chatbot Tay? The bot – which was taught to converse by mimicking other Twitter users’ speech – was taken offline after 16 hours because it began spouting sexist and racist messages.

We could sit here and debate whether an AI can truly be considered racist, but it wouldn’t change the outcome of events. These algorithms and machines aren’t explicitly programmed to be racist – and their designers usually aren’t prejudiced themselves – but that doesn’t change the consequences of their use. The more dominant AI becomes in our world, the more problematic this will become. Imagine the consequences of racial bias in AI job-screening tools, dating sites, mortgage advisers, insurance companies, and so on.

“Bias in AI systems is a vital issue,” says Calum Chace, the best-selling author of Surviving AI and a speaker on how the technology will affect our future. “We humans are deplorably biased – even the best of us. AIs can do better, and we need them to, but we have to ensure their inputs are unbiased.”

To do this, Chace explains, we need to figure out the root of the “racism”. Pretty much no one is deliberately designing their AI to be racist – Google’s chief social architect, Yonatan Zunger, responded quickly to the “Gorillas” incident, tweeting “This is 100% Not OK.” But the fact that only two per cent of Google employees are black is perceived as part of the problem, as in many of these instances the technology was designed with white people in mind. “The chief technology officer of the company that ran the beauty contest explained that its database had a lot more white people than Indian people and that it was ‘possible’ that because of that their algorithm was biased,” says Chace.

There are also technical solutions. Chace explains that machine learning systems work best when they are fed huge quantities of data. “It is likely that the system was trained on too few images – a mere 6,000, compared with the many millions of images used in more successful machine learning systems. As a senior Googler said, machine learning becomes ‘unreasonably effective’ when trained on huge quantities of data. Six thousand images are probably just not enough.”
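Chace's point about inputs lends itself to a simple, purely illustrative check (the file name and "group" column below are hypothetical): before training an image classifier, count how many examples each demographic group contributes, so an undersized or heavily skewed dataset is caught before it shapes the model.

```python
# Purely illustrative sketch: audit a labelled training set for size and balance
# before training. The file name and "group" column are hypothetical.
import csv
from collections import Counter

MIN_EXAMPLES = 100_000   # rough floor; "6,000 images are probably just not enough"
MAX_SKEW = 3.0           # flag if the largest group is more than 3x the smallest

def audit_dataset(path: str) -> None:
    counts = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            counts[row["group"]] += 1  # e.g. a demographic label attached to each image

    total = sum(counts.values())
    print(f"total examples: {total}")
    for group, n in counts.most_common():
        print(f"  {group}: {n} ({n / total:.1%})")

    if total < MIN_EXAMPLES:
        print("WARNING: dataset is probably too small to train a reliable model")
    if counts and max(counts.values()) > MAX_SKEW * min(counts.values()):
        print("WARNING: groups are heavily imbalanced; the model may fail on the smallest")

if __name__ == "__main__":
    audit_dataset("training_labels.csv")
```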

Now more than ever, it is important to straighten out these issues before AI becomes even more powerful and prevalent in our world. It is one thing for intelligent machines to drive our cars and provide our insurance quotes, and quite another for them to decide who lives and dies when it comes to war. “Lethal autonomous weapons systems (LAWS) will increasingly be deployed in war, and in other forms of conflict,” says Chace. “This will happen whether we like it or not, because machines can make decisions better and faster than humans. A drone which has to continually ‘phone home’ before engaging its opponent will quickly be destroyed by a fully autonomous rival. If we get it right, these deadly machines will reduce the collateral damage imposed by conflict. But there is plenty of work to be done before we get to that enviable position.”

Whether or not this vision of the future comes to fruition in 10, 20, or 100 years, it is important to prepare for it – and other possibilities. Despite constant advances in technology, there is still no such thing as a "conscious" robot that thinks and feels as humans do. In itself, an AI cannot be racist. To solve this problem, then, it is humans that need fixing.

Amelia Tait is a technology and digital culture writer at the New Statesman.