New web security system tests computers' emotions

Sorting the men from the replicants.

A new Captcha system seeks to separate humans from computers by testing empathy – and spreading awareness of human rights abuses at the same time.

A Captcha – which stands for Completely Automated Public Turing test to tell Computers and Humans Apart – is the test used when logging into many sites to distinguish between real people and malicious programs, which may attempt to log into many thousands of accounts at the same time. You've all used one – signing up for a New Statesman commenting account, if nowhere else – and they are ripe for being put to good use.

reCAPTCHA was the first socially beneficial captcha, and is still the most popular. It harnesses the combined might of all the human brainpower spent on Captchas to transcribe scanned books:

reCAPTCHA improves the process of digitizing books by sending words that cannot be read by computers to the Web in the form of CAPTCHAs for humans to decipher. More specifically, each word that cannot be read correctly by OCR is placed on an image and used as a CAPTCHA. This is possible because most OCR programs alert you when a word cannot be read correctly.
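In outline, the scheme pairs each unreadable word with a control word the OCR has already verified: the user is graded only on the control word, while their guess at the unknown word is tallied until enough humans agree on a transcription. Here is a minimal sketch of that logic – all names and thresholds are illustrative, not reCAPTCHA's actual code:

```python
# A minimal sketch of reCAPTCHA's control-word scheme. Word lists,
# names and the agreement threshold are assumptions for illustration.
import random
from collections import Counter

KNOWN_WORDS = {"upon", "margin"}   # words the OCR read with confidence
votes: dict[str, Counter] = {}     # human guesses for each unknown word

def make_challenge(unknown_word_id: str) -> tuple[str, str]:
    """Pair a verified control word with an OCR-failed word.

    The rendered captcha would show both, distorted, in random order,
    so the user cannot tell which of the two is being graded.
    """
    control = random.choice(sorted(KNOWN_WORDS))
    return control, unknown_word_id

def grade(control: str, control_answer: str,
          unknown_id: str, unknown_answer: str) -> bool:
    """Pass the user if the control word matches; record the other guess."""
    if control_answer.strip().lower() != control:
        return False
    votes.setdefault(unknown_id, Counter())[unknown_answer.strip().lower()] += 1
    return True

def consensus(unknown_id: str, threshold: int = 3) -> str | None:
    """Promote an unknown word to a transcription once enough humans agree."""
    tally = votes.get(unknown_id)
    if not tally:
        return None
    word, count = tally.most_common(1)[0]
    return word if count >= threshold else None
```

The design choice worth noticing is that no single user's answer is trusted: a guess at the unknown word only becomes part of the digitised book once several independent solvers converge on it.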

Since it took off, reCAPTCHA has been adopted by innumerable sites, and is now displayed over 100 million times a day. But that success comes at a price. Now that the low-hanging fruit has been plucked, fewer and fewer easily transcribable words remain in its corpus, meaning that the system regularly throws up completely unintelligible words, words in other scripts, or strings that aren't language at all.

The Civil Rights Captcha wants to be its replacement. Rather than putting the captcha to useful work, as reCAPTCHA does, it uses the test to raise awareness of important issues:

Instead of visually decoding an image of distorted letters, the user has to take a stand regarding facts about human rights. Depending on whether the described situation is positively or negatively charged, the CAPTCHA generates three random words from a database. These words describe positive and negative emotions. The user selects the word that best matches how they feel about the situation, and writes the word in the CAPTCHA. Only one answer is correct: the answer showing compassion and empathy.
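Read as pseudocode, the test reduces to: take the emotional charge of the situation, draw one word matching that charge and two of the opposite charge, and accept only the empathetic word. A minimal sketch, assuming invented word lists and function names (this is not the project's real implementation):

```python
# A minimal sketch of the Civil Rights Captcha flow described above.
# The word lists and validation rule are assumptions drawn from the
# quoted description, not the project's actual code.
import random

POSITIVE = ["hopeful", "delighted", "relieved"]
NEGATIVE = ["angry", "horrified", "saddened"]

def make_challenge(situation: str, situation_is_negative: bool):
    """Draw three emotion words; exactly one is the empathetic response."""
    if situation_is_negative:
        correct = random.choice(NEGATIVE)
        decoys = random.sample(POSITIVE, 2)
    else:
        correct = random.choice(POSITIVE)
        decoys = random.sample(NEGATIVE, 2)
    options = [correct, *decoys]
    random.shuffle(options)
    return situation, options, correct

def check_answer(submitted: str, correct: str) -> bool:
    """Only the compassionate answer passes, typed back by the user."""
    return submitted.strip().lower() == correct
```

So a negatively charged question might offer "angry", "hopeful" and "relieved", of which only "angry" – typed into the distorted-text box – would pass.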

As well as being socially important – example questions include "The parliament in St. Petersburg recently passed a law that forbids 'homosexual propaganda'. How does that make you feel?" – the Civil Rights Captcha is also stronger against attack. It includes the same visual element as a reCAPTCHA, requiring potential attackers to decipher obfuscated words, but it also requires any automated attack to parse a complex question, pick the right emotion, and only then work out which of the proffered words matches that emotion.

The whole thing is rather reminiscent of Blade Runner:

Rutger Hauer in the film Blade Runner.

We'll catch those pesky replicants yet.

Alex Hern is a technology reporter for the Guardian. He was formerly a staff writer at the New Statesman. You should follow Alex on Twitter.


The internet dictionary: what is astroturfing?

Yes, like the fake grass.

Thanks to the internet, there are a lot of new words. You’re most likely up to speed with your LOLs and OMGs, which became Oxford English Dictionary-worthy in 2011 (LOL OMG if you’re not). But words emerge constantly, and it can be hard to keep track of them. That is what this column is for. Every week, I’ll define a word that is crucial to understanding the internet, starting with “astroturfing” – like the fake grass.

To astroturf is to mask the author of a message to make it appear to have come from the grass roots. Messages created by brands, politicians and even the military are disguised as comments made by the public. The practice existed before the web – the term is thought to have been coined in 1985 by a US senator who received a “mountain” of letters from insurance companies posing as the public – but the internet has propelled it to new, disturbing heights.

“GIRLS U NEED TO READ THIS,” reads a tweet by a handsome teenage boy named Ashton, who tweets the same words day after day, followed by crying and heart emojis. Ashton lives to promote the book of a 19-year-old self-published author from Sheffield – or, at least, he would, if he lived at all. Ashton is fake, a profile designed to make the book seem popular. Many teenage girls have been duped by this. One told me: “I felt very cheated out of my money and my time.”

It has been estimated that a third of all consumer reviews online are fake. But it doesn’t end with bad books. In China, the “50 Cent Army” are astroturfers who are allegedly paid a small fee for each positive post they write about the Chinese Communist Party. And in 2011, it emerged that the US military was developing an “online persona management service” to spread pro-American messages, allowing one person to manage multiple online identities.

We would be foolish to assume that our own democracy is immune. Much was written about how the Tories used targeted social media adverts at the last election, and it is easy to see how astroturfing could transform our political landscape for ever. 

Amelia Tait is a technology and digital culture writer at the New Statesman.

This article first appeared in the 10 August 2017 issue of the New Statesman, France’s new Napoleon