New web security system tests computers' emotions

Sorting the men from the replicants.

A new Captcha system seeks to separate humans from computers by testing empathy – and spreading awareness of human rights abuses at the same time.

A Captcha – which stands for Completely Automated Public Turing test to tell Computers and Humans Apart – is the test used when logging into many sites to distinguish between real people and malicious programs, which may attempt to log into many thousands of accounts at the same time. You've all used one – signing up for a New Statesman commenting account, if nowhere else – and they are ripe for being put to good use.
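At its core the mechanism is simple: the server issues a challenge whose answer it already knows, and only lets the request through on a correct reply. Here is a minimal sketch of that flow in Python (the in-memory store and function names are illustrative assumptions, not any real Captcha library's API):

```python
import secrets

# Illustrative in-memory store; a real deployment would use a signed
# token or a server-side session rather than a module-level dict.
CHALLENGES = {}

def issue_challenge(expected_answer: str) -> str:
    """Register a challenge (e.g. a distorted-text image the caller
    renders) and return an opaque token identifying it."""
    token = secrets.token_urlsafe(16)
    CHALLENGES[token] = expected_answer.lower()
    return token

def verify(token: str, user_answer: str) -> bool:
    """Single-use check: the token is consumed whether or not the
    answer is right, so a bot cannot replay a solved challenge."""
    expected = CHALLENGES.pop(token, None)
    return expected is not None and user_answer.strip().lower() == expected
```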

reCAPTCHA was the first socially beneficial Captcha, and it is still the most popular. It uses the combined might of all the human brainpower otherwise wasted on Captchas to transcribe scanned books:

reCAPTCHA improves the process of digitizing books by sending words that cannot be read by computers to the Web in the form of CAPTCHAs for humans to decipher. More specifically, each word that cannot be read correctly by OCR is placed on an image and used as a CAPTCHA. This is possible because most OCR programs alert you when a word cannot be read correctly.
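In practice, each challenge pairs the unreadable word with a control word whose answer is already known: solving the control word is what proves you are human, while your reading of the unknown word is logged as a vote towards its transcription. A rough sketch of that pairing in Python (the data structures and function names are assumptions for illustration):

```python
import random
from collections import defaultdict

def make_challenge(known_words, unknown_words):
    """Pick one control word (answer known) and one OCR-defeating word
    (answer unknown); shuffle so the user can't tell which is which."""
    control, answer = random.choice(list(known_words.items()))
    unknown = random.choice(unknown_words)
    pair = [control, unknown]
    random.shuffle(pair)
    return pair, control, answer

votes = defaultdict(list)  # unknown word image -> human transcriptions

def grade(user_answers, control, answer):
    """Pass or fail on the control word alone; a passing user's reading
    of the unknown word is recorded as one vote towards its text."""
    if user_answers.get(control, "").lower() != answer.lower():
        return False
    for image, text in user_answers.items():
        if image != control:
            votes[image].append(text)
    return True
```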

Since it took off, reCAPTCHA has been used on innumerable sites, and is now displayed over 100 million times a day. But that success comes at a price. Now that the low-hanging fruit has been plucked, fewer and fewer easily transcribable words remain in its corpus, meaning that the system regularly throws up completely unintelligible words, words in other scripts, or strings that just aren't language at all.

The Civil Rights Captcha wants to be the replacement. Rather than using the Captcha to perform useful work, as reCAPTCHA does, it uses it to raise awareness of important issues:

Instead of visually decoding an image of distorted letters, the user has to take a stand regarding facts about human rights. Depending on whether the described situation is positively or negatively charged, the CAPTCHA generates three random words from a database. These words describe positive and negative emotions. The user selects the word that best matches how they feel about the situation, and writes the word in the CAPTCHA. Only one answer is correct, the answer showing compassion and empathy.
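Put another way, the pass condition is no longer "transcribe this image" but "pick the empathetic emotion for this fact". A toy sketch of that logic in Python (the sample facts, word lists and function names are invented for illustration; the real service draws both from its own database and still renders the answer as a distorted image):

```python
import random

# Invented sample data standing in for the service's database.
FACTS = [
    ("The parliament in St. Petersburg recently passed a law that "
     "forbids 'homosexual propaganda'. How does that make you feel?",
     "negative"),
]
EMOTION_WORDS = {
    "negative": ["appalled", "distressed", "outraged"],
    "positive": ["hopeful", "delighted", "relieved"],
}

def make_challenge():
    """Offer three emotion words; only the one matching the fact's
    emotional charge (the compassionate reading) is correct."""
    fact, polarity = random.choice(FACTS)
    correct = random.choice(EMOTION_WORDS[polarity])
    opposite = "positive" if polarity == "negative" else "negative"
    options = [correct] + random.sample(EMOTION_WORDS[opposite], 2)
    random.shuffle(options)
    return fact, options, correct

def check(typed_word, correct):
    """The user types the chosen word (via a distorted-text image,
    omitted here); only the empathetic answer passes."""
    return typed_word.strip().lower() == correct.lower()
```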

As well as being socially important – example questions include "The parliament in St. Petersburg recently passed a law that forbids 'homosexual propaganda'. How does that make you feel?" – the Civil Rights Captcha is also stronger against attack. It includes the same visual element as reCAPTCHA, requiring potential attackers to decipher obfuscated words, but it also requires any automated attack to parse a complex question, pick the right emotion, and only then work out which of the proffered words matches that emotion.

The whole thing is rather reminiscent of Blade Runner:

[Clip: Rutger Hauer, in the film Blade Runner.]

We'll catch those pesky replicants yet.

Alex Hern is a technology reporter for the Guardian. He was formerly staff writer at the New Statesman. You should follow Alex on Twitter.


Fark.com’s censorship story is a striking insight into Google’s unchecked power

The founder of the community-driven website claims its advertising revenue was cut off for five weeks.

When Microsoft launched its new search engine Bing in 2009, it wasted no time in trying to get the word out. By striking a deal with the producers of the American teen drama Gossip Girl, it made a range of beautiful characters utter the words “Bing it!” in a way that fell clumsily on the audience’s ears. By the early Noughties, “search it” had already been universally replaced by the words “Google it”, a phrase that had become so ubiquitous that anything else sounded odd.

A screenshot from Gossip Girl, via ildarabbit.wordpress.com

Like Hoover and Tupperware before it, Google’s brand name has now become a generic term.

Yet only recently have concerns about Google’s pervasiveness received mainstream attention. Last month, The Observer ran a story about Google’s autocomplete suggesting the question “Are Jews evil?” and giving hate speech prominence on the first page of search results. Within a day, Google had altered the autocomplete results.

Though the company’s response may seem promising, it is important to remember that Google isn’t just a search engine (Google’s parent company, Alphabet, has too many subdivisions to mention). Google AdSense is an online advertising service that allows many websites, including the New Statesman itself, to profit from hosting advertisements on their pages. Yesterday, Drew Curtis, the founder of the internet news aggregator Fark.com, shared a story about his experiences with the service.

Under the headline “Google farked us over”, Curtis wrote:

“This past October we suffered a huge financial hit because Google mistakenly identified an image that was posted in our comments section over half a decade ago as an underage adult image – which is a felony by the way. Our ads were turned off for almost five weeks – completely and totally their mistake – and they refuse to make it right.”

The image was of a fully-clothed actress who was an adult at the time, yet Curtis claims Google flagged it because of “a small pedo bear logo” – a meme used to mock paedophiles online. More troubling than Google’s decision, however, is the difficulty that Curtis had contacting the company and resolving the issue, a process which he claims took five weeks. He wrote:

“During this five week period where our ads were shut off, every single interaction with Google Policy took between one to five days. One example: Google Policy told us they shut our ads off due to an image. Without telling us where it was. When I immediately responded and asked them where it was, the response took three more days.”

Curtis claims that other sites have had these issues but are too afraid of Google to speak out publicly. A Google spokesperson says: "We constantly review publishers for compliance with our AdSense policies and take action in the event of violations. If publishers want to appeal or learn more about actions taken with respect to their account, they can find information at the help centre here.”

Fark.com has lost revenue because of Google’s decision, according to Curtis, who sent out a plea for new subscribers to help it “get back on track”. It is easy to see how a smaller website could have been ruined in a similar scenario.


The offending image, via Fark

Google’s decision was not sinister, and it is obviously important that it tackles content that violates its policies. What is troubling is the lack of transparency around such decisions, and the difficulty of getting in touch with Google, since much of the media relies on the AdSense service to exist.

Even if Google doesn’t actively abuse this power, it is disturbing that it has the means to strangle any online publication, and worrying that smaller organisations can struggle to get in contact with it to resolve such issues. In light of the recent news about Google’s search results, the picture painted becomes even more troubling.

Update, 13/01/17:

Another Google spokesperson got in touch to provide the following statement: “We have an existing set of publisher policies that govern where Google ads may be placed in order to protect users from harmful, misleading or inappropriate content. We enforce these policies vigorously, and taking action may include suspending ads on their site. Publishers can appeal these actions.”

Amelia Tait is a technology and digital culture writer at the New Statesman.