The Aaron Swartz lesson: how undeveloped laws target the vulnerable

A tragedy, with a powerful moral.

On Friday 11 January, Aaron Swartz was found dead at his apartment in New York. He was 26. The following day, Tim Berners-Lee, creator of the World Wide Web, tweeted: “Aaron dead. World wanderers, we have lost a wise elder. Hackers for right, we are one down. Parents all, we have lost a child. Let us weep.”

The response to his death by suicide was overwhelming but unsurprising – Swartz had been an internet legend since his teenage years. At 14, he helped to put together RSS – technology that is part of the backbone of the web. While still in his teens, he played a vital role in creating Reddit, the hugely popular social news site, and shared in the profits when it was later bought by Condé Nast.

Swartz was a hero to activists pushing for open access to content on the internet, working to create a free public library and founding Demand Progress – a pressure group that successfully campaigned against the Stop Online Piracy Act. He was also an inspiration to many.

His friend Lawrence Lessig, a Harvard professor, wrote: “He was brilliant, and funny. A kid genius. A soul, a conscience, the source of a question I have asked myself a million times: What would Aaron think?”

Then there were the stunts. At one point, Swartz made about 20 per cent of US case law available on the web for free. Although it was officially in the “public domain”, the system that held it – Pacer – charged a fee to everyone who tried to access it. Activists created Recap, a database that collected documents people had already bought and gave them to others for free. Through this, and at his own expense, Swartz moved a large amount of data on to the web. He was pursued by the FBI, but the investigation was dropped without charges. The rumour was it bore a grudge.

The big problems began when Swartz crept into the Massachusetts Institute of Technology with a laptop and started downloading millions of academic journal articles from the subscription-only service JSTOR. At the time he was charged, he hadn’t yet distributed them. And he never intended to make money from any of it.

However, US government prosecutors pursued the harshest possible penalties. Swartz ended up facing more than 30 years in jail, trapped by laws that had been designed to deal with organised criminals, bank robbers and those who steal corporate information for profit.

“Stealing is stealing,” said the US attorney Carmen Ortiz, speaking for the prosecution at the time, “whether you use a computer command or a crowbar, and whether you take documents, data or dollars.”

Her phrasing echoes the much-mocked anti-piracy ads that begin “You wouldn’t steal a car . . . You wouldn’t steal a handbag” and feature sirens wailing and cops approaching as a schoolchild tries to download a copy of what is probably Mean Girls off Pirate Bay. Those ads are mocked for a reason. Downloading a film (or an article) is self-evidently not the same as stealing one from a shop. For one thing, the laws governing online behaviour are ill-defined and patchily enforced. And when they are enforced, the results seem random, unforeseeable and badly out of proportion.

Graham Smith, an IT and copyright lawyer at the international law firm Bird & Bird, says that the law governing the digital world is very much “in a state of development” and, as a result, “One should be very careful about criminalising things online. Criminal law is a blunt instrument.”

But we have not been careful with these laws – in the UK as well as in the US – and they seem to have hit only the vulnerable. Take Glenn Mangham, a British student who hacked into Facebook just to see if he could. He did nothing with the information. “It was to expose vulnerabilities in the system,” Mangham told the crown court. He was jailed for eight months.

One of the saddest ironies of this story is that Swartz spent his life trying to show everyone just how unreasonable laws can become when they are rigidly applied to the internet. Last year, he identified an ongoing “battle” over copyright law, “a battle to define everything that happens on the internet in terms of traditional things that the law understands”. If the battle was left unresolved, Swartz said, “New technology, instead of bringing us greater freedom, would have snuffed out fundamental rights we’d always taken for granted.”

His suicide was “the product of a criminal justice system rife with intimidation and prosecutorial overreach”, his family said in a statement on 12 January. A tragedy, with a powerful moral.


Martha Gill writes the weekly Irrational Animals column. Follow her on Twitter: @Martha_Gill.

This article first appeared in the 21 January 2013 issue of the New Statesman, The A-Z of Israel


The rise of the racist robots

Advances in artificial intelligence are at risk of being held back by human ignorance.

As far as dystopian visions of the future go, you can’t get much worse than the words “Nazi robots”. But while films, TV and games might conjure up visions of ten-foot titanium automatons with swastikas for eyes, the real racist robots – much like their human counterparts – are often much more subtle.

Last night, Stephen Hawking warned us all about artificial intelligence. Speaking at the opening of the Leverhulme Centre for the Future of Intelligence (LCFI) at Cambridge University, Hawking labelled AI “either the best, or the worst thing, ever to happen to humanity.” It’s an intelligent warning – many experts are already worried that AI will destroy the world of work – but it homogenises humans. What is the “best thing” for some may be the “worst thing” for others, and nowhere is this clearer than in the issue of race.

It started with the Nikon Coolpix S630. In 2009, Joz Wang, a Taiwanese-American, bought the camera for her mother, and was shocked when a message on the screen asked “Did someone blink?” after she took a picture of herself. In July 2015, Google Photos came under fire after its image recognition software tagged Jacky Alciné and his friend, both of whom are black, as “Gorillas”. In September of the same year, a video showing an automatic soap dispenser refusing to respond to a black hand went viral. You might dismiss these examples as harmless bugs or honest mistakes, but they still tell us a lot about the way the technology industry tests its products – and therefore, which customers it values.

But then it got worse. This year alone, the first beauty contest judged by AI had only one dark-skinned winner out of 44, Princeton academics discovered that a popular language-processing algorithm found “black” names unpleasant, and software used in the US to predict future criminals rated black people as higher risk. And who can forget Microsoft’s ill-fated chatbot Tay? The bot – which was taught to converse by mimicking other Twitter users’ speech – was taken offline after 16 hours because it began spouting sexist and racist messages.

We could sit here and debate whether an AI can truly be considered racist, but it wouldn’t change the outcome of events. These algorithms and machines aren’t explicitly programmed to be racist – and their designers usually aren’t prejudiced themselves – but that doesn’t change the consequences of their use. The more dominant AI becomes in our world, the more problematic this will become. Imagine the consequences of racial bias in AI job-screening tools, dating sites, mortgage advisers, insurance companies, and so on.

“Bias in AI systems is a vital issue,” says Calum Chace, the best-selling author of Surviving AI and a speaker on how the technology will affect our future. “We humans are deplorably biased – even the best of us. AIs can do better, and we need them to, but we have to ensure their inputs are unbiased.”

To do this, Chace explains, we need to figure out the root of the “racism”. Pretty much no one is deliberately designing their AI to be racist – Google’s chief social architect, Yonatan Zunger, responded quickly to the “Gorillas” incident, tweeting: “This is 100% Not OK.” But the fact that only two per cent of Google employees are black is perceived as part of the problem, as in many of these instances the technology was designed with white people in mind. “The chief technology officer of the company that ran the beauty contest explained that its database had a lot more white people than Indian people and that it was ‘possible’ that because of that their algorithm was biased,” says Chace.

There are also technical solutions. Chace explains that machine learning systems work best when they are fed huge quantities of data. “It is likely that the system was trained on too few images – a mere 6,000, compared with the many millions of images used in more successful machine learning systems. As a senior Googler said, machine learning becomes ‘unreasonably effective’ when trained on huge quantities of data. Six thousand images are probably just not enough.”
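Chace’s point can be made concrete with a quick audit. The sketch below is a hypothetical illustration in Python – the group labels and counts are invented, echoing the 6,000-image figure above, and drawn from no real system – showing how a training set’s demographic balance might be checked before training, so that under-represented groups are flagged rather than discovered later in the model’s mistakes.

```python
from collections import Counter

def audit_group_balance(labels, min_share=0.1):
    """Print each group's share of a labelled training set and return
    the groups whose share falls below min_share of the total."""
    counts = Counter(labels)
    total = sum(counts.values())
    flagged = []
    for group, n in sorted(counts.items()):
        share = n / total
        if share < min_share:
            flagged.append(group)
        warning = " <- UNDER-REPRESENTED" if share < min_share else ""
        print(f"{group}: {n} images ({share:.1%}){warning}")
    return flagged

# Invented numbers for illustration: a 6,000-image set skewed one way.
labels = (["white"] * 5000 + ["east_asian"] * 700
          + ["black"] * 200 + ["south_asian"] * 100)
print(audit_group_balance(labels))  # flags the groups a model would do worst on
```

A check this crude obviously won’t fix a biased system on its own, but it illustrates the principle in Chace’s warning: bias in the inputs is measurable before it becomes bias in the outputs.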

Now more than ever, it is important to straighten out these issues before AI becomes even more powerful and prevalent in our world. It is one thing for intelligent machines to drive our cars and provide our insurance quotes, and another thing for them to decide who lives and dies when it comes to war. “Lethal autonomous weapons systems (LAWS) will increasingly be deployed in war, and in other forms of conflict,” says Chace. “This will happen whether we like it or not, because machines can make decisions better and faster than humans. A drone which has to continually ‘phone home’ before engaging its opponent will quickly be destroyed by a fully autonomous rival. If we get it right, these deadly machines will reduce the collateral damage imposed by conflict. But there is plenty of work to be done before we get to that enviable position.”

Whether or not this vision of the future comes to fruition in 10, 20 or 100 years, it is important to prepare for it – and other possibilities. Despite constant advances in technology, there is still no such thing as a “conscious” robot that thinks and feels as humans do. In itself, an AI cannot be racist. To solve this problem, then, it is humans that need fixing.

Amelia Tait is a technology and digital culture writer at the New Statesman.