The Supreme Court case which didn't break the internet

Do you "copy" a website just by reading it? No, thankfully.

The Supreme Court has ruled on NLA v PRCA, the case which could break, or save, the internet.

Some background: the Newspaper Licensing Agency took Meltwater, a media monitoring firm, to court over whether or not it had to pay licence fees for sending links to its customers. Traditionally, monitoring firms had to pay the licensing agency for the right to distribute clippings of newspapers, because photocopying a newspaper is clearly an act of copying that requires a licence. But as everything moved online, that clarity blurred, and a court case was brought.

We first reported on the case after it reached the High Court in August, when an astonishingly bad precedent was set. It was ruled that viewing a website on a computer was an act of copying which required a licence, just as if you had photocopied a newspaper. Although the ruling was made with regard to a specific scenario, it was broad enough to apply to everyday use of the internet. Clicking on a link, even one which led to entirely legal content, would, under that ruling, constitute copyright infringement. At the time, I said it "[put] at risk the basic skeleton of the internet."

Thankfully, the case was appealed to the Supreme Court (by the PRCA, a trade body of which Meltwater is a member), where it was ruled today that temporary copies made solely for the purpose of viewing copyrighted material are not infringing. The decision extends copyright exemption to "temporary copies made for the purpose of browsing by an unlicensed end-user", according to the judgement. It is based on European law which "identified very clearly the problem which has arisen" in this case, but which didn't quite specify that this particular method of viewing was covered. Once it is accepted that that law does cover the temporary copies made in this case, "much of the argument which the courts below accepted unravels."

Writing for the majority, Lord Sumption also accepted that the previous ruling would have had wide-ranging effects:

The issue has reached this court because it affects the operation of a service which is being made available on a commercial basis. But the same question potentially affects millions of non-commercial users of the internet who may, no doubt unwittingly, be incurring civil liability by viewing copyright material on the internet without the authority of the rights owner, for example because it has been unlawfully uploaded by a third party. Similar issues arise when viewers watch a broadcast on a digital television or a subscription television programme via a set-top box.

Since the ruling has implications for European law, it has been referred to the European Court of Justice, which will now consider the question before the Supreme Court issues any final ruling.

Until then, and hopefully after, you can continue to use your computers as you were. Carry on.


Alex Hern is a technology reporter for the Guardian. He was formerly staff writer at the New Statesman. You should follow Alex on Twitter.


Extremist ads and LGBT videos: do we want YouTube to be a censor, or not?

Is the video-sharing platform a morally irresponsible slacker for putting ads next to extremist content – or an evil, tyrannical censor for restricting access to LGBT videos?

YouTube is having a bad week. The Google-owned video-sharing platform has hit the headlines twice over complaints that it 1) is not censoring things enough, and 2) is censoring things too much.

On the one hand, big brands including Marks & Spencer, HSBC, and RBS have suspended their advertisements from the site after a Times investigation found ads from leading companies – and even the UK government – were shown alongside extremist videos. On the other, YouTubers are tweeting #YouTubeIsOverParty after it emerged that YouTube’s “restricted mode” (an opt-in setting that filters out “potentially objectionable content”) removes content with LGBT themes.

This isn’t the first time we’ve seen a social media giant criticised for being both a lax, morally irresponsible slacker and an evil, tyrannical censor in the same week. Last month, Facebook was criticised both for failing to remove a group called “hot xxxx schoolgirls” and for removing a nude oil painting by an acclaimed artist.

That is not to say these things are equivalent. Quite obviously, child abuse imagery is more troubling than a nude oil painting, and videos entitled “Jewish People Admit Organising White Genocide” are infinitely more problematic than those called “GAY flag and me petting my cat” (a highly important piece of content). I am not trying to claim that ~everything is relative~ and ~everyone deserves a voice~. Content that breaks the law must be removed, and LGBT content must not be. Yet these conflicting stories highlight the same underlying problem: it is a very bad idea to trust a large multibillion-pound company to be the arbiter of what is or isn’t acceptable.

This isn’t because YouTube has some strange agenda whereby it can’t get enough of extremists and hates the LGBT community. In reality, the company’s “restricted mode” also affects Paul Joseph Watson, a controversial YouTuber whose pro-Trump conspiracy theory content includes videos titled “Islam is NOT a Religion of Peace” and “A Vote For Hillary is a Vote For World War 3”, as well as an interview entitled “Chuck Johnson: Muslim Migrants Will Cause Collapse of Europe”. The issue is that if YouTube did have such an agenda, it would have complete control over what it wanted the world to see – and not only are we willingly handing it this power, we are begging it to use it.

Moral panics are the most common justification for extreme censorship and surveillance methods. “Catching terrorists” and “stopping child abusers” are two of the strongest arguments offered for the dystopian surveillance measures in Theresa May’s Investigatory Powers Act and Digital Economy Bill. Yet last month the FBI let a child pornographer go free rather than tell a court about the surveillance methods it had used to catch him. This raises the question: what is the surveillance really for? The same is true of censorship. When we insist that YouTube stop this and that, we are asking it to take complete control – so why do we trust that this control will reflect our own moral sensibilities? Why do we think it won't be used for the company's own benefit?

Obviously extremist content needs to be removed from YouTube, but why should YouTube be the one to do it? If a book publisher released A Very Racist Book For Racists, we wouldn’t trust it to pull the book off the shelves itself. We have laws (such as the Racial and Religious Hatred Act) that ban hate speech, and we have law enforcement bodies to enforce them. On the whole, we don’t trust giant commercial companies to rule over what is and isn’t acceptable to say, because oh, hello, yes, dystopia.

In the past, the means of public speech were spread across hundreds of book publishers, TV stations, film-makers, and pamphleteers, and no one person or company had the power to censor everything. A book that didn’t fly at one publisher could go to another, and a documentary that the BBC didn’t like could find a home on Channel 4. Why are we happy for essentially two companies – Facebook and Google – to take this power? Why are we demanding that they use it? Why are we giving them justification to use it more, and more, and more?

In response to last week’s criticism about extremist videos on YouTube, Google UK managing director Ronan Harris said that in 2016 Google removed nearly 2 billion ads, banned over 100,000 publishers, and prevented ads from showing on over 300 million YouTube videos. We are supposed to consider this a good thing. Why? We don't know what these adverts were for. We don't know if they were actually offensive. We don't know why they were banned.

As it happens, YouTube has responded well to the criticism. In a statement yesterday, Google's EMEA president, Matt Brittin, apologised to advertisers and promised improvements, and in a blog post this morning, Google said it is already "ramping up changes". A YouTube spokesperson also tweeted that the platform is "looking into" concerns about LGBT content being restricted. But people want more. The Guardian reported that Brittin declined three times to say whether Google would go beyond allowing users to flag offensive material. Setting aside Brexit, wouldn't you rather it was up to us as a collective to flag offensive content and come together to make these decisions? Why is it preferable for one company to take on a job that was previously entrusted to the government?

Editor’s Note, 22 March: This article has been updated to clarify Paul Joseph Watson’s YouTube content.

Amelia Tait is a technology and digital culture writer at the New Statesman.