10 questions about Cameron’s 'war on porn'

Including: who decides what counts as 'pornography', what happens when people 'opt-in', and what about Page 3?

There’s been a bit of a media onslaught from David Cameron about his 'war on porn' over the weekend. Some of the messages given out have been very welcome – but some are contradictory and others make very little sense when examined closely. The latest pronouncement, as presented to/by the BBC, says "Online pornography to be blocked automatically, PM announces".

The overall thrust seems to be that, as Cameron is expected to say in a speech: "Every household in the UK is to have pornography blocked by their internet provider unless they choose to receive it."

So is this the 'opt-in to porn' idea that the government has been pushing for the last couple of years? The BBC page seems to suggest so. It suggests that all new customers to ISPs will have their 'porn-filters' turned on by default, so will have to actively choose to turn them off – "and that millions of existing computer users will be contacted by their internet providers and told they must decide whether to activate filters".

Some of this is welcome – the statement about making it a criminal offence to possess images depicting rape, for example, sounds like a good idea on the face of it. Such material is deeply offensive, though quite where it would leave anyone who owns a DVD of Jodie Foster being raped in The Accused isn't at all clear. Indeed, that is the first of my ten questions for David Cameron.

1     Who will decide what counts as 'pornography' and how?

And not just pornography, but images depicting rape? Will this be done automatically, or will there be some kind of 'porn board' of people who will scour the internet for images and decide what is 'OK' and what isn’t? Automatic systems already exist to do this for child abuse images, and by most accounts they work reasonably well, but they haven’t eradicated the problem of child abuse images. Far from it. If it’s going to be a 'human' system – perhaps an extension of the Child Exploitation and Online Protection Centre (CEOP) – how are you planning to fund it, and do you have any idea how much this is going to cost?

2     Do you understand and acknowledge the difference between pornography, child abuse images and images depicting rape? 

One of the greatest sources of confusion over the various messages given out over the weekend has been the mismatch between headlines, sound bites, and actual proposals (such as they exist) over what you're actually talking about. Child abuse images are already illegal pretty much everywhere on the planet – and are hunted down and policed as such. As Google's spokespeople say, Google already has a zero-tolerance policy for those images, and has done for a while. Images depicting rape are another category, and the idea of making it illegal to possess them would be a significant step – but what about 'pornography'? Pornography is currently legal; it comes in many forms, and most of it has very little to do with either of the first two categories… which brings me to the third question.

3     Are you planning to make all pornography illegal?

…because that seems to be the logical extension of the idea that the essential position should be that 'pornography' should be blocked as standard. That, of course, brings up the first two questions again. Who’s going to make the decisions, and on what basis? Further to that, who’s going to 'watch the watchmen'? The Internet Watch Foundation, that currently 'polices' child abuse images, though an admirable body in many ways, is far from a model of transparency (see this excellent article by my colleague Emily Laidlaw). If a body is to have sweeping powers to control content – powers above and beyond those set out in law – that body needs to be accountable and their operations transparent. How are you planning to do that?

4     What about Page 3?

I assume you’re not considering banning this. If you want to be logically consistent – and, indeed, if you want to stop the "corrosion of childhood", then doing something about Page 3 would seem to make much more sense. Given the new seriousness of your attitude, I assume you don’t subscribe to the view that Page 3 is just 'harmless fun', but perhaps you do. Where is your line drawn? What would Mr Murdoch say?

5     What else do you want to censor?

…and I use the word 'censor' advisedly, because this is censorship, unless you confine it to material that is illegal. As I have said, child abuse images are already illegal, and the extension to images depicting rape is a welcome idea, so long as the definitions can be made to work (which may be very difficult). Deciding to censor pornography is one step – but what next? Censoring material depicting violence? 'Glorifying' terrorism etc?  Anything linking to 'illegal content' like material in breach of copyright? It’s a very slippery slope towards censoring pretty much anything you don’t like, whether it be for political purposes or otherwise. 'Function creep' is a recognised phenomenon in this area, and one that’s very difficult to guard against. What you design and build for one purpose can easily end up being used for quite another, which brings me to another question…

6     What happens when people 'opt-in'?

In particular, what kind of records will be kept? Will there be a 'list' of those people who have 'opted-in to porn'? Actually, scratch that part of the question – because there will, automatically, be a list of those people who have opted-in. That's how the digital world works – perhaps not a single list, but a set of lists that can be compiled into a complete list. The real question is what you are planning to do with that list. Will it be considered a list of people who are 'untrustworthy'? Will the police have immediate access to it at all times? How will the list be kept secure? Will it become available to others? How about GCHQ? The NSA? Have the opportunities for the misuse of such a list been considered? Function creep applies here as well – and it's equally difficult to guard against.

7     What was that letter to the ISPs about?

You know, the letter that got leaked, asking the ISPs to keep doing what they were already doing, but allow you to say that this was a great new initiative? Are you really 'at war' with the ISPs? Or does the letter reveal that this initiative of yours is essentially a PR exercise, aimed at saying that you’re doing something when in reality you’re not? Conversely, have you been talking to the ISPs in any detail? Do you have their agreement over much of this? Or are you going to try to 'strong-arm' them into cooperating with you in a plan that they think won’t work and will cost a great deal of money, time and effort? For a plan like this to work, you need to work closely with them, not fight against them.

8     Are you going to get the ISPs to block Facebook?

I have been wondering about this for a while because Facebook regularly includes images and pages that would fit within your apparent definitions, particularly as regards violence against women, and Facebook show no signs of removing them. The most they’ve done is remove advertisements from these kinds of pages – so anyone who accesses Facebook will have access to this material. Will the default be for Facebook to be blocked? Or do you imagine you’re going to convince Facebook to change their policy? If you do, I fear you don’t understand the strength of the First Amendment lobby in the US... which brings me to another question

9     How do you think your plans will go down with US internet companies?

All I've seen from Google has been some pretty stony-faced comments, but for your plan to work you need to be able to get US companies to comply. Few will do so easily and willingly, partly on principle (the First Amendment really matters to most Americans), partly because it will cost them money to do so, and partly because it will thoroughly piss off many of their US customers. So how do you plan to get them to comply? I assume you do have a plan…

10     Do you really think these plans will stop the 'corrosion' of childhood?

That’s my biggest question. As I’ve blogged before, I suspect this whole thing misses the point. It perpetuates a myth that you can make the internet a 'safe' place, and absolves parents of the real responsibility they have for helping their kids to grow up as savvy, wary and discerning internet users. It creates a straw man – the corrosion of childhood, such as it exists, comes from a much broader societal problem than internet porn, and if you focus only on internet porn, you can miss all the rest.

Plans like these, worthy though they may appear, do not, to me, seem likely to be in any way effective – the real 'bad guys' will find ways around them, the material will still exist, will keep being created, and we’ll pretend to have solved the problem – and at the same time put in a structure to allow censorship, create a deeply vulnerable database of 'untrustworthy people' and potentially alienate many of the most important companies on the internet. I’m not convinced it’s a good idea. To say the least.

Paul Bernal is a lecturer in Information Technology, Intellectual Property and Media Law at the University of East Anglia Law School

This post originally appeared on his blog




There's nothing Luddite about banning zero-hours contracts

The TUC general secretary responds to the Taylor Review. 

Unions have been criticised over the past week for our lukewarm response to the Taylor Review. According to the report’s author we were wrong to expect “quick fixes”, when “gradual change” is the order of the day. “Why aren’t you celebrating the new ‘flexibility’ the gig economy has unleashed?” others have complained.

Our response to these arguments is clear. Unions are not Luddites, and we recognise that the world of work is changing. But to understand these changes, we need to recognise that we’ve seen shifts in the balance of power in the workplace that go well beyond the replacement of a paper schedule with an app.

Years of attacks on trade unions have reduced workers' bargaining power. This is key to understanding today's world of work. Economic theory says that near-full employment should enable workers to ask for higher pay – but we're still in the middle of the longest pay squeeze for 150 years.

And while fears of mass unemployment didn’t materialise after the economic crisis, we saw working people increasingly forced to accept jobs with less security, be it zero-hours contracts, agency work, or low-paid self-employment.

The key test for us is not whether new laws respond to new technology. It’s whether they harness it to make the world of work better, and give working people the confidence they need to negotiate better rights.

Don't get me wrong. Matthew Taylor's review is not without merit. We support his call for the abolition of the Swedish Derogation – a loophole that has allowed employers to get away with paying agency workers less, even when they are doing the same job as their permanent colleagues.

Guaranteeing all workers the right to sick pay would make a real difference, as would asking employers to pay a higher rate for non-contracted hours. Payment for when shifts are cancelled at the last minute, as is now increasingly the case in the United States, was a key ask in our submission to the review.

But where the report falls short is not taking power seriously. 

The proposed new "dependent contractor status" carries real risks of downgrading people’s ability to receive a fair day’s pay for a fair day’s work. Here new technology isn’t creating new risks – it’s exacerbating old ones that we have fought to eradicate.

It’s no surprise that we are nervous about the return of "piece rates" or payment for tasks completed, rather than hours worked. Our experience of these has been in sectors like contract cleaning and hotels, where they’re used to set unreasonable targets, and drive down pay. Forgive us for being sceptical about Uber’s record of following the letter of the law.

Taylor's proposals on zero-hours contracts also miss the point. Those on zero-hours contracts – working in low-paid sectors like hospitality, caring, and retail – are dependent on their boss for the hours they need to pay their bills. A "right to request" guaranteed hours from an exploitative boss is no right at all for many workers. Those in insecure jobs are in constant fear of having their hours cut if they speak up at work. Will the "right to request" really change this?

Tilting the balance of power back towards workers is what the trade union movement exists for. But it’s also vital to delivering the better productivity and growth Britain so sorely needs.

There is plenty of evidence from across the UK and the wider world that workplaces with good terms and conditions, pay and worker voice are more productive. That's why the OECD (hardly a left-wing mouthpiece) has called for a new debate about how collective bargaining can deliver more equality, more inclusion and better jobs all round.

We know as a union movement that we have to up our game. And part of that thinking must include how trade unions can take advantage of new technologies to organise workers.

We are ready for this challenge. Our role isn’t to stop changes in technology. It’s to make sure technology is used to make working people’s lives better, and to make sure any gains are fairly shared.

Frances O'Grady is the General Secretary of the TUC.