Work isn't working

Families and firms are at war. The battle will be won only when parents - fathers as well as mothers - can care for their children without dumbing down their careers.

The Sex War is over. Girls outperform boys at school and are streaming through higher education. Young women are now taking home the same size wage packets as young men. But the celebrations have to wait. A new, tougher battle has to be fought. It is not a duel between men and women, but between families and firms. This family war will be won only when parents - fathers as well as mothers - can care for their children without dumbing down their careers.

Women now compete with men on a virtually equal footing in both business and politics - but only until the precise moment they become mothers. It is not a question of old-fashioned notions about their capabilities. "Women don't lose out because of outdated views about them as women," says Mary Gregory, an economics lecturer at Oxford University and expert on gender and work. "They lose out because they make different choices about work when they have children." It is not possession of a womb that now holds women back, but its use.

This is fertile political ground, and the Conservatives are beginning to move on to it. David Cameron has proposed that maternity leave should be made transferable, allowing mums and dads to tag-team the childcare, or even take time off together. It is a modest proposal, not least because fathers will only be paid £112 a week (the current statutory maternity pay rate). Labour's John Hutton retorted that few families would be able to afford to make use of such a right. This is true: but why deny those people the possibility?

It is lack of choice that is now the issue. Legislation aimed at tackling direct discrimination, most importantly the Equal Pay Act, has helped to bring about a sea change in employer attitudes and pay scales. Barbara Castle, author and advocate of the Equal Pay Act, must sit beside Keir Hardie, Clement Attlee and Nye Bevan in the Labour pantheon. The latest research from the TUC shows that the gap between the full-time earnings of men and women in their twenties is only 3 per cent. Even this small gap is explained entirely by the very large salaries of a handful of men at the top of the income distribution, which pull up the male average, and the unwillingness of women to pitch for more money. As Gregory suggests, "Women don't ask."

But the good news comes to an end at 30, the age at which the typical married woman has her first child. Children strike women's careers like a meteorite, while glancing almost imperceptibly off fathers' working lives. The pay gap for thirtysomethings is 11 per cent; women in their forties earn 23 per cent less. The gap widens further when part-timers are brought into the picture. Female part-timers in their thirties and forties earn only two-thirds as much an hour as male full-timers of the same age. It is motherhood, rather than misogyny, that explains the pay gap. As Gillian Paull from the Institute for Fiscal Studies writes in the latest issue of the Economic Journal: "The 'family gap' in employment and wages - that is, the differences in work behaviour between women without children and mothers - may be more important than the gender gap alone." Meanwhile, men's working hours go up slightly when they become fathers: and dads do better in terms of wages than childless men.

Direct discrimination is no longer the principal enemy. Three structural problems explain the pay gap. First, women and men work in different occupations, with women clustered in less well-paid sectors such as teaching, retail and health care. This occupational segregation has hardly diminished over the past few decades. Second, the significant increase in general wage inequality has had the unfortunate side effect of making the gap between men and women bigger. Third, the penalty paid by women for working part-time after having children has become much more severe, as a high proportion slide down the occupational ladder in what the erstwhile Equal Opportunities Commission termed a "hidden brain drain".

Campaigners for gender equality hope that the Single Equality Act, scheduled for inclusion in this year's Queen's Speech, will force companies to conduct equal-pay audits. It is in fact a forlorn hope, but they should not be too disappointed. As Barbara Petrongolo, a labour specialist at the LSE, says: "Equal treatment policies like equality audits will not have much bite. The problem is not that employers are paying women less for doing the same jobs as men - it is that women are doing different jobs after having children."

Occupational downgrading

A slew of recent studies has dissected the complex data on motherhood and part-time employment. The conclusions highlight the real problems facing British families, and the failure of the labour market to deliver real choice. Most mothers work part-time for some years in order to balance raising their children with staying in the labour market: only a third of mothers with pre-school-age children are in full-time work. A substantial minority of these part-timers - around a quarter - end up in a lower-status job: managers become clerical workers. Some professions, such as nursing and teaching, offer most women the chance to go part-time without loss of status or hourly pay. And those women who stay with their current employer are less likely to suffer "occupational downgrading". As Gregory and her co-author Sara Connolly lament: "This loss of career status with part-time work is a stark failure among otherwise encouraging trends for women's advancement."

It is important to be clear what the problem is. Is it bad news that women want to spend time with their children? Surely not, given the evidence for the importance of parental engagement in the early years of a child's life. Are these women "forced" into part-time work, and now just grinning and bearing it? No - the overwhelming majority say they positively chose part-time work, and their job satisfaction is higher than that of mothers working full-time. Most men and women, according to the British Social Attitudes Survey, think that a conventional division of labour is the right one, with mothers taking on the bulk of responsibility for childcare.

We may wish to change these attitudes, but equally we must respect them. The TUC, for example, struggles to take women's choices at face value, declaring: "Women take on a disproportionate share of caring responsibilities due to unequal pay and limited opportunities within the workplace." This presupposes a level of responsiveness to economic incentives that would make Milton Friedman proud. Like it or not, women are doing most of the caring because they see it as part of their role in life. Groundbreaking work by the American economists Rachel Kranton and George Akerlof suggests that being a mother is part of women's identity, and that this explains their otherwise irrational labour-market decisions.

Perhaps the problem is an economic one - the loss of productivity as a result of the underuse of women's skills? This is the argument adopted by many who are urging more government action, but it is a fragile one. The latest TUC report, Closing the Gender Pay Gap, estimates that £11bn a year is being lost. The Women and Work Commission puts the figure at between £15bn and £23bn. A strange, unholy alliance has in fact developed between old-fashioned feminists, who insist women ought to work full-time to gain economic parity with men, and Treasury economists, who worry about the apparently "irrational" squandering of "human capital" by educated women. The principal difference between these allies is that the feminists want to spend billions of pounds of public money on childcare to allow more women to work full-time - the "Swedish option", at which the mandarins generally baulk.

There are, however, grave dangers in relying on economic arguments. For a start, such estimates are notoriously difficult to generate and are open to subjective manipulation (another recent study even found that £5bn is lost each year as a result of bosses' failure to say "thank you" to their staff, which suggests there are easier ways to boost productivity). And even if there really is an economic cost, there may well be a counterbalancing social gain in better-quality family life and happier children.

"Cost" of legislation

Overall, welfare might be greater even if our GDP - the size of which is a source of constant anxiety to male politicians - is somewhat smaller. Employers and their representative bodies are also just as adept at producing studies showing the apparent "cost" of any legislation to help working families - whether it is to introduce a minimum wage, equal pay, better maternity leave or better rights for temporary staff. Equality then becomes a battle of numbers, each side wielding its own semi-fictional cost-benefit analysis. Once we start putting a price tag on equality, we have lost sight of its value.

The problem is not a dent in economic output. The problem is not that mothers reject the life of what the sociologist Heather Hopfl has called a "quasi-man". The problem is lack of choice, for women and men alike. Millions of women do not have the option of reducing their hours while maintaining their status. And very few men have the option of sharing the childcare responsibilities with their partner. Liberal societies should aim to offer individuals the maximum range of options from which to construct their version of a good life.

"The heart of the choice issue is limited opportunities for women to work part-time in high-quality jobs," says Petrongolo. Gregory agrees: "The crunch question is this - can part-time women continue at the same level?" The one area of dissatisfaction expressed by women working part-time is with their wages. That is not surprising.

Employers are reluctant to retain or hire senior part-timers. While 60 per cent of employers say they would allow a woman returning from maternity leave to switch to part-time status, of these only two-thirds would allow her to remain at the same level of seniority. So, less than half would permit a reduction in hours without loss of status. This may not just be the result of Jurassic attitudes, as Gregory admits: "We can't assume that employers are simply stupid." Assuming it costs as much to hire and train part-timers as full-timers, they will offer a lower return on investment. There may also be co-ordination costs, especially associated with part-time or job-sharing managers. But it is hard to know the true height of these barriers.

Since 2003, employees have had a "right to request" flexible working while firms have had a corresponding duty to take such requests seriously. Some one-off surveys suggest that since the law came into force, one in seven women have made a request, and that most have been accepted. But the Labour Force Survey - the main data source on workplace trends - shows no increase in levels of part-time work over the same period. This puzzles economists. The most likely explanation is that a similar number of requests was being made and granted even before the legislation, and that the law has made little difference. It also looks as if women are asking for part-time work in the sectors where requests are most likely to be granted, such as nursing, rather than in the senior and professional jobs where the real problem lies.

Part-timer fathers

It is clear that British families do not want to outsource the raising of their children to others, and prefer to combine paid work and care. At the moment, this means mums, but in the future it could mean dads, too. The model we should be emulating is Holland, where workers have the right to convert a full-time job to a part-time one unless the employer can produce convincing evidence of damage to the firm. "We need to shift the burden of proof from the employee to the employer," insists Gregory. We need to go Dutch, and remove the words "to request" from the right to request flexible working.

It is possible that, without the risk of occupational downsliding, more men would also choose to work part-time; but it is also necessary to give men the same freedom as women to take time off for childcare. Cameron's idea of transferability is a step in the right direction: it is high time the government stopped deciding for us which parent should raise our children.

Markets are usually good at offering choice, but at present the labour market is failing the family. Companies are not generally acting on the basis of a rigorous business case against senior part-timers. They are exhibiting what psychologists call "path dependency": doing what they do because that's what they've always done. A decisive legislative strike on the Dutch model could jolt them on to a fairer path. Rather than aiming at creating economy-friendly families, it is time to shape a family-friendly economy.

This is the kind of package Labour MPs used to advocate. Indeed, the Commission on Social Justice - under the influence of its deputy chair Patricia Hewitt - proposed just such a move back in 1994. But, in a battle between families and firms, Labour now leans towards the latter. Gordon Brown loves to praise "hard-working families". What families need now is for him to work harder for them.

Working parenthood: by numbers

1/3 of mothers, and one-fifth of fathers, use some form of flexible working pattern

£7,000 average cost of taking a full 12 months off work after the birth of a child

83% proportion of women who want to return to work after having children

1 in 3 proportion of female corporate managers who lose status after having children

94% of all new fathers take some time off after the birth to care for their children

90% of mothers take at least six months' leave

39 number of weeks women are entitled to statutory maternity pay at 90% or less of weekly earnings

2 number of weeks men are entitled to paternity leave (pay negotiable)

Research: Simon Rudd

This article first appeared in the 24 March 2008 issue of the New Statesman, The truth about Tibet


Fitter, dumber, more productive

How the craze for Apple Watches, Fitbits and other wearable tech devices revives the old and discredited science of behaviourism.

When Tim Cook unveiled the latest operating system for the Apple Watch in June, he described the product in a remarkable way. This is no longer just a wrist-mounted gadget for checking your email and social media notifications; it is now “the ultimate device for a healthy life”.

With the watch’s fitness-tracking and heart-rate-sensor features to the fore, Cook explained how its Activity and Workout apps have been retooled to provide greater “motivation”. A new Breathe app encourages the user to take time out during the day for deep breathing sessions. Oh yes, this watch has an app that notifies you when it’s time to breathe. The paradox is that if you have zero motivation and don’t know when to breathe in the first place, you probably won’t survive long enough to buy an Apple Watch.

The watch and its marketing are emblematic of how the tech trend is moving beyond mere fitness tracking into what one might call quality-of-life tracking and algorithmic hacking of the quality of consciousness. A couple of years ago I road-tested a brainwave-sensing headband, called the Muse, which promises to help you quiet your mind and achieve “focus” by concentrating on your breathing as it provides aural feedback over earphones, in the form of the sound of wind at a beach. I found it turned me, for a while, into a kind of placid zombie with no useful “focus” at all.

A newer product even aims to hack sleep – that productivity wasteland, which, according to the art historian and essayist Jonathan Crary’s book 24/7: Late Capitalism and the Ends of Sleep, is an affront to the foundations of capitalism. So buy an “intelligent sleep mask” called the Neuroon to analyse the quality of your sleep at night and help you perform more productively come morning. “Knowledge is power!” it promises. “Sleep analytics gathers your body’s sleep data and uses it to help you sleep smarter!” (But isn’t one of the great things about sleep that, while you’re asleep, you are perfectly stupid?)

The Neuroon will also help you enjoy technologically assisted “power naps” during the day to combat “lack of energy”, “fatigue”, “mental exhaustion” and “insomnia”. When it comes to quality of sleep, of course, numerous studies suggest that late-night smartphone use is very bad, but if you can’t stop yourself using your phone, at least you can now connect it to a sleep-enhancing gadget.

So comes a brand new wave of devices that encourage users to outsource not only their basic bodily functions but – as with the Apple Watch’s emphasis on providing “motivation” – their very willpower. These are thrillingly innovative technologies and yet, in the way they encourage us to think about ourselves, they implicitly revive an old and discarded school of thinking in psychology. Are we all neo-behaviourists now?

***

The school of behaviourism arose in the early 20th century out of a virtuous scientific caution. Experimenters wished to avoid anthropomorphising animals such as rats and pigeons by attributing to them mental capacities for belief, reasoning, and so forth. This kind of description seemed woolly and impossible to verify.

The behaviourists discovered that the actions of laboratory animals could, in effect, be predicted and guided by careful “conditioning”, involving stimulus and reinforcement. They then applied Ockham’s razor: there was no reason, they argued, to believe in elaborate mental equipment in a small mammal or bird; at bottom, all behaviour was just a response to external stimulus. The idea that a rat had a complex mentality was an unnecessary hypothesis and so could be discarded. The psychologist John B Watson declared in 1913 that behaviour, and behaviour alone, should be the whole subject matter of psychology: to project “psychical” attributes on to animals, he and his followers thought, was not permissible.

The problem with Ockham’s razor, though, is that sometimes it is difficult to know when to stop cutting. And so more radical behaviourists sought to apply the same lesson to human beings. What you and I think of as thinking was, for radical behaviourists such as the Yale psychologist Clark L Hull, just another pattern of conditioned reflexes. A human being was merely a more complex knot of stimulus responses than a pigeon. Once perfected, some scientists believed, behaviourist science would supply a reliable method to “predict and control” the behaviour of human beings, and thus all social problems would be overcome.

It was a kind of optimistic, progressive version of Nineteen Eighty-Four. But it fell sharply from favour after the 1960s, and the subsequent “cognitive revolution” in psychology emphasised the causal role of conscious thinking. What became cognitive behavioural therapy, for instance, owed its impressive clinical success to focusing on a person’s cognition – the thoughts and the beliefs that radical behaviourism treated as mythical. As CBT’s name suggests, however, it mixes cognitive strategies (analyse one’s thoughts in order to break destructive patterns) with behavioural techniques (act a certain way so as to affect one’s feelings). And the deliberate conditioning of behaviour is still a valuable technique outside the therapy room.

The effective “behavioural modification programme” first publicised by Weight Watchers in the 1970s is based on reinforcement and support techniques suggested by the behaviourist school. Recent research suggests that clever conditioning – associating the taking of a medicine with a certain smell – can boost the body’s immune response later when a patient detects the smell, even without a dose of medicine.

Radical behaviourism that denies a subject’s consciousness and agency, however, is now completely dead as a science. Yet it is being smuggled back into the mainstream by the latest life-enhancing gadgets from Silicon Valley. The difference is that, now, we are encouraged to outsource the “prediction and control” of our own behaviour not to a benign team of psychological experts, but to algorithms.

It begins with measurement and analysis of bodily data using wearable instruments such as Fitbit wristbands, the first wave of which came under the rubric of the “quantified self”. (The Victorian polymath and founder of eugenics, Francis Galton, asked: “When shall we have anthropometric laboratories, where a man may, when he pleases, get himself and his children weighed, measured, and rightly photographed, and have their bodily faculties tested by the best methods known to modern science?” He has his answer: one may now wear such laboratories about one’s person.) But simply recording and hoarding data is of limited use. To adapt what Marx said about philosophers: the sensors only interpret the body, in various ways; the point is to change it.

And the new technology offers to help with precisely that, supplying the kind of externally applied “motivation” exemplified by the Apple Watch. So the reasoning, striving mind is vacated (perhaps with the help of a mindfulness app) and usurped by a cybernetic system to optimise the organism’s functioning. Electronic stimulus produces a physiological response, as in the behaviourist laboratory. The human being herself just needs to get out of the way. The customer of such devices is merely an opaquely functioning machine to be tinkered with. The desired outputs can be invoked by the correct inputs from a technological prosthesis. Our physical behaviour and even our moods are manipulated by algorithmic number-crunching in corporate data farms, and, as a result, we may dream of becoming fitter, happier and more productive.

***

 

The broad current of behaviourism was not homogeneous in its theories, and nor are its modern technological avatars. The physiologist Ivan Pavlov induced dogs to salivate at the sound of a bell, which they had learned to associate with food. Here, stimulus (the bell) produces an involuntary response (salivation). This is called “classical conditioning”, and it is advertised as the scientific mechanism behind a new device called the Pavlok, a wristband that delivers mild electric shocks to the user in order, so it promises, to help break bad habits such as overeating or smoking.

The explicit behaviourist-revival sell here is interesting, though it is arguably predicated on the wrong kind of conditioning. In classical conditioning, the stimulus evokes the response; but the Pavlok’s painful electric shock is a stimulus that comes after a (voluntary) action. This is what the psychologist who became the best-known behaviourist theoretician, B F Skinner, called “operant conditioning”.

By associating certain actions with positive or negative reinforcement, an animal is led to change its behaviour. The user of a Pavlok treats herself, too, just like an animal, helplessly suffering the gadget’s painful negative reinforcement. “Pavlok associates a mild zap with your bad habit,” its marketing material promises, “training your brain to stop liking the habit.” The use of the word “brain” instead of “mind” here is revealing. The Pavlok user is encouraged to bypass her reflective faculties and perform pain-led conditioning directly on her grey matter, in order to get from it the behaviour that she prefers. And so modern behaviourist technologies act as though the cognitive revolution in psychology never happened, encouraging us to believe that thinking just gets in the way.

Technologically assisted attempts to defeat weakness of will or concentration are not new. In 1925 the inventor Hugo Gernsback announced, in the pages of his magazine Science and Invention, an invention called the Isolator. It was a metal, full-face hood, somewhat like a diving helmet, connected by a rubber hose to an oxygen tank. The Isolator, too, was designed to defeat distractions and assist mental focus.

The problem with modern life, Gernsback wrote, was that the ringing of a telephone or a doorbell “is sufficient, in nearly all cases, to stop the flow of thoughts”. Inside the Isolator, however, sounds are muffled, and the small eyeholes prevent you from seeing anything except what is directly in front of you. Gernsback provided a salutary photograph of himself wearing the Isolator while sitting at his desk, looking like one of the Cybermen from Doctor Who. “The author at work in his private study aided by the Isolator,” the caption reads. “Outside noises being eliminated, the worker can concentrate with ease upon the subject at hand.”

Modern anti-distraction tools such as computer software that disables your internet connection, or word processors that imitate an old-fashioned DOS screen, with nothing but green text on a black background, as well as the brain-measuring Muse headband – these are just the latest versions of what seems an age-old desire for technologically imposed calm. But what do we lose if we come to rely on such gadgets, unable to impose calm on ourselves? What do we become when we need machines to motivate us?

***

It was B F Skinner who supplied what became the paradigmatic image of behaviourist science with his “Skinner Box”, formally known as an “operant conditioning chamber”. Skinner Boxes come in different flavours but a classic example is a box with an electrified floor and two levers. A rat is trapped in the box and must press the correct lever when a certain light comes on. If the rat gets it right, food is delivered. If the rat presses the wrong lever, it receives a painful electric shock through the booby-trapped floor. The rat soon learns to press the right lever all the time. But if the levers’ functions are changed unpredictably by the experimenters, the rat becomes confused, withdrawn and depressed.

Skinner Boxes have been used with success not only on rats but on birds and primates, too. So what, after all, are we doing if we sign up to technologically enhanced self-improvement through gadgets and apps? As we manipulate our screens for reassurance and encouragement, or wince at a painful failure to be better today than we were yesterday, we are treating ourselves, similarly, as objects to be improved through operant conditioning. We are climbing willingly into a virtual Skinner Box.

As Carl Cederström and André Spicer point out in their book The Wellness Syndrome, published last year: “Surrendering to an authoritarian agency, which is not just telling you what to do, but also handing out rewards and punishments to shape your behaviour more effectively, seems like undermining your own agency and autonomy.” What’s worse is that, increasingly, we will have no choice in the matter anyway. Gernsback’s Isolator was explicitly designed to improve the concentration of the “worker”, and so are its digital-age descendants. Corporate employee “wellness” programmes increasingly encourage or even mandate the use of fitness trackers and other behavioural gadgets in order to ensure an ideally efficient and compliant workforce.

There are many political reasons to resist the pitiless transfer of responsibility for well-being on to the individual in this way. And, in such cases, it is important to point out that the new idea is a repackaging of a controversial old idea, because that challenges its proponents to defend it explicitly. The Apple Watch and its cousins promise an utterly novel form of technologically enhanced self-mastery. But it is also merely the latest way in which modernity invites us to perform operant conditioning on ourselves, to cleanse away anxiety and dissatisfaction and become more streamlined citizen-consumers. Perhaps we will decide, after all, that tech-powered behaviourism is good. But we should know what we are arguing about. The rethinking should take place out in the open.

In 1987, three years before he died, B F Skinner published a scholarly paper entitled Whatever Happened to Psychology as the Science of Behaviour?, reiterating his now-unfashionable arguments against psychological talk about states of mind. For him, the “prediction and control” of behaviour was not merely a theoretical preference; it was a necessity for global social justice. “To feed the hungry and clothe the naked are remedial acts,” he wrote. “We can easily see what is wrong and what needs to be done. It is much harder to see and do something about the fact that world agriculture must feed and clothe billions of people, most of them yet unborn. It is not enough to advise people how to behave in ways that will make a future possible; they must be given effective reasons for behaving in those ways, and that means effective contingencies of reinforcement now.” In other words, mere arguments won’t equip the world to support an increasing population; strategies of behavioural control must be designed for the good of all.

Arguably, this authoritarian strand of behaviourist thinking is what morphed into the subtly reinforcing “choice architecture” of nudge politics, which seeks gently to compel citizens to do the right thing (eat healthy foods, sign up for pension plans) by altering the ways in which such alternatives are presented.

By contrast, the Apple Watch, the Pavlok and their ilk revive a behaviourism evacuated of all social concern and designed solely to optimise the individual customer. By using such devices, we voluntarily offer ourselves up to a denial of our voluntary selves, becoming atomised lab rats, to be manipulated electronically through the corporate cloud. It is perhaps no surprise that when the founder of American behaviourism, John B Watson, left academia in 1920, he went into a field that would come to profit very handsomely indeed from his skills of manipulation – advertising. Today’s neo-behaviourist technologies promise to usher in a world that is one giant Skinner Box in its own right: a world where thinking just gets in the way, and we all mechanically press levers for food pellets.

This article first appeared in the 18 August 2016 issue of the New Statesman, Corbyn’s revenge