The coalition's cap on benefit increases will mean a surge in child poverty

Raising benefits by less than the rate of inflation is a poverty-producing policy.

Internal Treasury documents do not make for a thrilling bedtime read but a flick last night through the government’s Impact Assessment (IA) toolkit proved quite instructive. It tells us, for example, that an IA should be prepared when a proposal "involves some kind of redistribution affecting the public, private or third sector", and that an IA "must be published when a Government Bill… is introduced into either House of Parliament".

Yet on the day the Welfare Benefits Uprating Bill 2012 receives its second reading in Parliament, we still have not seen a formal assessment of the government’s decision to cut an estimated 4 per cent from the real value of key benefits over the next three years.  So, in the absence of any official statement as to how this policy will affect child poverty, we decided to work it out for ourselves.

Our starting point is the study produced by the Institute for Fiscal Studies (IFS) in October 2011 projecting child poverty rates for the UK over the next five to ten years. The picture, according to this report, looked bleak: an estimated 400,000 more children would be living in relative poverty by the end of the current parliament, while the number living in absolute poverty looked set to increase by 500,000 over the same period.  

Critically, the IFS singled out the decision to index most working-age benefits to the consumer price index (CPI) as opposed to the more generous retail price index (RPI) from 2011 onwards as the most significant policy driving child poverty upwards in the next five to ten years. But these projections do not now tell the full story. Since they were produced, the government has made other adjustments to the way it indexes benefits and tax credits, and now plans to add into this already potent brew the decision to uprate most in- and out-of-work benefits, and, going forward, key elements of Universal Credit (UC), at a sub-inflation 1 per cent for three years.

As our new report published yesterday shows, the simple truth is that a sub-inflation uprating will be a poverty-producing policy. Delinking benefits from prices will result in a fall in the real standard of living for anyone who is reliant on the state for all or part of their income over the next three years. As a consequence, in the absence of any compensatory changes, the number of children living in absolute poverty will rise, while those children in families reliant on out-of-work benefits who already live below this threshold will see their poverty deepen further.

And alongside worsening absolute poverty rates, the relative fortunes of low income families can only deteriorate too. The government is presenting the 1 per cent uprating as ‘fair’ in light of the average earnings levels observed during the recession, as well as future public sector pay agreements. But what is conveniently obscured in this debate is that for many years prior to 2008, benefits rose significantly more slowly than wages. In fact, the upratings of the last five years, which outpaced average earnings growth, have had limited effect on the relative value of benefits eroded over a long period of time, showing how difficult it is to correct the damage done by year after year of under-indexation.

Nor is it clear where the equity is in pegging benefits to public sector pay rises going forward. With the Office for Budget Responsibility anticipating average earnings growth for the whole economy of between 2.2 per cent and 3.9 per cent over the next three years, the Uprating Bill will open up a gap between the poorest and the rest of the population. As a result, the minority will become further disconnected from the majority, and under these conditions, relative child poverty can but rise.

Looking at the historical picture should give us all pause for thought. Decoupling benefit levels from wages is widely recognised as the most significant policy that drove the dramatic increases in child poverty through the 1980s and 1990s, and the decision now to delink benefits from prices looks set to propel child poverty back up to levels we haven’t observed since the Thatcher years.

Given this, the Uprating Bill risks history repeating itself, with one significant difference: this time round we are likely to witness significant rises in child poverty against the backdrop of the Child Poverty Act (CPA) 2010, a law which requires the government to take action to improve both the absolute and the comparative fortunes of children growing up in the UK today.

Yet three years of benefit uprating that is linked to neither prices nor average earnings will deliberately lock in both real and relative losses for low-income families, at the same time as locking them out of the mainstream.

Small wonder, then, that the required impact assessment has yet to materialise. When it does, it will be interesting to see how the government squares the child poverty circle.


Alison Garnham is chief executive of the Child Poverty Action Group


How science and statistics are taking over sport

An ongoing challenge for analysts is to disentangle genuine skill from chance events. Some measurements are more useful than others.

In the mid-1990s, statistics undergraduates at Lancaster University were asked to analyse goal-scoring in a hypothetical football match. When Mark Dixon, a researcher in the department, heard about the task, he grew curious. The analysis employed was a bit simplistic, but with a few tweaks it could become a powerful tool. Along with his fellow statistician Stuart Coles, he expanded the methods, and in doing so transformed how researchers – and gamblers – think about football.

The UK has always lagged behind the US when it comes to the mathematical analysis of sport. This is partly because of a lack of publicly available match data, and partly because of the structure of popular sports. A game such as baseball, with its one-on-one contests between pitcher and batter, can be separated into distinct events. Football is far messier, with a jumble of clashes affecting the outcome. It is also relatively low-scoring, in contrast to baseball or basketball – further reducing the number of notable events. Before Dixon and Coles came along, analysts such as Charles Reep had even concluded that “chance dominates the game”, making predictions all but impossible.

Successful prediction is about locating the right degree of abstraction. Strip away too much detail and the analysis becomes unrealistic. Include too many processes and it becomes hard to pin them down without vast amounts of data. The trick is to distil reality into key components: “As simple as possible, but no simpler,” as Einstein put it.

Dixon and Coles did this by focusing on three factors – attacking and defensive ability for each team, plus the fabled “home advantage”. With ever more datasets now available, betting syndicates and sports analytics firms are developing these ideas further, even including individual players in the analysis. This requires access to a great deal of computing power. Betting teams are hiring increasing numbers of science graduates, with statisticians putting together predictive models and computer scientists developing high-speed software.
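The three-factor structure described above can be sketched in a few lines of code. What follows is a minimal illustration of my own, not the actual Dixon–Coles model (which, among other refinements, adjusts for the dependence between low scorelines): each side's goals are treated as an independent Poisson count whose rate combines attacking strength, the opponent's defensive weakness and, for the home side, a home-advantage multiplier. All parameter values in the example are made up.

```python
import math


def poisson_pmf(k, lam):
    """Probability of exactly k goals under a Poisson rate lam."""
    return lam ** k * math.exp(-lam) / math.factorial(k)


def score_probability(home_goals, away_goals,
                      home_attack, home_defence,
                      away_attack, away_defence,
                      home_advantage=1.3):
    """Probability of an exact scoreline, assuming independent Poisson goals.

    A team's scoring rate is its attacking strength times the opponent's
    defensive weakness; the home side's rate is further scaled by the
    home-advantage factor. All parameters here are illustrative.
    """
    lam_home = home_attack * away_defence * home_advantage
    lam_away = away_attack * home_defence
    return (poisson_pmf(home_goals, lam_home) *
            poisson_pmf(away_goals, lam_away))


def home_win_probability(home_attack, home_defence,
                         away_attack, away_defence, max_goals=10):
    """Sum the scoreline probabilities in which the home side scores more."""
    return sum(
        score_probability(h, a, home_attack, home_defence,
                          away_attack, away_defence)
        for h in range(max_goals + 1)
        for a in range(h)  # away score strictly below home score
    )
```

In practice the attack, defence and home-advantage parameters would be fitted to historical results by maximum likelihood rather than plugged in by hand; the point of the sketch is only how few ingredients the basic model needs.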

But it’s not just bettors who are turning to statistics. Many of the techniques are also making their way into sports management. Baseball led the way, with quantitative Moneyball tactics taking the Oakland Athletics to the play-offs in 2002 and 2003, but other sports are adopting scientific methods, too. Premier League football teams have gradually built up analytics departments in recent years, and all now employ statisticians. After winning the 2016 Masters, the golfer Danny Willett thanked the new analytics firm 15th Club, an offshoot of the football consultancy 21st Club.

Bringing statistics into sport has many advantages. First, we can test out common folklore. How big, say, is the “home advantage”? According to Ray Stefani, a sports researcher, it depends: rugby union teams, on average, are 25 per cent more likely to win than to lose at home. In NHL ice hockey, this advantage is only 10 per cent. Then there is the notion of “momentum”, often cited by pundits. Can a few good performances give a weaker team the boost it needs to keep winning? From baseball to football, numerous studies suggest it’s unlikely.

Statistical models can also help measure player quality. Teams typically examine past results before buying players, though it is future performances that count. What if a prospective signing had just enjoyed a few lucky games, or been propped up by talented team-mates? An ongoing challenge for analysts is to disentangle genuine skill from chance events. Some measurements are more useful than others. In many sports, scoring goals is subject to a greater degree of randomness than creating shots. When the ice hockey analyst Brian King used this information to identify the players in his local NHL squad who had profited most from sheer luck, he found that these were also the players being awarded new contracts.
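The shots-versus-goals point can be shown with a toy simulation of my own construction (not taken from King's analysis): give every simulated season the same underlying shot rate and the same conversion rate, and goal tallies still swing far more, relative to their average, than shot tallies do, because a low conversion rate adds an extra layer of chance on top of shot creation.

```python
import random
import statistics

random.seed(42)  # fixed seed so the simulation is repeatable


def simulate_season(trials=1000, shot_rate=0.1, conversion=0.1):
    """One season: ~100 shots on average, each converted with probability 0.1."""
    shots = sum(1 for _ in range(trials) if random.random() < shot_rate)
    goals = sum(1 for _ in range(shots) if random.random() < conversion)
    return shots, goals


seasons = [simulate_season() for _ in range(2000)]
shot_counts = [s for s, g in seasons]
goal_counts = [g for s, g in seasons]

# Coefficient of variation: spread relative to the average.
cv_shots = statistics.stdev(shot_counts) / statistics.mean(shot_counts)
cv_goals = statistics.stdev(goal_counts) / statistics.mean(goal_counts)
# cv_goals comes out roughly three times cv_shots: even with identical
# underlying skill, goal tallies are a much noisier signal than shot tallies.
```

Nothing about the players differs between seasons here; the extra season-to-season variation in goals is pure chance, which is why shot-based measures tend to say more about future performance.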

Sometimes it’s not clear how a specific skill should be measured. Successful defenders – whether in British or American football – don’t always make a lot of tackles. Instead, they divert attacks by being in the right position. It is difficult to quantify this. When evaluating individual performances, it can be useful to estimate how well a team would have done without a particular player, which can produce surprising results.

The season before Gareth Bale moved from Tottenham Hotspur to Real Madrid for a record £85m in 2013, the sports consultancy Onside Analysis looked at which players were more important to the team: whose absence would cause most disruption? Although Bale was the clear star, it was actually the midfielder Moussa Dembélé who had the greatest impact on results.

As more data is made available, our ability to measure players and their overall performance will improve. Even so, statistical models cannot capture everything. Not only would complete understanding of sport be dull – it would be impossible. Analytics groups know this and often employ experts to keep their models grounded in reality.

There will never be a magic formula that covers all aspects of human behaviour and psychology. However, for the analysts helping teams punch above their weight and the scientific betting syndicates taking on the bookmakers, this is not the aim. Rather, analytics is one more way to get an edge. In sport, as in betting, the best teams don’t get it right every time. But they know how to win more often than their opponents. 

Adam Kucharski is author of The Perfect Bet: How Science and Maths are Taking the Luck Out of Gambling (Profile Books)

This article first appeared in the 28 April 2016 issue of the New Statesman, The new fascism