2020 job market projected to push poverty even higher

Tackling poverty means tackling the weak job market

Research we publish today looks at the impact of the projected job market in 2020 on poverty in the UK. Unfortunately, it’s more bad news. The implication is that we should target jobs and training assistance on the basis of household, not just individual, need and focus unerringly on the creation of more and better jobs.

The research takes a forecast of the kind of job market we expect to have in 2020 and combines it with a model of household incomes that includes announced tax and benefit changes. The central forecast for 2020 is for many long-term trends to continue, including the shift towards a knowledge- and service-based economy and growth in both high- and low-paid jobs. We already know that cuts to benefits and Tax Credits are likely to undermine the beneficial effects of Universal Credit; in combination with demographic and earnings change, this will lead to rising poverty rates over the rest of the decade. Adding in an estimate of changes in the job market increases inequality further, although it does offset some of the rise in absolute child poverty.
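To make the mechanics concrete, here is a minimal sketch of what a tax-benefit microsimulation of this kind does: apply stylised tax and benefit rules to each household's gross income, equivalise for household size, and count households falling below 60 per cent of the median. Every rule, rate and household below is hypothetical; the model used in the research is far more detailed.

```python
# Toy tax-benefit microsimulation: all rules, rates and household data
# are invented for illustration only.

def net_income(gross, children):
    tax = max(0.0, gross - 12_000) * 0.20                   # stylised income tax
    benefit = 3_000 * children if gross < 20_000 else 0.0   # stylised child benefit
    return gross - tax + benefit

def equivalise(income, adults, children):
    # Modified OECD scale: 1.0 for the first adult, 0.5 per extra adult,
    # 0.3 per child.
    return income / (1.0 + 0.5 * (adults - 1) + 0.3 * children)

def relative_poverty_rate(households):
    eq = [equivalise(net_income(h["gross"], h["children"]),
                     h["adults"], h["children"]) for h in households]
    median = sorted(eq)[len(eq) // 2]
    threshold = 0.6 * median          # relative poverty line: 60% of median
    return sum(x < threshold for x in eq) / len(eq)

households = [
    {"gross": 0,      "adults": 1, "children": 2},   # workless lone parent
    {"gross": 15_000, "adults": 1, "children": 0},
    {"gross": 28_000, "adults": 2, "children": 2},
    {"gross": 55_000, "adults": 2, "children": 1},
    {"gross": 40_000, "adults": 2, "children": 0},
]
print(f"relative poverty rate: {relative_poverty_rate(households):.0%}")
```

Changing the tax and benefit rules, the earnings distribution or the mix of household types and re-running the count is, in miniature, how the projections in this research are built.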

[Chart: contributions of tax and benefit, demographic, earnings and job market changes to child poverty in 2020]

So, changes to taxes, benefits, demography and earnings (the blue bars) increase absolute child poverty in 2020 by just over 6 per cent, but job market changes (the red bar) offset this a tad. Turning to the relative measure, tax and benefit changes raise poverty by around 5 per cent and the projected job market adds another 1 per cent by 2020. All groups except households headed by someone aged over 65 see rising absolute and relative poverty from tax and benefit changes, with lone parents hit particularly hard. Employment change makes things worse across the board, the one exception being absolute poverty among families with children.

We weren’t naive enough to expect the central forecast to eradicate poverty, so the plan was then to try out some different scenarios that JRF, the research team and our advisory group thought might have a positive impact. These variations were all based on changing the distribution (but not increasing the number) of jobs, and we didn’t vary the tax and benefit system. The second chart shows the impact of some of these scenarios on relative child poverty rates (the long bar shows the predicted 2020 rate of 25.7 per cent).

None of the alternative scenarios (the short bars) has any meaningful impact on that central child poverty projection. Keeping the employment structure as it is now would decrease poverty by a tiny 1.2 per cent, and that is the biggest difference of any scenario. A general rise in qualification levels across the workforce combined with reduced pay for the highest qualified, for example, actually increases child poverty relative to the central forecast (by 1.0 per cent). Most other scenarios have virtually zero effect by 2020.

[Chart: impact of alternative job market scenarios on the projected relative child poverty rate in 2020]

There are two core reasons for this disappointing lack of impact. The first is that low-paid and poorly qualified workers, along with women and part-time workers, are spread across the whole household income distribution. This means targeting these workers is not an especially effective way of targeting poverty. The second is the huge ‘drag’ on poverty rates exerted by the large number of workless households in the UK.
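A toy illustration of the first point, in the same stylised spirit as the sketch above: give every low-paid worker a 10 per cent pay rise and the headline poverty rate can fail to move at all, because some low-paid individuals are second earners in better-off households while workless households gain nothing. All figures here are invented.

```python
# Hypothetical illustration: raising individual low pay vs household poverty.
# Each household lists its members' gross earnings; all data is made up.

LOW_PAY = 10_000  # hypothetical low-pay threshold

def equivalised(earnings, adults, children):
    return sum(earnings) / (1.0 + 0.5 * (adults - 1) + 0.3 * children)

def poverty_rate(households):
    incomes = [equivalised(h["earnings"], h["adults"], h["children"])
               for h in households]
    threshold = 0.6 * sorted(incomes)[len(incomes) // 2]  # 60% of median
    return sum(i < threshold for i in incomes) / len(incomes)

households = [
    {"earnings": [],              "adults": 1, "children": 2},  # workless
    {"earnings": [8_000],         "adults": 1, "children": 1},  # low-paid
    {"earnings": [9_000, 35_000], "adults": 2, "children": 2},  # low-paid second earner
    {"earnings": [45_000],        "adults": 2, "children": 1},
]

before = poverty_rate(households)
for h in households:  # 10% pay rise for every low-paid worker
    h["earnings"] = [e * 1.1 if e < LOW_PAY else e for e in h["earnings"]]
after = poverty_rate(households)
print(f"poverty rate: {before:.0%} -> {after:.0%}")
```

In this made-up example the rate stays at 50 per cent: the rise never reaches the workless household, and the low-paid second earner's household was never poor to begin with.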

What do we do about these worrying findings? It is clear that interventions such as training and skills development need to be targeted on the basis of household need, not just individual need, if we are to have a serious impact on poverty. It is also clear that we need more jobs. A lot more, because the 1.5 million new jobs included in these forecasts are going to be nowhere near enough when 6 million people in the UK are currently seeking more work.

[Photo: A child on the Gorton estate in Manchester, where 27% of children live below the poverty line. Photograph: Getty Images]

Chris Goulden is the poverty programme manager at the Joseph Rowntree Foundation.

How science and statistics are taking over sport

An ongoing challenge for analysts is to disentangle genuine skill from chance events. Some measurements are more useful than others.

In the mid-1990s, statistics undergraduates at Lancaster University were asked to analyse goal-scoring in a hypothetical football match. When Mark Dixon, a researcher in the department, heard about the task, he grew curious. The analysis employed was a bit simplistic, but with a few tweaks it could become a powerful tool. Along with his fellow statistician Stuart Coles, he expanded the methods, and in doing so transformed how researchers – and gamblers – think about football.

The UK has always lagged behind the US when it comes to the mathematical analysis of sport. This is partly because of a lack of publicly available match data, and partly because of the structure of popular sports. A game such as baseball, with its one-on-one contests between pitcher and batter, can be separated into distinct events. Football is far messier, with a jumble of clashes affecting the outcome. It is also relatively low-scoring, in contrast to baseball or basketball – further reducing the number of notable events. Before Dixon and Coles came along, analysts such as Charles Reep had even concluded that “chance dominates the game”, making predictions all but impossible.

Successful prediction is about locating the right degree of abstraction. Strip away too much detail and the analysis becomes unrealistic. Include too many processes and it becomes hard to pin them down without vast amounts of data. The trick is to distil reality into key components: “As simple as possible, but no simpler,” as Einstein put it.

Dixon and Coles did this by focusing on three factors – attacking and defensive ability for each team, plus the fabled “home advantage”. With ever more datasets now available, betting syndicates and sports analytics firms are developing these ideas further, even including individual players in the analysis. This requires access to a great deal of computing power. Betting teams are hiring increasing numbers of science graduates, with statisticians putting together predictive models and computer scientists developing high-speed software.
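The core of such a model is easy to sketch. Below is a minimal, illustrative version of the basic independent-Poisson setup that Dixon and Coles built on: each team's expected goals are the product of its attack strength, the opponent's defensive weakness and, for the home side, a home advantage factor, and match outcome probabilities follow by summing over scorelines. The parameter values are invented, and the published model adds refinements (notably a correction for low-scoring draws and time-weighted parameter fitting) that are omitted here.

```python
import math

# Illustrative parameters (invented): attack and defence strengths per team.
ATTACK  = {"Arsenal": 1.4, "Everton": 1.1}
DEFENCE = {"Arsenal": 0.8, "Everton": 1.0}  # lower = harder to score against
HOME_ADVANTAGE = 1.3

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam**k / math.factorial(k)

def outcome_probs(home, away, max_goals=10):
    # Expected goals: attack * opposing defence, scaled for the home side.
    lam_home = ATTACK[home] * DEFENCE[away] * HOME_ADVANTAGE
    lam_away = ATTACK[away] * DEFENCE[home]
    p_home = p_draw = p_away = 0.0
    for h in range(max_goals + 1):
        for a in range(max_goals + 1):
            p = poisson_pmf(h, lam_home) * poisson_pmf(a, lam_away)
            if h > a:
                p_home += p
            elif h == a:
                p_draw += p
            else:
                p_away += p
    return p_home, p_draw, p_away

# Probabilities of (home win, draw, away win), summing to ~1.
print(outcome_probs("Arsenal", "Everton"))
```

With parameters fitted to historical results rather than invented, summing scoreline probabilities like this is what turns attack, defence and home advantage estimates into match forecasts, and odds.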

But it’s not just bettors who are turning to statistics. Many of the techniques are also making their way into sports management. Baseball led the way, with quantitative Moneyball tactics taking the Oakland Athletics to the play-offs in 2002 and 2003, but other sports are adopting scientific methods, too. Premier League football teams have gradually built up analytics departments in recent years, and all now employ statisticians. After winning the 2016 Masters, the golfer Danny Willett thanked the new analytics firm 15th Club, an offshoot of the football consultancy 21st Club.

Bringing statistics into sport has many advantages. First, we can test out common folklore. How big, say, is the “home advantage”? According to Ray Stefani, a sports researcher, it depends: rugby union teams, on average, are 25 per cent more likely to win than to lose at home. In NHL ice hockey, this advantage is only 10 per cent. Then there is the notion of “momentum”, often cited by pundits. Can a few good performances give a weaker team the boost it needs to keep winning? From baseball to football, numerous studies suggest it’s unlikely.
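A figure such as “25 per cent more likely to win than to lose at home” is, in essence, a win rate minus a loss rate for home sides. Here is a minimal sketch with invented results; it mirrors the spirit of Stefani's measure, not necessarily his exact definition.

```python
# Hypothetical results: (home_score, away_score) for a set of matches.
results = [(24, 10), (18, 18), (30, 12), (9, 15), (22, 19), (13, 27)]

wins   = sum(h > a for h, a in results)
losses = sum(h < a for h, a in results)
advantage = (wins - losses) / len(results)  # home win rate minus loss rate
print(f"home advantage: {advantage:+.0%}")
```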

Statistical models can also help measure player quality. Teams typically examine past results before buying players, though it is future performances that count. What if a prospective signing had just enjoyed a few lucky games, or been propped up by talented team-mates? An ongoing challenge for analysts is to disentangle genuine skill from chance events. Some measurements are more useful than others. In many sports, scoring goals is subject to a greater degree of randomness than creating shots. When the ice hockey analyst Brian King used this information to identify the players in his local NHL squad who had profited most from sheer luck, he found that these were also the players being awarded new contracts.
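One standard way to discount luck, though not necessarily the method King used, is to shrink each player's observed conversion rate towards the league average, trusting the raw figure more as the shot sample grows. A minimal empirical-Bayes-style sketch, with invented numbers throughout:

```python
# Shrinkage estimate of "true" shooting skill; all numbers are invented.
LEAGUE_RATE = 0.09   # league-average shot conversion rate
PRIOR_SHOTS = 200    # how strongly to weight the league-wide prior

def shrunk_rate(goals, shots):
    # Weighted blend of observed rate and league rate: with few shots the
    # estimate stays near the league average, so hot streaks are discounted.
    return (goals + LEAGUE_RATE * PRIOR_SHOTS) / (shots + PRIOR_SHOTS)

players = {"hot streak": (9, 40), "steady scorer": (45, 420)}
for name, (goals, shots) in players.items():
    raw = goals / shots
    print(f"{name}: raw {raw:.1%} -> adjusted {shrunk_rate(goals, shots):.1%}")
```

The player on a 40-shot hot streak is pulled sharply back towards the league average, while the 420-shot regular barely moves: exactly the distinction between luck and skill that raw totals obscure.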

Sometimes it’s not clear how a specific skill should be measured. Successful defenders – whether in British or American football – don’t always make a lot of tackles. Instead, they divert attacks by being in the right position. It is difficult to quantify this. When evaluating individual performances, it can be useful to estimate how well a team would have done without a particular player, which can produce surprising results.

The season before Gareth Bale moved from Tottenham Hotspur to Real Madrid for a record £85m in 2013, the sports consultancy Onside Analysis looked at which players were more important to the team: whose absence would cause most disruption? Although Bale was the clear star, it was actually the midfielder Moussa Dembélé who had the greatest impact on results.
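The with/without comparison itself is simple to compute from match records, even if interpreting it is not. A minimal sketch, with invented data:

```python
# Points per game with and without a given player; all data is invented.
# Each record: (player was in the squad?, points earned: 3 win / 1 draw / 0 loss)
matches = [(True, 3), (True, 3), (True, 1), (False, 0),
           (False, 1), (True, 3), (False, 0), (True, 1)]

def points_per_game(played):
    pts = [p for in_squad, p in matches if in_squad == played]
    return sum(pts) / len(pts)

with_player, without = points_per_game(True), points_per_game(False)
print(f"with: {with_player:.2f} ppg, without: {without:.2f} ppg, "
      f"impact: {with_player - without:+.2f}")
```

In practice analysts would adjust for opposition strength, injuries and sample size; a raw difference like this attributes to one player everything else that differed between the two sets of matches.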

As more data is made available, our ability to measure players and their overall performance will improve. Even so, statistical models cannot capture everything. Not only would complete understanding of sport be dull – it would be impossible. Analytics groups know this and often employ experts to keep their models grounded in reality.

There will never be a magic formula that covers all aspects of human behaviour and psychology. However, for the analysts helping teams punch above their weight and the scientific betting syndicates taking on the bookmakers, this is not the aim. Rather, analytics is one more way to get an edge. In sport, as in betting, the best teams don’t get it right every time. But they know how to win more often than their opponents. 

Adam Kucharski is author of The Perfect Bet: How Science and Maths are Taking the Luck Out of Gambling (Profile Books)

This article first appeared in the 28 April 2016 issue of the New Statesman, The new fascism