Why waste oil burning it when we can use it to make things?

The cost of <em>not</em> switching to renewables.

Grist's David Roberts highlights a really important piece of research by the World Future Council, examining the non-climate-change-related cost of not switching to renewables.

The reasoning is simple: fossil fuels can be burned to make energy, or used as a raw material (e.g. for production of plastics). Every barrel of oil we burn for energy is therefore a barrel which we can't use as a raw material. Thus:

Their burning — whenever they could have been replaced by renewables — is costly capital destruction.

The report concludes that the "future usage loss" resulting from current consumption is between $3.2trn and $3.4trn a year.

Roberts writes that "the exact numbers here are, like numbers in all economic modeling, probably going to turn out to be wrong," and he's definitely right. At first glance, the most important omission from the paper is any discussion of the difference between present and future value.

This isn't just the problem that resources worth $3.2trn at today's prices might not be worth that at tomorrow's; it's also that rigorous economic analysis always discounts the future.

Consider it this way: if you had the option to be paid £100 now or £100 in a year, you would clearly choose the former. The money in the future is less valuable, even though it is nominally the same amount. That's partially because people want things now, of course; but it's also because if you took the £100 now and put it in a savings account, it would be worth more than £100 in 12 months' time. (And let's not even begin to discuss how new technology will change the value of fossil fuels as raw materials in ways we can't predict. How will things change, for instance, if conductive plastics take off?)
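The discounting logic can be sketched numerically. The 5 per cent rate below is purely illustrative — the report itself specifies no discount rate, which is exactly the gap being pointed out:

```python
def present_value(amount, annual_rate, years):
    """Discount a future cash flow back to what it is worth today."""
    return amount / (1 + annual_rate) ** years

# £100 received now vs £100 received in a year, at an assumed 5% rate
now = present_value(100, 0.05, 0)
in_a_year = present_value(100, 0.05, 1)
print(f"now: £{now:.2f}, in a year: £{in_a_year:.2f}")  # £100.00 vs ~£95.24
```

The same arithmetic is why fossil fuels "destroyed" today but not needed as raw materials for decades are worth far less than their headline price suggests: £100 of feedstock used in 2050 is worth only about £16 today at that same 5 per cent rate.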

The same thinking needs to be applied to the question of the "destruction" of potential resources. Their value today — and thus the degree to which they ought to encourage us to switch to renewables — is lower the further into the future we are going to use them.

But really, the discussion of the actual value is slightly moot. Unless we're doing a massive overview of the costs of climate change mitigation — a second Stern report — then we can't properly weigh those costs against all the others. What we can say is that this is an under-discussed benefit of switching to renewable technology sooner rather than later, and of promoting climate change prevention rather than mitigation.

Incidentally, the research also provides a counter-point to the claim that it's not safe to leave fossil fuels in the ground. That's the argument that:

If we build enough renewable energy capacity to supply our entire system, there are still fossil fuels ready to burn. The people who built the renewable capacity may not want to burn them – but what about the next government? Or the next generation?

One option is to prevent future irresponsibility by burning fossil fuels today but with carbon capture and storage, ensuring that the carbon goes back underground. But another option is to switch to renewables and then continue using the fossil fuels for material production, locking up carbon not in vaults underground but in plastics.

In that analysis, even landfills get an image rehabilitation. They become gigantic carbon sinks, encouraging further use of fossil fuels as raw materials, removing more and more potential atmospheric carbon from circulation. There's hope for everyone yet.


Alex Hern is a technology reporter for the Guardian. He was formerly staff writer at the New Statesman. You should follow Alex on Twitter.


How science and statistics are taking over sport

An ongoing challenge for analysts is to disentangle genuine skill from chance events. Some measurements are more useful than others.

In the mid-1990s, statistics undergraduates at Lancaster University were asked to analyse goal-scoring in a hypothetical football match. When Mark Dixon, a researcher in the department, heard about the task, he grew curious. The analysis employed was a bit simplistic, but with a few tweaks it could become a powerful tool. Along with his fellow statistician Stuart Coles, he expanded the methods, and in doing so transformed how researchers – and gamblers – think about football.

The UK has always lagged behind the US when it comes to the mathematical analysis of sport. This is partly because of a lack of publicly available match data, and partly because of the structure of popular sports. A game such as baseball, with its one-on-one contests between pitcher and batter, can be separated into distinct events. Football is far messier, with a jumble of clashes affecting the outcome. It is also relatively low-scoring, in contrast to baseball or basketball – further reducing the number of notable events. Before Dixon and Coles came along, analysts such as Charles Reep had even concluded that “chance dominates the game”, making predictions all but impossible.

Successful prediction is about locating the right degree of abstraction. Strip away too much detail and the analysis becomes unrealistic. Include too many processes and it becomes hard to pin them down without vast amounts of data. The trick is to distil reality into key components: “As simple as possible, but no simpler,” as Einstein put it.

Dixon and Coles did this by focusing on three factors – attacking and defensive ability for each team, plus the fabled “home advantage”. With ever more datasets now available, betting syndicates and sports analytics firms are developing these ideas further, even including individual players in the analysis. This requires access to a great deal of computing power. Betting teams are hiring increasing numbers of science graduates, with statisticians putting together predictive models and computer scientists developing high-speed software.
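Those three factors can be sketched as a simple Poisson goal model — expected goals for each side are the product of their attack strength, the opponent's defensive weakness and, for the home side, a home-advantage multiplier. The parameter values below are invented for illustration, and the sketch omits the low-score correction and other refinements of the actual Dixon–Coles model:

```python
import math
import random

def poisson_sample(rate, rng):
    # Knuth's method for drawing from a Poisson distribution
    threshold, k, p = math.exp(-rate), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def simulate_match(home_attack, home_def, away_attack, away_def, home_adv, rng):
    """Goals scored = attack strength x opponent's defensive weakness,
    with a multiplicative home-advantage boost for the home side."""
    home_rate = home_attack * away_def * home_adv
    away_rate = away_attack * home_def
    return poisson_sample(home_rate, rng), poisson_sample(away_rate, rng)

rng = random.Random(42)
# Illustrative parameters: a strong home side against a middling visitor
results = [simulate_match(1.6, 0.9, 1.2, 1.1, 1.3, rng) for _ in range(10000)]
home_wins = sum(h > a for h, a in results) / len(results)
print(f"home win rate over 10,000 simulated matches: {home_wins:.2f}")
```

Fitting the attack and defence parameters to real results — rather than inventing them — is where the statistical work lies, and where the betting syndicates' computing power goes.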

But it’s not just bettors who are turning to statistics. Many of the techniques are also making their way into sports management. Baseball led the way, with quantitative Moneyball tactics taking the Oakland Athletics to the play-offs in 2002 and 2003, but other sports are adopting scientific methods, too. Premier League football teams have gradually built up analytics departments in recent years, and all now employ statisticians. After winning the 2016 Masters, the golfer Danny Willett thanked the new analytics firm 15th Club, an offshoot of the football consultancy 21st Club.

Bringing statistics into sport has many advantages. First, we can test out common folklore. How big, say, is the “home advantage”? According to Ray Stefani, a sports researcher, it depends: rugby union teams, on average, are 25 per cent more likely to win than to lose at home. In NHL ice hockey, this advantage is only 10 per cent. Then there is the notion of “momentum”, often cited by pundits. Can a few good performances give a weaker team the boost it needs to keep winning? From baseball to football, numerous studies suggest it’s unlikely.

Statistical models can also help measure player quality. Teams typically examine past results before buying players, though it is future performances that count. What if a prospective signing had just enjoyed a few lucky games, or been propped up by talented team-mates? An ongoing challenge for analysts is to disentangle genuine skill from chance events. Some measurements are more useful than others. In many sports, scoring goals is subject to a greater degree of randomness than creating shots. When the ice hockey analyst Brian King used this information to identify the players in his local NHL squad who had profited most from sheer luck, he found that these were also the players being awarded new contracts.
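Why goals are noisier than shots can be shown with a toy simulation. Because only a small fraction of shots go in, a season's goal tally carries much more random variation, relative to its size, than the shot count it came from. The shot rate and 10 per cent conversion rate below are invented for illustration:

```python
import math
import random
import statistics

GAMES, SHOT_RATE, CONVERSION = 38, 4.0, 0.10  # illustrative numbers

def poisson_sample(rate, rng):
    # Knuth's method for drawing from a Poisson distribution
    threshold, k, p = math.exp(-rate), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def season(rng):
    """One player's season: shots per game are Poisson; each shot
    independently becomes a goal with probability CONVERSION."""
    shots = sum(poisson_sample(SHOT_RATE, rng) for _ in range(GAMES))
    goals = sum(rng.random() < CONVERSION for _ in range(shots))
    return shots, goals

rng = random.Random(7)
seasons = [season(rng) for _ in range(2000)]
shots = [s for s, g in seasons]
goals = [g for s, g in seasons]

def relative_spread(xs):
    return statistics.pstdev(xs) / statistics.mean(xs)

print(f"shots: {relative_spread(shots):.2f}, goals: {relative_spread(goals):.2f}")
```

The same underlying player produces a much wider relative spread in goals than in shots, which is why a hot scoring streak says less about skill than a consistently high shot count — and why King's lucky contract-winners stood out.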

Sometimes it’s not clear how a specific skill should be measured. Successful defenders – whether in British or American football – don’t always make a lot of tackles. Instead, they divert attacks by being in the right position. It is difficult to quantify this. When evaluating individual performances, it can be useful to estimate how well a team would have done without a particular player, which can produce surprising results.

The season before Gareth Bale moved from Tottenham Hotspur to Real Madrid for a record £85m in 2013, the sports consultancy Onside Analysis looked at which players were more important to the team: whose absence would cause most disruption? Although Bale was the clear star, it was actually the midfielder Moussa Dembélé who had the greatest impact on results.

As more data is made available, our ability to measure players and their overall performance will improve. Statistical models cannot capture everything. Not only would complete understanding of sport be dull – it would be impossible. Analytics groups know this and often employ experts to keep their models grounded in reality.

There will never be a magic formula that covers all aspects of human behaviour and psychology. However, for the analysts helping teams punch above their weight and the scientific betting syndicates taking on the bookmakers, this is not the aim. Rather, analytics is one more way to get an edge. In sport, as in betting, the best teams don’t get it right every time. But they know how to win more often than their opponents. 

Adam Kucharski is author of The Perfect Bet: How Science and Maths are Taking the Luck Out of Gambling (Profile Books)

This article first appeared in the 28 April 2016 issue of the New Statesman, The new fascism