Maximum surrender

"Lawrence of Arabia" is back in the cinemas, bigger than ever


David Lean’s 1962 Lawrence of Arabia is back in cinemas this week in a new 4K restoration of the reconstructed version (first seen in its entirety in 1988). No, I didn’t know what 4K meant either so I had to draw on the wisdom of the oracles. (I used a search engine.) It’s the pixels, dummy. 4K denotes a resolution of approximately 4,000 pixels wide and 2,000 pixels high, compared to the previous standard of 1,920 x 1,080 pixels. That’s more pixels than you’ve got popcorn in your Mega Meal Deal Bucket.

But we need not concern ourselves with pixels. What matters is the new clarity they provide, the familiar spectacles which they render with fresh vividness: Peter O’Toole’s eyes, which are now so alluringly blue that you feel you could dive right through the screen and into those azure peepers, leaving behind only a sand-splash; the tiny orange flame from which Lean cuts to the singed Arabian sunrise. When the wind ripples across the desert, you would swear now that you could make out each individual grain of sand shifting beneath it as if under the writhing of a vast invisible sidewinder.

I’d never seen Lawrence of Arabia on a cinema screen before. And though it’s a cliché to say that seeing it on television isn’t really seeing it at all… well, it’s a cliché for a reason. The decades of respect and admiration lavished on Lean’s best-known and most-loved work (here is Steven Spielberg talking about the effect the movie had on him) have had the effect of interring it, as with most films regarded widely as masterpieces. Seeing it at the cinema can only rescue it from its reputation and bring it back to life. (I’d also recommend Kevin Jackson’s thorough and compelling study of the film, in the BFI Classics series, as an après-screening chaser.)

This is a film partly about depth of experience and depth of vision—both literally, in its most famous shot (of Omar Sharif as Sherif Ali riding toward the camera from afar), and figuratively, in its use of a flashback structure which purports, like Citizen Kane, to explain a man who turns out in the final analysis to be beyond mere explanation. So it feels only right that seeing it at the cinema takes a sizable chunk out of one’s own day: once you factor in the overture (how I love overtures, especially at the cinema, where they are now more of an anachronism than in the theatre), an entr’acte and an intermission, you’re looking at four hours, more or less, in the dark.

I’m a big fan of intermissions at the cinema. The ones stipulated by the filmmaker, I mean, rather than those imposed by the management. (I don’t know how widespread the practice was, but I remember the Odeon chain simply halting The Godfather Part III and Dances With Wolves so that one of their employees, who had clearly drawn the short straw that day, could flog some choc-ices from their wearable tray.) Intermissions are only commonplace now for Bollywood films, which are structured with that necessity in mind, but many other movies could really benefit from them. It suits Lawrence of Arabia to have that break approximately two-thirds of the way through; I feel it helps us to register more keenly the change in tone that’s marked by the arrival in the desert of the journalist Jackson Bentley (played by Arthur Kennedy), a fictionalised version of Lowell Thomas. With Bentley’s appearance comes an acknowledgement of the mythologising process which T E Lawrence underwent, and a slight shift by the picture into a more analytical and contemplative sphere.

The theatrical engagement paves the way for the release of Lawrence of Arabia on Blu-ray. Blu-ray, schmu-ray: see it at the cinema for maximum impact, maximum surrender.

Lawrence of Arabia is on release from Friday.

A portrait of T E Lawrence

Ryan Gilbey is the New Statesman's film critic. He is also the author of It Don't Worry Me (Faber), about 1970s US cinema, and a study of Groundhog Day in the "Modern Classics" series (BFI Publishing). He was named reviewer of the year in the 2007 Press Gazette awards.


How science and statistics are taking over sport

An ongoing challenge for analysts is to disentangle genuine skill from chance events. Some measurements are more useful than others.

In the mid-1990s, statistics undergraduates at Lancaster University were asked to analyse goal-scoring in a hypothetical football match. When Mark Dixon, a researcher in the department, heard about the task, he grew curious. The analysis employed was a bit simplistic, but with a few tweaks it could become a powerful tool. Along with his fellow statistician Stuart Coles, he expanded the methods, and in doing so transformed how researchers – and gamblers – think about football.

The UK has always lagged behind the US when it comes to the mathematical analysis of sport. This is partly because of a lack of publicly available match data, and partly because of the structure of popular sports. A game such as baseball, with its one-on-one contests between pitcher and batter, can be separated into distinct events. Football is far messier, with a jumble of clashes affecting the outcome. It is also relatively low-scoring, in contrast to baseball or basketball – further reducing the number of notable events. Before Dixon and Coles came along, analysts such as Charles Reep had even concluded that “chance dominates the game”, making predictions all but impossible.

Successful prediction is about locating the right degree of abstraction. Strip away too much detail and the analysis becomes unrealistic. Include too many processes and it becomes hard to pin them down without vast amounts of data. The trick is to distil reality into key components: “As simple as possible, but no simpler,” as Einstein put it.

Dixon and Coles did this by focusing on three factors – attacking and defensive ability for each team, plus the fabled “home advantage”. With ever more datasets now available, betting syndicates and sports analytics firms are developing these ideas further, even including individual players in the analysis. This requires access to a great deal of computing power. Betting teams are hiring increasing numbers of science graduates, with statisticians putting together predictive models and computer scientists developing high-speed software.
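To make the three-factor idea concrete, here is a minimal sketch of a model in this spirit: expected goals for each side are built from attack and defence strengths plus a multiplicative home advantage, and match outcome probabilities follow from two Poisson distributions. The function names and parameter values are illustrative, not taken from Dixon and Coles's paper, and their published model also includes a correction for low-scoring results, which is omitted here.

```python
import math

def poisson_pmf(k, lam):
    """Probability of scoring exactly k goals under a Poisson(lam) model."""
    return lam ** k * math.exp(-lam) / math.factorial(k)

def match_probabilities(attack_home, defence_away, attack_away, defence_home,
                        home_advantage=1.3, max_goals=10):
    """Return (home win, draw, away win) probabilities.

    Expected goals for each team are the product of its attack strength and
    the opponent's defensive weakness, with the home side's total boosted by
    a home-advantage factor -- the three factors the article describes.
    """
    lam_home = attack_home * defence_away * home_advantage
    lam_away = attack_away * defence_home
    home = draw = away = 0.0
    # Sum over all plausible scorelines; scores above max_goals are negligible.
    for h in range(max_goals + 1):
        for a in range(max_goals + 1):
            p = poisson_pmf(h, lam_home) * poisson_pmf(a, lam_away)
            if h > a:
                home += p
            elif h == a:
                draw += p
            else:
                away += p
    return home, draw, away
```

Even with two identically rated teams, the home-advantage factor alone tilts the win probability noticeably towards the home side, which is why fitting that factor accurately matters so much to betting syndicates.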

But it’s not just bettors who are turning to statistics. Many of the techniques are also making their way into sports management. Baseball led the way, with quantitative Moneyball tactics taking the Oakland Athletics to the play-offs in 2002 and 2003, but other sports are adopting scientific methods, too. Premier League football teams have gradually built up analytics departments in recent years, and all now employ statisticians. After winning the 2016 Masters, the golfer Danny Willett thanked the new analytics firm 15th Club, an offshoot of the football consultancy 21st Club.

Bringing statistics into sport has many advantages. First, we can test out common folklore. How big, say, is the “home advantage”? According to Ray Stefani, a sports researcher, it depends: rugby union teams, on average, are 25 per cent more likely to win than to lose at home. In NHL ice hockey, this advantage is only 10 per cent. Then there is the notion of “momentum”, often cited by pundits. Can a few good performances give a weaker team the boost it needs to keep winning? From baseball to football, numerous studies suggest it’s unlikely.

Statistical models can also help measure player quality. Teams typically examine past results before buying players, though it is future performances that count. What if a prospective signing had just enjoyed a few lucky games, or been propped up by talented team-mates? An ongoing challenge for analysts is to disentangle genuine skill from chance events. Some measurements are more useful than others. In many sports, scoring goals is subject to a greater degree of randomness than creating shots. When the ice hockey analyst Brian King used this information to identify the players in his local NHL squad who had profited most from sheer luck, he found that these were also the players being awarded new contracts.
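One way to see why goal tallies mislead is a simulation in the spirit of King's observation (the set-up below is my own illustration, not his actual analysis): give a league of players identical shot volume and identical true scoring skill, so that any spread in their goal totals is pure chance. A naive scout ranking them by goals would still find "stars" and "flops".

```python
import random

def simulate_identical_players(n_players=1000, shots=100, p_score=0.1, seed=1):
    """Simulate players with the same shot count and the same true
    conversion probability. The spread in their goal totals is
    entirely luck, yet it looks like a skill ranking."""
    rng = random.Random(seed)
    return [sum(rng.random() < p_score for _ in range(shots))
            for _ in range(n_players)]

goals = simulate_identical_players()
```

With 100 shots at a 10 per cent conversion rate, every player "should" score about ten goals, but the top of the simulated table will comfortably outscore the bottom despite having no extra talent at all. That gap is the noise a contract committee can mistake for skill.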

Sometimes it’s not clear how a specific skill should be measured. Successful defenders – whether in British or American football – don’t always make a lot of tackles. Instead, they divert attacks by being in the right position. It is difficult to quantify this. When evaluating individual performances, it can be useful to estimate how well a team would have done without a particular player, which can produce surprising results.
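The crudest version of that with/without estimate is simply points per game when the player featured versus when they did not. The sketch below assumes a hypothetical match log of (squad, points) pairs; real analyses must also adjust for confounders such as opposition strength and other absentees, which is exactly why the naive numbers can surprise.

```python
def with_without(matches, player):
    """matches: list of (squad, points) tuples, where squad is the set of
    players who featured and points is what the team earned (3/1/0).
    Returns (points per game with the player, points per game without)."""
    with_pts = with_n = without_pts = without_n = 0
    for squad, pts in matches:
        if player in squad:
            with_pts += pts
            with_n += 1
        else:
            without_pts += pts
            without_n += 1
    ppg_with = with_pts / with_n if with_n else float("nan")
    ppg_without = without_pts / without_n if without_n else float("nan")
    return ppg_with, ppg_without
```

A large gap between the two figures flags a player whose absence disrupts results, even if their individual statistics, like a positionally astute defender's tackle count, look unremarkable.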

The season before Gareth Bale moved from Tottenham Hotspur to Real Madrid for a record £85m in 2013, the sports consultancy Onside Analysis looked at which players were more important to the team: whose absence would cause most disruption? Although Bale was the clear star, it was actually the midfielder Moussa Dembélé who had the greatest impact on results.

As more data is made available, our ability to measure players and their overall performance will improve. Statistical models cannot capture everything. Not only would complete understanding of sport be dull – it would be impossible. Analytics groups know this and often employ experts to keep their models grounded in reality.

There will never be a magic formula that covers all aspects of human behaviour and psychology. However, for the analysts helping teams punch above their weight and the scientific betting syndicates taking on the bookmakers, this is not the aim. Rather, analytics is one more way to get an edge. In sport, as in betting, the best teams don’t get it right every time. But they know how to win more often than their opponents. 

Adam Kucharski is author of The Perfect Bet: How Science and Maths are Taking the Luck Out of Gambling (Profile Books)

This article first appeared in the 28 April 2016 issue of the New Statesman, The new fascism