Our ash trees are dying, but don't despair: catastrophes are natural events in the lives of trees

Ash dieback is a tragic thing to watch, but we shouldn't be too gloomy. Woody vegetation responds, adapts, regroups. What emerges in its recovery stage may not be the same as before, but it will always be a vital, dynamic, arboreal community.

These spring days I gaze out of my study with a mite of foreboding, waiting for a premature and possibly terminal autumn. Rooted in the bank of the ancient pond beyond the window are two multi-trunked ashes, airy, sprawling trees, which together form a canopy stretching 20 metres across. They’re fine at the moment, but just a few miles further north is the wood (its name, Ashwellthorpe, now seems an eerie black joke) where ash dieback first appeared in the wild. All winter the gales have been blowing Chalara spores southwards and it’s almost inevitable that the fungus will reach our garden. If these two trees are smitten, it will change the whole feel of our home patch.

They have a spaciousness that ashes rarely have the chance to reach on narrow hedgebanks or in the tight ranks of woods. They’re amphitheatres for flocks of birds, vast and dramatic weathervanes. Ash branches are elastic, and when they flail in the wind it is as if waves of wood are breaking across the garden. Losing them would bring not just a transformation of our view, but an unsettling shift in our sense of what constitutes a landscape, and what it contributes to our sense of home and security.

Much of Britain waits for the coming summer in a similar mood, wondering what the country will look like without our third-commonest tree. Ash doesn’t have the craggy grandeur of oak or the voluptuous grace of beech. It’s short-lived, usually collapsing at about 200 years, unless it’s been coppiced or pollarded. Its pale trunks and filigree leaves, and a habit of regenerating in dense colonies, make it an often unnoticed choral background in woods, a visual hum behind the strong timbres of the big trees. But it’s this quality that we love in it, that quiet, pale, graceful background presence. Woods will, for a while, look emptied of depth if the disease hits badly. In hedgerows ashes make up a tenth of all mature trees, and most of the older individuals are pollards, low-slung and often cloaked with second-storey thickets of ivy, so these, too, are easily passed by, unremarked. We will notice their absence, if and when they go.

But catastrophising (entirely understandable after Dutch elm disease) is not a helpful response to threats to trees, and our anxious concern for them is easily trumped by our ignorance of their survival skills and community life. So, back home, I take a dispassionate surveyor’s tour of the garden, trying to imagine what it will be like if these two great sheaves of wood, and half a dozen younger trees, succumb. Close to, the portents don’t look so bad. Our ashes are surrounded (as they are in many places) by thorn trees, burgeoning self-sown oaklings, suckering wild cherries. In ten years’ time the gaps they will leave, if struck down, will have closed up, and the ashes will be metamorphosing into complex catacombs of decaying wood, full of beetles and woodpecker probings.

We have a cultural block against looking at trees like this, as dynamic and evolving vegetation. We want them to stay exactly as and where they are, and don’t entirely believe in either their powers of self-generation or their afterlife. In an unstable world they’ve become monuments to security, emblems of continuity and peacefulness. We hug them, plant them as civic gestures and acts of reparation, give them pet names. When this cosy relationship is turned upside down – as it was, for instance, during the great storm of October 1987 – we are shipwrecked, wondering if we’ve been bad guardians, not protected them enough. “Trees are at great danger from nature,” warned the Tree Council after that storm – in an extraordinary solecism that seemed to place the arboreal republic entirely inside the kingdom of man. Very rarely do we ask whether we might have mothered them too much.

When Chalara struck the UK in 2012 it was clearly, in part, a breakdown in proper stewardship. The general public (and a good number of landowners) had never heard of the disease, but woodland ecologists and commercial foresters had been nervously tracking its inexorable westward march across Europe since the mid-1990s. Some urged the government to impose greater restrictions on the importation of ash saplings, but most had few ideas about how to interpret or react to it. That is not surprising. The fungus, now known as Chalara fraxinea, is biologically mysterious, an entirely new organism of uncertain origins. It probably evolved in eastern Asia, where it appears to be harmless to native ash species. Its ancestor is a benign and widespread leaf fungus called Hymenoscyphus albidus, native even in the UK. But at some recent date, this threw up a mutant, Hymenoscyphus pseudoalbidus, with slight genetic differences but a terrible virulence. The windblown spores infect ash foliage in spring, turning the leaf-tips brown. The fungal “roots” (hyphae) spread through the leaf stalks into the branches and trunk, blocking off the tree’s water supply. Diamond-shaped lesions appear on the trunk and the leaves turn brown and wilt. Young trees can die within a year, but older ones appear able to survive for much longer. The fungus forms its spores in the leaf litter in summer and these are dispersed in the wind over the following months. This is effective at spreading the disease over relatively short distances, but wind dispersal is limited by the fact that the spores survive in the air for only a few days. In Norway, Chalara has moved by between 20 and 30 kilometres a year.

The first European cases were recorded in Poland in 1992. It had reached Lithuania by 1994 and then moved west and north, arriving in Italy, France and the Netherlands between 2007 and 2010. In Denmark the susceptibility of trees proved to be almost total, with not much more than 1 per cent of Danish ashes left alive since the disease first arrived there in 2003.

It was this frightening, epidemic contagiousness that caused such alarm and confusion when Chalara was spotted in Britain, first on nursery saplings imported from Holland, then on wild trees, which it can only have reached on the wind. Fantastical statistics were bandied about in the media – that 30 per cent of all Britain’s trees were ashes and that, with a host of other tree diseases already established here, we were facing a dead and denuded landscape, like the Somme after the Great War. In fact, ashes make up a little over 5 per cent of our tree cover in Britain and they are highly diverse genetically. The consequences of this variability, in terms of disease susceptibility, are already making themselves felt in Poland, the first country to be hit. Between 10 and 25 per cent of Polish ashes are showing some level of natural immunity. In closely monitored populations in Lithuania, 10 per cent of trees have survived infection for eight years and appear to be able to pass this resistance on to their offspring.

Natural resistance is likely to be the best hope for the survival of a core population of ashes in the UK. Isolated from the continent for nearly 8,000 years, our trees may be more genetically diverse than those in Poland. For example, ashes that thrive in the sparse clitter of Yorkshire limestone are quite distinct from the tall poles that grow in damp East Anglian loams, and neither will survive if transplanted to the other habitat. Many ashes have male and female branches (and therefore flowers) on the same tree, so the potential for complex cross-pollination and extreme genetic variation is high. It’s a relief that for once the government has listened to its scientists and based its response on giving time and space for natural resistance to appear, and then capitalising on it, if need be, with cross-breeding. Sanitation felling, which was talked about in the first wave of panic, would have been worse than useless, doing the disease’s work for it by eliminating potentially resistant trees and throwing more dormant spores into circulation.

But this laissez-faire approach isn’t much liked. The public cry is for “something to be done”, for the excoriation of scapegoats in what is as much a natural event as a bureaucratic disaster, for raising the barricades, conjuring up a new woodland estate for the next generation. How have we come to regard trees like this? As human products or, worse, dependent arboreal children, capable of appearing only if we artificially inseminate the ground. As vulnerable to abuse from outside agencies (“nature” or nasty foreign organisms), but never from ourselves, and best put out of their misery if they become ill or old. Understanding how these stereotypes and attitudes originated, and what perpetuates them today, is crucial if we are to make a proper cultural response to, and an accommodation with, ash dieback, and with the many other diseases that are likely to affect our trees in the decades to come.

The 17th-century jobbing journalist, whimsical gardener and discreet royalist John Evelyn is most often credited with popularising the idea of trees as property, as status symbols, models of order, heritable goods, investments with a guaranteed growth rate. His book Sylva, or a Discourse of Forest-Trees and the Propagation of Timber in His Majesty’s Dominions (1664) was publicly commissioned in response to a largely imaginary naval wood shortage, but its hidden agenda was to provide a manifesto for a tree-farming bonanza, a wood rush. Evelyn, plucking figures from the air, boasted to the king that his book had been “the sole occasion of furnishing for your almost exhausted dominion with more than two millions of timber trees”, or maybe just one million, he suggested in the second edition. But plantation fever was really ignited in the next century, by German forest science. Forstwissenschaft was steering the growing of timber towards a mathematically precise system, with the trees (increasingly conifers rather than “hearts of oak”) spaced and organised for maximum productivity.

This devotion to order and tidiness entered the aesthetics of landscape, too, and planners such as Capability Brown, whose regimented parks are still mystifyingly seen as epitomes of rural beauty, made fortunes. Not all his contemporaries admired him. The radical landowner Richard Payne Knight urged the people to vandalise Brown’s landscapes to release the incipient wildwood. Uvedale Price, the doyen of the picturesque movement, was more subtly scathing of Brown’s trademark – the clump, a kind of woodland canapé: “the clump”, Price wrote, “a name, which if the first letter were taken away, would most accurately describe its form and effect . . . from the trees being generally of the same age and growth, from their being planted nearly at the same distance in a circular form, and from each tree being equally pressed by his neighbour, [they] are as like each other as so many puddings turned out of a common mould”.

For their part, ordinary rural people were mystified by the need for plantations, having lived for thousands of years with woods that renewed themselves spontaneously and indefinitely by seeding, or by regrowth from cut coppice stools and pollards. In place of this system of natural regeneration came the notion of trees as artefacts, biddable machines for the production of timber, programmed at every stage of their lives from planting to cutting.

The fundamental grammar of our relationship with them had been changed. Previously, “growing” had been an intransitive verb in the language of woods. Trees grew, and we, in a kind of subordinate clause, took things from them. In the forest-speak of the Enlightenment, “growing” became a transitive verb. We were the subject and trees the object. We were the cause of their existence in particular places on the earth.

In our own time, the idea that trees have reproductive processes of their own has almost disappeared from our cultural memory. Instead, we like to believe that they can begin their lives only if we deliberately instal them – an idea that feeds off our ecological guilt. Tree-planting has become the great ritual of atonement, a way of making painless amends for the devastation our species has wreaked across the planet. It is a perfect symbol of procreation: the penetration of the soil, the implantation of new life, the years of aftercare and cosseting. This is the way to repair the earth, nudge it towards renewed vitality, without in any way surrendering our authority over it. Human beings know best, make better parents than nature.

Now, in the extremities of ash dieback, we can see that decades of well-intentioned planting have been not only often unnecessary, but, quite possibly, dangerous. Runtish saplings, often mislabelled and of unknown provenance, are shoved into the ground, regardless of whether they might be vectors for disease, or whether the soil is right and the site appropriate. Often they are so completely misplaced (compared to the exact choices that have been made by successful, naturally sprung seedlings) that they soon expire, even with the tree equivalent of intensive care. Those plantations that do survive into maturity are sometimes as conducive to epidemic disease as hospital wards. The trees are too closely packed, too evenly aged, too genetically uniform.

Natural regeneration isn’t universally appropriate. Trees don’t easily establish themselves in thick grassland, for instance, or necessarily where we want them. But in most situations they are irrepressible (oak, ash and birch especially); witness how much time is spent hacking them down, in gardens, in nature reserves, on road verges and heathland. Nor, when it is successful, is the result often recognised as young or nascent woodland. It’s written off under that pejorative term, “scrub”. This transient vegetation, full of wild rose, brambles and thorn bushes, which act as protection for broadleaved saplings in their vulnerable early years, is another demonised natural form. The landscape architect Nan Fairbrother was dismissive of it in her influential 1970 book, New Lives, New Landscapes. “Incipient scrub always lurks,” she wrote, “only temporarily suppressed: it is the state of original sin in our landscape.”

In this dystopian vision – the opposite of the Romantic ideal of the immemorial wildwood – woodland ravaged by disease, mugged by alien squirrels and bashed about by un-British extremes of weather could only survive with continuous human vigilance. Scrub, the recovery mode, was not woodland in the making; it was the threatening new climax vegetation, the bleak future of unmanaged England.

This patronising desire for human control, an insistence that trees’ natural growth should conform to our current cultural stereotypes, pesters them at every stage of their life. Middle-aged trees, which lack a commercially viable uprightness, or are ruffed with low branches, are referred to as “rubbish”, or examples of “inadequate management”, as economic and aesthetic judgements meld. We have a soft spot for the truly old, for those gnarled and hollow hulks that inhabit Arthur Rackham’s drawings and, in the real world, ancient wood-pastures such as the New Forest. But we don’t like either the circumstances or the stages that are necessary to generate these awesome wooden monuments. Gale damage, fungal invasions, lightning strikes, repeated defoliation by insects are all regarded with distaste, and as intrinsically inimical to a tree’s “health”.

In fact, trees deal with stress and disease and ageing much better than we do. Oaks easily recover from complete defoliation by insects within a month or two. “Stag-heading”, where a fork of dead branches protrudes above a reduced crown of leaves, is looked on with horror as some kind of illness, a malformation that transgresses our fixed ideas of what a tree “should” look like. Yet it is an almost universal adaptation, a budgeting move that trees make when they need to economise with water. What looks like decrepitude in an old tree is often a sign of a state of calm senescence that can last for centuries. Trees knocked flat by gales can survive perfectly well in a horizontal position.

Oliver Rackham tells a cautionary story about the ancient beeches in the Pindus Mountains in northern Greece. Pindus is a remote area, far from any industrial centres, but in the 1980s the beeches showed all the signs of another fashionable affliction – Waldsterben, forest death from acid rain. The symptoms were the familiar ones of stress: dieback at the tops of trees, leaf-yellowing, early leaf fall. But many of the huge trees were then more than 300 years old. They were covered in luxuriant lichens, a sure sign of unpolluted air. And their narrow annual rings showed they had been in this state of retrenchment for centuries. Reduced vitality was the reason for their survival.

There are, of course, degrees of stress that trees can’t survive – prolonged drought, epidemics such as elm disease and the Asian canker that devastated America’s chestnuts. But bufferings by the weather, minor ailments, limb loss, have been the circumstances of their evolution, and they’re well able to cope. It is worth considering what a wood unaffected by any of the organisms we regard as hostile would be like. There would be no insects, and therefore no birds. No lichens or toadstools or intriguing hollows. It would not be an ecosystem at all, just barren rows of leaves on poles.

I’m not immune to the lure of ideal woodland images myself. In the 1970s and 1980s I owned a 16-acre ancient wood in the Chilterns, which I ran as a community project. But it was also a private playground and I got to understand very well the seductive licence to control that ownership grants you. I behaved like a matronly gardener at times, clearing brambles around my pet flowers and clipping twigs to give seedling trees more light. I thought I knew what I was doing at the time, but I’m not so certain now.

In one sense, I was entirely selfish. I wanted the wood to be my kind of wood, to my taste, yet I wanted it to be ecologically sound, too. I nipped and tucked the vegetation round our primrose clumps so they would make a better show. I ring-barked the spongy, alien poplars to provide a bit of standing deadwood. But when they started to topple over, years later, I nudged them into a position where they’d take down other trees’ branches with them, in a wildwood-circus tumbling act. I lopped sycamores that were shading out ash, and ashes that were shading beechlings, as if I had certain knowledge of the proper hierarchy in trees. And I remember the excuses I made to myself for gardening in a place that was supposed to be halfway wild. I was simply speeding up its progress towards a more “natural” state. I was doing no more than would be done by a localised wind, or a tribe of bark-beetles making a corner of the wood commodious for themselves. I was part of nature myself, for heaven’s sake, deserving of a niche along with the rest of them.

This was true, but disingenuous. Finding a balance between affectionate engagement and overbearing management is a philosophical and practical conundrum that needs a different approach in every situation; the challenge presented by ash dieback is no exception. But there are precedents, such as the great storm of 1987, which toppled 15 million trees in a matter of five hours. An important lesson which emerged from that event was the folly of rushed or aggressive action. There was more damage caused by reckless clearing up after the storm than by the wind. Still-living trees, millions of seedlings and even the topsoil were swept away by bulldozers in many woods, in response to political pressure and the public distaste for what appeared to be “untidiness”.

The contrast between the miserable replanting in these areas and the spectacular regrowth in areas left completely alone is a lesson that has been absorbed by conservation organisations, but not yet into our civic culture.

That favourite GP’s phrase – “watchful waiting” – is also appropriate. There is still much to learn about the Chalara fungus, about, for instance, its speed of spread and which ages of trees are most susceptible. The detection, and protection, of trees that seem to be resistant must be the highest priority. So, wherever issues of safety aren’t important, should the preservation of larger trees that succumb. A “dead” tree is still a tree, and provides a rich habitat for birds, insects, fungi and mosses.

The existence of a large population of indigenous ashes is our best safeguard for the future and makes rather baffling the Forestry Commission’s experiment, initiated early in May, of planting out trial plots with 150,000 saplings of “15 different varieties”. The intention is to discover whether a few may be resistant and eventually propagate from them. But as 80 million ashes from probably ten times that number of genotypes are already engaged in just such an experiment across Britain, it is hard to see this as much more than a PR exercise – one that fits tidily into our long, hubristic belief that the salvation of trees lies only with us and our superior arboreal intelligence. Beyond that, the encouragement of much more diverse, self-regenerating and uneven-aged woodlands – even where these are non-native (such as sycamore, sweet chestnut and turkey oak) – is the best contribution we can make.

The sycamore is currently demonised as an “invasive alien”, introduced to Britain some time in the late Middle Ages (though it is quite possibly an indigenous species given to erratic and untypical behaviour for a native because of its own fungal affliction, tar spot). But we will need to make an accommodation with it as perhaps the best natural coloniser of bare patches that is available. It can’t host many of the insects that have co-evolved with ash over thousands of years, but it will be a partial refuge for the lichens which are ashes’ outstanding familiars, and restore a general ambient woodiness. Climate change is making the categories of native and non-native increasingly fuzzy and we may find ourselves grateful for some immigrant biodiversity.

Above all, the lesson of the storm was that catastrophes – be they disease, or climatic trauma, or insect predation – are entirely natural events in the lives of trees and woods. Woody vegetation responds, adapts, regroups. What emerges in its recovery stage may not be the same as before, but it will always be a vital, dynamic, arboreal community. The same process will happen with ash, perhaps more quickly than we think.

Copyright Richard Mabey. The Ash and the Beech by Richard Mabey was published by Vintage Books on 6 June (£9.99).

Waving, not drowning: we take the palely unobtrusive ash for granted and yet strain to impose unnatural order on the tangle of our forests. Photograph: Getty Images.

This article first appeared in the 03 June 2013 issue of the New Statesman, The Power Christians


How Roger Moore made James Bond immortal

Roger Moore, James Bond actor, has died at the age of 89. 

Unlike every other actor to play James Bond, Roger Moore was already a star when he came to the role. Not a star of motion pictures, admittedly, although he had topped the bill in some minor films, but a star of television: the lead of the adventure series Ivanhoe (1958-59) and The Saint (1962-69), the latter of which brought him international fame and reportedly made him the highest-paid actor on television.

It was a far cry from his beginnings. Although he lived much of his life abroad (it has been said for tax reasons, something the actor himself denied) and was regarded by many as the archetypal English gentleman, Moore began life as a working-class Londoner. Born in Stockwell in 1927, the son of a policeman and his wife, he grew up in a rented three-room, third-floor flat in SW8 and attended Battersea Grammar School. There, he later insisted, "looking as though I was listening" was the only subject at which he excelled. Battersea Grammar was, despite the name, then an overcrowded local school, boxed in by the buildings and sidings of Clapham Junction Station and made dark and noisy by the still-expanding railways.

As both Moore and his friend and fellow film star Michael Caine have observed, their backgrounds in urban South London are almost identical, something that has never fitted with public perception of either of them. The difference was, as again both noted, that when it came to National Service Moore, unlike Caine, was picked out as officer material and trained accordingly, in the process acquiring the accent he would carry for the rest of his life.

The common, near universal, ignorance of Moore’s origins (although he himself was never shy of them, writing about his family in his various books and discussing them in interviews) says something significant about Roger Moore the public figure. Despite being a household name for decades, an international film star and latterly a knight of the realm, he was, if not misunderstood by his audience, then never really quite what they assumed him to be.

This extends, of course, into his work as an actor. Moore was often mocked by the unimaginative, who saw him as a wooden actor, or one lacking in versatility. Often, he was self-deprecating enough to play along. And yet the camera loved him, really loved him, and his timing, particularly but not exclusively comic, was extraordinary. To see Moore work in close-up is to see someone in absolute control of his craft. His raised eyebrow, often mocked, was a precision instrument, exactly as funny or exactly as surprising as he wanted it to be.

It is more accurate, as well as fairer, to say that Moore was typecast rather than limited, and he made no secret of the fact that he played his two most famous roles, Simon Templar in The Saint and James Bond 007, as essentially the same person. But he would have been a fool not to. Bond producers Harry Saltzman and Albert R "Cubby" Broccoli’s EON Productions wanted Templar nearly as much as they wanted Moore.

They had thought of the actor for the part of 007 as early as 1961, before casting Sean Connery and before Moore had played The Saint, so it was not just his success as Templar that made him suitable. Yet both producers knew that audiences in both Britain and America loved the way Moore played Templar, and that if that affection could be translated into ticket sales, their series would be on to a winner.

It was a gamble for all involved. George Lazenby had already tried and, as far as many were concerned, failed to replace Connery as James Bond. When it came to 1971’s outing in the series, Diamonds Are Forever, David Picker, head of United Artists, which distributed Bond films, insisted that Connery be brought back for an encore before EON tried a third actor in the role, re-hiring Connery at a then record $1.25m and paying off actor John Gavin, whom EON had already cast. That’s how high the stakes were for both the Bond series and Moore’s reputation when he stepped into the role for 1973’s Live and Let Die. The film was a huge success, so much so that EON rushed out its sequel, The Man With The Golden Gun, the next year, rather than after two years as it had planned.

The reason for that success, although the film has many other good qualities, is that Moore is brilliant in it. His whip-thin, gently ironic and oddly egalitarian adventurer, capable of laughing at himself as well as others, is a far cry from Connery’s violently snobbish "joke superman". It’s been said that Connery’s Bond was a working-class boy’s fantasy of what it would be like to be an English gentleman, while Moore’s was essentially the fantasy of a slightly effete middle-class boy who dreams of one day winning a fight. It’s a comprehensive reinvention of the part.

That’s not something that can be achieved by accident. One shouldn’t, however, over-accentuate the lightness of the performance. Moore’s Bond is exactly as capable of rage and even sadism as his predecessor. The whimsy he brings to the part is an addition to, not a subtraction from, the character’s range.

Moore expanded Bond’s emotional palette in other ways too. His best onscreen performance is in For Your Eyes Only (1981), in which the then 53-year-old Moore gets to play a Bond seen grieving at his wife’s grave, lecturing allies on the futility of revenge ("When setting out for revenge, first dig two graves") and brightly turning down a much younger woman’s offer of sex with the phrase "Put your clothes on and I’ll buy you an ice cream". None of these are scenes you can begin to imagine Connery’s Bond pulling off.

Moore was not just a huge success as Bond; he remains, adjusted for inflation, the most financially successful lead actor the series has ever had. He was also successful in a way that guaranteed he would have successors. What he gave to the part by not imitating Connery, by not even hinting at Connery in his performance, was a licence to those who followed him to find their own way in the role. This, along with his continued popularity over twelve years in the role, is probably the only reason the series managed to survive the 1970s and EON finally running out of Ian Fleming novels to adapt for the screen.

Actors have received knighthoods for their craft for more than a century, but when Moore was knighted in 2003, there was some pushback. Moore was understandably seen as not being in the same category as an Alec Guinness or a Ralph Richardson. But the citation for Moore's knighthood indicated that it was for his decades of charity work with Unicef that he was being honoured. It’s yet another of the misconceptions, large and small, that aggregated around him.

Moore himself was always clear that it was the profile playing James Bond had given him that made his role with Unicef possible, let alone successful. When asked about pride in his charity work, he always responded that instead he felt frustration: frustration because, as with the UN’s iodine deficiency programme or Unicef’s work with children with landmine injuries, for example, there was always so much more work to be done than could be done.

It was an answer that, along with his energetic campaigning, at the age of 88, to ban the use of wild animals in circuses, pointed to the biggest misunderstanding of all. Moore was known for playing frivolous characters in over-the-top entertainments, and this led to him being perceived by many, even by those who enjoyed his work, as essentially trivial. Ironically, such an assumption reveals only the superficiality of their own reading. The jovial, wry interviewee Sir Roger Moore was, beneath that raised eyebrow, a profoundly serious man.
