Science, God, and the ultimate evolutionary question

Until science proves the origin of the very first cells, many will wheel out God as the default explanation.

No-one who has visited Richard Dawkins' website recently could have failed to notice the prominence given to an award of up to two million dollars. Unfortunately for most of us, nobody will be granted the funding unless they put together a proposal for scientific research into the origin of life on our planet.

It's hardly a surprise that the site should draw attention to the award. After all, however much we think that we know about evolution, science is far from providing a confident explanation of the origin of the very first cells from which all life evolved. Until this gap in scientific knowledge is filled, many believers will continue to resort to God as the default explanation. For some, it must have been God who planted the first living cells on the planet, before leaving the stage and letting evolution take over. For others, the fact that no-one can prove how life originated sounds the death knell for evolution itself but is music to the ears of those who believe in Adam and Eve.

But are they right? Is science incapable of explaining the emergence of the first cells from which all life originated without the need for God?

In 1953 the chemist Stanley Miller set up an experiment in the lab, intended to recreate what scientists call the earth's "primordial soup" as it was when life first appeared 3.5 billion years ago. He created a sealed environment comprising boiling water and electric probes to simulate the effect of lightning on some of the young planet's hot waters. Thrown into the mix were methane, ammonia and hydrogen, the gases believed to have been present on the early earth. The aim was to see whether anything related to life would form. Within a week, five amino acids had appeared in the water. This was a stunning result. After all, amino acids are the molecules which join up to form proteins inside living cells.

But to create proteins - and therefore life - amino acids must be strung together in a very specific order. And cells require DNA to do this. But how could something as complex as DNA have come into existence? Miller's experiment didn't answer that.

A possible explanation was found after a meteorite, slightly older than the earth itself, crashed down in Australia in 1969. Amazingly, one of the DNA bases was found inside the rock. Since the early earth was bombarded by meteorites for millions of years, this raises the tantalising possibility that DNA and RNA could have arrived here on meteorites around the time that life first appeared on the planet. This provides a partial explanation of how the amino acids could have developed into life.

But there are problems with the idea that life began in a Miller-like primordial soup. Analysis of ancient rocks has made it plain that by the time life appeared, the earth was no longer rich in methane, ammonia and hydrogen. Besides, any soup would have been thermodynamically flat: there was probably nothing to force the various molecules to react with each other, whether or not extraterrestrial DNA and RNA molecules were also present. And so far, scientists haven't been able to explain how the necessary molecules could have come together without a cell membrane.

But there is a different theory which addresses all these concerns.

It is well known that the continents have been drifting apart throughout the lifetime of the planet. This is due to the movement of tectonic plates below the oceans. As these plates move against each other, new rocks are exposed to the sea water, creating alkaline hydrothermal vents. The water chemically reacts with the rocks, releasing heat along with gases reminiscent of those in Miller's experiment. As a result, warm alkaline hydrothermal fluids percolate into the cold oceans and, near the vents, structures are created which look rather like stalagmites and which are riddled with tiny compartments. These compartments could have been ideal places for chemical compounds from the gases to concentrate and combine to form early life in a fairly enclosed environment.

Although the existence of these vents had been predicted decades ago, it wasn't until 2000 that one was discovered, at a site in the Atlantic Ocean which has been named Lost City. Scientists have analysed the cell-sized pores in the structures found there and concluded that they were almost ideal reaction vessels for producing the first life. What's more, the chemical imbalance between the sea water and the gases could have created an electrical charge which in turn may have driven the chemical reactions needed to kick-start the creation of life.

But as I mentioned earlier, it's not sufficient to work out how the first amino acids may have appeared. It's also necessary to explain how DNA could have come onto the scene. Unfortunately DNA can't evolve without proteins. And proteins can't evolve without DNA.

Many scientists believe that the answer lies in the RNA World theory. In 2007 it was discovered that nucleotides (and so RNA) could form in simulated vents. At around the same time a scientific paper was published which concluded that RNA may have developed inside mineral cells in the vents. The biochemist Nick Lane believes that once that had happened, RNA may have changed to DNA virtually spontaneously.

And so the hydrothermal vents theory provides a plausible account of how the first life could have formed on earth along with the DNA which was necessary to replicate it. But the theory certainly has difficulties. In fact, a similar theory based on a different type of vents, black smokers, is now generally given short shrift by the scientific community. Perhaps the hydrothermal vents theory will likewise come unstuck.

This is a difficult area of science. No doubt whoever receives that award will have to work hard to earn every cent.

Andrew Zak Williams has written for the Independent and the Humanist and is a contributor to Skeptic Magazine. His email address is: andrewbelief@gmail.com

How the Conservatives lost the argument over austerity

After repeatedly missing their deficit targets, the Tories can no longer present spending cuts as essential.

“The age of irresponsibility is giving way to the age of austerity,” declared David Cameron at the Conservatives' 2009 spring conference. Fear of spending cuts helped deny his party a majority a year later, but by 2015 the Tories claimed vindication. By framing austerity as unavoidable, they had trapped Labour in a political no man's land. Though voters did not relish cuts, polling consistently showed that they regarded them as necessary.

But only two years later, it is the Conservatives who appear trapped. An austerity-weary electorate has deprived them of their majority and the argument for fiscal restraint is growing weaker by the day. If cuts are the supposed rule, then the £1bn gifted to the Democratic Unionist Party is the most glaring exception. Michael Fallon, the Defence Secretary, sought to justify this largesse as "investment" into "the infrastructure of Northern Ireland" from "which everybody will benefit" – a classic Keynesian argument. But this did not, he hastened to add, mean the end of austerity: "Austerity is never over until we clear the deficit."

Britain's deficit (which peaked at £153bn in 2009-10) was the original and pre-eminent justification for cuts. Unless borrowing was largely eliminated by 2015, George Osborne warned, Britain's public finances would become unsustainable. But as time has passed, this argument has become progressively weaker. The UK has cumulatively borrowed £200bn more than promised by Osborne, yet apocalypse has been averted. With its low borrowing costs, an independent currency and a lender of last resort (the Bank of England), the UK is able to tolerate consistent deficits (borrowing stood at £46.6bn in 2016-17).

In defiance of all this, Osborne vowed to achieve a budget surplus by 2019-20 (a feat the UK has managed in just 12 of the years since 1948). The Tories set the target in the knowledge that promised tax cuts and spending increases would make it almost impossible to attain – but it was a political weapon with which to wound Labour.

Brexit, however, forced the Conservatives to disarm. Mindful of the economic instability to come, Philip Hammond postponed the surplus target to 2025 (15 years after Osborne's original goal). Britain's past and future borrowing levels mean the deficit has lost its political potency.

In these circumstances, it is unsurprising that voters are increasingly inclined to look for full-scale alternatives. Labour has remade itself as an unambiguously anti-austerity party and Britain's public realm is frayed from seven years of cuts: overburdened schools and hospitals, dilapidated infrastructure, potholed roads, uncollected bins.

Through a shift in rhetoric, Theresa May acknowledged voters' weariness with austerity, but her policies did not match her words. Though the pace of cuts was slowed, signature measures such as the public sector pay cap and the freeze in working-age benefits endured. May's cold insistence to an underpaid nurse that there was no "magic money tree" exemplified the Tories' predicament.

In his recent Mansion House speech, Philip Hammond conceded that voters were impatient "after seven years of hard slog” but vowed to "make anew the case" for austerity. But other Tories believe they need to stop fighting a losing battle. The Conservatives' historic strength has been their adaptability. Depending on circumstance, they have been Europhile and Eurosceptic, statist and laissez-faire, isolationist and interventionist. If the Tories are to retain power, yet another metamorphosis may be needed: from austerity to stimulus.

George Eaton is political editor of the New Statesman.