
Why futurologists are always wrong – and why we should be sceptical of techno-utopians

From predicting AI within 20 years to mass starvation in the 1970s, those who foretell the future often come closer to doomsday preachers than to forecasters.

Image: Randy Mora

In his book The Future of the Mind, the excitable physicist and futurologist Michio Kaku mentions Darpa. This is America’s Defence Advanced Research Projects Agency, the body normally credited with creating, among other things, the internet. It gets Kaku in a foam of futurological excitement. “The only justification for its existence is . . .” he says, quoting Darpa’s strategic plan, “to ‘accelerate the future into being’ ”.

This isn’t quite right (and it certainly isn’t literate). What Darpa actually says it is doing is accelerating “that future into being”, the future in question being the specific requirements of military commanders. This makes more sense but is no more literate than Kaku’s version. Never mind; Kaku’s is a catchy phrase. It is not strictly meaningful – the future will arrive at its own pace, no matter how hard we press the accelerator – but we know what he is trying to mean. Technological projects from smartphones to missiles can, unlike the future, be accelerated and, in Kaku’s imagination, such projects are the future.

Meanwhile, over at the Googleplex, the search engine’s headquarters in Silicon Valley, Ray Kurzweil has a new job. He has been hired by Google to “work on new projects involving machine learning and language processing”.

For two reasons I found this appointment pretty surprising. First, I had declined to review Kurzweil’s recent book How to Create a Mind – the basis for Google’s decision to hire him – on the grounds that it was plainly silly, an opinion then supported by a sensationally excoriating review by the philosopher Colin McGinn for the New York Review of Books which pointed out that Kurzweil knew, to a rough approximation, nothing about the subject. And, second, I am not sure a religious fanatic is quite the right man for the job.

OK, Kurzweil doesn’t say he is religious but, in reality, his belief system is structurally identical to that of the Southern hot gospellers who warn of the impending “Rapture”, the moment when the blessed will be taken up into paradise and the rest of us will be left to seek salvation in the turmoil of the Tribulation before Christ returns to announce the end of the world. Kurzweil’s idea of “the singularity” is the Rapture for geeks – in this case the blessed will create an intelligent computer that will give them eternal life either in their present bodies or by uploading them into itself. Like the Rapture, it is thought to be imminent. Kurzweil forecasts its arrival in 2045.

Kaku and Kurzweil are probably the most prominent futurologists in the world today. They are the heirs to a distinct tradition which, in the postwar world, has largely focused on space travel, computers, biology and, latterly, neuroscience.

Futurologists are almost always wrong. Indeed, Clive James invented a word – “Hermie” – to denote an inaccurate prediction by a futurologist. This was an ironic tribute to the cold war strategist and, in later life, pop futurologist Herman Kahn. It was slightly unfair, because Kahn made so many fairly obvious predictions – mobile phones and the like – that it was inevitable quite a few would be right.

Even poppier was Alvin Toffler, with his 1970 book Future Shock, which suggested that the pace of technological change would cause psychological breakdown and social paralysis, not an obvious feature of the Facebook generation. Most inaccurate of all was Paul R Ehrlich who, in The Population Bomb, predicted that hundreds of millions would die of starvation in the 1970s. Hunger, in fact, has since declined quite rapidly.

Perhaps the most significant inaccuracy concerned artificial intelligence (AI). In 1965 the polymath Herbert Simon predicted that “machines will be capable, within 20 years, of doing any work a man can do” and in 1967 the cognitive scientist Marvin Minsky announced that “within a generation . . . the problem of creating ‘artificial intelligence’ will substantially be solved”. Yet, in spite of all the hype and the dizzying increases in the power and speed of computers, we are nowhere near creating a thinking machine.

Such a machine is the basis of Kurzweil’s singularity, but futurologists seldom let the facts get in the way of a good prophecy. Or, if they must, they simply move on. The nightmarishly intractable problem of space travel has more or less killed that futurological category and the unexpected complexities of genetics have put that on the back burner for the moment, leaving neuroscientists to take on the prediction game. But futurology as a whole is in rude health despite all the setbacks.

Why? Because there’s money in it; money and faith. I don’t just mean the few millions to be made from book sales; nor do I mean the simple geek belief in gadgetry. And I certainly don’t mean the pallid, undefined, pop-song promises of politicians trying to turn our eyes from the present – Bill Clinton’s “Don’t stop thinking about tomorrow” and Tony Blair’s “Things can only get better”. No, I mean the billions involved in corporate destinies and the yearning for salvation from our human condition. The future has become a land-grab for Wall Street and for the more dubious hot gospellers who have plagued America since its inception and who are now preaching to the world.

Take the curious phenomenon of the Ted talk. Ted – Technology, Entertainment, Design – is a global lecture circuit propagating “ideas worth spreading”. It is huge. Half a billion people have watched the 1,600 Ted talks that are now online. Yet the talks are almost parochially American. Some are good but too many are blatant hard sells and quite a few are just daft. All of them lay claim to the future; this is another futurology land-grab, this time globalised and internet-enabled.

Benjamin Bratton, a professor of visual arts at the University of California, San Diego, has an astrophysicist friend who made a pitch to a potential donor of research funds. The pitch was excellent but he failed to get the money because, as the donor put it, “You know what, I’m gonna pass because I just don’t feel inspired . . . you should be more like Malcolm Gladwell.” Gladwellism – the hard sell of a big theme supported by dubious, incoherent but dramatically presented evidence – is the primary Ted style. Is this, wondered Bratton, the basis on which the future should be planned? To its credit, Ted had the good grace to let him give a virulently anti-Ted talk to make his case. “I submit,” he told the assembled geeks, “that astrophysics run on the model of American Idol is a recipe for civilisational disaster.”

Bratton is not anti-futurology like me; rather, he is against simple-minded futurology. He thinks the Ted style evades awkward complexities and evokes a future in which, somehow, everything will be changed by technology and yet the same. The geeks will still be living their laid-back California lifestyle because that will not be affected by the radical social and political implications of the very technology they plan to impose on societies and states. This is a naive, very local vision of heaven in which everybody drinks beer and plays baseball and the sun always shines.

The reality, as the revelations of the National Security Agency’s near-universal surveillance show, is that technology is just as likely to unleash hell as any other human enterprise. But the primary Ted faith is that the future is good simply because it is the future; not being the present or the past is seen as an intrinsic virtue.

Bratton, when I spoke to him, described some of the futures on offer as “anthrocidal” – indeed, Kurzweil’s singularity is often celebrated as the start of a “post-human” future. We are the only species that actively pursues and celebrates the possibility of its own extinction.

Bratton was also very clear about the religiosity that lies behind Tedspeak. “The eschatological theme within all this is deep within the American discourse, a positive and negative eschatology,” he said. “There are a lot of right-wing Christians who are obsessed with the Mark of the Beast. It’s all about the Antichrist . . . Maybe it’s more of a California thing – this messianic articulation of the future is deep within my culture, so maybe it is not so unusual to me.”

Bratton also speaks of “a sort of backwash up the channel back into academia”. His astrophysicist friend was judged by Ted/Gladwell values and found wanting. This suggests a solution to the futurologists’ problem of inaccuracy: they actually enforce rather than merely predict the future by rigging the entire game. It can’t work, but it could do severe damage to scientific work before it fails.

Perhaps even more important is the political and social damage that may be done by the future land-grab being pursued by the big internet companies. Google is the leading grabber simply because it needs to keep growing its primary product – online advertising, over which it already holds a global monopoly. Eric Schmidt, having been displaced as chief executive, is now, as executive chairman, effectively in charge of global PR. His futurological book The New Digital Age, co-written with Jared Cohen, came decorated with approving quotes from Richard Branson, Henry Kissinger, Tony Blair and Bill Clinton, indicating that this is the officially approved future of the new elites, who seem, judging by the book’s contents, intent on their own destruction – oligocide rather than anthrocide.

For it is clear from The New Digital Age that politicians and even supposedly hip leaders in business will have very little say in what happens next. The people, of course, will have none. Basically, most of us will be locked in to necessary digital identities and, if we are not, we will be terrorist suspects. Privacy will become a quaint memory. “If you have something that you don’t want anyone to know,” Schmidt famously said in 2009, “maybe you shouldn’t be doing it [online] in the first place.” So Google elects itself supreme moral arbiter.

Tribalism in the new digital age will increase and “disliked communities” will find themselves marginalised. Nobody seems to have any oversight over anything. It is a hellish vision but the point, I think, is that it is all based on the assumption that companies such as Google will get what they want – absolute and unchallengeable access to information.

As the book came out, Larry Page, the co-founder of Google, unwisely revealed the underlying theme of this thinking in a casual conversation with journalists. “A law can’t be right,” he said, “if it’s 50 years old. Like, it’s before the internet.” He also suggested “we should set aside some small part of the world”, which would be free from regulation so that Googlers could get on with hardcore innovation. Above the law and with their own island state, the technocrats could rule the world with impunity. Peter Thiel, a co-founder of PayPal, is trying to make exactly that happen with his Seasteading Institute, which aims to build floating cities in international waters. “An open frontier,” he calls it, “for experimenting with new ideas in government.” If you’re an optimist this is just mad stuff; if you’re a pessimist it is downright evil.

One last futurological, land-grabbing fad of the moment remains to be dealt with: neuroscience. It is certainly true that scanners, nanoprobes and supercomputers seem to be offering us a way to invade human consciousness, the final frontier of the scientific enterprise. Unfortunately, those leading us across this frontier are dangerously unclear about the meaning of the word “scientific”.

Neuroscientists now routinely make claims that are far beyond their competence, often prefaced by the words “We have found that . . .” The two most common of these claims are that the conscious self is an illusion and that there is no such thing as free will. “As a neuroscientist,” Professor Patrick Haggard of University College London has said, “you’ve got to be a determinist. There are physical laws, which the electrical and chemical events in the brain obey. Under identical circumstances, you couldn’t have done otherwise; there’s no ‘I’ which can say ‘I want to do otherwise’.”

The first of these claims is easily dismissed – if the self is an illusion, who is being deluded? The second has not been established scientifically – all the evidence on which the claim is made is either dubious or misinterpreted – nor could it be established, because none of the scientists seems to be fully aware of the complexities of definition involved. In any case, the self and free will are foundational elements of all our discourse and that includes science. Eliminate them from your life if you like but, by doing so, you place yourself outside human society. You will, if you are serious about this displacement, not be understood. You will, in short, be a zombie.

Yet neuroscience – as in Michio Kaku’s manic book of predictions – is now one of the dominant forms of futurological chatter. We are, it is said, on the verge of mapping, modelling and even replicating the human brain and, once we have done that, the mechanistic foundations of the mind will be exposed. Then we will be able to enhance, empower or (more likely) control the human world in its entirety. This way, I need hardly point out, madness lies.

The radicalism implied, though not yet imposed, by our current technologies is indeed as alarming to the sensitive and thoughtful as it is exciting to the geeks. Benjamin Bratton is right to describe some of it as anthrocidal; both in the form of “the singularity” and in some of the ideas coming from neuroscience, the death of the idea of the human being is involved. If so, it is hard to see why we should care: the welfare of a computer or the fate of a neuroscientifically specified zombie would not seem to be pressing matters. In any case, judging by past futurologies, none of these things is likely to happen.

What does matter is what our current futurologies say about the present. At one level, they say we are seriously deluded. As Bratton observes, the presentational style of Ted and of Gladwell involves embracing radical technologies while secretly believing that nothing about our own cherished ways of life will change; the geeks will still hang out, crying “Woo-hoo!” and chugging beer when the gadgets are unveiled.

At another level, futurology implies that we are unhappy in the present. Perhaps this is because the constant, enervating downpour of gadgets and the devices of the marketeers tell us that something better lies just around the next corner and, in our weakness, we believe. Or perhaps it was ever thus. In 1752, Dr Johnson mused that our obsession with the future may be an inevitable adjunct of the human mind. Like our attachment to the past, it is an expression of our inborn inability to live in – and be grateful for – the present.

“It seems,” he wrote, “to be the fate of man to seek all his consolations in futurity. The time present is seldom able to fill desire or imagination with immediate enjoyment, and we are forced to supply its deficiencies by recollection or anticipation.”

Bryan Appleyard is the author of “The Brain Is Wider Than the Sky: Why Simple Solutions Don’t Work in a Complex World” (Phoenix, £9.99)


Sunjeev Sahota’s The Year of the Runaways: a subtle study of “economic migration”

Sahota’s Man Booker-shortlisted novel goes to places we would all rather not think about.

This summer’s crisis has reinforced the distinction that is often made between refugees, who deserve sanctuary because they are fleeing from conflict, and “economic migrants”, those coming to Europe in pursuit of “the good life”, who must be repelled at any cost. The entire bureaucratic and punitive capacity of our immigration system is pitted against these ne’er-do-wells and their impudent aspirations.

Sunjeev Sahota’s fine second novel, The Year of the Runaways, now shortlisted for the Man Booker Prize, takes a closer look at “economic migration”. Why do people – many of them educated, from loving families in peaceful communities – leave their old lives behind and come to Britain? Are they fleeing desperate circumstances or are they on the make? When they arrive here, do they find what they were looking for? Should we welcome them, or try to persuade them to stay at home? The book illuminates all of these questions while, much to its credit, offering no simple answers.

Sahota interweaves the stories of three people whose reasons for emigrating are as individual as they are. Both Avtar and Randeep are from Indian Sikh families that might be characterised as lower-middle-class. Avtar’s father has his own small business – a shawl shop – and Randeep’s father works for the government. Both boys are educated and Avtar, in particular, is smart and motivated. But with employment hard to come by and no social security net to fall back on, it doesn’t take much to make leaving the country seem like the only option. Avtar loses his job, his father’s business is failing and he has high hopes of earning enough to marry Lakhpreet, his girlfriend-on-the-sly. Randeep’s family’s finances fall apart after his father has a psychological breakdown; their only hope of maintaining a respectable lifestyle is for their eldest son to take his chances abroad.

For Tochi, the situation is very different. He is what used to be called an “untouchable” and, although people now use euphemisms (“scheduled”, or chamaar), the taboo remains as strong as ever. He comes to Britain not so much for financial reasons – although he is the poorest of the lot – but to escape the prejudice that killed his father, mother and pregnant sister.

Tying these disparate stories together is the book’s most intriguing character, Narinder, a British Sikh woman who comes to believe that it is her spiritual calling to rescue a desperate Indian by “visa marriage”. Narinder’s progress, from the very limited horizons for an obedient young woman to a greater sense of herself as an active participant in her destiny, reminded me of Nazneen, the protagonist in Monica Ali’s Brick Lane. But Narinder is a more thoughtful character and here the Hollywood-style journey of personal liberation is tempered by a recognition of the powerful bonds of tradition and family.

Once in Britain, Avtar, Randeep and Tochi enter a world of gangmasters, slum accommodation and zero job security, with an ever-present fear of “raids” by immigration officers. They work in fried chicken shops, down sewers, on building sites and cleaning nightclubs. Health care is off-limits for fear of immigration checks. Food is basic and the only charity comes from the gurdwara, or Sikh temple, which provides help in emergencies.

Avtar and Randeep struggle to send money back home while living in poverty and squalor that their families could barely imagine (at one point, Randeep notes with understandable bitterness that his mother has used his hard-earned contributions to buy herself a string of pearls). In the meantime, their desperation leads them to increasingly morally repellent behaviour, from selfishness to stealing and worse. Even if they do eventually find a measure of economic stability in Britain, they have done so at the cost of their better selves.

It has been pointed out that the novels on the Man Booker shortlist this year are even more depressing than usual and The Year of the Runaways certainly won’t have raised the laugh count. At times I had to put it down for a while, overwhelmed by tragedy after tragedy. It was the quality of Sahota’s prose and perceptions that brought me back. He is a wonderfully subtle writer who makes what he leaves unsaid as important as the words on the page. A wise and compassionate observer of humanity, he has gone to some dark places – places we would all rather not think about – to bring us this book. Whether we are prepared to extend a measure of his wisdom and compassion to real immigrants, in the real world, is another question.

“The Year of the Runaways” by Sunjeev Sahota is published by Picador (480pp, £14.99)

Alice O'Keeffe is an award-winning journalist and former arts editor of the New Statesman. She now works as a freelance writer and looks after two young children. You can find her on Twitter as @AliceOKeeffe.

This article first appeared in the 08 October 2015 issue of the New Statesman, Putin vs Isis


What Jeremy Corbyn can learn from Orwell

Corbyn’s ideas may echo George Orwell’s – but they’d need Orwell’s Britain to work. It’s time Corbyn accepted the British as they are today.

All Labour Party leaderships since 1900 have offered themselves as “new”, but Tony Blair’s succession in 1994 triggered a break with the past so ruthless that the Labour leadership virtually declared war on the party. Now it is party members’ turn and they, for now at any rate, think that real Labour is Jeremy.

To Keir Hardie, real Labour had been a trade union lobby expounding Fellowship. To the Webbs, real Labour was “common ownership” by the best means available. Sidney’s Clause Four (adopted 1918) left open what that might be. In the 1920s, the Christian Socialist R H Tawney stitched Equality into the banner, but during the Depression young intellectuals such as Evan Durbin and Hugh Gaitskell designated Planning as Labour’s modern mission. After the Second World War, Clement Attlee followed the miners (and the London Passenger Transport Board) into Nationalisation. Harold Wilson tried to inject Science and Technology into the mix but everything after that was an attempt to move Labour away from state-regulated markets and in the direction of market-regulated states.

What made the recent leadership contest so alarming was how broken was the intellectual tradition. None of the candidates made anything of a long history of thinking about the relationship between socialism and what the people want. Yvette Cooper wanted to go over the numbers; only they were the wrong numbers. Andy Burnham twisted and turned. Liz Kendall based her bid on two words: “Have me.” Only Jeremy Corbyn seemed to have any kind of Labour narrative to tell and, of course, ever the rebel, he was not responsible for any of it. His conference address in Brighton was little more than the notes of a street-corner campaigner to a small crowd.

Given the paucity of thinking, and this being an English party for now, it is only a matter of time before George Orwell is brought in to see how Jeremy measures up. In fact, it’s happened already. Rafael Behr in the Guardian and Nick Cohen in the Spectator both see him as the kind of hard-left intellectual Orwell dreaded, while Charles Cooke in the National Review and Jason Cowley in the New Statesman joined unlikely fashion forces to take a side-look at Jeremy’s dreadful dress sense – to Orwell, a sure sign of a socialist. Cooke thought he looked like a “burned-out geography teacher at a third-rate comprehensive”. Cowley thought he looked like a red-brick university sociology lecturer circa 1978. Fair enough. He does. But there is more. Being a middle-class teetotal vegetarian bicycling socialistic feministic atheistic metropolitan anti-racist republican nice guy, with allotment and “squashily pacifist” leanings to match, clearly puts him in the land of the cranks as described by Orwell in The Road to Wigan Pier (1937) – one of “that dreary tribe of high-minded women and sandal-wearers and bearded fruit-juice drinkers who come flocking towards the smell of ‘progress’ like bluebottles to a dead cat”. And though Corbyn, as “a fully fledged, fully bearded, unabashed socialist” (Huffington Post), might make all true Orwellians twitch, he really made their day when he refused to sing the National Anthem. Orwell cited precisely that (see “The Lion and the Unicorn”, 1941) as an example of the distance between left-wing intellectuals and the people. It seemed that, by standing there, mouth shut, Comrade Corbyn didn’t just cut his wrists, he lay down full length in the coffin and pulled the lid shut.


Trouble is, this line of attack not only misrepresents the Labour leader, it misrepresents Orwell. For the great man was not as unflinchingly straight and true as some people think. It is impossible, for instance, to think of Orwell singing “God Save the King”, because he, too, was one of that “dreary tribe” of London lefties, and even when he joined Labour he remained ever the rebel. As for Corbyn, for a start, he is not badly dressed. He just doesn’t look like Chuka or Tristram. He may look like a threadbare schoolteacher, but Orwell was one twice over. Orwell was never a vegetarian or a teetotaller, but, like Corbyn, neither was he interested in fancy food (or drink), he kept an allotment, drove a motorbike, bicycled, cared about the poor, cared about the environment, loathed the empire, came close to pacifism at one point, and opposed war with Germany well past the time when it was reasonable to do so.

In Orwell’s thinking about socialism, for too long his main reference point was the London Marxist left. Not only did he make speeches in favour of revolutions, he took part in one with a gun in his hand. Orwell was far more interested, as Corbyn has been far more interested, in speaking truth to power than in holding office. His loyalty was to the movement, or at least the idea of the movement, not to MPs or the front bench, which he rarely mentioned. There is nothing in Corbyn’s position that would have shocked Orwell and, should they have met, there’d have been much to talk about: belief in public ownership and non-economic values, confidence in the state’s ability to make life better, progressive taxation, national health, state education, social care, anti-socially useless banking, anti-colonialism and a whole lot of other anti-isms besides. It’s hard to be sure what Orwell’s position would have been on Trident and immigration. Not Corbyn’s, I suspect. He was not as alert to feminism as he might have been but equally, few men try to write novels from a woman’s point of view and all Orwellians recognise that Julia is the dark hero of Nineteen Eighty-Four. In truth they are both austere types, not in it for themselves and not on anyone else’s expense account either. Corbyn won the leadership because this shone through from the very beginning. He came across as unaffected and straightforward – much as Orwell tried to be in his writing.

Except, as powerfully expressed in these pages by John Gray, Corbyn’s politics were made for another world. What sort of world would he need? First off, he’d need a regulated labour market: regulated by the state in partnership with a labour movement sensitive to what people wanted and experienced in trying to provide it. He would also need capital controls, a manufacturing base capable of building the new investment with Keynesian payback, an efficient and motivated Inland Revenue, a widespread public-service ethos that sees the country as an asset, not a market, and an overwhelming democratic mandate to get things done. In other words, Corbyn needs Orwell’s Britain – not this one – and at the very least, if he can’t have that, he needs the freedom to act that the European Commission forbids.

There’s another problem. Orwell did not trust left-wing intellectuals and spent half his life trying to work out their motivations as a class who spoke for the people, went in search of the people, and praised the people, but did not know them or believe in them. True, Corbyn says he wants to be open and inclusive, but we know he can’t possibly mean it when he says it will be the party, not him or the PLP, that will decide policy, just as we knew it couldn’t possibly be true when he said he’d turn PMQs into the People’s Question Time. Jeremy hasn’t changed his mind in forty years, appears to have great difficulty (unlike Tony Benn) in fusing socialism to national identity or experience (Hardie, Ben Okri and Maya Angelou were bolted on to his Brighton speech) and seems to think that not being happy with what you are given somehow captures the historic essence of socialism (rather than its opposite).

Granted, not thinking outside the circle is an inherent fault of the sectarian left but some of our most prominent left-wing journalists have it, too. Working-class support for nationalisation? Good. Right answer! Working-class opposition to benefit scroungers and further mass immigration? Bad. Wrong answer! Would you like to try again? In his essay “In Defence of Comrade Zilliacus” (1947) Orwell reckoned that left-wing intellectuals saw only what they wanted to see. For all their talk of representing the people, they hated the masses. “What they are frightened of is the prevailing opinion within their own group . . . there is always an orthodoxy, a parrot-cry . . .”

The game is hard and he may go down in a welter of knives, yet Corbyn still has time. He may go on making the same speech – on the benefits of apple pie to apple growers – but at some point he will have to drop the wish-list and get on the side of the British people as they are, and live with that, and build into it. Only the nation state can even begin to do the things he wants to do. The quicker he gets that, the quicker we can see if the latest incarnation of new Labour has a future.

Robert Colls is the author of “George Orwell: English Rebel” (Oxford University Press)

This article first appeared in the 08 October 2015 issue of the New Statesman, Putin vs Isis