
Long reads
10 April 2014

Why futurologists are always wrong – and why we should be sceptical of techno-utopians

From predicting AI within 20 years to forecasting mass starvation in the 1970s, those who foretell the future often come close to doomsday preachers.

By Bryan Appleyard

Image: Randy Mora

In his book The Future of the Mind, the excitable physicist and futurologist Michio Kaku mentions Darpa. This is America’s Defense Advanced Research Projects Agency, the body normally credited with creating, among other things, the internet. It gets Kaku in a foam of futurological excitement. “The only justification for its existence is . . .” he says, quoting Darpa’s strategic plan, “to ‘accelerate the future into being’ ”.

This isn’t quite right (and it certainly isn’t literate). What Darpa actually says it is doing is accelerating “that future into being”, the future in question being the specific requirements of military commanders. This makes more sense but is no more literate than Kaku’s version. Never mind; Kaku’s is a catchy phrase. It is not strictly meaningful – the future will arrive at its own pace, no matter how hard we press the accelerator – but we know what he is trying to mean. Technological projects from smartphones to missiles can, unlike the future, be accelerated and, in Kaku’s imagination, such projects are the future.

Meanwhile, over at the Googleplex, the search engine’s headquarters in Silicon Valley, Ray Kurzweil has a new job. He has been hired by Google to “work on new projects involving machine learning and language processing”.


For two reasons I found this appointment pretty surprising. First, I had declined to review Kurzweil’s recent book How to Create a Mind – the basis for Google’s decision to hire him – on the grounds that it was plainly silly, an opinion later supported by a sensationally excoriating review by the philosopher Colin McGinn in the New York Review of Books, which pointed out that Kurzweil knew, to a rough approximation, nothing about the subject. And, second, I am not sure a religious fanatic is quite the right man for the job.

OK, Kurzweil doesn’t say he is religious but, in reality, his belief system is structurally identical to that of the Southern hot gospellers who warn of the impending “Rapture”, the moment when the blessed will be taken up into paradise and the rest of us will be left to seek salvation in the turmoil of the Tribulation before Christ returns to announce the end of the world. Kurzweil’s idea of “the singularity” is the Rapture for geeks – in this case the blessed will create an intelligent computer that will give them eternal life either in their present bodies or by uploading them into itself. Like the Rapture, it is thought to be imminent. Kurzweil forecasts its arrival in 2045.

Kaku and Kurzweil are probably the most prominent futurologists in the world today. They are the heirs to a distinct tradition which, in the postwar world, has largely focused on space travel, computers, biology and, latterly, neuroscience.

Futurologists are almost always wrong. Indeed, Clive James invented a word – “Hermie” – to denote an inaccurate prediction by a futurologist. This was an ironic tribute to the cold war strategist and, in later life, pop futurologist Herman Kahn. It was slightly unfair, because Kahn made so many fairly obvious predictions – mobile phones and the like – that it was inevitable quite a few would be right.

Even poppier was Alvin Toffler, with his 1970 book Future Shock, which suggested that the pace of technological change would cause psychological breakdown and social paralysis, not an obvious feature of the Facebook generation. Most inaccurate of all was Paul R Ehrlich who, in The Population Bomb, predicted that hundreds of millions would die of starvation in the 1970s. Hunger, in fact, has since declined quite rapidly.

Perhaps the most significant inaccuracy concerned artificial intelligence (AI). In 1965 the polymath Herbert Simon predicted that “machines will be capable, within 20 years, of doing any work a man can do” and in 1967 the cognitive scientist Marvin Minsky announced that “within a generation . . . the problem of creating ‘artificial intelligence’ will substantially be solved”. Yet, in spite of all the hype and the dizzying increases in the power and speed of computers, we are nowhere near creating a thinking machine.

Such a machine is the basis of Kurzweil’s singularity, but futurologists seldom let the facts get in the way of a good prophecy. Or, if they must, they simply move on. The nightmarishly intractable problem of space travel has more or less killed that futurological category and the unexpected complexities of genetics have put that on the back burner for the moment, leaving neuroscientists to take on the prediction game. But futurology as a whole is in rude health despite all the setbacks.

Why? Because there’s money in it; money and faith. I don’t just mean the few millions to be made from book sales; nor do I mean the simple geek belief in gadgetry. And I certainly don’t mean the pallid, undefined, pop-song promises of politicians trying to turn our eyes from the present – Bill Clinton’s “Don’t stop thinking about tomorrow” and Tony Blair’s “Things can only get better”. No, I mean the billions involved in corporate destinies and the yearning for salvation from our human condition. The future has become a land-grab for Wall Street and for the more dubious hot gospellers who have plagued America since its inception and who are now preaching to the world.

Take the curious phenomenon of the Ted talk. Ted – Technology, Entertainment, Design – is a global lecture circuit propagating “ideas worth spreading”. It is huge. Half a billion people have watched the 1,600 Ted talks that are now online. Yet the talks are almost parochially American. Some are good but too many are blatant hard sells and quite a few are just daft. All of them lay claim to the future; this is another futurology land-grab, this time globalised and internet-enabled.

Benjamin Bratton, a professor of visual arts at the University of California, San Diego, has an astrophysicist friend who made a pitch to a potential donor of research funds. The pitch was excellent but he failed to get the money because, as the donor put it, “You know what, I’m gonna pass because I just don’t feel inspired . . . you should be more like Malcolm Gladwell.” Gladwellism – the hard sell of a big theme supported by dubious, incoherent but dramatically presented evidence – is the primary Ted style. Is this, wondered Bratton, the basis on which the future should be planned? To its credit, Ted had the good grace to let him give a virulently anti-Ted talk to make his case. “I submit,” he told the assembled geeks, “that astrophysics run on the model of American Idol is a recipe for civilisational disaster.”

Bratton is not anti-futurology like me; rather, he is against simple-minded futurology. He thinks the Ted style evades awkward complexities and evokes a future in which, somehow, everything will be changed by technology and yet the same. The geeks will still be living their laid-back California lifestyle because that will not be affected by the radical social and political implications of the very technology they plan to impose on societies and states. This is a naive, very local vision of heaven in which everybody drinks beer and plays baseball and the sun always shines.

The reality, as the revelations of the National Security Agency’s near-universal surveillance show, is that technology is just as likely to unleash hell as any other human enterprise. But the primary Ted faith is that the future is good simply because it is the future; not being the present or the past is seen as an intrinsic virtue.

Bratton, when I spoke to him, described some of the futures on offer as “anthrocidal” – indeed, Kurzweil’s singularity is often celebrated as the start of a “post-human” future. We are the only species that actively pursues and celebrates the possibility of its own extinction.

Bratton was also very clear about the religiosity that lies behind Tedspeak. “The eschatological theme within all this is deep within the American discourse, a positive and negative eschatology,” he said. “There are a lot of right-wing Christians who are obsessed with the Mark of the Beast. It’s all about the Antichrist . . . Maybe it’s more of a California thing – this messianic articulation of the future is deep within my culture, so maybe it is not so unusual to me.”

Bratton also speaks of “a sort of backwash up the channel back into academia”. His astrophysicist friend was judged by Ted/Gladwell values and found wanting. This suggests a solution to the futurologists’ problem of inaccuracy: they actually enforce rather than merely predict the future by rigging the entire game. It can’t work, but it could do severe damage to scientific work before it fails.

Perhaps even more important is the political and social damage that may be done by the future land-grab being pursued by the big internet companies. Google is the leading grabber simply because it needs to keep growing its primary product – online advertising, of which it already possesses a global monopoly. Eric Schmidt, having been displaced as chief executive, is now, as executive chairman, effectively in charge of global PR. His futurological book The New Digital Age, co-written with Jared Cohen, came decorated with approving quotes from Richard Branson, Henry Kissinger, Tony Blair and Bill Clinton, indicating that this is the officially approved future of the new elites, who seem, judging by the book’s contents, intent on their own destruction – oligocide rather than anthrocide.

For it is clear from The New Digital Age that politicians and even supposedly hip leaders in business will have very little say in what happens next. The people, of course, will have none. Basically, most of us will be locked in to necessary digital identities and, if we are not, we will be terrorist suspects. Privacy will become a quaint memory. “If you have something that you don’t want anyone to know,” Schmidt famously said in 2009, “maybe you shouldn’t be doing it [online] in the first place.” So Google elects itself supreme moral arbiter.

Tribalism in the new digital age will increase and “disliked communities” will find themselves marginalised. Nobody seems to have any oversight over anything. It is a hellish vision but the point, I think, is that it is all based on the assumption that companies such as Google will get what they want – absolute and unchallengeable access to information.

As the book came out, Larry Page, the co-founder of Google, unwisely revealed the underlying theme of this thinking in a casual conversation with journalists. “A law can’t be right,” he said, “if it’s 50 years old. Like, it’s before the internet.” He also suggested “we should set aside some small part of the world”, which would be free from regulation so that Googlers could get on with hardcore innovation. Above the law and with their own island state, the technocrats could rule the world with impunity. Peter Thiel, a co-founder of PayPal, is trying to make exactly that happen through the Seasteading Institute, which he funds and which aims to build floating cities in international waters. “An open frontier,” he calls it, “for experimenting with new ideas in government.” If you’re an optimist this is just mad stuff; if you’re a pessimist it is downright evil.

One last futurological, land-grabbing fad of the moment remains to be dealt with: neuroscience. It is certainly true that scanners, nanoprobes and supercomputers seem to be offering us a way to invade human consciousness, the final frontier of the scientific enterprise. Unfortunately, those leading us across this frontier are dangerously unclear about the meaning of the word “scientific”.

Neuroscientists now routinely make claims that are far beyond their competence, often prefaced by the words “We have found that . . .” The two most common of these claims are that the conscious self is an illusion and that there is no such thing as free will. “As a neuroscientist,” Professor Patrick Haggard of University College London has said, “you’ve got to be a determinist. There are physical laws, which the electrical and chemical events in the brain obey. Under identical circumstances, you couldn’t have done otherwise; there’s no ‘I’ which can say ‘I want to do otherwise’.”

The first of these claims is easily dismissed – if the self is an illusion, who is being deluded? The second has not been established scientifically – all the evidence on which the claim is made is either dubious or misinterpreted – nor could it be established, because none of the scientists seems to be fully aware of the complexities of definition involved. In any case, the self and free will are foundational elements of all our discourse and that includes science. Eliminate them from your life if you like but, by doing so, you place yourself outside human society. You will, if you are serious about this displacement, not be understood. You will, in short, be a zombie.

Yet neuroscience – as in Michio Kaku’s manic book of predictions – is now one of the dominant forms of futurological chatter. We are, it is said, on the verge of mapping, modelling and even replicating the human brain and, once we have done that, the mechanistic foundations of the mind will be exposed. Then we will be able to enhance, empower or (more likely) control the human world in its entirety. This way, I need hardly point out, madness lies.

The radicalism implied, though not yet imposed, by our current technologies is indeed as alarming to the sensitive and thoughtful as it is exciting to the geeks. Benjamin Bratton is right to describe some of it as anthrocidal; both in the form of “the singularity” and in some of the ideas coming from neuroscience, the death of the idea of the human being is involved. If so, it is hard to see why we should care: the welfare of a computer or the fate of a neuroscientifically specified zombie would not seem to be pressing matters. In any case, judging by past futurologies, none of these things is likely to happen.

What does matter is what our current futurologies say about the present. At one level, they say we are seriously deluded. As Bratton observes, the presentational style of Ted and of Gladwell involves embracing radical technologies while secretly believing that nothing about our own cherished ways of life will change; the geeks will still hang out, crying “Woo-hoo!” and chugging beer when the gadgets are unveiled.

At another level, futurology implies that we are unhappy in the present. Perhaps this is because the constant, enervating downpour of gadgets and the devices of the marketeers tell us that something better lies just around the next corner and, in our weakness, we believe. Or perhaps it was ever thus. In 1752, Dr Johnson mused that our obsession with the future may be an inevitable adjunct of the human mind. Like our attachment to the past, it is an expression of our inborn inability to live in – and be grateful for – the present.

“It seems,” he wrote, “to be the fate of man to seek all his consolations in futurity. The time present is seldom able to fill desire or imagination with immediate enjoyment, and we are forced to supply its deficiencies by recollection or anticipation.”

Bryan Appleyard is the author of “The Brain Is Wider Than the Sky: Why Simple Solutions Don’t Work in a Complex World” (Phoenix, £9.99)
