Former NS editor John Freeman reacts to JFK's death: The man we trusted

29 November 1963: "The shock and the grief are universal and so great. Emotions have poured out - and they have gilded the truth."

The most grievous assassination in modern history has transformed John Kennedy from an embattled president, deadlocked with a hostile and suspicious Congress, into the brightest legend of our time. It was inevitable. The shock and the grief are universal and so great. Emotions have poured out – and they have gilded the truth. Yet that too may be misleading, for the emotions were part of the truth; and if Kennedy is remembered along with Lincoln and FDR as one of the great presidents, it will be more because he captured the imagination of a whole generation in almost every corner of the world than because he succeeded in fulfilling the purposes to which he dedicated his presidency. 
His great achievement, for which the world outside America chiefly honours him this week, was his leadership of the western alliance. When he took over, we walked in the shadow of nuclear war. Two years and ten months later, the dialogue between the White House and the Kremlin has proceeded so far that no one can doubt the genuineness of Khrushchev’s dismay at the young President’s death. Yet he wrought this change without any surrender of vital interest, by strength and not by weakness. He persuaded Khrushchev that negotiations were practicable, because he was himself clear about what could be negotiated – and firm about what could not. The test-ban treaty and the hotline are the visible signs of a business relation between the Soviet bloc and the West, in which each side recognises the power of the other and the suicidal folly of pressing points of difference to the brink of war. The differences still exist; the Cold War goes on; errors of judgment by less sagacious men on either side can still plunge us all to catastrophe; there is no more than an agreement to disagree – but that, after all, is the essential prelude to an eventual harmony. 
Kennedy’s achievement in all this was not one-sided. Nuclear war would be as deadly to Russia as to the West, and Khrushchev has played his part. But few would deny that the initiative has lain most of the time with the White House or that Kennedy’s own qualities have been decisive. The three personal gifts which lifted him into the realm of international statesmanship were intellect, steadiness of nerve and the capacity to take decisions. Indeed, this week’s inevitable anxiety about the future is based not on half-baked guesses about President Johnson’s capacity or intelligence as a politician, but on the fact that the decision-making machine – largely extra-governmental – which Kennedy created proved so uniquely well-suited to the strategic demands of the Cold War. The doubt must exist whether President Johnson, operating through more normal political channels, will be able to match the speed, logic and certainty of his predecessor. For Kennedy’s decisions were his own. The professors, the soldiers, the computers, seldom the professional politicians, were detailed to provide the data and rehearse the arguments. The President listened, reflected, balanced the equation and, fortified by all that intellect and calculation could bring to bear, finally took the decision.
Naturally this method of government was unpopular on Capitol Hill, and the unpopularity was reflected in Kennedy’s inability to secure the legislation he needed to implement his domestic policies. This inability amounted to something like failure. Whether it stemmed fundamentally from a lack of profound conviction about liberal causes with which he was saddled by his 1960 campaign managers, or from the intellectual’s contempt for the log-rolling of the workaday politicians, or from over-caution about the electoral consequences of controversy, or from a constitutional inadequacy of Congress to live with the speed of modern decision-making will long be argued by American historians. What we can say this week is that, despite his visible achievement in foreign affairs, the quality of Kennedy’s presidency as a whole – apart from the noble and historic decision to stake the whole prestige of the presidency on his civil rights legislation – is arguable. 
His quality as a man is to me beyond argument. He brought to public life not only the hard assets of leadership, but the rarest capacity to illuminate ideas by the grace of his personality and the clarity of his speech. One can only guess, for instance, at the legislative outcome of his battle with Congress and his own party over civil rights. But one can be sure that individual American opinion about the cause of justice for the Negroes has been touched, as never since Lincoln, by the words he spoke. 
Perhaps his greatest achievement in the end was to turn the gaze of his own people towards some of the more distant goals of political action and to infuse his pragmatic programmes with the radiant light of tolerance, idealism and purpose. If so, the glossy wrappings of the New Frontier may be remembered as a permanent landmark in the evolution of American democracy.
“And so, my fellow Americans: ask not what your country can do for you – ask what you can do for your country. My fellow citizens of the world: ask not what America will do for you, but what together we can do for the freedom of man.” Those words struck the keynote of his inaugural address; they form a message which evokes a response in every radical heart. However limited his social achievement, his approach to politics was fundamentally a challenge to conservatism everywhere. That is why, with all our reservations about where his ultimate convictions lay and with all our disappointment at his comparative failure to make good the promise of 1960, the left in Britain admired and, when the chips were down, trusted him. He was the golden boy of the post-war world, and we mourn him as a friend.
This article was first published in the NS of 29 November 1963. It appears in “The New Statesman Century”, an anthology of some of the finest writing from the first 100 years of the NS, available in selected WHSmiths and online.
[Image: A wax likeness of former US president John Fitzgerald Kennedy in front of the town hall of Berlin's Schöneberg district, where he gave his famous "Ich bin ein Berliner" speech on 26 June 1963.]

This article first appeared in the 12 August 2013 issue of the New Statesman, What if JFK had lived?


How nature created consciousness – and our brains became minds

In From Bacteria to Bach and Back, Daniel C Dennett investigates the evolution of consciousness.

In the preface to his new book, the philosopher Daniel Dennett announces proudly that what we are about to read is “the sketch, the backbone, of the best scientific theory to date of how our minds came into existence”. By the end, the reader may consider it more scribble than spine – at least as far as an account of the origins of human consciousness goes. But this is still a superb book about evolution, engineering, information and design. It ranges from neuroscience to nesting birds, from computing theory to jazz, and there is something fascinating on every page.

The term “design” has a bad reputation in biology because it has been co-opted by creationists disguised as theorists of “intelligent design”. Nature is the blind watchmaker (in Richard Dawkins’s phrase), dumbly building remarkable structures through a process of random accretion and winnowing over vast spans of time. Nonetheless, Dennett argues stylishly, asking “design” questions about evolution shouldn’t be taboo, because “biology is reverse engineering”: asking what some phenomenon or structure is for is an excellent way to understand how it might have arisen.

Just as in nature there is design without a designer, so in many natural phenomena we can observe what Dennett calls “competence without comprehension”. Evolution does not understand nightingales, but it builds them; your immune system does not understand disease. Termites do not build their mounds according to blueprints, and yet the results are remarkably complex: reminiscent in one case, as Dennett notes, of Gaudí’s church the Sagrada Família. In general, evolution and its living products are saturated with competence without comprehension, with “unintelligent design”.

The question, therefore, is twofold. Why did “intelligent design” of the kind human beings exhibit – by building robotic cars or writing books – come about at all, if unintelligent design yields such impressive results? And how did the unintelligent-design process of evolution ever build intelligent designers like us in the first place? In sum, how did nature get from bacteria to Bach?

Dennett’s answer depends on memes – self-replicating units of cultural evolution, metaphorical viruses of the mind. Today we mostly use “meme” to mean something that is shared on social media, but in Richard Dawkins’s original formulation of the idea, a meme can be anything that is culturally transmitted and undergoes change: melodies, ideas, clothing fashions, ways of building pots, and so forth. Some might say that the only good example of a meme is the very idea of a meme, given that it has replicated efficiently over the years despite being of no use whatsoever to its hosts. (The biologist Stephen Jay Gould, for one, didn’t believe in memes.) But Dennett thinks that memes add something important to discussions of “cultural evolution” (a contested idea in its own right) that is not captured by established disciplines such as history or sociology.

The memes Dennett has in mind here are words: after all, they reproduce, with variation, in a changing environment (the mind of a host). Somehow, early vocalisations in our species became standardised as words. They acquired usefulness and meaning, and so, gradually, their use spread. Eventually, words became the tools that enabled our brains to reflect on what they were doing, thus bootstrapping themselves into full consciousness. The “meme invasion”, as Dennett puts it, “turned our brains into minds”. The idea that language had a critical role to play in the development of human consciousness is very plausible and not, in broad outline, new. The question is how much Dennett’s version leaves to explain.

Before the reader arrives at that crux, there are many useful philosophical interludes: on different senses of “why” (why as in “how come?” against why as in “what for?”), or in the “strange inversions of reasoning” offered by Darwin (the notion that competence does not require comprehension), Alan Turing (that a perfect computing machine need not know what arithmetic is) and David Hume (that causation is a projection of our minds and not something we perceive directly). Dennett suggests that the era of intelligent design may be coming to an end; after all, our best AIs, such as the AlphaGo program (which beat the human European champion of the boardgame Go 5-0 in a 2015 match), are these days created as learning systems that will teach themselves what to do. But our sunny and convivial host is not as worried as some about an imminent takeover by intelligent machines; the more pressing problem, he argues persuasively, is that we usually trust computerised systems to an extent they don’t deserve. His final call for critical thinking tools to be made widely available is timely and admirable. What remains puzzlingly vague to the end, however, is whether Dennett actually thinks human consciousness – the entire book’s explanandum – is real; and even what exactly he means by the term.

Dennett’s 1991 book, Consciousness Explained, seemed to some people to deny the existence of consciousness at all, so waggish critics retitled it Consciousness Explained Away. Yet it was never quite clear just what Dennett was claiming didn’t exist. In this new book, confusion persists, owing to his reluctance to define his terms. When he says “consciousness” he appears to mean reflective self-consciousness (I am aware that I am aware), whereas many other philosophers use “consciousness” to mean ordinary awareness, or experience. There ensues much sparring with straw men, as when he ridicules thinkers who assume that gorillas, say, have consciousness. They almost certainly don’t in his sense, and they almost certainly do in his opponents’ sense. (A gorilla, we may be pretty confident, has experience in the way that a volcano or a cloud does not.)

More unnecessary confusion, in which one begins to suspect Dennett takes a polemical delight, arises from his continued use of the term “illusion”. Consciousness, he has long said, is an illusion: we think we have it, but we don’t. But what is it that we are fooled into believing in? It can’t be experience itself: as the philosopher Galen Strawson has pointed out, the claim that I only seem to have experience presupposes that I really am having experience – the experience of there seeming to be something. And throughout this book, Dennett’s language implies that he thinks consciousness is real: he refers to “conscious thinking in H[omo] sapiens”, to people’s “private thoughts and experiences”, to our “proper minds, enculturated minds full of thinking tools”, and to “a ‘rich mental life’ in the sense of a conscious life like ours”.

The way in which this conscious life is allegedly illusory is finally explained in terms of a “user illusion”, such as the desktop on a computer operating system. We move files around on our screen desktop, but the way the computer works under the hood bears no relation to these pictorial metaphors. Similarly, Dennett writes, we think we are consistent “selves”, able to perceive the world as it is directly, and acting for rational reasons. But by far the bulk of what is going on in the brain is unconscious, low-level processing by neurons, to which we have no access. Therefore we are stuck at an “illusory” level, incapable of experiencing how our brains work.

This picture of our conscious mind is rather like Freud’s ego, precariously balanced atop a seething unconscious with an entirely different agenda. Dennett explains wonderfully what we now know, or at least compellingly theorise, about how much unconscious guessing, prediction and logical inference is done by our brains to produce even a very simple experience such as seeing a table. Still, to call our normal experience of things an “illusion” is, arguably, to privilege one level of explanation arbitrarily over another. If you ask me what is happening on my computer at the moment, I shall reply that I am writing a book review on a word processor. If I embarked instead on a description of electrical impulses running through the CPU, you would think I was being sarcastically obtuse. The normal answer is perfectly true. It’s also true that I am currently seeing my laptop screen even as this experience depends on innumerable neural processes of guessing and reconstruction.

The upshot is that, by the end of this brilliant book, the one thing that hasn’t been explained is consciousness. How does first-person experience – the experience you are having now, reading these words – arise from the electrochemical interactions of neurons? No one has even the beginnings of a plausible theory, which is why the question has been called the “Hard Problem”. Dennett’s story is that human consciousness arose because our brains were colonised by word-memes; but how did that do the trick? No explanation is forthcoming. Dennett likes to say the Hard Problem just doesn’t exist, but ignoring it won’t make it go away – even if, as his own book demonstrates, you can ignore it and still do a lot of deep and fascinating thinking about human beings and our place in nature.

Steven Poole’s books include “Rethink: the Surprising History of New Ideas” (Random House Books)

This article first appeared in the 16 February 2017 issue of the New Statesman, The New Times