Given the amount of interest and comment that my profile of Sam Harris has attracted, I thought it’d be useful to post the complete and unedited transcript of my conversation with him. The interview took place on 11 April, at the headquarters of Random House in London.
I’m particularly interested in the relationship between science and religion, and obviously it’s an enduring preoccupation of yours. You’ll know that Martin Rees was just awarded and accepted the Templeton Prize, and you have a bit to say about Templeton in your new book, The Moral Landscape. What do you make of that?
Well, it seems to be a cagey and successful choice from their point of view. He’s certainly not who you’d expect to be shilling for the cause. To my knowledge he is on the record as being a non-believer, but is, to my eye, too politic for his own good, or for our common good. But that allowed him to accept the prize without any qualms. I’ve seen one interview in the aftermath of his acceptance where he seemed quite tongue-tied in making sense of it. He thinks – as a statement of political good will – that science should not be in the business of criticising religion, and that scientists can do their job perfectly happily without ever coming up against some zero-sum contest with religion. I think that’s fundamentally untrue, but many scientists hope it’s true and act as though it is.
I guess the reason you think that is that for you, despite what certain secularists might say, religion involves making factual claims about the nature of reality.
Yeah – I just think that’s indisputable, apart from the fact that you can get many people who claim to be religious but who, when you push, are loath to make any claims about what they actually believe. So there are many believers who are attached to the culture; they’re attached to the buildings; they’re attached to the art; they want to meet with that particular group – and yet they spend almost no time at all thinking about what, if anything, is true in the doctrine. That, I would argue, is just not really religion. Every religion contains propositional claims about certain events that happened in history and certain events that will happen in the future.
One reply to that might be to say that that’s simply a stipulative definition.
Well, there’s no other honest reading of the books. Religion may be too broad a category, but if you take what religion means in the West – Judaism, Christianity or Islam – we’re talking about some books. The only reason anyone can wake up in the morning thinking that Jesus even existed is because we have the New Testament, right? So you look at the New Testament. It makes a variety of claims that are by definition at odds with what we know to be scientifically plausible. And if you’re going to make the move of saying “well, none of these are really claims, this is just a story, this is just literature,” then you’re reading the New Testament the way we read the Iliad and the Odyssey, and then you have no religion of Christianity; you have, at best, mythology. You have art – which is what I think you should have; this is how we should read these books. And certainly some parts of the Bible should qualify as great literature.
In fact, Richard Dawkins wrote a piece for the New Statesman at Christmas praising the King James Bible precisely as a work of literature.
And we would have no problem if everyone read these books the way we all read Shakespeare. There are no wars being fought over rival interpretations of King Lear. First of all, there is simply no version of Islam that comes anywhere close to the true moderation we see in Christianity and Judaism in the West. There’s no version of Islam wherein you can say: “It’s just a book written by people; we just happen to love the tradition and value our identity with it”. That’s just a non-sequitur in a Muslim context. I think in America it certainly is in a Christian context as well.
Let’s talk a bit about the central argument of this book and then come back to religion. Your fundamental claim is that moral questions are in fact questions about the well-being of conscious creatures, and it follows from that that morality can be scientifically grounded. You go on to make another claim, which I think is equally controversial, but I wonder whether it’s more precarious, which is that cultural variations in conceptions of human flourishing are themselves rooted in the human brain – is that right?
Well they have to be realised in the human brain. If I speak a sentence to you and you understand it, or you remember it, or it moves you in any way at all – that is to say the cash value of that sentence is in some change in the state of your brain. So if there is, let’s say, a sense of honour in Arab culture that you couldn’t possibly have if you were raised in Britain, and that sense of honour leads you to have completely different emotional responses to loss of face, say; if all that is true and culture is the lever that is responsible for the change between people, that still is in the end a statement about people’s brains. The brain has got to be doing it. Your thoughts, emotions, sensations, memories, perceptions – these are facts about the brain.
The brain is plastic?
Yeah. And there’s no question that that is true – culture is just one way of describing all of the environmental intrusions into a person’s nervous system, a disproportionate number of which come very early in life. There’s the culture of your birth – you’re getting this, quite literally, with your mother’s milk – and it is affecting your brain. No question.
As I understand it, you think that a preoccupation with the fact of cultural variation distorts enquiries into the nature of morality. You’re thinking, in particular, about evolutionary scientists – you talk about Jonathan Haidt and we might also mention Marc Hauser. They start from the fact of cultural variation, and look for the trans-cultural, universal principles of “moral grammar”. But you think that’s the wrong way to go, right?
I think there are universal principles that we should want to understand, but that are not necessarily good for us. We could recognise universal propensities which current cultures can’t fully eradicate, which we would want to eradicate if we could. Let’s say, a tendency for tribal violence. Or racism. Let’s say all cultures are, at bottom, slightly xenophobic, or greatly xenophobic, and there are just degrees of the problem. I think our only reasonable goal now is to try and build a global civilisation that can allow 9 billion people ultimately to flourish, so xenophobia is something that we want to get rid of. So the question for science is: what’s the optimum way to raise children, create political institutions and to cooperate with one another so as to mitigate the congenital problem we all have with xenophobia, regardless of culture?
The point of interest for me is what happens the moment you grant the well-being of conscious creatures – in this case human beings – is tied to truths about the way the universe is. Then you have to grant that there are going to be right and wrong ways to navigate this space of possible experience. Then there will be cultures that will, by any reasonable definition, look pathological, because you could in principle find a culture that was worse than any other; you could rank order all of your values, and you could find a culture that was just not the best culture for maximising any of those values.
And that’s an impolitic thing to say; that’s the antithesis of multiculturalism. But we should be aware: as a default setting, respect for other cultures and tolerance of diversity I think is a very wise principle. We’ve suffered a lot based on the opposite orientation. Tolerance, openness to argument, openness to self-doubt, willingness to see other people’s points of view – these are very liberal and enlightened values that people are right to hold, but we can’t allow them to delude us to the point where we can’t recognise people who are needlessly perpetrating human misery.
And on what grounds do we defend those values of tolerance and openness?
Well, I think we defend them as the most plausible bases for human flourishing. So free speech – take these recent examples: the controversy where someone burns a Quran in the United States and then that precipitates riots in Afghanistan and people are murdered. It seems to me that free speech has to win there – caring more about the Quran than human life is the pathological part, not the burning (though it’s rude and almost certainly unnecessary, and the person who did it was himself a religious maniac of a different flavour). We’re right to say that a culture that can’t tolerate free speech is… there are a wide range of positive human experiences that are not available in that culture. And we’re right to want those experiences.
But that’s an empirical claim for you, isn’t it? You would say that it’s just a matter of fact that human lives go better rather than worse in cultures where openness and tolerance obtain.
Yeah, by any reasonable definition of better. You could give me an unreasonable definition of better. You could say: “well, better is just that there are fewer people on the streets. That’s my main value. I just want to look out and see the empty sidewalks”. Then North Korea might win the rank ordering of best cultures, but that’s just not a reasonable definition. You really have to keep pushing on someone’s stated value. I think sensible people are going to converge on core values. There are cultures that strike me as truly pathological in that the majority of people converge on things that, I think, [people] clearly shouldn’t converge on. I think the notion of honour, and the view of women as being the property of the men in their lives, is something that we see throughout the Muslim world – though not exclusively there – and is something that should look problematic to us, and it does, and we’re right to see it that way. We don’t have to apologise for that. And we certainly don’t have to concede that it’s just our preference versus their preference and that there’s no way one preference can trump another.
Kwame Anthony Appiah wrote a book recently trying to recuperate the notion of honour. But you think it’s unrecoverable, morally speaking?
Well, unfortunately I haven’t read his book yet, though I’ve heard him talk. I think there may be quite healthy uses of honour, aspects of honour, that we have lost and we’re suffering as a result of that loss, but that again is an empirical claim. It’s a claim about what human life is like given these differences – and the problem we’ll always run into is that we’re talking about counterfactual worlds; we don’t know what our life would be like if it were different. And it’s hard to run the experiment. You can make changes to yourself, or to your society, but we don’t know what it would have been like had you not made those changes. But that’s a pragmatic problem. Those kinds of issues don’t nullify the claim I’m making about there being a right answer.
As you just said, you think a degenerate form of liberal toleration has hobbled the West in what you call its “generational war against radical Islam”. But of course, taking the side that you do in that conflict puts you on the same side, in certain cases, as Christian religious fundamentalists. I wonder how delicate you find the politics of that, and whether it makes you uncomfortable?
Yeah, it’s very inconvenient, certainly. And that’s what worries me. It scares me. That’s really one of the main motivations for writing the current book. I’m worried that the smartest, most secular people – and therefore in my eye, the people who should be most clear-headed in the face of religious evil – are the people who have lost their moral clarity and their moral courage in the face of religious evil.
I’m very worried that we could wake up in a world where the only people who are clear about Islam are religious demagogues of the opposite camp, and to some degree that’s true in America. The liberal discourse about Islam in the United States is scarily detached from the reality of the doctrine, and there are many so-called moderate Muslims in the US who cynically manipulate that wishful thinking. There are groups like CAIR, the Council on American-Islamic Relations, that are, to my eye, just stealth Islamist PR firms. There are [only] glimpses of this because it’s hard to really expose duplicity. But the way Ayaan Hirsi Ali has been treated in America by the liberal establishment is, I think, scandalously awry in moral terms. To not recognise her as a success story – an Enlightenment success story – someone who came out of a circumstance of true religious oppression and, to an astonishing degree, equipped herself with the tools of civilisation and is now bearing witness to what she escaped … She is in many cases vilified by liberals as a right-wing demagogue and a racist …
Are you arguing, then, not only that there are, just as a matter of fact, no “moderate Muslims”, but there could in principle be no such thing as “moderate” Islam?
Well, no. There could, but it would be as self-contradictory as moderate Christianity is now. But as a sociological fact, we could have it. And I’m not saying there are no moderate Muslims – there are effectively millions and millions and millions of moderate Muslims in the world. Who knows how many Muslims don’t really read the Quran with any attention, don’t think about whether apostates should be killed (if you ask them, they say “no, of course not, that’d be horrible”) – and yet one important distinction now is that [there is] no viable school of Islam that is analogous to, and [as] benign as, Reform Judaism, say. The penalty for apostasy is death, and the best you can get is to find people who don’t care to enforce it, or who think that its enforcement must come after some laborious process that no one is willing to engage in. But you can’t find a school of Islam – I hope I’m wrong about this, but as far as I can tell this is true – you can’t find a school of Islam that, based on its theology, [believes that] there should be no penalty for apostasy. The other problem is that the Quran is a much smaller and more unified book. Christians and Jews, based on the sheer size and self-contradictory nature of the Bible, are able to cherry-pick in a way that is much harder in Islam.
Back to the central thesis of The Moral Landscape for a moment. You’ve put a notion of well-being at the centre of your account of morality and moral truth. So where the utilitarians put pleasure, you put well-being. You’re committed to a form of consequentialism, essentially. And it seems that you’re notably relaxed about some of the implications of that.
You have something in mind?
I’m thinking about torture. You’re willing to bite certain bullets when it comes to questions like torture, aren’t you? Have you revised your position since The End of Faith?
No. My position is… I have a page on my website entitled “response to controversy”, so if you want to see my latest position, I keep that updated, because I’ve gotten a fair amount of grief for what I wrote in The End of Faith about torture. I’m not in any sense pro-torture. The argument I give there compares torture with collateral damage. Collateral damage always looks worse. Yet even to talk about torturing Osama Bin Laden – even to the point of saving the lives of dozens of little girls – would be a non-starter politically.
We can’t even talk about torture. I was trying to line up the ethical contradictions there as I saw them, but yeah, I think the reason to be against torture – and this is the reason to be against any patently unethical behaviour – is based on its consequences in the lives of human beings. You can make the argument that tolerating torture in any instance – even if we have a law which says, “we’ll only torture someone we know to be a terrorist, who claims to be a terrorist, and who claims to have current knowledge of some coming atrocity” – even in that case, performing torture, knowing that there are people you are delegating to do this, is so corrosive of what we value in our society that it’s not worth doing in any circumstance. Now, I think that the truth is that’s probably untrue, given that something like nuclear terrorism is possible. If you get someone who you know is a member of al-Qaeda, and you know they have nuclear materials, and they claim to have knowledge, then you have the perfect ticking time bomb situation. The idea there that you have a moral duty to keep this person perfectly comfortable with three meals a day and adequate sleep etc …
But it’s vanishingly unlikely that we’d ever find ourselves in such a situation.
No, I don’t think that’s true. I think people undersell how often situations are analogous to that. To be willing to talk about it is, to many people’s minds, just to confess your rudderless morality, because torture is so synonymous with evil – the idea that you would ever be willing even to think about having a torture policy.
Do you not think that there are some things we just ought not to do to other human beings?
Yeah, but our intuitions can be pushed around. Consider a thought experiment: what if killing an innocent little girl would deliver a cure for cancer – would you do it? Well, it seems like a starkly horrible thing to do – in principle we would never do it. But of course, we kill thousands of little girls every year through other policies. What should the speed limit be? We’re not going to tolerate a speed limit of ten kph – but every time you raise the speed limit, you’re going to kill little girls. And you’re doing it based purely on your own desire to drive faster. So we make these cost-benefit analyses based on the value of human life all the time – but when posed in this situation – here’s the particular little girl you’re going to kill…
But surely there’s a difference between directly intending to kill a little girl to bring about desirable outcomes, and introducing a policy that one can foresee might have such a consequence, even though we don’t directly intend her death?
Right, but you can keep finessing the examples so that your preference for one over the other becomes genuinely inscrutable. I think probability has a lot to do with it – if you distribute the risk. If you impose a risk of one in a hundred on a hundred little girls, that somehow seems better. And if you impose it on a whole society – one in six billion – then all of medical research is essentially taking that risk. The fact that it’s not an identifiable person is, I think, the crucial variable there, but I think we’re overly callous with regard to the almost certain consequences of our actions when they’re not strictly intended – as in the case of collateral damage. We know we’re going to kill innocent people, and having killed them, we don’t really think about it. That seems to me shockingly callous. One thing I link to from that torture discussion on my website is that when we killed – I think it was Zarqawi – with a missile strike, we killed something like 12 other people at the same moment, and it was just reported as a success. The other people weren’t even an afterthought, really. We killed his mother-in-law and whoever else was standing next to him. But if we had tortured his mother-in-law – waterboarded his mother-in-law to find his whereabouts, which is much less extreme than blowing her to bits … Christopher Hitchens volunteered to get waterboarded: it’s something you can do and survive and not be destroyed by.
But he recognised it as inhuman treatment.
But he would rather be waterboarded than blown up. So we have a situation where we’re blowing people up, and it doesn’t strike us as morally obscene. And yet waterboarding those same people, which they presumably would have preferred, is unthinkable. And that’s a contradiction, I think, that is intellectually and morally unsustainable. But if you ask me what our policy on torture should be, I think it should be illegal. I think we should say we don’t torture, it’s illegal, there are good reasons never to do it. Yet I can well imagine an interrogator being in a situation where clearly the ethical thing to do is to make someone uncomfortable until they talk.
I say somewhere in The End of Faith that if you can’t imagine any situation in which depriving someone of sleep, playing loud music, waterboarding them – doing something which leaves no lasting physical damage other than making them exquisitely uncomfortable for the moment so that they talk – if you can’t imagine a situation in which you’d be willing to do that or sanction that, then you’re just not thinking hard enough. There are people who are intending to destroy the lives of millions, render cities uninhabitable – that’s what’s scary, frankly. I mean, I’m a liberal through and through, but the idea that we could get to a moment in history where only our crazy demagogues can seem to recognise when there’s a threat – I don’t want to wake up for an election in the US thinking only this crazy conservative, who I disagree with on every other point and who denies the truth of evolution, only he would be strong enough to defend civilisation against its genuine enemies. But there’s something about liberal discourse which allows for that possibility.
What you’ve just described there is a kind of error about the way in which liberals ought to hold their most fundamental commitments. These aren’t just subjective preferences; for you, there are such things as moral truths.
Yeah, somebody can be right, and somebody can be wrong. Or more right and more wrong.
You describe a familiar rejection of the idea of moral truth, or of the possibility of cross-cultural moral judgement, which entails the claim that science is no help when it comes to moral questions. But what’s interesting to me is that a certain kind of scientific world-view often ends up in the same place – in a kind of moral subjectivism. You mention in passing in the book J L Mackie. Now Mackie is a very good example of someone who has a scientific, materialistic world view, which leads him to a form of moral subjectivism. So what’s wrong with Mackie? Why, in the case of Mackie, does that scientific world view end up in precisely the place you don’t want to be?
I think many scientists have an unjustifiably narrow view of the boundaries of science. On that view, the only point of contact between science and morality is an evolutionary account of how we came to have morality. So if you think that’s all science can do – if you think science can only describe how apes like ourselves came to talk about morality and worry about things like trust – well, it’s easy to see how this is a purely descriptive effort. Norms then are something else entirely – spooky things we can’t place in the world of physics, certainly, and we can’t quite place them in chemistry, and we can’t quite place them in biology.
Whereas for you the fact/value distinction is the whole problem, isn’t it?
Yeah, and I think science can engage in a different project: we can recognise that things like right and wrong, good and evil, relate to the experience of conscious creatures, and to nothing else – and the consciousness of creatures is itself a natural phenomenon that is constrained in some way by the laws of nature. So, granted, it’s fantastically difficult to get down to the details and know you have the right answers, just as it is in economics. Economics struggles to be a science, and just now we are blown about by uncertainty at every moment. But there’s no question that there are right and wrong answers – we’re talking about fantastically complicated systems, brains and societies, but it’s not so complicated that you can’t recognise obviously wrong answers and obviously right answers.
I’m not sure the example of economics is the best one to choose. It might be argued that the crisis we’ve just been through and still are living through is in some sense a function of economists’ conviction that what they’re doing is science – that their models, those fantastically refined mathematical models, actually corresponded to the way things are. But it turns out they didn’t.
That’s what I’m saying – economics, if it’s a science, is a terrible science. The state of it is such that you can have total disagreement, given the recent global economic catastrophe, about what we should be doing now – about what is the best course of action. The fact that you can get as much disagreement as apparently you can reveals that we don’t have much of a purchase on whatever principles we would need in order not to find ourselves in this situation again.
But the move you make – you make it several times in the book – is to say that we shouldn’t infer from the fact of disagreement that a science of economics is in principle impossible. Just as there’s no in-principle reason why there shouldn’t be a science of morality.
Right, because there’s just no question that there are obviously wrong answers. We can’t be sure of the right answer in economics or morality – the best of all possible answers – but we can recognise wrong answers. You can look at what people value, at what that leads them to do, and at the consequences of those actions in the lives of their children and their neighbours, and you can look at that entire context and say, “OK, that is clearly the wrong answer to a set of problems that human beings continue to confront”.
I’m interested in the relationship that’s implied here between science and philosophy. I’m intrigued to know where moral philosophy fits in. A move you make on occasion is to acknowledge a set of problems of the sort that moral philosophers tend to occupy themselves with. For example, early in the book you consider the problem of impartiality, whether it’s morally justifiable to show partiality to those closest to us. You acknowledge the problem, but then move on – as if you’re operating on a more fundamental level. You’re identifying the foundations of the possible science of morality. So the question arises for me whether there are resources in your account to deal with those difficult questions, or whether that would still be a job for moral philosophy? So one job for the moral philosopher would be to deal with those instances where fundamental values come into conflict.
Yeah, I think that the problem of there being trade-offs between fundamental values is a real one. I don’t think it’s going away, but, again, the fact that it exists doesn’t suggest that there are no right answers. This is why I think my analogy of the moral landscape is an improvement on run-of-the-mill moral consequentialism: it makes it intelligible that there could be peaks on this landscape that differ in all kinds of interesting ways without those differences being morally salient.
I don’t think there’s an interesting boundary between philosophy and science. Science is totally beholden to philosophy. There are philosophical assumptions in science and there’s no way to get around that.