You can't learn about morality from brain scans

The problem with moral psychology.

This article first appeared on newrepublic.com

Joshua Greene, who teaches psychology at Harvard, is a leading contributor to the recently salient field of empirical moral psychology. This very readable book presents his comprehensive view of the subject, and what we should make of it. The grounds for the empirical hypotheses that he offers about human morality are of three types: psychological experiments, observations of brain activity, and evolutionary theory. The third, in application to the psychological properties of human beings, is necessarily speculative, but the first and second are backed up by contemporary data, including many experiments that Greene and his associates have carried out themselves.

But Greene does not limit himself to factual claims. He also asks how our moral beliefs and attitudes should be affected by these psychological findings. Greene began his training and research as a doctoral student in philosophy, so he is familiar from the inside with the enterprise of ethical theory conceived not as a part of empirical psychology but as a direct first-order investigation of moral questions, and a quest for systematic answers to them. His book is intended as a radical challenge to the assumptions of that philosophical enterprise. It benefits from his familiarity with the field, even if his grasp of the views that he discusses is not always accurate.

The book is framed as the search for a solution to a global problem that cannot be solved by the kinds of moral standards that command intuitive assent and work well within particular communities. Greene calls this problem the “tragedy of commonsense morality.” In a nutshell, it is the tragedy that moralities that help members of particular communities to cooperate peacefully do not foster a comparable harmony among members of different communities. 

Morality evolved to enable cooperation, but this conclusion comes with an important caveat. Biologically speaking, humans were designed for cooperation, but only with some people. Our moral brains evolved for cooperation within groups, and perhaps only within the context of personal relationships. Our moral brains did not evolve for cooperation between groups (at least not all groups).... As with the evolution of faster carnivores, competition is essential for the evolution of cooperation. 

The tragedy of commonsense morality is conceived by analogy with the familiar tragedy of the commons, to which commonsense morality does provide a solution. In the tragedy of the commons, the pursuit of private self-interest leads a collection of individuals to a result that is contrary to the interest of all of them (like over-grazing the commons or over-fishing the ocean). If they learn to limit their individual self-interest by agreeing to follow certain rules and sticking to them, the commons will not be destroyed and they will all do well. As Greene puts it, commonsense morality requires that we sometimes put Us ahead of Me; but the same disposition also leads us to put Us ahead of Them. We feel obligations to fellow members of our community but not to outsiders. So the solution to the tragedy of the commons has generated a new tragedy, which we can see wherever the values and the interests of different communities conflict, not only on an international scale but also more locally, within pluralistic societies that contain multiple moral communities.

To solve this problem Greene thinks we need what he calls a “metamorality,” based on a common currency of value that all human beings can acknowledge, even if it conflicts with some of the promptings of the intuitive moralities of common sense. Like others who have based their doubts about commonsense morality on diagnoses of its evolutionary pedigree, Greene thinks that this higher-level moral outlook is to be found in utilitarianism, which he proposes to re-name “deep pragmatism” (lots of luck). Utilitarianism, as propounded by Bentham and Mill, is the principle that we should aim to maximize happiness impartially, and it conflicts with the instinctive commonsense morality of individual rights, and special heightened obligations to those to whom one is related by blood or community. Those intuitive values have their uses as rough guides to action in many ordinary circumstances, but they cannot, in Greene’s view, provide the basis for universally valid standards of conduct. 

Greene’s argument against the objective authority of commonsense morality hinges on Daniel Kahneman’s distinction between fast instinctive thought and slow deliberative thought. As Kahneman shows, these two modes appear in almost every aspect of human life, and we could not survive without both of them. Greene says that they are like the two ways a contemporary camera can operate: by automatic settings or by manual mode. Automatic settings enable you to point and shoot, without thinking about the distance or lighting conditions, whereas manual mode enables you to make adjustments to the focus, the aperture, and the shutter speed after conscious reflection on the specific conditions of the shot. The availability of both of these options makes for either efficiency or flexibility, depending on what is needed. 

Our decision apparatus, according to Greene, is similar. When it comes to moral judgment—deciding whether an act would be right or wrong—we can be fast, automatic, and emotional, or slow, deliberate, and rational. Greene puts the distinction to work in his careful discussion of the trolley problem, a set of gruesome thought experiments that has become a staple of recent moral philosophy, associated in particular with the writings of Philippa Foot, Judith Jarvis Thomson, and Frances Myrna Kamm. As Greene says, the problem boils down to the following question:  

When, and why, do the rights of the individual take precedence over the greater good? Every major moral issue—abortion, affirmative action, higher versus lower taxes, killing civilians in war, sending people to fight in war, rationing resources in healthcare, gun control, the death penalty—was in some way about the (real or alleged) rights of some individuals versus the (real or alleged) greater good. The Trolley Problem hit it right on the nose. 

In the central case of the trolley problem, we are asked to compare two choices:

  • The footbridge dilemma: A runaway trolley is headed for five railway workmen who will be killed if it proceeds on its present course. You are standing on a footbridge spanning the tracks, in between the oncoming trolley and the five people. Standing next to you is a 300-pound man. The only way to save the five people is to push him off the footbridge and onto the tracks below. The man will die as a result, but his body will stop the trolley. (You are only half his size and would not stop the trolley if you yourself jumped in front of it.)
  • The switch dilemma: A runaway trolley is headed for five workmen who will be killed if nothing is done. You can save these five people by hitting a switch that will turn the trolley onto a sidetrack. Unfortunately there is a single workman on the sidetrack who will be killed if you hit the switch.

It turns out that most people the world over think that it would be wrong to push the fat man off the footbridge, but that it would be morally permissible to hit the switch—even though the outcomes of the two acts would be the same, one person killed and five saved. Other examples have been invented to refine the search for the determining characteristics that trigger a judgment of wrongness or permissibility, and various principles have been formulated to capture the results, but we need not go into those details here. The basic point for Greene’s purposes is that we have strong moral reactions against certain actions that cause harm but serve the greater good on balance, but not against other actions that produce the same balance of good and harm.

There are two noteworthy differences between the two dilemmas. First, in “switch” there is nothing mysterious about the result; everyone gets the point of choosing the outcome with fewer deaths. As Greene observes, “No one’s ever said, ‘Try to save more lives? Why, that never occurred to me!’ ” But in “footbridge” the choice, however convincing, is mysterious; it seems to call for, but also to defy, explanation. What is it about pushing the fat man in front of the trolley that overrides the value of the five lives that would be saved? To say that it would violate his right to life, or that it would be murder, seems to repeat rather than to explain the judgment. 

Second, the response to “footbridge” has an emotional charge that is missing in the allegedly more rational response to “switch.” You can consult your own visceral reaction to the idea of pushing someone in front of a trolley, as opposed to your feeling about hitting the switch when you know that there is someone on the sidetrack. But Greene and his colleagues have added multiple studies, using brain imaging, to show that when people contemplate footbridge-type cases there is increased activity in the ventromedial prefrontal cortex, a part of the brain associated with emotion, whereas switch-type cases elicit increased activity in the dorsolateral prefrontal cortex, a part of the brain associated with calculation and reasoning. Moreover, people with damage to the ventromedial prefrontal cortex, who lack normal emotions, were five times as likely as others to approve of pushing the fat man off the bridge.

Greene offers much more experimental detail and some ingenious psychological proposals about why our gut reactions have the particular subtle contours that they do, but his overall conclusion, following Kahneman, is that we have a dual-process system of moral judgments: automatic settings charged with emotion and deliberative responses that depend on calculation. These two types of response will conflict in some cases, but he thinks both have their uses in the guidance of human behavior. As Greene says, “We wouldn’t want to blindly condemn our moral intuitions with ‘guilt by neural association.’ ” Still, the metaphor of camera settings and the appeal to evolutionary explanations for the automatic settings imply that Greene accords utilitarian values (minimizing the number of deaths) a different status from the kind of prohibition we find in “footbridge.” He believes that although we cannot get rid of our visceral responses and in general should not want to get rid of them, we can distance ourselves from them in a way that we should not distance ourselves from our utilitarian judgments. Utilitarianism, he believes, allows us to transcend our evolutionary heritage. The question then is whether he offers a coherent account of how and why we should give it this authority.

Greene wants to persuade us that moral psychology is more fundamental than moral philosophy. Most moral philosophies, he maintains, are misguided attempts to interpret our moral intuitions in particular cases as apprehensions of the truth about how we ought to live and what we ought to do, with the aim of discovering the underlying principles that determine that truth. In fact, Greene believes, all our intuitions are just manifestations of the operation of our dual-process brains, functioning either instinctively or more reflectively. He endorses one moral position, utilitarianism, not as the truth (he professes to be agnostic on whether there is such a thing as moral truth) but rather as a method of evaluation that we can all understand, and that holds out hope of providing a common currency of value less divisive than the morality of individual rights and communal obligations. “None of us is truly impartial, but everyone feels the pull of impartiality as a moral ideal.”

Utilitarianism, he contends, is not refuted by footbridge-type intuitions that conflict with it, because those intuitions are best understood not as perceptions of intrinsic wrongness, but as gut reactions that have evolved to serve social peace by preventing interpersonal violence. Similar debunking explanations can be given for other commonsense moral intuitions, such as the obligation to favor members of one’s own group over strangers, or the stronger obligation one feels to rescue an identified individual who is drowning in front of you than to contribute to saving the lives of greater numbers of anonymous victims far away. According to Greene, it is understandable in light of evolutionary psychology that we have these intuitions, and for the most part it does no harm to let our conduct be guided by them, but they are not perceptions of moral truth, and they do not discredit the utilitarian response when it tells us to do something different.

While we cannot get rid of our automatic settings, Greene says we should try to transcend them—and if we do, we cannot expect the universal principles that we adopt to “feel right.” Utilitarianism has counterintuitive consequences, but we arrive at it by recognizing that happiness matters to everyone, and that objectively no one matters more than anyone else, even though subjectively we are each especially important to ourselves. This is an example of what he calls “kicking away the ladder,” or forming moral values that are opposed to the evolutionary forces that originally gave rise to morality.

Yet Greene cannot seem to make up his mind as to whether utilitarianism trumps individual rights in some more objective sense. When he tries to describe the appropriate place of utilitarianism in our lives, this is what he says:

It’s not reasonable to expect actual humans to put aside nearly everything they love for the sake of the greater good. Speaking for myself, I spend money on my children that would be better spent on distant starving children, and I have no intention of stopping. After all, I’m only human! But I’d rather be a human who knows that he’s a hypocrite, and who tries to be less so, than one who mistakes his species-typical moral limitations for ideal values. 

The word “hypocrite” is misused here. A hypocrite is someone who professes beliefs that he does not hold—but so far as I can tell Greene is accusing himself of failing to live in accordance with beliefs that he accepts, beliefs about ideal values. 

This implies something that is clearly not a fact of empirical psychology: namely, that there are values by which we should “ideally” govern our lives, and that they are captured by the utilitarian aim of maximizing total happiness, counting everyone’s happiness impartially as of equal value, with no preference for ourselves or our loved ones. Greene even offers an extravagantly philosophical argument in support of this ideal. He asks what you would do if you had the choice of creating a world full of people like us, or a world full of people whose natural motives were completely unselfish and impartial and who cared about everyone, not just their friends and families, as much as they cared about themselves. He assumes that you would choose to create the second species, and that this shows that there is something the matter with us and our species-typical moral responses. 

Greene apparently believes that this bizarre creationist thought-experiment allows us to identify ideal values, because it calls forth a faculty of value judgment that is not tainted by our “species-typical moral limitations.” He appears to think that the values that would animate this ideal species apply in some sense to us, even though we are very different. Yet he also believes that it would be unreasonable to expect us to live up to them, and disastrous to insist that we do so.

If it seems absurd to ask real humans to abandon their families, friends, and other passions for the betterment of anonymous strangers, then that can’t be what utilitarianism actually asks of real humans. Trying to do this would be a disaster, and disasters don’t maximize happiness. Humans evolved to live lives defined by relationships with people and communities, and if our goal is to make the world as happy as possible, we must take this defining feature of human nature into account.

Greene is wrestling with an old problem, and his psychological approach does not enable him to solve it. When we want to arrive at standards to govern conduct, our own and that of others, we have to start somewhere, and that means starting from what seems right. When our intuitions are unequivocal, we can simply accept them; but sometimes they are not, and then we are faced with a choice. Should we distrust our intuitions about individual rights when they conflict with the intuition that it is always better to save more lives, or should we abandon utilitarianism because it allows intuitively unacceptable violations of individual rights? Greene says that the intuitive reaction in “footbridge” is analogous to an optical illusion like the Müller-Lyer, in which two equal lines appear to be of different lengths because of a difference of context. The illusion does not go away even when we have measured the lines and found them to be equal. Yet the utilitarian calculation is not really like a physical measurement: it depends on a different form of evaluation, one which Greene describes as a human invention. 

One of the hardest questions for moral theory is whether the values tied to the personal point of view, such as partiality toward oneself and one’s family, and special responsibility for refraining from direct harm to others, should be part of the foundation of morality or should be admitted only to the extent that they can be justified from an impersonal standpoint such as that of impartial utilitarianism. To dismiss our counter-utilitarian attachments and intuitions, as Greene does, as “species-typical moral limitations,” which must be seen as obstacles to the realization of the moral ideal, is to identify ideal morality as something more, or perhaps less, than human. 

A more attractive alternative would be to combine some of the values that form a natural part of the personal point of view with universal and impartial values of the kind Greene believes that we are also capable of. A project of this kind would require more subtlety about the different possible interpretations of impartiality than Greene displays: he identifies impartiality with happiness-maximization, and his brief discussions of Kant and Rawls show that he does not really understand their alternative conceptions—though I suspect that even if he did, he would still reject them in favor of utilitarianism. 

Rawls’s main objection to utilitarianism is that it fails to make the distinction between persons a fundamental factor in the construction of the moral point of view, so it settles conflicts between the interests of distinct persons by a method of cost-benefit balancing that is identical with the method that is appropriate when there are choices to be made between goods and evils within the life of one individual. Thus in utilitarianism a very severe cost to one person can be outweighed by the sum of small advantages to a sufficiently large number of other people. Rawls, in the tradition of Kant, tried to work out an alternative form of impartial equal consideration for the interpersonal case, based on priorities of urgency which limited such interpersonal trade-offs. There is no space to discuss it here, but this is just one example of how the transcendence of our evolutionary heritage may be more complicated than Greene imagines.

The most difficult problem posed by Greene’s proposals is whether we should give up trying to understand our natural moral intuitions as evidence of a coherent system of individual rights that limit what may be done even in pursuit of the greater good. Should we instead come to regard them as we regard optical illusions, recognizing them as evolutionary products but withholding our assent? Greene’s debunking arguments add an empirical dimension to a venerable utilitarian tradition, but they certainly do not settle the question. It is possible to defend a universal system of individual rights as the expression of a moral point of view that accords to each individual a sphere of autonomy in the conduct of life, free from interference by others, defined in such a way that the same sphere of autonomy can be accorded to everyone without inconsistency. This last condition means that sometimes the distinction between what does and does not count as a crossing of the boundary protected by rights may seem arbitrary—as in the distinction between “footbridge” and “switch,” or between killing someone (strongly prohibited) and failing to save someone from death (permitted, unless it costs you very little). But the moral conception behind a system that embodies such distinctions cannot easily be dismissed as the equivalent of an optical illusion. Most people will regard a morality based entirely on such a system of equal liberty as unacceptable, but it can also be included, along with some requirements of impartial concern for the general welfare, as part of a more complicated morality that reflects the complexity of human nature.

Such disagreements are an inevitable part of the important enterprise of moral invention that Greene, along with many others, is engaged in. Humanity has, we may hope, a long road of moral development ahead of it.

Thomas Nagel is University Professor of Philosophy and Law, emeritus, at New York University and the author, most recently, of Mind and Cosmos: Why the Materialist Neo-Darwinian Conception of Nature Is Almost Certainly False (Oxford).


The artwork "My Soul" by Katharine Dowson, created using a brain scan. Photo: Getty

Which companies are making driverless cars, and what are their competing visions for the future?

An increasing number of tech giants are populating the driverless car market. Where does each of them stand on ambition, innovation, and safety?

The driverless car has metamorphosed from a superfluous autonomous machine into the vehicle of choice for tech giants hoping to show off their technical prowess and visionary thinking.

The name of the Silicon Valley game has always been innovation, and the chance to merge four-wheeled hardware with self-regulating software has offered companies a new way to reinvent themselves and their visions – a new means of edging each other out in the race to the top of a Fritz Lang-style global metropolis, whose technocratic ruler would be the company capable of aligning its driverless transportation dreams with those of the public.

First out of the blocks in this race to showcase its driverless vehicles has been Uber. Having already expanded its operations as a taxi service from the streets of San Francisco to more than 300 cities worldwide, Uber rolled out a pilot fleet of driverless cars in Pittsburgh last week.

Uber CEO Travis Kalanick has previously stated that the company’s need to delve into driverless cars is “basically existential”, which explains why Uber seems so keen to come out with a working model first. It is a vision that seeks to cut the cost of ride-hailing by removing the expense of human drivers, and to offer a safer alternative for passengers who currently have to trust a driver they have never met to shuttle them to their destinations.

Uber’s driverless cars are designed in partnership with Volvo and currently require technicians on hand to intervene if necessary, though the company aims to phase these out. Uber has had the distinct advantage of being able to analyse data from all the road miles its drivers have logged so far. If Uber has its way, car ownership could become a thing of the past. Speaking to Reuters, an Uber spokesperson confirmed as much: “Our goal is to replace private car ownership.”

There are a number of issues at hand with Uber’s approach. The fleet of cars displayed in Pittsburgh was in fact not a fleet – there was a grand total of four for viewing, making it impossible to visualise how a fully-fledged system would work.

A more pressing issue is Uber’s timeframe: in comparison to other companies in the market, Uber is aiming for mass-market spread within a few years – far too soon according to experts who think that safety measures will be compromised, and adherence to future regulations avoided, as a result. Uber currently lacks an ethics committee, creating a grey area in determining what happens if one of these cars is involved in an accident.

Perhaps demonstrating even greater ambition, given its sheer dominance over the market, is Google. Taking on the challenge of autonomy and safety on busy city streets, Google seems to be well-equipped given its unrivalled mapping data.

First revealed in 2010, Google’s self-driving car project is expected to come into service sometime in the 2020s. Accidents and traffic could be a thing of the past, they say. Chris Urmson, who headed the project until recently, believes that these cars will work based on a positive feedback system, one which allows them to improve the more they are put into practice. As one car learns, every car will learn. Shared data means the rate of improvement for Google’s driverless cars will be exponential.

Showing no sign of a slowdown in its ambitions, Apple, a company which has found a way into the psyche of its acolytes, is thought to be getting involved in the cars of the future too. Links have been made between Apple and McLaren, with a £1.2bn acquisition rumoured. It would come as no surprise if Apple did this; its greatest successes have come from convincing consumers that they needed its products, and a possible iCar could do the same.

A tamer approach to driverless cars is coming from the companies that identify themselves as automotive rather than tech firms. Tesla has led the pack with its driver-assist technology. Its Model S is “designed to get better over time”, using a “unique combination of cameras, radar, ultrasonic sensors and data to automatically steer down the highway, change lanes, and adjust speed in response to traffic”.

Following the first death of a driver in a Tesla Model S operating in Autopilot mode in May this year, the media and consumers were quick to issue warnings over the safety of Tesla’s Autopilot. Though Tesla CEO Elon Musk was quick to offer his condolences to the family of Joshua Brown, the driver killed in the Florida crash, he was firm in his insistence that Tesla was not to blame. Musk explained that this was the first documented death of a person in a Tesla on Autopilot after a cumulative total of 130 million miles driven by its customers, whereas “among all vehicles in the US, there is a fatality every 94 million miles”.
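As a rough illustration of the comparison Musk is drawing, the short Python sketch below reproduces the back-of-the-envelope arithmetic using only the two figures quoted above (one Autopilot fatality in roughly 130 million miles, against one US road fatality per 94 million miles). The variable names are illustrative and the numbers are simply those reported at the time; this is not an independent safety analysis.

```python
# Sketch of the fatality-rate comparison quoted above, using only the
# figures cited in the article (assumed here purely for illustration).

autopilot_fatalities = 1
autopilot_miles = 130_000_000        # cumulative Autopilot miles reported by Tesla
us_miles_per_fatality = 94_000_000   # US average across all vehicles, as quoted

autopilot_miles_per_fatality = autopilot_miles / autopilot_fatalities

print(f"Autopilot: one fatality per {autopilot_miles_per_fatality / 1e6:.0f} million miles")
print(f"US average: one fatality per {us_miles_per_fatality / 1e6:.0f} million miles")
print(f"Ratio: {autopilot_miles_per_fatality / us_miles_per_fatality:.2f}x the US average interval")
```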

Put into that perspective, it is easier to see how much paranoid hysteria surrounds the rollout of driverless vehicles. Safety has always been one of the key arguments for their use; by removing the risk of human error, we are able to create a safer road environment, as Musk highlights.

Earlier this year, Ford launched Ford Smart Mobility – its start-up-styled initiative designed to encourage ride sharing. By creating a small, dedicated team to work on the technology while maintaining its production of conventional cars, Ford is safeguarding itself against unforeseeable failures with driverless vehicles. Its cars have had elements of automation introduced incrementally, such as embedded sensors that enable them to park themselves. Ford hopes to have some sort of ride-sharing service in action by 2021.

BMW, Volvo and Audi are taking the cautious road too. BMW is making use of GPS to chart safe routes for its cars; in comparison to Google’s mapping, BMW’s system seems much more primitive, suggesting that the pace of development is dictated by access to technology beyond the vehicles themselves. Volvo focuses on safety too, and hopes that automation will mean no one is killed or seriously injured in a new Volvo by 2020.

As we enter a market in which the top tech companies will be meeting at crossroads in their driverless cars, competing visions and levels of ambition will create a new relationship of trust between consumers and driverless car producers. There is no doubt that driverless cars are here to stay, and our roads will one day teem with passengers free to sit back and relax. Taking your hands off the wheel will eventually become the norm, but don’t expect to be free-wheeling worldwide for a while yet.