Children in Jerusalem look at a brochure with images of the Holocaust, 1961. Photograph: Erich Hartmann/Magnum Photos

The ever-changing face of Holocaust studies

The road to ruin.

At meetings across the country on Holocaust Memorial Day, worthies intoned the “lessons of the Holocaust” and warned that we must “learn from the past”. But ask most historians and they will blanch at the thought of anything as static or as simple as “lessons”. Over the past five decades, “Holocaust studies” have altered almost beyond recognition and explanations for what occurred have changed significantly.

In the 1950s, most people regarded the Third Reich as a criminal regime that had been run by crazed sadists. Nazi anti-Semitism, it was thought, had been a device to distract the masses. And it was widely believed that few Germans or inhabitants of conquered countries had sympathised with the assault on the Jews. As for the Jews themselves, they had gone to the gas chambers like lambs to the slaughter.

This narrative was both a legacy of the Nuremberg trials and a convenient fiction used to justify Cold War alliances and enmities. At Nuremberg, the surviving “top Nazis” took the fall for the crimes of the regime. Former Axis powers or belligerents now within the Nato fold were presented as having been unwilling or unwitting accomplices of the Nazis.

The first crack in this facade came with the trial of Adolf Eichmann in Jerusalem in 1961-62. The Israeli authorities orchestrated the hearings to present every dimension of Jewish life under Nazi rule, with the emphasis on forms of resistance. They arranged for Nazi collaboration to be exposed, while “bystanders”, particularly the Allied powers and the Vatican, were shamed by evidence of their inaction.

However, the impact of the trial was shaped most decisively by the reporting of Hannah Arendt, who wrote about it for the New Yorker. She saw in Eichmann a living vindication of her earlier analysis of totalitarianism. His unthinking obedience was the reflex of totalitarian man, the “banality of evil”. Arendt’s (erroneous) description of Eichmann’s character irritated historians who detected rather more ideology and animosity in his conduct. And she provoked outrage with her claim that the Jewish leadership had colluded in their own destruction.

This allegation was strongly influenced by Raul Hilberg’s monumental study The Destruction of the European Jews (1961). Hilberg, whose Jewish family had fled Austria after the Anschluss, disparaged survivor testimony and drew almost exclusively on German documentation. But in the German record the Jews were always portrayed as outwitted and complaisant. Consequently, Hilberg’s work generated the impression of a bureaucratic machine that crushed hapless, silent victims.

The controversy that Hilberg’s book aroused marked the birth of what today we call “Holocaust studies”. Yad Vashem in Jerusalem, Israel’s official Holocaust memorial and museum, became an engine of research. During the 1970s and 1980s, Isaiah Trunk, Israel Gutman, Shmuel Krakowski, Dov Levin and Yitzhak Arad, all of whom had endured the ghettos and the camps, or else had fought as partisans, published histories of Jewish life under Nazi rule, with an emphasis on eastern Europe and varieties of resistance. The stereotype of Jews passively accepting their fate was shattered forever.

Meanwhile, German scholarship (mainly in West Germany) was galvanised by the trial of Auschwitz personnel that began in Frankfurt in 1963. And by the 1970s a division of labour had emerged: Israeli and Jewish historians wrote about the victims; the Germans inquired obsessively into the structure and functions of the Nazi state; while the Americans took a broader approach.

In 1970, the American historian Karl A Schleunes published The Twisted Road to Auschwitz, a pioneering work that challenged the idea that there was a direct route from Mein Kampf to the Final Solution. Schleunes argued that anti-Jewish policy was poorly developed when the Nazis came to power and jostled with other priorities. The regime, he insisted, had “stumbled” into genocide.

Through the 1970s and early 1980s, research circled with increasing sterility around a narrow set of questions, drawing on the same limited sources. Had Hitler always intended to annihilate the Jews or did he drift into a murderous policy? Was there a single “Führer order” and if so, when was it issued? Was the genocide the result of planning or the consequence of “cumulative radicalisation”?

The principal figures in these exchanges were mostly West Germans: Eberhard Jäckel, a proponent of the “intentionalist” interpretation; Uwe Dietrich Adam, who followed Schleunes in arguing that the regime had lurched from one policy to another with no clear goal; and Martin Broszat, who exemplified the “functionalist” approach. It was an American, Christopher Browning, who blended the functionalist interpretation, in which human agency was downplayed, with a greater sensitivity to ideology and personality.

In 1982, the election of Helmut Kohl as West German chancellor opened the way to a controversial reassessment. Kohl wanted to “normalise” German history, treating the Nazi years as a phase in the longue durée of modernisation, and subsuming the Holocaust into a century of genocide. This agenda, and the efforts of Andreas Hillgruber and Ernst Nolte to tell a patriotic national story, triggered the “Historikerstreit”, a dispute about the singularity of Nazi crimes.

Kohl’s subsequent attempt, following reunification, to embrace the East Germans as victims of an undifferentiated totalitarianism lasting from 1933 to 1990 stimulated comparative studies. These in fact tended to underscore the specificity of Nazism. More significantly, the end of the Cold War allowed access to previously closed archives in the old Soviet bloc and enriched the corpus of available source material.

German reunification raised other unfinished business, such as the disposal of looted gold recovered from the Nazis in 1945. Swiss banks and German corporations, insurance firms, the art market and even railways were soon the subject of industrial-scale historical research by specially commissioned teams under the leadership of established scholars.

The resulting studies transformed the historical landscape. As Götz Aly concluded in Hitler’s Beneficiaries (2005), the transfer of wealth from Jews to Germans widened the circle of complicity to almost every German citizen. A similar dynamic extended across Europe and was summed up by Jan Gross in his recent book Golden Harvest. From France to Poland, non-Jews saw Jews as fair game, to be squeezed and then disposed of.

Meanwhile, explanations for the genocide were reshaped, first by postmodern theorists and then by the resurgence of national hatreds and ethnic cleansing unleashed by the collapse of communism.

In 1989, Zygmunt Bauman published Modernity and the Holocaust, in which he maintained that Nazi genocide was the apogee of Enlightenment rationality. Shortly afterwards, Michael Burleigh and Wolfgang Wippermann’s The Racial State sought to show how racial-biological thinking informed all official policy and infused everyday life in the Third Reich. The role of doctors, psychiatrists and demographers in applying eugenic ideas seemed to corroborate Bauman’s dark version of modernity.

Yet it was hard to think in such terms when the news was delivering images of slaughter from Bosnia and Rwanda. In Ordinary Men (1992), his study of a reserve police battalion that murdered tens of thousands of Jews in Poland, Browning had tilted in favour of situational factors such as peer pressure to explain the killers. By contrast, Daniel Goldhagen, whose book Hitler’s Willing Executioners (1996) examined the same cadre, concluded that they were driven by crude Judaeophobia. In his version the killers revelled in tormenting Jews before killing them in ways far removed from the industrial mass murder conjured up by Bauman.

By the end of the 1990s, personal agency and beliefs had become central to explaining both “perpetrators” and “bystanders”. To some extent, this reflected a shift from German to American scholarship. Robert Gellately, Eric Johnson and Peter Fritzsche argued that the Third Reich had relied less on coercion and more on consent. The Nazi concept of an idealised people’s community was no longer dismissed as propaganda.

A new generation of young German historians produced a number of studies that amended our understanding of the timing and character of the Final Solution. While Hitler’s role remained decisive, it became apparent that his minions and satraps had far more autonomy than was once thought.

Jewish historians had long bemoaned the absence of a Jewish dimension from such research, and the availability of vast collections of testimony, notably at the USC Shoah Foundation, rendered the omission ever more untenable. But how to use it? Saul Friedländer’s magnificent volumes on Nazi Germany and the Jews (1997 and 2007) finally offered a model of how to write an “integrated” history that combined the conduct of the perpetrators with Jewish responses.

Fifty years after Arendt and Hilberg ruffled feathers, the “lessons of the Holocaust” seem no clearer and efforts to comprehend the Jewish tragedy continue to provoke as much controversy as reflection.

David Cesarani is research professor in history at Royal Holloway, University of London

This article first appeared in the 11 February 2013 issue of the New Statesman, Assange Alone


Fitter, dumber, more productive

How the craze for Apple Watches, Fitbits and other wearable tech devices revives the old and discredited science of behaviourism.

When Tim Cook unveiled the latest operating system for the Apple Watch in June, he described the product in a remarkable way. This is no longer just a wrist-mounted gadget for checking your email and social media notifications; it is now “the ultimate device for a healthy life”.

With the watch’s fitness-tracking and heart-rate-sensor features to the fore, Cook explained how its Activity and Workout apps have been retooled to provide greater “motivation”. A new Breathe app encourages the user to take time out during the day for deep-breathing sessions. Oh yes, this watch has an app that notifies you when it’s time to breathe. The paradox is that if you have zero motivation and don’t know when to breathe in the first place, you probably won’t survive long enough to buy an Apple Watch.

The watch and its marketing are emblematic of how the tech trend is moving beyond mere fitness tracking into what one might call quality-of-life tracking and algorithmic hacking of the quality of consciousness. A couple of years ago I road-tested a brainwave-sensing headband, called the Muse, which promises to help you quiet your mind and achieve “focus” by concentrating on your breathing as it provides aural feedback over earphones, in the form of the sound of wind at a beach. I found it turned me, for a while, into a kind of placid zombie with no useful “focus” at all.

A newer product even aims to hack sleep – that productivity wasteland, which, according to the art historian and essayist Jonathan Crary’s book 24/7: Late Capitalism and the Ends of Sleep, is an affront to the foundations of capitalism. So buy an “intelligent sleep mask” called the Neuroon to analyse the quality of your sleep at night and help you perform more productively come morning. “Knowledge is power!” it promises. “Sleep analytics gathers your body’s sleep data and uses it to help you sleep smarter!” (But isn’t one of the great things about sleep that, while you’re asleep, you are perfectly stupid?)

The Neuroon will also help you enjoy technologically assisted “power naps” during the day to combat “lack of energy”, “fatigue”, “mental exhaustion” and “insomnia”. When it comes to quality of sleep, of course, numerous studies suggest that late-night smartphone use is very bad, but if you can’t stop yourself using your phone, at least you can now connect it to a sleep-enhancing gadget.

Now comes a brand-new wave of devices that encourage users to outsource not only their basic bodily functions but – as with the Apple Watch’s emphasis on providing “motivation” – their very willpower. These are thrillingly innovative technologies and yet, in the way they encourage us to think about ourselves, they implicitly revive an old and discarded school of thinking in psychology. Are we all neo-behaviourists now?

***

The school of behaviourism arose in the early 20th century out of a virtuous scientific caution. Experimenters wished to avoid anthropomorphising animals such as rats and pigeons by attributing to them mental capacities for belief, reasoning, and so forth. This kind of description seemed woolly and impossible to verify.

The behaviourists discovered that the actions of laboratory animals could, in effect, be predicted and guided by careful “conditioning”, involving stimulus and reinforcement. They then applied Ockham’s razor: there was no reason, they argued, to believe in elaborate mental equipment in a small mammal or bird; at bottom, all behaviour was just a response to external stimulus. The idea that a rat had a complex mentality was an unnecessary hypothesis and so could be discarded. The psychologist John B Watson declared in 1913 that behaviour, and behaviour alone, should be the whole subject matter of psychology: to project “psychical” attributes on to animals, he and his followers thought, was not permissible.

The problem with Ockham’s razor, though, is that sometimes it is difficult to know when to stop cutting. And so more radical behaviourists sought to apply the same lesson to human beings. What you and I think of as thinking was, for radical behaviourists such as the Yale psychologist Clark L Hull, just another pattern of conditioned reflexes. A human being was merely a more complex knot of stimulus responses than a pigeon. Once perfected, some scientists believed, behaviourist science would supply a reliable method to “predict and control” the behaviour of human beings, and thus all social problems would be overcome.

It was a kind of optimistic, progressive version of Nineteen Eighty-Four. But it fell sharply from favour after the 1960s, and the subsequent “cognitive revolution” in psychology emphasised the causal role of conscious thinking. What became cognitive behavioural therapy, for instance, owed its impressive clinical success to focusing on a person’s cognition – the thoughts and the beliefs that radical behaviourism treated as mythical. As CBT’s name suggests, however, it mixes cognitive strategies (analyse one’s thoughts in order to break destructive patterns) with behavioural techniques (act a certain way so as to affect one’s feelings). And the deliberate conditioning of behaviour is still a valuable technique outside the therapy room.

The effective “behavioural modification programme” first publicised by Weight Watchers in the 1970s is based on reinforcement and support techniques suggested by the behaviourist school. Recent research suggests that clever conditioning – associating the taking of a medicine with a certain smell – can boost the body’s immune response later when a patient detects the smell, even without a dose of medicine.

Radical behaviourism that denies a subject’s consciousness and agency, however, is now completely dead as a science. Yet it is being smuggled back into the mainstream by the latest life-enhancing gadgets from Silicon Valley. The difference is that, now, we are encouraged to outsource the “prediction and control” of our own behaviour not to a benign team of psychological experts, but to algorithms.

It begins with measurement and analysis of bodily data using wearable instruments such as Fitbit wristbands, the first wave of which came under the rubric of the “quantified self”. (The Victorian polymath and founder of eugenics, Francis Galton, asked: “When shall we have anthropometric laboratories, where a man may, when he pleases, get himself and his children weighed, measured, and rightly photographed, and have their bodily faculties tested by the best methods known to modern science?” He has his answer: one may now wear such laboratories about one’s person.) But simply recording and hoarding data is of limited use. To adapt what Marx said about philosophers: the sensors only interpret the body, in various ways; the point is to change it.

And the new technology promises to help with precisely that, supplying such externally applied “motivation” as the Apple Watch. So the reasoning, striving mind is vacated (perhaps with the help of a mindfulness app) and usurped by a cybernetic system to optimise the organism’s functioning. Electronic stimulus produces a physiological response, as in the behaviourist laboratory. The human being herself just needs to get out of the way. The customer of such devices is merely an opaquely functioning machine to be tinkered with. The desired outputs can be invoked by the correct inputs from a technological prosthesis. Our physical behaviour and even our moods are manipulated by algorithmic number-crunching in corporate data farms, and, as a result, we may dream of becoming fitter, happier and more productive.

***


The broad current of behaviourism was not homogeneous in its theories, and nor are its modern technological avatars. The physiologist Ivan Pavlov induced dogs to salivate at the sound of a bell, which they had learned to associate with food. Here, stimulus (the bell) produces an involuntary response (salivation). This is called “classical conditioning”, and it is advertised as the scientific mechanism behind a new device called the Pavlok, a wristband that delivers mild electric shocks to the user in order, so it promises, to help break bad habits such as overeating or smoking.

The explicit behaviourist-revival sell here is interesting, though it is arguably predicated on the wrong kind of conditioning. In classical conditioning, the stimulus evokes the response; but the Pavlok’s painful electric shock is a stimulus that comes after a (voluntary) action. This is what the psychologist who became the best-known behaviourist theoretician, B F Skinner, called “operant conditioning”.

By associating certain actions with positive or negative reinforcement, an animal is led to change its behaviour. The user of a Pavlok treats herself, too, just like an animal, helplessly suffering the gadget’s painful negative reinforcement. “Pavlok associates a mild zap with your bad habit,” its marketing material promises, “training your brain to stop liking the habit.” The use of the word “brain” instead of “mind” here is revealing. The Pavlok user is encouraged to bypass her reflective faculties and perform pain-led conditioning directly on her grey matter, in order to get from it the behaviour that she prefers. And so modern behaviourist technologies act as though the cognitive revolution in psychology never happened, encouraging us to believe that thinking just gets in the way.

Technologically assisted attempts to defeat weakness of will or concentration are not new. In 1925 the inventor Hugo Gernsback announced, in the pages of his magazine Science and Invention, an invention called the Isolator. It was a metal, full-face hood, somewhat like a diving helmet, connected by a rubber hose to an oxygen tank. The Isolator, too, was designed to defeat distractions and assist mental focus.

The problem with modern life, Gernsback wrote, was that the ringing of a telephone or a doorbell “is sufficient, in nearly all cases, to stop the flow of thoughts”. Inside the Isolator, however, sounds are muffled, and the small eyeholes prevent you from seeing anything except what is directly in front of you. Gernsback provided a salutary photograph of himself wearing the Isolator while sitting at his desk, looking like one of the Cybermen from Doctor Who. “The author at work in his private study aided by the Isolator,” the caption reads. “Outside noises being eliminated, the worker can concentrate with ease upon the subject at hand.”

Modern anti-distraction tools such as computer software that disables your internet connection, or word processors that imitate an old-fashioned DOS screen, with nothing but green text on a black background, as well as the brain-measuring Muse headband – these are just the latest versions of what seems an age-old desire for technologically imposed calm. But what do we lose if we come to rely on such gadgets, unable to impose calm on ourselves? What do we become when we need machines to motivate us?

***

It was B F Skinner who supplied what became the paradigmatic image of behaviourist science with his “Skinner Box”, formally known as an “operant conditioning chamber”. Skinner Boxes come in different flavours but a classic example is a box with an electrified floor and two levers. A rat is trapped in the box and must press the correct lever when a certain light comes on. If the rat gets it right, food is delivered. If the rat presses the wrong lever, it receives a painful electric shock through the booby-trapped floor. The rat soon learns to press the right lever all the time. But if the levers’ functions are changed unpredictably by the experimenters, the rat becomes confused, withdrawn and depressed.
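That trial-and-error loop is simple enough to spell out in a few lines of code. What follows is a minimal sketch, in Python, of the reinforcement dynamic just described; the lever weights, learning step and trial count are invented for illustration and model no real experiment.

```python
import random

# A toy Skinner Box: a simulated "rat" chooses between two levers.
# Pressing the rewarded lever strengthens that choice; pressing the
# other weakens it. All numbers here are illustrative assumptions.

def condition(correct_lever=0, trials=500, step=0.1):
    weights = [1.0, 1.0]  # initial indifference between the levers
    for _ in range(trials):
        # Choose a lever with probability proportional to its weight.
        choice = 0 if random.random() < weights[0] / sum(weights) else 1
        # Food (reward) for the correct lever, shock (punishment) otherwise.
        reinforcement = step if choice == correct_lever else -step
        weights[choice] = max(0.05, weights[choice] + reinforcement)
    return weights

print(condition())  # the weight for lever 0 comes to dwarf that for lever 1
```

Run enough trials and the simulated preference drifts overwhelmingly towards the rewarded lever: conditioning, with no “mind” anywhere in the loop.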

Skinner Boxes have been used with success not only on rats but on birds and primates, too. So what, after all, are we doing if we sign up to technologically enhanced self-improvement through gadgets and apps? As we manipulate our screens for reassurance and encouragement, or wince at a painful failure to be better today than we were yesterday, we are likewise treating ourselves as objects to be improved through operant conditioning. We are climbing willingly into a virtual Skinner Box.

As Carl Cederström and André Spicer point out in their book The Wellness Syndrome, published last year: “Surrendering to an authoritarian agency, which is not just telling you what to do, but also handing out rewards and punishments to shape your behaviour more effectively, seems like undermining your own agency and autonomy.” What’s worse is that, increasingly, we will have no choice in the matter anyway. Gernsback’s Isolator was explicitly designed to improve the concentration of the “worker”, and so are its digital-age descendants. Corporate employee “wellness” programmes increasingly encourage or even mandate the use of fitness trackers and other behavioural gadgets in order to ensure an ideally efficient and compliant workforce.

There are many political reasons to resist the pitiless transfer of responsibility for well-being on to the individual in this way. And, in such cases, it is important to point out that the new idea is a repackaging of a controversial old idea, because that challenges its proponents to defend it explicitly. The Apple Watch and its cousins promise an utterly novel form of technologically enhanced self-mastery. But this is also merely the latest way in which modernity invites us to perform operant conditioning on ourselves, to cleanse away anxiety and dissatisfaction and become more streamlined citizen-consumers. Perhaps we will decide, after all, that tech-powered behaviourism is good. But we should know what we are arguing about. The rethinking should take place out in the open.

In 1987, three years before he died, B F Skinner published a scholarly paper entitled Whatever Happened to Psychology as the Science of Behaviour?, reiterating his now-unfashionable arguments against psychological talk about states of mind. For him, the “prediction and control” of behaviour was not merely a theoretical preference; it was a necessity for global social justice. “To feed the hungry and clothe the naked are remedial acts,” he wrote. “We can easily see what is wrong and what needs to be done. It is much harder to see and do something about the fact that world agriculture must feed and clothe billions of people, most of them yet unborn. It is not enough to advise people how to behave in ways that will make a future possible; they must be given effective reasons for behaving in those ways, and that means effective contingencies of reinforcement now.” In other words, mere arguments won’t equip the world to support an increasing population; strategies of behavioural control must be designed for the good of all.

Arguably, this authoritarian strand of behaviourist thinking is what morphed into the subtly reinforcing “choice architecture” of nudge politics, which seeks gently to compel citizens to do the right thing (eat healthy foods, sign up for pension plans) by altering the ways in which such alternatives are presented.

By contrast, the Apple Watch, the Pavlok and their ilk revive a behaviourism evacuated of all social concern and designed solely to optimise the individual customer. By using such devices, we voluntarily offer ourselves up to a denial of our voluntary selves, becoming atomised lab rats, to be manipulated electronically through the corporate cloud. It is perhaps no surprise that when the founder of American behaviourism, John B Watson, left academia in 1920, he went into a field that would come to profit very handsomely indeed from his skills of manipulation – advertising. Today’s neo-behaviourist technologies promise to usher in a world that is one giant Skinner Box in its own right: a world where thinking just gets in the way, and we all mechanically press levers for food pellets.

This article first appeared in the 18 August 2016 issue of the New Statesman, Corbyn’s revenge