No, Jane Austen was not a game theorist

Using science to explain art is a good way to butcher both, and is intellectually bankrupt to boot.

This article first appeared on newrepublic.com

Proust was a neuroscientist. Jane Austen was a game theorist. Dickens was a gastroenterologist. That’s the latest gambit in the brave new world of “consilience,” the idea that we can overcome the split between “the two cultures” by bringing art and science into conceptual unity – which is to say, by setting humanistic thought upon a scientific foundation. Take a famous writer, preferably one with some marketing mojo, and argue that their work anticipates contemporary scientific insights. Proust knew things about memory that neuroscientists are only now discovering. Austen constructed her novels in a manner that is consistent with game theory. Bang, there’s your consilience.

There is only one problem with this approach: it is intellectually bankrupt. Actually, there are a lot of problems, as Michael Suk-Young Chwe’s abominable volume shows. If this is the sort of thing that we have to look forward to, as science undertakes to tutor the humanities, the prospect isn’t bright. Game theory is a method for modeling decisions, especially in contexts that involve a multiplicity of actors, in mathematical terms. One would think, given its title, that Chwe’s book offers an in-depth game-theoretical analysis of the ways that Austen’s characters (specifically, her heroines and heroes) work through their choices (specifically, the ones they make in relation to one another) – why Elizabeth Bennet, to take the most obvious example, rejects Mr Darcy the first time he proposes but accepts him on the next go-round.

No such luck. What we really get, once we fight through Chwe’s meandering, ponderous, frequently self-contradictory argument, is only the claim that Austen wants her characters to think in game-theoretic ways: to reflect upon the likely consequences of their choices, to plan out how to reach their goals, to try to intuit what the people around them are thinking and how they in turn are likely to act. But this is hardly news. Austen describes a world in which young ladies have to navigate their perilous way to happiness (that is, a rich husband they can get along with, or, more charitably, a man they love who happens to be wealthy) by controlling their impulses and thinking coolly and deliberately. Act otherwise and you end up like Lydia Bennet, yoked forever to the feckless Mr Wickham. That Austen is no D H Lawrence – that she believed that reason should govern our conduct – is pretty much the most obvious thing about her work.

But Chwe himself is not content with being reasonable. When he says that Austen was a game theorist, he means for us to take him at his word. Never mind the fact that game theory did not emerge until the middle of the twentieth century. Austen, he claims, was a “social theorist” who “carefully establishes game theory’s core concepts” and “systematically explored” them in her novels, which are “game theory textbooks.” This is a perfectly valid statement, as long as we ignore the accepted meaning of most of the words it contains. Chwe apparently saw the title of Proust Was a Neuroscientist and took it literally. Jonah Lehrer, to give him what little credit he deserves, does not actually believe that the author of the Recherche conducted experiments with rats and prions. But Chwe insists that Austen’s novels do not just adumbrate some social-scientific concepts, they represent a pioneering “research program” into game theory (which, again, did not exist) that constituted her essential purpose in creating them. This, apparently, is how you achieve consilience: by pretending that artists are scientists in disguise.

We’ll get to the category errors in a minute. For now, let’s recognise that Chwe, a professor of political science with a PhD in economics, is making two rather large and improbable claims: that Austen programmatically developed such concepts as “choice (a person takes an action because she chooses to do so), preferences (a person chooses the action with the highest payoff), and strategic thinking (before taking an action, a person thinks about how others will act)” – thundering ideas, to be sure – and that she was the very first to show an interest in them.

Chwe falls down the moment he begins to make the case. “The most specific ‘smoking gun’ evidence that Austen is centrally concerned with strategic thinking is how she employs children: when a child appears, it is almost always in a strategic context,” as a pawn or bit player “in an adult’s strategic actions.” Really, that’s the best you can do? First of all, when a child appears in Austen, it isn’t almost always in a strategic context. She also often uses them – Emma’s nieces and nephews, for example, whom we see her love and care for – to certify the goodness of her heroine’s heart. More importantly, what would it prove if she did always use them in a strategic context? Children are not a privileged category of representation; in Austen, in fact, they are a very minor one, never more than incidental to the action. Yes, they are sometimes used strategically – but so are pianos and snowstorms and horses. So what?

The balance of Chwe’s evidence is comparably trivial. As a clincher, he cites the moment where Jane and Elizabeth Bennet find their comically pedantic sister Mary “deep in the study of thorough-bass and human nature.” Thorough-bass, he reasons, is a mathematical approach to music. By having Mary study music and human nature the same way, Austen suggests the possibility of a mathematical approach to the latter – that is, game theory. Don’t worry, I don’t get it either. No one said that Mary studies them the same way, only at the same time. Besides, as everyone but Chwe can see, the character is being held up as a figure of fun, not an intellectual role model. As hard as it is to believe that Austen undertook to construct a systematic approach to human behavior along game-theoretical lines, the notion that she did so within the kind of quantitative framework that exists today – decision trees, decision matrices, numerical inputs and outcomes – is truly idiotic.

As for the question of Austen’s priority as a “game theorist,” there is a grain of truth to the idea. She did depict strategic thinking in everyday social situations with a new depth, a new detail, and a number of new techniques – literary techniques, such as free indirect discourse, not mathematical ones. But she was hardly the first in the field. As even Chwe acknowledges (as quickly as he can), literature has been exploring the mind, and strategic thinking in particular, for as long as it has existed. The Odyssey, the story of a master strategist, is the most obvious early example. But the whole history of stage comedy, with its tricky servants and maneuvering lovers, as well as of dramatic tragedy – Hamlet, Iago, Richard III, Edmund in King Lear (as well as Lear himself, as a failed example), not to mention Marlowe’s Barabas and Jonson’s Volpone – is replete with schemers. The ways that people try to use each other to achieve their ends, and the grief they often come to in the process, is a central subject of classical theater, as well as of a giant chunk of the other narrative genres.

But neither Homer, nor Shakespeare, nor Austen, nor any other writer worth their salt believed that people think only strategically. You see, it is not enough for game theory to analyse strategic thought; at least in Chwe’s account, it regards such thinking as the exclusive explanation of human behavior. Chwe runs through a series of alternatives – emotions, instincts, habits, rules, social factors, ideology, intoxication (not being in your right mind), the constraints of circumstance – claiming to show that Austen rejects them as possible sources of action. But Austen wasn’t dumb enough to think that people never act out of habit or instinct or sudden emotion. All Chwe really shows is that she thought they shouldn’t.

Austen knew, in other words, that human motivation is enormously complex. Reducing it to any single factor – well, for that you need a social scientist. Great literature has the power, through painstaking art, to fashion a convincing representation of human behavior in all its inextricable, mysterious, and endlessly ramifying mixture of sources. That is why it never becomes obsolete. What does become obsolete are the monocausal theories of people such as Chwe. Literature puts back everything the social sciences – by way of methodological simplification, or disciplinary ideology, or just plain foolishness – take out. That is why the finest literature responds to every monocausal theory you can throw at it. Shakespeare was a game theorist, too – and a neuroscientist, and a political scientist, and a Freudian, and a Marxist, and a Lacanian, and a Foucauldian, and all the -ists and -ians that we haven’t yet devised.

Though really, of course, he was none of these. He was a dramatist, just as Austen was a novelist. She didn’t write textbooks, she had no use for concepts, and she wasn’t interested in making arguments. If she had a research program, as Chwe insists, it was into the techniques of fiction and the possibilities of the English language. She was no more a social theorist than Marx or Weber was a novelist. Chwe has much to say about “cluelessness,” the inability to think strategically, another concept he insists that Austen pioneered. After cataloging five Austenian varieties of the phenomenon, he adds an equal number of his own. But he forgets a few. You can also be clueless because you have sworn allegiance to a theory, or because you never learned to handle the material in question, or because you didn’t do the work to find out what you’re talking about, or because you want to get an academic promotion and need to publish another book. Jane Austen, game theorist: as Mencken, the great American bard of cluelessness, said, “There is no idea so stupid that you can’t find a professor who will believe it.” Usually, of course, because he thought it up himself. 

 

Chwe’s book, apparently, has made a stir in social-scientific circles – that is, among the kind of readers who know even less about Jane Austen, and literature in general, than he does. A depressing enough thought, but what really bothers me is that his titular idea is the kind of effluent that contaminates the cultural water supply. Without even opening his book, a lot of otherwise intelligent people are going to go around believing that Jane Austen “was” a game theorist, just as lots of them undoubtedly believe that Proust “was” a neuroscientist. Which means that Chwe’s book, like Lehrer’s, reinforces the notion that art is merely a diffuse or coded form of scientific or social-scientific knowledge, and that its insights are valid only insofar as they approximate (or can be made to seem to approximate) those of those disciplines – or worse, the latest fashions in those disciplines.

Lehrer is pretty direct about this. Contemporary science is “true,” and that art is best which best accords with it. “Their art,” he writes in reference to the eight creators, largely modernist, whom he discusses in his book, “proved to be the most accurate, because they most explicitly anticipated our science.” Poor Sophocles, poor Rembrandt. But art is not about being accurate, the way that the solution to an equation can be accurate; it is about being expressive. Art does not have winners. Cézanne might have been “right” about the cognitive science of vision, as Lehrer tells us, but there are many ways, in art, of being right. Raphael, Vermeer, Turner, Matisse – they were also right, and still are.

Insofar as we do sometimes talk about art as if it had winners, it is not because of science. We speak of Shakespeare as supreme among the writers not because he had a systematic conception of human behavior (and if he did, it was probably the medieval theory of the humors), but because his work has been felt to constitute, persistently and by the widest range of people, the most profound and powerful representation – not explanation – of our shared experience. It doesn’t matter, in that respect, what science happens to believe today about the material substrate of that experience, which may not be what it will believe tomorrow. Whom would Lehrer have anointed, in the visual arts, if he had written half a century ago? Not Cézanne, just as it is likely not to be Cézanne half a century from now. Lehrer can point, in retrospect, to the art that best accords with the current state of scientific knowledge, but what about the artist who proposes something science hasn’t (yet) discovered? How can we guess what it will?

Lehrer belongs to the “we used to think ... now we know” school of science writing. He understands that scientific discoveries are always provisional, but he keeps pushing the recognition away. He also knows that art and science do not belong to the same order of knowledge, but he cannot sustain the idea. Although his writing is more stylish than Chwe’s, his command of his material is not much more sophisticated. Before the middle of the nineteenth century, Lehrer believes, the arts were merely “pretty or entertaining.” (You know – Goya, Beethoven, Swift.) Then came modernism, inspired by the science of its time (a claim he never supports and, in seeking to align his subjects with the science of our time, frequently contradicts). Lehrer is the kind of person who believes that people woke up on January 1, 1500 and started speaking Modern English. “Cézanne invented modernist art.” Stravinsky “steeped himself in angst.” As for Gertrude Stein, “after a few years, her revolution petered out, and writers went back to old-fashioned storytelling.” “All of these artists,” Lehrer tells us, “shared an abiding interest in human experience.” Really, all of these artists? “In a move of stunning arrogance and ambition, they tried to invent fictions that told the truth.” Too bad Dante never thought of that. Lehrer, innocent of subtlety or history or depth, with no idea of how much he doesn’t know, is like the college student who comes home for winter break, all eager to regurgitate the things he has learned in Freshman Humanities.

Like Chwe’s, his argument advances through hyperbole, self-contradiction, oversimplification, and sheer incoherence. Maybe those are no more than the failures of these two men in particular, but I think they point to something larger. I have read other efforts to analyse artistic phenomena in scientific terms – most notably, in the emerging field of literary Darwinism, itself an outgrowth of the highly dubious discipline of evolutionary psychology – and they tend to falter in the same sorts of ways. At best, they tell us things we already know – and know immensely better – through humanistic means. They are almost always either crushingly banal or desperately wrongheaded. Pride and Prejudice is about mate selection. Hamlet struggles to choose between personal and genetic self-interest: killing Claudius and usurping his throne (but the latter never crosses his mind) or letting Gertrude furnish him with siblings (though since Hamlet is already thirty, that isn’t very probable). Interpretive questions are not responsive to scientific methods. It isn’t even like using a chainsaw instead of a scalpel; it’s like using a chainsaw instead of a stethoscope. The instrument is not too crude; it is the wrong kind altogether.

The problem of “the two cultures” is not, in fact, a problem at all. There’s a reason that art and science are distinct. They don’t just work in different ways; they work on different things. Science addresses external reality, which lies outside our minds and makes itself available for objective observation. The arts address our experience of the world; they tell us what reality feels like. That is why the chain of consilience ruptures as we make the leap from material phenomena to the phenomena of art. Physics can explain chemistry, which can explain biology, which can explain psychology, and psychology might someday tell us, at least in the most general terms, how we create art and why we respond to it. But it will never account for the texture, the particularities, of individual works, or tell us what they mean. Nor will it explain the history of art: its moments, its movements, the evolution of its modes and styles, the labyrinths of influence that join its individual creators. The problem isn’t just that there is so much data that is unrecoverable. It’s too late now to turn up Sappho’s DNA, but even if we dug up Austen’s bones and sequenced her genome, it would never tell us why she wrote Persuasion, or how she came up with the opening of Pride and Prejudice, or what we are supposed to make of Emma. Art is experiential. It doesn’t just speak of experience; it needs to be experienced itself, inhabited in ways that proofs and formulae do not. And experience cannot be weighed or measured; it can only be evoked.

Scientists did not appreciate it when the “science studies” hucksters were attempting to usurp their turf, nor should they have. Even the disciples of consilience seemingly retain a dim awareness of the independent validity of humanistic knowledge, as witnessed by their tendency to appeal to Shakespeare – that is, to argue that he lends support to this or that contemporary view of human nature. So here is a suggestion: why not simply go to him directly, and learn what else he had to say? And after Shakespeare, you can turn to Virgil, and Goethe, and Tolstoy, and Rumi, and Murasaki – and, well, the possibilities are endless. You can give yourself, in other words, a humanistic education. And that is neither game nor theory.

William Deresiewicz is a contributing editor at The New Republic. His new book, Excellent Sheep: The Miseducation of the American Elite and the Way to a Meaningful Life (Free Press), will be published this summer.

How nature created consciousness – and our brains became minds

In From Bacteria to Bach and Back, Daniel C Dennett investigates the evolution of consciousness.

In the preface to his new book, the philosopher Daniel Dennett announces proudly that what we are about to read is “the sketch, the backbone, of the best scientific theory to date of how our minds came into existence”. By the end, the reader may consider it more scribble than spine – at least as far as an account of the origins of human consciousness goes. But this is still a superb book about evolution, engineering, information and design. It ranges from neuroscience to nesting birds, from computing theory to jazz, and there is something fascinating on every page.

The term “design” has a bad reputation in biology because it has been co-opted by creationists disguised as theorists of “intelligent design”. Nature is the blind watchmaker (in Richard Dawkins’s phrase), dumbly building remarkable structures through a process of random accretion and winnowing over vast spans of time. Nonetheless, Dennett argues stylishly, asking “design” questions about evolution shouldn’t be taboo, because “biology is reverse engineering”: asking what some phenomenon or structure is for is an excellent way to understand how it might have arisen.

Just as in nature there is design without a designer, so in many natural phenomena we can observe what Dennett calls “competence without comprehension”. Evolution does not understand nightingales, but it builds them; your immune system does not understand disease, but it fights it. Termites do not build their mounds according to blueprints, and yet the results are remarkably complex: reminiscent in one case, as Dennett notes, of Gaudí’s church the Sagrada Família. In general, evolution and its living products are saturated with competence without comprehension, with “unintelligent design”.

The question, therefore, is twofold. Why did “intelligent design” of the kind human beings exhibit – by building robotic cars or writing books – come about at all, if unintelligent design yields such impressive results? And how did the unintelligent-design process of evolution ever build intelligent designers like us in the first place? In sum, how did nature get from bacteria to Bach?

Dennett’s answer depends on memes – self-replicating units of cultural evolution, metaphorical viruses of the mind. Today we mostly use “meme” to mean something that is shared on social media, but in Richard Dawkins’s original formulation of the idea, a meme can be anything that is culturally transmitted and undergoes change: melodies, ideas, clothing fashions, ways of building pots, and so forth. Some might say that the only good example of a meme is the very idea of a meme, given that it has replicated efficiently over the years despite being of no use whatsoever to its hosts. (The biologist Stephen Jay Gould, for one, didn’t believe in memes.) But Dennett thinks that memes add something important to discussions of “cultural evolution” (a contested idea in its own right) that is not captured by established disciplines such as history or sociology.

The memes Dennett has in mind here are words: after all, they reproduce, with variation, in a changing environment (the mind of a host). Somehow, early vocalisations in our species became standardised as words. They acquired usefulness and meaning, and so, gradually, their use spread. Eventually, words became the tools that enabled our brains to reflect on what they were doing, thus bootstrapping themselves into full consciousness. The “meme invasion”, as Dennett puts it, “turned our brains into minds”. The idea that language had a critical role to play in the development of human consciousness is very plausible and not, in broad outline, new. The question is how much Dennett’s version leaves to explain.

Before the reader arrives at that crux, there are many useful philosophical interludes: on different senses of “why” (why as in “how come?” against why as in “what for?”), or in the “strange inversions of reasoning” offered by Darwin (the notion that competence does not require comprehension), Alan Turing (that a perfect computing machine need not know what arithmetic is) and David Hume (that causation is a projection of our minds and not something we perceive directly). Dennett suggests that the era of intelligent design may be coming to an end; after all, our best AIs, such as the AlphaGo program (which beat the human European champion of the boardgame Go 5-0 in a 2015 match), are these days created as learning systems that will teach themselves what to do. But our sunny and convivial host is not as worried as some about an imminent takeover by intelligent machines; the more pressing problem, he argues persuasively, is that we usually trust computerised systems to an extent they don’t deserve. His final call for critical thinking tools to be made widely available is timely and admirable. What remains puzzlingly vague to the end, however, is whether Dennett actually thinks human consciousness – the entire book’s explanandum – is real; and even what exactly he means by the term.

Dennett’s 1991 book, Consciousness Explained, seemed to some people to deny the existence of consciousness at all, so waggish critics retitled it Consciousness Explained Away. Yet it was never quite clear just what Dennett was claiming didn’t exist. In this new book, confusion persists, owing to his reluctance to define his terms. When he says “consciousness” he appears to mean reflective self-consciousness (I am aware that I am aware), whereas many other philosophers use “consciousness” to mean ordinary awareness, or experience. There ensues much sparring with straw men, as when he ridicules thinkers who assume that gorillas, say, have consciousness. They almost certainly don’t in his sense, and they almost certainly do in his opponents’ sense. (A gorilla, we may be pretty confident, has experience in the way that a volcano or a cloud does not.)

More unnecessary confusion, in which one begins to suspect Dennett takes a polemical delight, arises from his continued use of the term “illusion”. Consciousness, he has long said, is an illusion: we think we have it, but we don’t. But what is it that we are fooled into believing in? It can’t be experience itself: as the philosopher Galen Strawson has pointed out, the claim that I only seem to have experience presupposes that I really am having experience – the experience of there seeming to be something. And throughout this book, Dennett’s language implies that he thinks consciousness is real: he refers to “conscious thinking in H[omo] sapiens”, to people’s “private thoughts and experiences”, to our “proper minds, enculturated minds full of thinking tools”, and to “a ‘rich mental life’ in the sense of a conscious life like ours”.

The way in which this conscious life is allegedly illusory is finally explained in terms of a “user illusion”, such as the desktop on a computer operating system. We move files around on our screen desktop, but the way the computer works under the hood bears no relation to these pictorial metaphors. Similarly, Dennett writes, we think we are consistent “selves”, able to perceive the world as it is directly, and acting for rational reasons. But by far the bulk of what is going on in the brain is unconscious, low-level processing by neurons, to which we have no access. Therefore we are stuck at an “illusory” level, incapable of experiencing how our brains work.

This picture of our conscious mind is rather like Freud’s ego, precariously balanced atop a seething unconscious with an entirely different agenda. Dennett explains wonderfully what we now know, or at least compellingly theorise, about how much unconscious guessing, prediction and logical inference is done by our brains to produce even a very simple experience such as seeing a table. Still, to call our normal experience of things an “illusion” is, arguably, to privilege one level of explanation arbitrarily over another. If you ask me what is happening on my computer at the moment, I shall reply that I am writing a book review on a word processor. If I embarked instead on a description of electrical impulses running through the CPU, you would think I was being sarcastically obtuse. The normal answer is perfectly true. It’s also true that I am currently seeing my laptop screen even as this experience depends on innumerable neural processes of guessing and reconstruction.

The upshot is that, by the end of this brilliant book, the one thing that hasn’t been explained is consciousness. How does first-person experience – the experience you are having now, reading these words – arise from the electrochemical interactions of neurons? No one has even the beginnings of a plausible theory, which is why the question has been called the “Hard Problem”. Dennett’s story is that human consciousness arose because our brains were colonised by word-memes; but how did that do the trick? No explanation is forthcoming. Dennett likes to say the Hard Problem just doesn’t exist, but ignoring it won’t make it go away – even if, as his own book demonstrates, you can ignore it and still do a lot of deep and fascinating thinking about human beings and our place in nature.

Steven Poole’s books include “Rethink: the Surprising History of New Ideas” (Random House Books)

This article first appeared in the 16 February 2017 issue of the New Statesman, The New Times