The end of accountability

In politics and business, faceless systems have taken over decision-making and infantilised society.

By Ed Smith

The critical analogy in this quirky and very intelligent book comes at its midpoint, awarded a little chapter all its own. “If you cut the connection between a cat’s cerebellum and the rest of its brain, it still looks alive,” Dan Davies writes. “It can walk, right itself when it falls over, and eat and drink when food and water are placed in front of it. It might even be able to groom and clean itself.” Next comes the point of the story: “But it is no longer capable of purposive action… it can’t generate its own responses to change.”

As with “decerebrate cats”, Davies adds, “you will often see a ‘decerebrate organisation’.” The example cited is a university. But I scribbled in the margin “UK government”.

Davies got the decerebrate cat/organisation analogy from Stafford Beer (1926-2002), the theorist and founder of “management cybernetics”, whose ideas are presented as a missed opportunity in the development of organisational thinking and a potential antidote for our current woes. More on him in a bit.

The Unaccountability Machine explores how organisations get into the bizarre but common situation of acting in line with “process” but against all logic. At the extreme end of the spectrum, Davies cites the Dutch airline KLM, which in 1999 discovered that 440 pet squirrels had been shipped on a flight without appropriate paperwork. Airport staff threw the squirrels into an industrial shredder. After the inevitably outraged public response, KLM issued a statement expressing “sincere regret” about this “unethical” outcome – while still protecting its position by saying that the employees had acted “formally correctly” in following government instructions.

Wrong – yes. Accountable – no. Sounds a lot like dealing with most big organisations. Hence our rage when presented with an employee – often the only human being we can find to talk to – whose exact role is to explain that they have no agency over the decision that caused the problem.

This accelerating trend may not be the result of conscious planning, but it’s certainly not accidental. The system is serving itself. And though he quickly softens the thesis, Davies opens with the hypothesis that “any organisation in a modern industrial society will tend to restructure itself so as to reduce the amount of personal responsibility attributable to its actions… until crisis results”. In short, systems naturally evolve into unaccountability machines.

We’ve all been on the wrong end of systems that have been designed to eliminate the capacity for absorbing nuanced context, and that are being executed without the exercise of human judgement. Worse still, such reductionism is usually presented not only as “efficient” (ie cheap) but also as “scientific”. Of course, these systems are quite the opposite. Applying judgement-free, quasi-scientific methods in inappropriate settings makes things worse rather than better – generating processes that infantilise and infuriate us, while resulting in reliably dumb decisions.

Davies is a surprising and provocative guide to an obvious societal problem that does not have an obvious fix. He writes as a professional economist, and his own subject – by coercing real-world information into dubious models that must yield an optimal “solution” – stands accused of creating “accountability sinks” in modern organisations. The “system” evolves, so it’s impossible to know where decisions originate. The centrality of economics in recent decades, Davies argues, has created a disastrous paradox: structures that promise clear solutions while getting them wrong – and opaquely so.

He rails against economics with a wry intensity, joining a distinguished group of thinkers (Nassim Nicholas Taleb, John Kay, Mervyn King) who have gone after the discipline. You begin to think that economics doesn’t need a rethink so much as a cornerman with a sense of pity, an Angelo Dundee figure, ready to shout, “Stop the fight – my guy has suffered enough!” We’re well past swabbing cuts and deep into white-towel territory.

Nonetheless, as an economics outsider, I wonder if Davies gives the discipline too much credit for doing too much harm. Was it overconfidence in economic thinking that led organisations to trust in dubious modelling, or was it the general pre-eminence of “scientific” modes of thought that led economics to take a wrong turn? An alternative reading of Davies’ argument pins less blame on the rise of economics, and more on the wider decline of the humanities.

Across most areas of modern life, the concept of good judgement has been in retreat, in favour of following “what the science says” – even when the situation under review doesn’t yield to scientific reasoning. The Covid lockdowns were a classic example. There is no scientific way of weighing the future psychological health of young people, deprived of formative socialisation that cannot be experienced later, against loss of life among generally older people. “What the science says” was often a way of avoiding hard thinking and choices.

There is an even wider framing of this issue than the decline of the humanities. What’s now lacking is healthy scepticism about misplaced rationalist systems that are running out of control at the expense of common sense and institutional restraint. Isn’t that the age-old position of small “c” conservatism?

Davies outlines Beer’s metaphor for types of systems and how to think about them. A “clear box” is something like a machine in a factory – knowledge is complete and bounded, and it’s obvious exactly how the process works. In a “muddy box”, you have some understanding of the relationship between inputs and outputs, but “the system keeps a few secrets” from you. With “black boxes”, you have to acknowledge that you’ll never understand the full complexity of the system.

The primary requirement when approaching a black-box system is to respect your ignorance and stop acting as though you can crack the code. Instead you must observe, gradually figuring out how the system behaves. Davies credits Stafford Beer and his discipline of cybernetics for understanding this point.

The implication is that if we’d listened to Beer in recent decades, rather than mathematical economics, we’d be better off. But I finished the book unclear how much Beer deserved his role as its hero. Yes, navigating black boxes requires more than a policy that can be written down and rationally implemented. Instead, it will depend on expert common sense, integrated thinking about how the parts and the whole interact, as well as – probably – inherited traditions and influences that people might not fully understand even as they benefit from them. By now, readers will have guessed the point: isn’t this the fundamental conservative idea of “institutional wisdom”?

I’d push this argument one notch further. Effective leaders in complex decision-making areas often bring with them a sense of mystery. I believe that such mystery is not deliberate pretentiousness, but instead derived from necessity. They genuinely don’t – and can’t – know exactly how they do things. As the cellist Robert Ripley wrote, “The better a conductor is, the less you know why.”

I have a second difference of emphasis with Davies. Where he fixes on the concept of accountability as the key issue, another reading of the present mess in politics and business would position accountability within a group of overlapping terms and concepts – and not necessarily as the most important one. Liability is the strongest and clearest: if things go wrong, you’re going to court. Close behind liability follows accountability: if things go wrong, you’re going to get blamed. (Increasing litigiousness, of course, leads to a convergence of liability and accountability.)

Overlapping both liability and accountability, but without the inbuilt comeuppance, is the concept of responsibility. It’s vaguer, and rests on moral rather than legal or procedural authority. But I’d argue that responsibility is more relevant (and useful) than accountability in most everyday contexts. And it’s responsibility, just as often as accountability, that bad systems and organisations are really fleeing from.

In talking too much about accountability, we don’t talk enough about responsibility. Let me give a personal example. When I was working as chief selector for England cricket, one criticism of the system was that we selectors were not “accountable” when England lost a match. So: did we feel accountable? As we neither batted, bowled nor coached in any individual game, technically it would be a stretch to say “yes”. But did we feel responsible? Most certainly. And responsibility deepened over the sequence of several matches, as selection decisions played out more fully (and individual results were less exposed to randomness).

More importantly, when we had the right people in the room (among the selectors, coach, captain, data analyst etc) the question of accountability never came up. Because we were all deeply committed to adding value to the team in different ways, there was more responsibility to go around than there was blame to apportion. Responsibility, unlike accountability, is not zero sum. By expanding the collective reserves of responsibility, organisations can increase their bandwidth and their bravery (even though the concept cannot be measured and it won’t show up in organisational charts). Overall, I suspect accountability is most useful as a concept when it flows naturally from good institutional habits, rather than being the founding goal of a good process.

Davies’ subtitle refers to “big systems” – but accountability sinks are everywhere now, infecting private corners of everyday life. Anyone who is a volunteer sports coach has probably been on the wrong end of a club sports app, which “helps” with things we used not to need help with – such as telling parents if their child has been selected in the under-11s at the weekend.

What’s the driver of appifying this basic stuff? I used to think it was regulatory – something about DBS checks or similar. Then I thought it might be financial: that we had reached the point where no one felt comfortable saying, “The match fee today to cover the umpires and cricket balls will be £5,” hence an app was invented to retain credit card details and do it automatically. But now I think it’s more general. The app creates a faceless buffer between decisions and decision-makers. In the process, a “convenience” is expanding into an unavoidable (and unaccountable) intermediary, and apps are morphing into a techno-priestly managerial layer that works against people talking directly to each other. The result is that you end up really hating the app, without really understanding how decisions are made. The stock of ill-will increases, and the agency of judgements by actual human beings is reduced by yet another increment.

While reading Dan Davies’ bleak analysis, sometimes you’ll nod, sometimes smile, sometimes grimace. What’s clear – whether you want to call them “accountability sinks” or “responsibility voids” – is that we need nothing less than a full-blown renaissance to get us out of the state we’re in.

Ed Smith is director of the Institute of Sports Humanities

The Unaccountability Machine: Why Big Systems Make Terrible Decisions – and How the World Lost Its Mind
Dan Davies
Profile Books, 304pp, £22

This article appears in the 02 Jul 2024 issue of the New Statesman, Labour’s Britain