Iraq: Why Blair was right

The responsibility to protect remains a powerful moral imperative.

I and others who supported the invasion of Iraq a decade ago did so because we thought that Saddam Hussein’s regime was among the worst in the world. This was, on the one hand, because of his propensity to violence against his neighbours – Iran during the 1980-88 war and Kuwait, which he attempted to annex in 1990 – and, on the other, because of his eager sponsorship of terrorist groups that saw the destruction of Israel as their life’s (and death’s) work, as did he.

But more significant still was the active delight in savagery in which he indulged, and passed on to his sons and presumed heirs. This he visited on the Kurds, on the southern marsh Arabs and on those, including members of his entourage, whom he suspected of disloyalty or who were linked to supposed traitors. It was a savagery which, unlike that of his hero Stalin, was not governed by a great deal of rational calculation: Saddam’s war against Iran was a disaster for his nation and the invasion of Kuwait still more so. The possibility that such a man might possess weapons of mass destruction was a nightmare for the world.

In the event, it seemed he did not possess, or no longer possessed, the weapons he either had once had, or desired and planned to have again once the sanctions regime loosened, as he reckoned it sooner or later would – a reasonable calculation, incidentally. The US and UK intelligence services believed he did have WMDs, as did all the other states with large foreign intelligence capabilities. According to some accounts, Saddam believed it, too, misled by aides who were afraid to tell him that the weapons had been destroyed. They were all wrong.

Two reports – by the UK’s Butler review of July 2004 (led by Lord Butler, a former cabinet secretary) and by the US Commission on the Intelligence Capabilities of the United States Regarding Weapons of Mass Destruction in March the following year – found that the respective intelligence agencies had made grave errors. The US commission was the harsher of the two, characterising the mistakes as cumulatively “one of the most public – and most damaging – intelligence failures in recent American history . . . in large part the result of analytical shortcomings; intelligence analysts were too wedded to their assumptions about Saddam’s intentions”.

The Butler report was more restrained, but also pointed to weaknesses in MI6’s checking of sources, a reliance on third- or fourth-hand sources and on dissidents, a surprising neglect of thorough information-checking in the prelude to the invasion by security officials and ministers alike, and, in particular, the weakness of the claim that Iraq could fire off rockets containing WMDs within 45 minutes – for which there was a source, but a highly dubious one.

However, Butler also concluded that Iraq was actively seeking WMDs, including the development of a nuclear weapons programme, before the invasion; that it was developing ballistic missiles with a longer range than permitted; and that the claim that it was trying to import uranium from Niger was credible, as was another that it was trying to buy mobile biological weapons labs.

Neither the UK nor the US report was the last word; no last word will ever be said, though the report of the Chilcot inquiry, more comprehensive than that of Butler and expected to run to a million words, is now scheduled for publication towards the end of this year. There is some crucial material that is still classified – including notes between the then British prime minister, Tony Blair, and the US president George W Bush.

Nonetheless, I maintain that an adequate summary of the position now is that Bush determined to attack Iraq because he believed, on some false premises and some sound ones, that it was an active regional and even global threat. Blair supported Bush because he believed the same, though he put more stress, both in private and in public, on the moral case for intervention, holding to the thesis that states have a “responsibility to protect” their citizens which, once flouted grossly and over time, is a prompt for external intervention. The doctrine of the “responsibility to protect” was then (and is still) a subject of much discussion at the United Nations. There was nothing resembling a consensus around it, and there will not be one any time soon, at least as long as powerful nations such as China and Russia regard it as being against their interests and while others, notably Germany and France, see it as being, at best, a very occasional duty.

Opponents of the war in Iraq, who I would guess are in a large majority in Europe and the US, often point to a wide spectrum of reasons for the invasion beyond the ones officially proclaimed. These include: the US’s desire to secure oil supplies; its wish to demonstrate in the harshest possible fashion its global dominance; revenge on the part of George W Bush for the attempted assassination by Iraqi agents of his father, the former president George H W Bush; Blair’s willingness to be an obedient lackey of Washington in the hope that Britain might get a share of the oil and other loot to be expected from an invasion; and his self-image as a global superhero, righting wrongs across the world. In many countries, especially in the Arab world but also in Europe, the invasion is seen as proof of Jewish control of finance, politics and the media, as well as Jewish sponsorship of the attacks on the Twin Towers and the Pentagon on 11 September 2001 in order to create a casus belli.

Much attention has focused on two issues: the falsity of the claim that Iraq possessed stocks of WMDs, with the suspicion, often amounting to certainty, that the intelligence was manipulated to show what Bush and Blair wanted it to show; and, in the UK, the belief that Blair had agreed with Bush to support the invasion long before he announced that the UK would participate, and also before parliament and the cabinet had agreed, as both did. Indeed, at least in the UK, these two issues usually drown out all others, especially on the left.

Experience over the past decade has shown that argument over these produces nothing but mutual incomprehension. For the record, I believe that: a) both the US and the UK governments accepted intelligence that pointed to Iraqi possession of WMDs, but interpreted it in the way most favourable to the case for invasion; and b) Blair wished to support the US largely because he had long thought Saddam a major threat (though he had failed to convince the former president Bill Clinton, who also believed that Iraq had WMDs, to take action), but he insisted that Bush take his invasion plan to the UN before giving Britain’s formal agreement.

In the immediate aftermath of the invasion, attitudes on the British left hardened quickly into a self-satisfied conviction that opposition was justified, together with the belief in many quarters that the UK’s involvement in Iraq was emblematic of a New Labour administration that was mendacious, servile to the US and scornful of the UN, the European Union and the rest of the “international community”. This was, and still is, the view of many senior officials. The attitude entailed – not everywhere, it ought to be said – a deliberate blindness to the dangers of a Saddam-led Iraq and to the clear threat that his determination to become a WMD-armed state would pose. There was blindness, too, to the American and British containment of Iraq by means of a no-fly zone over Kurdish areas, maintained with almost no support from other European states. And little attention was paid to the failure of the French, the Russians (who led the opposition to the invasion), the Germans and any other Europeans to develop what the American political philosopher Michael Walzer has called the “little war” alternative to the “big war” by the US and the UK – a little war that would have tightened the sanctions regime to the point where Saddam might have been required to change his behaviour.

There was some left-wing support for the post-invasion reconstruction of Iraq, which came mainly from elements in the British trade unions, rallied in large part by the tireless work of Abdullah Muhsin, an official of the students’ union in Iraq who had been forced to emigrate, and aided by the parliamentary researcher Gary Kent. Muhsin, Kent and others put before the left in this country the facts of the suppression of trade unions and workers’ movements, as well as calling for the occupying forces to leave Iraq and hand over power to democratic parties. The support came in the form of a handful of conference resolutions and a few visits to Iraq in solidarity (I went on one, as a journalist), but it had little practical effect.

There has also been a wilful blindness to the passivity of the EU on this and other security matters – something that is becoming more salient as the US furls its global security umbrella and concentrates on developing a relationship with China. Europe has not developed, and probably will not (at least not soon) develop, anything like a common security and defence policy. Strategic thinking is required, therefore, especially on the part of the major states, about how Europe could operate as a loose gathering of countries to promote peace and freedom beyond its borders. Unfortunately, much of the left’s rhetoric has remained on the level of blame – for the US, New Labour and Israel – with little engagement with the threats, and possibilities, of the world as it is now.

Those of us who were for the invasion may still be “right” – right, that is, on the kind of timescale assumed in Zhou Enlai’s supposed remark to Henry Kissinger in 1971 to the effect that it was “too soon to tell” what the consequences were of the French Revolution two centuries earlier. (It now seems likely that Zhou was referring to the 1968 students’ revolt in Paris, but, as a US diplomat present at the conversation said, the misunderstanding was “too delicious to invite correction”.) It cannot be known what would have happened if Saddam had remained in power: my guess is that sanctions would have decayed and that relatively soon there would have been a WMD-armed Iraq, just as there is likely to be a nuclear-armed Iran and already is a nuclear-armed North Korea.

Rightly or wrongly, we were too sanguine about the prospects of regime change, especially in the light of the mess made of it. This was perhaps inevitable, because even well-trained armies are good at destroying but not fitted for constructing. In my case, nearly a decade of reporting in central and eastern Europe and the former Soviet Union for the Financial Times predisposed me to see western intervention (not military in that instance) as benign, as I believe it generally was. But we did not anticipate that Iraqi forces who hated the US – including those loyal to Saddam – would dominate after the invasion, that the population would not be active in ensuring democratic choice as it had been in, say, Poland, and that the west had limited staying power. We were much influenced by Kanan Makiya’s searing book Cruelty and Silence (1993), which detailed the horrors of Iraq under Saddam and called for intervention – an intervention, the author argued, that would be greeted with “sweets and flowers”.

However, the responsibility to protect remains a powerful moral imperative. It must remain part of the armoury of those states with the power and the will to stop tyranny where it is possible to do so and where intervention is likely to work – as it did in Sierra Leone, in Kosovo and ultimately in Bosnia. It may work in Mali. More thought needs to be given to how it might work in Syria. For the left, the responsibility to protect should be part of a progressive view of global problems. That the principle has become synonymous with a kind of refurbished imperialism is a sign of decadence.

John Lloyd is a contributing editor of the Financial Times and a former editor of the New Statesman

A protestor hurls stones at a poster of Saddam Hussein in Baghdad on 10 April 2003. Photograph: Alex Majoli/Magnum Photos

This article first appeared in the 18 February 2013 issue of the New Statesman, Iraq: ten years on


This Ada Lovelace Day, let’s celebrate women in tech while confronting its sexist culture

In an industry where men hold most of the jobs and write most of the code, celebrating women's contributions on one day a year isn't enough. 

Ada Lovelace wrote the world’s first computer program. In the 1840s Charles Babbage, now known as the “father of the computer”, designed (though never built) the “Analytical Engine”, a machine which could accurately and reproducibly calculate the answers to maths problems. While translating an article by an Italian mathematician about the machine, Lovelace included a written algorithm that would allow the engine to calculate a sequence of Bernoulli numbers.
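What that program did is easy to show in modern terms. Here is a minimal, illustrative Python sketch (my own, not a transcription of Lovelace’s actual table of operations for the engine) of a standard recurrence that generates the same Bernoulli numbers her algorithm was designed to produce:

```python
# Illustrative sketch only: the standard recurrence
#   B_0 = 1,  B_m = -1/(m+1) * sum_{k=0}^{m-1} C(m+1, k) * B_k
# gives the Bernoulli numbers Lovelace's Analytical Engine program targeted.
from fractions import Fraction
from math import comb  # Python 3.8+

def bernoulli_numbers(n):
    """Return the exact Bernoulli numbers B_0 .. B_n (convention B_1 = -1/2)."""
    B = [Fraction(1)]                      # B_0 = 1
    for m in range(1, n + 1):
        total = sum(Fraction(comb(m + 1, k)) * B[k] for k in range(m))
        B.append(-total / (m + 1))         # exact rational arithmetic throughout
    return B

print([str(b) for b in bernoulli_numbers(8)])
# ['1', '-1/2', '1/6', '0', '-1/30', '0', '1/42', '0', '-1/30']
```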

Around 170 years later, Whitney Wolfe, one of the founders of dating app Tinder, was allegedly forced to resign from the company. According to a lawsuit she later filed against the app and its parent company, she had her co-founder title removed because, the male founders argued, it would look “slutty”, and because “Facebook and Snapchat don’t have girl founders. It just makes it look like Tinder was some accident”. (They settled out of court.)

Today, 13 October, is Ada Lovelace Day – an international celebration of inspirational women in science, technology, engineering and mathematics (STEM). It’s lucky we have this day of remembrance, because, as Wolfe’s story demonstrates, we also spend a lot of time forgetting and sidelining women in tech. In the wash of pale male founders of the tech giants that rule the industry, we don’t often think about the women who shaped its foundations: Judith Estrin, one of the designers of TCP/IP, for example, or Radia Perlman, inventor of the spanning-tree protocol. Both inventions sound complicated, and they are – they’re some of the vital building blocks that allow the internet to function.

And yet David Streitfeld, a Pulitzer Prize-winning journalist, somehow felt it accurate to write in 2012: “Men invented the internet. And not just any men. Men with pocket protectors. Men who idolised Mr Spock and cried when Steve Jobs died.”

Perhaps we forget about tech's founding women because the needle has swung so far in the other direction. A huge proportion – perhaps even 90 per cent – of the world’s code is written by men. At Google, women fill 17 per cent of technical roles. At Facebook, 15 per cent. Over 90 per cent of the code repositories on GitHub, an online service used throughout the industry, are owned by men. Yet it's also hard to believe that this erasure of women's role in tech is completely accidental. As Elissa Shevinsky writes in the introduction to a collection of essays on gender in tech, Lean Out: “This myth of the nerdy male founder has been perpetuated by men who found this story favourable.”

Does it matter? It’s hard to believe that it doesn’t. Our society is increasingly defined and delineated by code and the things it builds. Small slip-ups, like the lack of a period tracker on the original Apple Watch, or fitness trackers too big for some women’s wrists, gesture to the fact that these technologies are built by male-dominated teams, for a male audience.

In Lean Out, one essay written by a Twitter-based “start-up dinosaur” (don’t ask) explains how dangerous it is to allow one small segment of society to build the future for the rest of us:

If you let someone else build tomorrow, tomorrow will belong to someone else. They will build a better tomorrow for everyone like them… For tomorrow to be for everyone, everyone needs to be the one [sic] that build it.

So where did all the women go? How did we get from a rash of female inventors to a situation where the major female presence at an Apple iPhone launch is a model’s face projected onto a screen and photoshopped into a smile by a male demonstrator? 


The toxic culture of many tech workplaces could be a cause or an effect of the lack of women in the industry, but it certainly can’t make it easy to stay. Behaviours range from the ignorant – Martha Lane-Fox, co-founder of lastminute.com, was often asked “what happens if you get pregnant?” at investors' meetings – to the much more sinister. An essay in Lean Out by Katy Levinson details her experiences of sexual harassment while working in tech:

I have had interviewers attempt to solicit sexual favors from me mid-interview and discuss in significant detail precisely what they would like to do. All of these things have happened either in Silicon Valley working in tech, in an educational institution to get me there, or in a technical internship.

Others featured in the book joined in with the low-level sexism and racism of their male colleagues in order to “fit in” and deflect negative attention. Erica Joy writes that while working in IT at the University of Alaska as the only woman (and only black person) on her team, she laughed at colleagues' “terribly racist and sexist jokes” and “co-opted their negative attitudes”.

The casual culture and allegedly meritocratic hierarchies of tech companies may actually be encouraging this discriminatory atmosphere. HR and the strict reporting procedures of large corporates at least give those suffering from discrimination a place to go. A casual office environment can discourage reporting or calling out prejudiced humour or remarks. Brook Shelley, a woman who transitioned while working in tech, notes: “No one wants to be the office mother”. So instead, you join in and hope for the best.

And, of course, there's no reason why people working in tech would have fewer issues with discrimination than those in other industries. A childhood spent as a “nerd” can also spawn its own brand of misogyny – Katherine Cross writes in Lean Out that “to many of these men [working in these fields] it is all too easy to subconsciously confound women who say ‘this is sexist’ with the young girls who said… ‘You’re gross and a creep and I’ll never date you’”. During GamerGate, Anita Sarkeesian was often called a “prom queen” by trolls.

When I spoke to Alexa Clay, entrepreneur and co-author of The Misfit Economy, she confirmed that there's a strange, low-lurking sexism in the start-up economy: “They’re all very open and free, but underneath it there's still something really patriarchal.” Start-ups, after all, have a culture that celebrates risk-taking, something which women are societally discouraged from doing. As Clay says:

“Men are allowed to fail in tech. You have these young guys who these old guys adopt and mentor. If his app doesn’t work, the mentor just shrugs it off. I would not be able to get away with that, and I think women and minorities aren't allowed to take the same amount of risks, particularly in these communities. If you fail, no one's saying that's fine.”

The conclusion of Lean Out, and of women in tech I have spoken to, isn’t that more women, over time, will enter these industries and seamlessly integrate – it’s that tech culture needs to change, or its lack of diversity will become even more severe. Shevinsky writes:

The reason why we don't have more women in tech is not because of a lack of STEM education. It's because too many high profile and influential individuals and subcultures within the tech industry have ignored or outright mistreated women applicants and employees. To be succinct—the problem isn't women, it's tech culture.

Software engineer Kate Heddleston has a wonderful and chilling metaphor about the way we treat women in STEM. Women are, she writes, the “canary in the coal mine”. If one dies, surely you should take that as a sign that the mine is uninhabitable – that there’s something toxic in the air. “Instead, the industry is looking at the canary, wondering why it can’t breathe, saying ‘Lean in, canary, lean in!’. When one canary dies they get a new one because getting more canaries is how you fix the lack of canaries, right? Except the problem is that there isn't enough oxygen in the coal mine, not that there are too few canaries.” We need more women in STEM, and, I’d argue, in tech in particular, but we need to make sure the air is breathable first.

Barbara Speed is a technology and digital culture writer at the New Statesman and a staff writer at CityMetric.