Long reads
3 November 2023

Sam Bankman-Fried and the effective altruism delusion

Will the idealist philosophy survive the conviction of its crypto king?

By Sophie McBain

Editor’s note: This article was originally published on 20 September. On 3 November, the crypto king Sam Bankman-Fried was found guilty of fraud and money laundering. Sophie McBain looks at whether the idealist effective altruism philosophy will survive his conviction.

The moral philosopher met a brilliant American maths student who wanted to know how he could do the most good in the world. How about becoming a banker, the philosopher said.

No, seriously.

Think of it like this. With his high grades, the student could do pretty much anything. He could become a doctor in a poor country, and then he might perform life-saving surgery ten times a week, saving 500 lives a year. But if he became a banker he could donate enough money to finance the salaries of ten doctors, saving ten times as many lives. Plus, if he didn’t become a doctor, someone else was likely to do the same job. But if he didn’t become a banker, who else would step in to fund the doctors?
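The arithmetic behind that pitch fits in a few lines. A minimal sketch, taking the anecdote’s figures at face value and assuming roughly 50 working weeks a year:

```python
# Back-of-the-envelope version of the "earning to give" pitch, using the
# illustrative figures from the anecdote above.

surgeries_per_week = 10
working_weeks = 50                    # assumed ~50 working weeks a year
lives_saved_as_doctor = surgeries_per_week * working_weeks   # 500 a year

doctors_funded_as_banker = 10         # the anecdote's figure
lives_saved_as_banker = doctors_funded_as_banker * lives_saved_as_doctor  # 5,000 a year

print(f"As a doctor: ~{lives_saved_as_doctor} lives a year")
print(f"As a banker: ~{lives_saved_as_banker} lives a year, ten times as many")
```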

William MacAskill, the moral philosopher, was a leading “effective altruist”, someone who thought about morality in a scientific way, crunching the numbers to determine the “best” method of helping others. He had helped found a careers-advice organisation pitched at top graduates, one that encouraged would-be aid workers to consider more lucrative work so that they could “earn to give”.



And so the American maths student, Sam Bankman-Fried, took a job in finance, where he developed an interest in crypto. He set up his own firm and went on to make billions on crypto arbitrage, taking advantage of the price discrepancies that can arise in the digital-currency markets. He pledged hundreds of millions in donations, mostly to organisations linked to his friend, the moral philosopher. In interviews the student, by now a famous crypto king, explained that your incentives are different when you are making money in order to give it away. When you’re doing good, there are no diminishing marginal returns – and so it makes sense to take bigger gambles, to be aggressive.
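Crypto arbitrage of this kind is, in principle, straightforward: buy where a coin trades cheaply, sell where it trades dear, and pocket the spread. A minimal sketch with invented prices and fees; none of these figures describe Alameda’s actual trades:

```python
# Hypothetical cross-exchange arbitrage check. All figures are invented for
# illustration; a real strategy must also handle transfer times, slippage,
# withdrawal limits and exchange risk.

price_exchange_a = 9_800.0    # price of one coin on exchange A (USD)
price_exchange_b = 10_100.0   # price of the same coin on exchange B (USD)
fee_rate = 0.002              # assumed 0.2% trading fee on each leg

buy_cost = price_exchange_a * (1 + fee_rate)        # buy where it is cheap
sell_proceeds = price_exchange_b * (1 - fee_rate)   # sell where it is dear
profit_per_coin = sell_proceeds - buy_cost

if profit_per_coin > 0:
    print(f"Spread after fees: roughly ${profit_per_coin:.2f} per coin")
else:
    print("No profitable spread after fees")
```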

But the crypto king was also taking different kinds of risk. In December 2022 he was arrested and accused of defrauding his customers, owing them approximately $8bn. In February 2023, he asked his donees to return their gifts, or risk them being seized during bankruptcy proceedings.

On Twitter the philosopher expressed his “utter rage” at the crypto king, and his “sadness and self-hatred” for falling for his deception. He acknowledged that he had not heeded concerns that his own moral ideas could be misused. The crypto king’s actions ran contrary to effective altruism, which emphasises “the importance of integrity, honesty and the respect of common-sense moral constraints”, he tweeted.

Which would have been an adequate response if the story of the moral philosopher and the crypto king were merely a thought experiment, an abstract scenario used to test an idea. But real life is messier and more complex. On 2 November, the crypto king was convicted in a New York court on seven counts of fraud and money laundering. Questions have been asked of the philosopher, too: how much did he know? Was his big idea fundamentally flawed?

The story of how the Oxford philosopher MacAskill and the effective altruism (EA) movement found themselves entangled with the disgraced Bankman-Fried is partly about moral principles. Why should someone like Bankman-Fried find EA’s ideology so compelling – or at least convenient? But mostly it is about politics, the corruptive influence of power and money – the unforeseen (but not unpredictable) consequences of telling people that the best way to do good is to get rich.

Now, a thought experiment. Imagine you find a toddler drowning in a shallow pond. Should you wade in and save them, even if your clothes get muddy? Well, of course! Would you agree, then, that if it’s in your power to stop something very bad from happening, at minimal cost to yourself, you should do it?

What if a child is at risk of drowning in floodwaters many thousands of miles away, and a small donation could save their life? In his 1972 paper “Famine, Affluence, and Morality” the Australian philosopher Peter Singer argued that failing to contribute to disaster relief is morally equivalent to walking past a drowning toddler. Our moral obligations are much greater than almost anyone is willing to acknowledge, he wrote. It is morally wrong to buy unnecessary things – like new clothes – when we could be giving to life-saving causes.

Some people encountering Singer’s paper for the first time will try to find holes in it; others might not bother. It is much easier to file this argument alongside the other inconvenient information we’re required to ignore if we are to enjoy buying pointless stuff on this ever-warming planet. Effective altruists are different. They are generally people who have always taken morality far more seriously than their peers. They were the kids who turned vegetarian aged six and gave all their pocket money to charity, who volunteered at the local homeless shelter. Some have donated a kidney to a stranger; some volunteered to take part in Covid vaccine trials.

When the Bankman-Fried crisis broke, Julia Wise, the US-based community liaison for the Centre for Effective Altruism, said she felt “horrified and disappointed”. “So many people in the community had made really thoughtful choices about how to help others, and here was this person acting so recklessly and undoing other people’s good work,” she told me. A year on, she said most EAs were setting their outrage aside, as “all these important problems in the world are still there”.

Alongside the extreme do-gooders, EA has long fostered a darker, millenarian streak. Since Bankman-Fried’s downfall, it has faced more reputational crises. In January this year, an email written in 1997 by Nick Bostrom – a futurist philosopher whose ideas are influential within the movement – resurfaced in which he used the N-word and said he believed that black people are more stupid than white people. (He apologised for “this disgusting email from 26 years ago… [which] does not accurately represent my views then, or now”, while adding that inequality leads to differences in skills and cognitive ability.)

The following month, in a further blow for effective altruism’s reputation, Time magazine published allegations of widespread sexual harassment within the EA community.

How did a school of thought built on extreme altruism provide ideological cover for racism and sexism – and end up promoting Bankman-Fried? The person best placed to answer this would be MacAskill – he helped lay out EA’s philosophical framework, has sat on the boards of many major EA institutions and formerly described Bankman-Fried as a “collaborator” – but he has remained conspicuously quiet. He declined to be interviewed for this article, citing work commitments, and hasn’t commented on Bankman-Fried publicly since June, when he said that “even in hindsight” he had no reason to suspect the billionaire of fraud. He also posted a bizarre document he’d written in the third person titled “Will MacAskill should not be the face of EA”.

Illustration by André Carrilho

Even before he encountered Singer’s work, MacAskill demonstrated unusual moral zeal. While still a schoolboy, attending the private Hutchesons’ Grammar School in Glasgow, he worked at a care home for the elderly and volunteered with Glasgow Disabled Scouts. He studied philosophy at Cambridge and moved to Oxford as a postgraduate, where he met Toby Ord, a philosopher who was similarly committed to Singer’s challenge. Ord had pledged to give away all his earnings above £20,000. MacAskill followed suit.

As utilitarians, both men agreed that what mattered morally wasn’t the act of donating but the impact their donations would have. A £6 gift to an animal shelter would have a negligible impact – but the same donation to a malaria charity could protect a child in the developing world from a disease that kills over 600,000 people a year. One of the central insights driving EA was that most people give surprisingly little thought to the impact of their charitable giving. Donations tend to be driven by emotion – you give to a cancer charity because your grandmother died of the disease, or to a donkey charity because donkeys are cute – rather than reason. What if we tried to direct charitable funds more strategically, to make the money go further?

In 2009, Ord and MacAskill launched Giving What We Can, an organisation that encouraged people to pledge 10 per cent of their income to charity. They began with 23 members, among them Peter Singer; many of the others would go on to hold influential positions within EA organisations. One was Ben Todd, with whom MacAskill founded the careers organisation 80,000 Hours in 2011, promoting jobs that would enable graduates to give. In 2012 MacAskill and Ord founded the Centre for Effective Altruism, a charity aimed at building their movement, seeding EA chapters at universities, and encouraging the exchange of ideas at conferences and via an online EA forum. While Oxford University became a hub in the UK, EA also developed a strong presence in the US, through organisations such as GiveWell, Good Ventures and the Open Philanthropy foundation. It was also embraced by the rationalist movement, a subculture that developed around sites such as LessWrong.com and in Silicon Valley, whose adherents aim to expunge themselves of cognitive bias, applying probabilistic reasoning to every aspect of their lives.


For some people, the idea that moral choices are an intellectual problem – solvable with maths – is appealing. But how do you quantify good? In his 2015 book Doing Good Better, MacAskill proposed using “quality-adjusted life years” (QALYs), a measure used by welfare economists that attempts to quantify the detrimental effects of disabilities and diseases.

One year in perfect health is defined as 1 QALY, while one year of life with untreated Aids is 0.5, according to the weighting cited by MacAskill; for someone living with blindness, a year of life has a QALY score of 0.4; and for someone with moderate depression it is 0.3. MacAskill writes that QALYs can be used to decide which charitable causes to prioritise: faced with a choice between spending $10,000 to save a 20-year-old from blindness or the same amount on antiretroviral therapy for a 30-year-old with Aids – a treatment that will improve their life and extend it by ten years – MacAskill argues it would be better to perform the sight-saving surgery, as the 20-year-old can expect to live another 50 years. He acknowledges that QALYs are an “imperfect”, “contested” measure but sees them as mostly good enough.
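Written out as a rough calculation, the comparison looks like this. The weightings are those cited above; the post-treatment quality weighting for the Aids patient is my assumption, since the text does not give one, so this is a sketch of the logic rather than MacAskill’s exact sums:

```python
# Rough QALY comparison, following the example above.
# Weights cited: perfect health = 1.0, blindness = 0.4, untreated Aids = 0.5.

# Option A: cure a 20-year-old of blindness; they can expect 50 more years.
qalys_from_surgery = 50 * (1.0 - 0.4)      # 30 QALYs gained

# Option B: antiretroviral therapy for a 30-year-old with Aids, improving
# their life and extending it by ten years. The post-treatment weight (0.9)
# is an assumption; the text does not specify it.
qalys_from_treatment = 10 * 0.9            # ~9 QALYs gained from the extra years

print(f"Sight-saving surgery: ~{qalys_from_surgery:.0f} QALYs per $10,000")
print(f"Aids treatment:       ~{qalys_from_treatment:.0f} QALYs per $10,000")
```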

And yet, using QALYs is also a scientific-sounding way of valuing the life of a sighted person over that of a blind person: it suggests that when a fire engulfs a nursery, you should save the twin with good vision first. And how do QALYs translate across species? In his 1975 book Animal Liberation, Singer argued that a chimpanzee or dog might have a greater capacity for meaningful relationships than a “severely retarded infant”; in these circumstances, Singer writes, it may be better to save the life of the animal. At what point do you stop counting?

In 2013, a leading EA argued in his graduate thesis at Rutgers University that because people in richer countries are, on average, more productive and more innovative, it is better to save their lives than those of people living in poor countries. If you find such positions repugnant, is this a mistake in the maths or the measurements – or a problem of underlying values? EA encouraged robust debate on difficult subjects, but utilitarian calculations can be co-opted to justify extremely weird and potentially harmful positions.

Early critics of EA also highlighted its convenient political quietism: instead of paying for bed nets and de-worming treatments for the global poor (two causes EA deems highly effective), shouldn’t its proponents be agitating for changes to a political system that puts someone earning the mean UK salary in the top 2 per cent of earners globally? The impact of political action is hard to account for in a spreadsheet, but is that a reason to avoid it?

“Effective altruism doesn’t try to understand how power works, except to better align itself with it. In this sense it leaves everything just as it is,” the Oxford philosopher Amia Srinivasan wrote in the London Review of Books in 2015. Noting that EAs are mostly middle-class white men, she added: “This is no doubt comforting to those who enjoy the status quo – and may in part account for the movement’s success.”

EA was the radical movement that bankers and billionaires could buy into.

In the years before the Sam Bankman-Fried crisis, EA got rich fast. In 2021 Ben Todd of 80,000 Hours estimated that around $46bn had been committed to the movement worldwide. Much of this came from donors such as the Facebook co-founder Dustin Moskovitz and his wife, Cari Tuna, who had committed to giving away a majority of their estimated $25bn fortune through their EA-affiliated organisations Open Philanthropy and Good Ventures. And then there was Bankman-Fried.

In February 2022 Bankman-Fried launched another philanthropic organisation, the FTX Future Fund. MacAskill was on the advisory board. The fund pledged to spend “up to $1bn” in its first year, and by June 2022 claimed to have spent $132m. Of these grants, $36.5m went to charities and institutions grouped under the umbrella organisation Effective Ventures UK, at least four of which were co-founded by MacAskill: Giving What We Can, the Centre for Effective Altruism, 80,000 Hours and the Global Priorities Institute, a research centre. (MacAskill also chaired the trustees of Effective Ventures UK until 21 September, the day after this article went to press.) A few mega-donors now wielded a huge amount of unaccountable power over the movement.

But few EA leaders, many of whom are academic philosophers, had practical experience of running large businesses or charities. In interviews with me, disillusioned former members accused the leadership of arrogance. “The leaders were intellectually elitist and thought that, on their intelligence alone, they could make decisions better than people who had practical, demonstrable skills,” said one – who, like many people I spoke to, requested anonymity for fear of reprisals. “I honestly felt terrified… there were no adults in the room.”

The leadership was cliquey, and professional boundaries were blurred: EA’s donors, its executive teams, trustees and many beneficiaries were long-standing friends. Former members told me the movement became a bubble: people worked together, hung out together, slept together and consulted the same EA therapists.

People were sometimes afraid to criticise EA decisions because they worried about being seen as stupid, Larissa Hesketh-Rowe, a former CEO of the Centre for Effective Altruism, told me. “It could be hard to raise concerns and criticise decisions in a community so focused on intellect and impact. It can be easy to feel that, if something raises alarm bells to you but no one else, maybe that’s because you’re not smart enough or dedicated enough. In reality, the community, like any other, has blind spots.” Hesketh-Rowe, who left in 2019, felt that “when people believe their cause is especially important and that they are especially smart, it can be all too easy to stray into finding ways to justify behaviour that stems from normal human flaws. You don’t like someone: maybe they are bad for the movement. You want a higher salary: maybe you need that to be productive.”

As ever more funding was funnelled into EA institutions, the once ascetic movement started to shift its attitude towards money. Hesketh-Rowe told me that, “once the community got more money, there were more discussions of, ‘Well, if you can save time by spending money, maybe you should. Maybe you should take a taxi instead of taking the bus or the train. Get a nicer desk, spend more to move closer to work – if it’s going to make you more productive.’” It wasn’t a unique business philosophy, but how did it fit with EA’s principles? “The line of reasoning isn’t completely wrong, but that’s what makes it risky,” said Hesketh-Rowe. “You need strong character, a good culture and leadership to navigate it, otherwise it’s too easy to accidentally drift in the direction of corruption.”

This is how the movement that once agonised over the benefits of distributing $1 de-worming pills to African children ended up owning two large estates: the $3.5m Chateau Hostačov in the Czech Republic, purchased in 2022 by the small EA-affiliated European Summer Program on Rationality with a donation from Bankman-Fried’s FTX Foundation; and Wytham Abbey, a 15th-century manor house near Oxford, bought for £15m to host EA retreats and conferences. Wytham Abbey, which is undergoing restoration, was purchased by the Effective Ventures Foundation (the UK umbrella group for EA) using a £17m grant from Open Philanthropy (the US EA foundation set up by Moskovitz and Tuna).

On the EA forum, several people have questioned the “optics” of this purchase: “You’ve underestimated the damage this will do to the EA brand,” wrote one in late 2022. “The hummus and baguettes signal earnestness. Abbey signals scam.” But many defended the decision: “I’ve been to about 70 conferences in my academic career, and I’m [sic] noticed that the aesthetics, antiquity and uniqueness of the venue can have a significant effect on the seriousness with which people take ideas and conversations, and the creativity of their thinking,” one user wrote, adding: “I suspect that meeting in a building constructed in 1480 might help promote longtermism and multi-century thinking.”


Last year William MacAskill published his second popular philosophy book, What We Owe the Future, which outlined why he had come to believe in longtermism, the idea that positively influencing the long-term future is a key moral priority of our time. The book received glowing reviews and MacAskill was, inevitably, profiled by the New Yorker and Time. He celebrated its publication with a dinner at Eleven Madison Park, a vegan restaurant in Manhattan with a $438-a-head tasting menu, hosted by Bankman-Fried.

At the start of his book MacAskill asks the reader to “imagine living, in order of birth, through the life of every human being who has ever lived”. You are the colonised and the coloniser, the slaveholder and the slave; you will begin 300,000 years ago in Africa and after you have reached the present day you will press on, inhabiting the lives of humans who are born a thousand, a million or even a trillion years from now, should mankind one day colonise the stars. He then asks: if you knew you were going to live all of these lives, what would you want humanity to do in the present? It struck me, reading this book after FTX imploded, that MacAskill was calling on readers to transcend their humanity and make moral decisions as if they were God.

One argument for longtermism is that the suffering caused by an apocalyptic event would be so enormous that we are morally required to try to avert it, even if the possibility of such an event happening is relatively small. In a 2021 paper, MacAskill suggests that a reasonable estimate for the number of future humans is 10²⁴ – that’s 1,000,000,000,000,000,000,000,000 people who have not yet been born and to whom we owe some responsibility. (This is a conservative estimate, MacAskill notes, because should future generations succeed in creating digitally sentient beings and colonising the Milky Way, there may be 10⁴⁵ future lives.) If an intervention that reduces the risk of human extinction by 1 per cent is equivalent to saving ten quadrillion (10¹⁶) lives, spending money on bed nets suddenly doesn’t seem such a good moral investment. MacAskill goes on to calculate that, for example, spending $100 on pandemic preparedness could increase the number of future beings by 200 million, while the same donation to an anti-malaria charity will save only 0.025 lives.
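The per-donation comparison at the end of that paragraph can be written out directly. A sketch using only the two figures quoted above:

```python
# The per-$100 comparison quoted above, written out. The inputs are the
# figures given in the text; the ratio is simply their quotient.

future_beings_per_100_on_pandemic_prep = 200_000_000   # "200 million"
lives_per_100_on_bed_nets = 0.025

ratio = future_beings_per_100_on_pandemic_prep / lives_per_100_on_bed_nets
print(f"Pandemic preparedness scores {ratio:,.0f} times higher per $100 on this metric")
# 8,000,000,000 times: which is why, on this accounting, bed nets stop
# looking like such a good moral investment.
```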

The Oxford philosopher William MacAskill, a leading figure in the effective altruism movement. Photo by Matt Crockett

Critics of this form of longtermism have argued that at best it’s impractical: how can we possibly know what actions would best improve the lives of people born a million years from now, or what digital beings living on Mars would want? At worst, it is sinister, because it diverts funds away from people who are suffering now, in order to address abstract, distant problems. “Longtermism has the potential to destroy the effective altruism movement entirely, because by fiddling with the numbers, the above reasoning can be used to squash funding for any charitable cause whatsoever,” wrote Vaden Masrani, a computer scientist, in a blog response to MacAskill, noting that these estimates about the size of humanity were essentially “made-up numbers”. And yet, data analysis by the Economist found that by 2022, 40 per cent of effective altruism’s funding was directed towards longtermist causes.

Much of this money was spent on AI safety, an issue that many EAs argue is more urgent than climate change. EA’s detractors have noted that the movement’s tech donors stand to benefit from influencing the conversation around how AI is developed and regulated. Dustin Moskovitz’s Open Philanthropy was an early backer of OpenAI, developed to build safe AI that would “benefit all of humanity”. Sam Bankman-Fried was one of the largest funders of Anthropic, another leading AI company that promises to make AI safer. Anthropic was also backed by Moskovitz and the Skype founder, Jaan Tallinn, a prominent EA donor. Peter Thiel and Elon Musk, who invested early in OpenAI and DeepMind, have close ties to EA figures and have both been keynote speakers at EA conferences. (Musk has claimed that effective altruism is a “close match” for his philosophy.) Asked on the EA forum what the movement had achieved, one member replied: “[Has] played a substantial role in founding all three of the top leading AI capability companies (cry emoji).”

Earlier this year the prominent AI scientist Timnit Gebru described EA as a “dangerous cult”, one that discouraged its members from socialising with non-EA friends and family members, writing on her LinkedIn: “I don’t think people have any idea how much this cult is driving so much of AI.” She added that she and others had received messages from people who attended EA workshops in which they explored using violence – “such as murder and mail bombs” – to curb AI research. Gebru, who studies algorithmic bias, argues that by focusing attention on the future, big AI companies are distracting people from the real harms their technology is already causing.

Even before the Bankman-Fried crisis, a movement that once prided itself on an openness to criticism and debate was closing ranks. When two researchers published a paper in 2021 criticising EA’s undemocratic approach to existential risk – arguing, as Gebru does, that the future of humanity should not be decided by a handful of tech-utopians – they said they were urged by some EAs not to publish, in case it damaged funding to the cause.

On 10 November 2022, Sam Bankman-Fried tweeted that he had “f****d up”. It was an understatement. The following day, he announced that FTX had filed for bankruptcy as customers found themselves unable to withdraw their money. The five-member board of the FTX Future Fund (among them MacAskill) resigned, saying they were “shocked and immensely saddened” and “deeply regret the difficult, painful and stressful position” that the fund’s grantees were now in. The following month, Bankman-Fried was arrested. A week later the British regulator the Charity Commission opened an investigation into the Effective Ventures Foundation UK to assess the risk to its assets (it’s possible that FTX money will have to be returned) and to investigate potential conflicts of interest, as well as the way the charity was governed.

Recent lawsuits have shed more light on EA’s strange turn. During an action over Musk’s purchase of Twitter, messages were released showing that MacAskill had approached Musk offering to set up a meeting with his “collaborator” Bankman-Fried to discuss buying Twitter together and “then making it better for the world”. “You vouch for him?” Musk asked. “Very much so! Very dedicated to making the long-term future of humanity go well,” MacAskill replied, directing Musk to a tweet announcing the creation of the FTX Future Fund, “in case you want to get a feel for Sam”.


Yet in July 2023, a lawsuit filed in a federal bankruptcy court highlighted the “frequently misguided and sometimes dystopian” projects of Bankman-Fried’s FTX Foundation (which seeded the FTX Future Fund). The foundation, according to the complaint submitted to the court, was “a purported charity that served little purpose other than to enhance the public stature of [the] defendants”. The filing pointed to a $300,000 grant given in May 2022 for an individual to write a book “on how to figure out what humans’ utility function is”, and a $400,000 grant to a group that published YouTube videos about EA. The same court document referenced a memo in which Bankman-Fried’s brother Gabe appeared to discuss plans to purchase the Pacific island of Nauru, in order to build a “bunker/shelter” that would ensure “most EAs survive” an event that wiped out between half and 99.99 per cent of the global population.

In July I went to see William MacAskill speak at London’s Southbank Centre, where he – tall and boyish-looking, with a Scottish burr – shared a stage with the Oxford data scientist Hannah Ritchie and the YouTuber Ali Abdaal, who moderated. Abdaal asked flattering questions about MacAskill’s generosity, and it was a while before it was made clear that everyone on stage was sympathetic towards effective altruism. FTX and Sam Bankman-Fried were not mentioned or raised in questions from the audience. I sat close to a young woman who laughed heartily at MacAskill’s jokes and approached me afterwards to ask, with evangelical keenness, why I was taking notes. She worked for an EA organisation, too, and when I told her I was a journalist her smile turned rictus.

No one has suggested that senior EAs knew of Bankman-Fried’s fraud, but my conversations corroborated investigations by Time and the New Yorker suggesting that MacAskill and others had been repeatedly warned about Bankman-Fried’s business practices. Some of these warnings date back to 2018, when half the staff of Alameda Research – a crypto-trading firm co-founded by Bankman-Fried and staffed almost entirely with EAs – quit, accusing Bankman-Fried of being “negligent”, “unethical” and “misreporting numbers”.

There was a marked lack of curiosity, one former senior EA told me. “When a bunch of people who are trusted in the community leave and say that [Bankman-Fried is untrustworthy], if you didn’t investigate that thoroughly you’re being extremely foolish. And if you did, I think you would have known that there were reasons to at least be very concerned about the possible blowback of making him the poster boy of the movement.”

The former EA observed that earlier in 2022 another crypto billionaire affiliated with the movement, the British entrepreneur Ben Delo, pleaded guilty before a US court to failing to implement sufficient anti-money-laundering checks – an incident that ought to have prompted EA’s leadership to consider the risks of associating with the super-rich. The leadership’s judgement had been clouded by greed, said the former EA: “but a weird kind of greed. It’s greed to have more power and money behind your ideology – it’s philosopher’s greed.”

In the aftermath of FTX’s collapse, some EAs wondered why people within the community had shared stories extolling Bankman-Fried’s frugality – in media profiles, he was the billionaire who slept on a beanbag, lived in a house share and drove a Corolla – when so many had visited his $40m penthouse in the Bahamas. Some former EAs suggested to me that perhaps the disconnect between the legend and the reality was overlooked because Bankman-Fried spoke the right language: he was an admirably “hardcore utilitarian”.

The entrepreneur Lauren Remington Platt (left), Sam Bankman-Fried (centre) and the model Gisele Bündchen at a crypto festival in the Bahamas, April 2022. Photo by Joe Schildhorn / BFA.com / Shutterstock

In June this year, MacAskill seemed to concede that he had been naive. Writing on the EA forum, he said: “Looking forward, I’m going to be less likely to infer that, just because someone has sincere-seeming signals of being highly morally motivated, like being vegan or demonstrating credible plans to give away most of their wealth, they will have moral integrity in other ways, too.” The extent of Bankman-Fried’s sincerity is unclear: was he reckless because he believed the ends justified the means, or did he use EA to burnish his reputation? In a Vox interview conducted via Twitter DMs shortly after FTX collapsed, he suggested the latter: the “ethics stuff” was “a dumb game we woke Westerners play where we say all the right shiboleths [sic] and so everyone likes us”.

The former EAs I spoke to expressed anger at the movement’s leadership for falling silent in the aftermath. “Everybody’s fleeing the community and it makes me quite angry,” said one, “because these people were all happy to take talent, take status, while the times were good. And as soon as the times were bad, when they needed to step up, they decided to get on the first boat.” They cited MacAskill saying he shouldn’t be “the face of EA” and Holden Karnofsky, a co-CEO of Open Philanthropy, writing on forums that EA “isn’t a great brand for wide public outreach”.

Another ex-EA told me the movement needed to do more to care for its members. “There’s a tendency to encourage young, naive people to put themselves in risky situations, to make a lot of personal and financial sacrifices, and then to provide no support or safety net. Because suddenly you can be on the out – you can be labelled as a person who is not that smart, who doesn’t have good judgement, or who doesn’t have integrity.” For many, EA is their identity and their social and professional network, they said. “It means that if it goes wrong, your entire life falls apart.” Many people described struggling emotionally after leaving, because it was hard to shake the mindset, to still derive meaning and self-worth when you are no longer “maximising good”.

Now that Bankman-Fried’s trial is over, or when the Charity Commission’s investigation closes, perhaps William MacAskill will finally say more. It will be tempting for effective altruism’s defenders to continue to portray Sam Bankman-Fried as a rogue actor. But it would be more honest to acknowledge the role the movement played in the crisis, and the risks that come from allowing a few billionaires to determine how to do the “most good”. For their community to survive in a recognisable form, those involved in EA will have to grapple with its politics, and find some way to reconcile its anti-poverty idealism and its tendency towards Silicon Valley millenarianism. No graph or calculation will help them do that.

For now, the steady, determined work of the effective altruists continues – but not without frustration. “I am tired of having a few people put on pedestals because they are very smart – or very good at self-promotion,” one member wrote on the EA forum this year. “I am tired of being in a position where I have to apologise for sexism, racism and other toxic ideologies within this movement… I am tired of billionaires. And I am really, really tired of seeing people publicly defend bad behaviour as good epistemics.” A vegan social worker, she had committed to giving away over 10 per cent of her income. She wanted the world to be “a better, kinder, softer place”, she explained, so “I’m not quitting. But I am tired.”



This article appears in the 20 Sep 2023 issue of the New Statesman, The Rise and Fall of the Great Powers
