What makes us alive? And, more to the point, what makes us dead?

There’s a claustrophobic moment in the new film of Stephen Hawking’s life when he describes his wife being given the option to let him die. It was 1985 and A Brief History of Time was a still-unpublished manuscript. Hawking had been hospitalised with pneumonia. He was placed on a life-support machine and put into a drug-induced coma. The doctors asked Jane Hawking if she wanted them to turn off the machine.
 
We can all be glad she said no, otherwise the planet would have been much the poorer for the past 28 years. Nonetheless, the shadow of death hangs over the whole film. One day – and it may not be many years away – Hawking will be no more. His declaration in September that assisted suicide should be possible without fear of prosecution suggests he might be squaring up to the idea.
 
Death seems to be the one thing that sets human beings apart: we are aware, unlike most (if not all) other animals, of our impending demise. Worse – as Jane Hawking knows too well – in this technological age, we have to make fine decisions about death. And here the advance of science seems to offer more hindrance than help.
 
Death is not what it was. Until half a century ago, if you couldn’t breathe, you would soon be officially dead. Then someone invented the ventilator. Is a body that needs a machine to operate its lungs still alive? For sure, we now say.
 
It’s no longer the case that the heart has any jurisdiction over whether you’re dead. Remember the Bolton Wanderers footballer Fabrice Muamba? His heart stopped for 78 minutes before defibrillation got it started again. It’s a testament to our scientific resourcefulness that we have learned how to choreograph the pulses of electrical current that will kick-start a long-immobile heart. Nonetheless, this, too, has complicated the notion of being “alive”.
 
Even what has been termed “brain death” is not enough. A lack of electrical activity inside your skull is not a sign that your brain cells are all dead. The brain takes up to eight hours to start dying, and you can lose a lot of cells before significant damage ensues. What’s more, damage to some cells makes permanent loss of consciousness inevitable, while damage to others isn’t much of a problem.
 
Perhaps the most extreme technological management of death is among those who have paid to have their bodies frozen. Their hope is that future technologies will be able to defrost them and repair the damage that freezing cells full of water inevitably causes. This is not the last refuge of the frightened fool: plenty of our finest minds, including the MIT professor of artificial intelligence Marvin Minsky, have signed up to be cryo-preserved.
 
So, when it comes to death, science is part of the problem as well as part of the solution. Deepening our understanding of the body’s processes and learning how to keep them going longer has complicated and obfuscated the end of life. That’s why a few researchers have suggested that doctors are no longer qualified to make life-and-death decisions. Robert Veatch, a medical ethicist at Georgetown University, goes further: he thinks you should be allowed to come up with your own definition of death and inscribe it in a living will for others to respect.
 
It would certainly be nice to have a say – especially when you can see it coming. Long live Stephen Hawking. As long as he wants, that is.

Michael Brooks holds a PhD in quantum physics. He writes a weekly science column for the New Statesman, and his most recent book is At the Edge of Uncertainty: 11 Discoveries Taking Science by Surprise.

This article first appeared in the 30 September 2013 issue of the New Statesman, The Tory Game of Thrones

A quote-by-quote analysis of how little Jeremy Hunt understands technology

Can social media giants really implement the health secretary’s sexting suggestions? 

In today’s “Did we do something wrong? No, it was social media” news, Health Secretary Jeremy Hunt has argued that technology companies need to do more to prevent sexting and cyber-bullying.

Hunt, whose job it is to help reduce the teenage suicide rate, argued that the onus for reducing the teenage suicide rate should fall on social media companies such as Facebook and Twitter.

Giving evidence to the Commons Health Committee on suicide prevention, Hunt said: “I think social media companies need to step up to the plate and show us how they can be the solution to the issue of mental ill health amongst teenagers, and not the cause of the problem.”

Pause for screaming and/or tearing out of hair.

Don’t worry, though: Hunt wasn’t simply trying to pass the buck (despite the committee suggesting he direct more resources to suicide prevention), as he offered extremely well-thought-out technological solutions that are in no way inferior to providing better sex education for children. Here’s a quote-by-quote analysis of just how technologically savvy Hunt is.

***

“I just ask myself the simple question as to why it is that you can’t prevent the texting of sexually explicit images by people under the age of 18…”

Here’s Hunt asking himself a question that he should be asking the actual experts, which is in no way a waste of anybody’s time at all.

“… If that’s a lock that parents choose to put on a mobile phone contract…”

A lock! But of course. But what should we lock, Jeremy? Should teenagers’ phones come with a ban on all social media apps and, for good measure, a block on the use of the camera app itself? It’s hard to see how this would lead to the use of dubious applications that have significantly less security than giants such as Facebook and Snapchat. Well done.

“Because there is technology that can identify sexually explicit pictures and prevent it being transmitted.”

Erm, is there? Image-recognition technology does exist, but it’s incredibly complex and expensive, so companies often rely on other information (such as URLs, tags, and hashes) to filter out and identify explicit images. In addition, social media sites like Facebook rely on their users to click the button that flags an image as a breach of their guidelines, and then have human teams look through the reported images. The technology simply cannot identify the individual, unique images that teenagers take of their own bodies, and the idea of a human team tackling the job is preposterous.
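
To see concretely why hash-matching only catches known material, here is a minimal, purely illustrative Python sketch; the function name and the blocklist contents are invented for the example:

```python
import hashlib

# Hypothetical blocklist of SHA-256 hashes of known explicit images,
# of the kind a platform might hold or receive from a watchdog body.
KNOWN_EXPLICIT_HASHES = {
    "9f2feb0f1ef425b292f2f94bcbf3bdae9138e08989f5f38a6f74bbbb21cf8e06",
}

def is_known_explicit(image_bytes: bytes) -> bool:
    """Flag an image only if this exact file has been seen and hashed before."""
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_EXPLICIT_HASHES

# The catch: a brand-new photo, or even the same photo re-saved, resized,
# or altered by a single byte, hashes to something entirely different.
original = b"...image bytes..."
tweaked = original + b"\x00"
print(is_known_explicit(original))  # False unless this exact file is listed
print(is_known_explicit(tweaked))   # False: the hash no longer matches anything
```

Perceptual-hashing systems such as Microsoft’s PhotoDNA can survive resizing and small edits, but even they can only match images already sitting in a database; a unique photo a teenager has just taken matches nothing.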

But suppose the technology did exist to flawlessly scan a picture for fleshy bits and bobs? As a tool for preventing sexting, it would still be deeply flawed. What if two teens wanted to message one another Titian’s Venus for an art or history class? In September, Facebook itself was forced into a U-turn after removing the historical “napalm girl” photo from the site.

As for the second part of Jezza’s suggestion: if you can’t identify it, you can’t block it. Facebook Messenger already blocks you from sending pornographic links, but this again relies on analysis of the URLs rather than the content behind them. Other messaging services, such as WhatsApp, offer end-to-end encryption (E2EE), meaning – most likely to Hunt’s chagrin – the messages sent on them are neither stored by the service nor easily accessible to the government.
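
For illustration, URL-based blocking of this kind amounts to little more than the following sketch (the domain names are made up); and under end-to-end encryption even this check could only run on the user’s own device, since the server never sees the message at all:

```python
from urllib.parse import urlparse

# Hypothetical blocklist of domains, standing in for the lists
# that messaging services actually check links against.
BLOCKED_DOMAINS = {"example-porn-site.test"}

def is_blocked_link(url: str) -> bool:
    """Decide on where a link points, not on what the message says."""
    return urlparse(url).netloc.lower() in BLOCKED_DOMAINS

print(is_blocked_link("https://example-porn-site.test/page"))        # True
print(is_blocked_link("https://unlisted-mirror.test/same-content"))  # False
```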

“I ask myself why we can’t identify cyberbullying when it happens on social media platforms by word pattern recognition, and then prevent it happening.”

Jeremy, Jeremy, Jeremy, Jeremy, can’t you spot your problem yet? You’ve got to stop asking yourself!

There is simply no algorithm yet intelligent enough to reliably identify bullying language. Why? Because we call our best mate “dickhead” and our worst enemy “pal”. Human language and meaning are infinitely complex, and scanning for certain words would almost certainly lead to false positives. As Labour MP Thangam Debbonaire famously learned this year, even humans can’t always identify whether language is offensive, so what chance does an algorithm stand?
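
For illustration only, here is roughly what a naive word-pattern filter amounts to, and how it fails in both directions (the word list and function are invented for the example):

```python
# A deliberately naive word-pattern filter of the sort Hunt imagines.
ABUSIVE_WORDS = {"dickhead", "idiot", "loser"}

def looks_like_bullying(message: str) -> bool:
    """Flag a message if it contains any word on the list."""
    words = {w.strip(".,!?").lower() for w in message.split()}
    return bool(words & ABUSIVE_WORDS)

# False positive: affectionate banter between friends.
print(looks_like_bullying("Alright dickhead, pub at 8?"))        # True

# False negative: genuinely menacing, but contains no listed word.
print(looks_like_bullying("Everyone at school hates you, pal"))  # False
```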

(Side note: It is also amusing to imagine that Hunt could even begin to keep up with teenage slang in this scenario.)

Many also argue that because social media sites can remove copyrighted files efficiently, they should get better at removing abusive language. This argument is flawed: it is easy to search for a specific file, because copyright holders will often send social media giants hashed versions of their files, which the platforms can then search for in their databases. For the reasons outlined above, it is exceptionally difficult for an algorithm to accurately identify the true meaning of language.

“I think there are a lot of things where social media companies could put options in their software that could reduce the risks associated with social media, and I do think that is something which they should actively pursue in a way that hasn’t happened to date.”

Leaving aside the fact that social media companies constantly come up with solutions for these problems, Hunt has left us with the burning question of whether any of this is even desirable at all.

Why should he prevent under-18s from sexting when the age of consent in the UK is 16? Where has this sudden moral panic about pornography come from? Is the government laying the groundwork for mass censorship? If two consenting teenagers want to send each other the aubergine emoji a couple of times a week, why should we stop them? Is it not up to parents, rather than the government, to supervise their children’s online activities? Would education, with all of this in mind, not be the better option? Won’t somebody please think of the children?

“There is a lot of evidence that the technology industry, if they put their mind to it, can do really smart things.”

Alas, if only we could say the same for you, Mr Hunt.

Amelia Tait is a technology and digital culture writer at the New Statesman.