23andMe Co-Founder Anne Wojcicki. Photo: Kimberly White/Getty Images

23andMe: Why bother with predictions about yourself when you are almost certainly average?

Want to understand your genes? Call your parents.

Alongside the columns packed with advice for creating a healthy new you, you will have seen at least one account of “everything that will matter in 2015”. You shouldn’t take either too seriously. Remember that the news predictions you read at the start of 2014 failed to foresee, say, a Russian invasion of Ukraine, the Ebola epidemic or the rise of Ukip.

It’s a truism to say that prediction is difficult. What is interesting is just how compelling human beings find any kind of window on the future. It’s a trait that the genetic analysis service 23andMe depends on.

The company offers an analysis of the DNA in your saliva. For £125, you can find out information about yourself such as whether you are a carrier of certain inherited conditions and your risk of developing particular kinds of disease. The company says that its reports are “for informational purposes only and do not diagnose disease or illness”, but that they will help you “make better lifestyle choices and appropriately monitor your health”.

Experts are divided on the value of 23andMe’s services. In November 2013 the US Food and Drug Administration ordered the firm to “immediately discontinue” marketing, though ancestry and raw genetic data tests are still available. Ahead of the UK launch, the Science Media Centre gathered the opinions of various UK geneticists. Shirley Hodgson, professor of cancer genetics at St George’s Hospital in London, said 23andMe’s tests were “very open to misunderstanding” and could lead to wasted NHS time.

According to the Cambridge geneticist Eric Miska, 23andMe can give us “a glimpse of the fun, excitement and risks associated with human genome data” but we ought to discuss how our personal genome should be accessed and shared. As Google is one of 23andMe’s financial backers, that’s a conversation we need to have sooner rather than later.

Privacy issues aside, 23andMe has very limited powers when it comes to drawing conclusions from your genetic information. Accurate predictions rely on accurate information. Where genes are concerned, certainty is rare and surprises are common.

Take a study published at the end of last year in the Proceedings of the National Academy of Sciences. The study looked at data collected over a 30-year period and found a strong correlation between possession of a particular gene variant (rs9939609 in the FTO gene, to be precise) and a higher-than-desirable body-mass index.

That was expected: it is what we have seen in recent studies. What wasn’t expected was the absence of this correlation in subjects born before 1942. Only when we began to do less manual work, rely more on technology and have better access to food resources did the genes have something to work with.

So, even with a genetic predisposition, obesity was far from inevitable in the pre-war years. For most health conditions, environment and lifestyle matter far more than the details of your genome.

What’s more, most of 23andMe’s customers will surely find themselves devoid of any meaningful predisposition: the odds are that you are decidedly average. So, whatever your genetic destiny, wherever and whenever you happen to be living, a life of moderate consumption and moderate exercise is probably the best prescription. If that’s too banal and you really want to harness technology to predict the impact of your genes on your future health, pick up your miracle smartphone. Use it to call your parents and ask them how they are. No one knows more than they do about the troubles coming your way. 

Editor’s note, 20 January 2015: The FDA has ordered 23andMe to cease trading its health tests. Ancestry and raw genetic data tests are still available in the US. A change to the text was made to reflect this.

Michael Brooks holds a PhD in quantum physics. He writes a weekly science column for the New Statesman, and his most recent book is At the Edge of Uncertainty: 11 Discoveries Taking Science by Surprise.

This article first appeared in the 08 January 2015 issue of the New Statesman, The Churchill Myth


Don’t shoot the messenger: are social media giants really “consciously failing” to tackle extremism?

MPs today accused social media companies of failing to combat terrorism, but just how accurate is this claim? 

Today’s home affairs committee report, which said that internet giants such as Twitter, Facebook, and YouTube are “consciously failing” to combat extremism, was criticised by terrorism experts almost immediately.

“Blaming Facebook, Google or Twitter for this phenomenon is quite simplistic, and I’d even say misleading,” Professor Peter Neumann, an expert on radicalisation from King’s College London, told the BBC.

“Social media companies are doing a lot more now than they used to, no doubt because of public pressure,” he went on. The report, however, labels the 14 million videos Google has removed in the last two years, and the 125,000 accounts Twitter has suspended in the last one, a “drop in the ocean”.

It didn’t take long for the sites involved to reject the claims, which follow a 12-month inquiry on radicalisation. A Facebook spokesperson said they deal “swiftly and robustly with reports of terrorism-related content”, whilst YouTube said they take their role in combating the spread of extremism “very seriously”. This time last week, Twitter announced that it had suspended 235,000 accounts for promoting terrorism in the last six months, a period that, incidentally, extends beyond February, when the committee stopped counting.

When it comes to numbers, it’s difficult to determine what is and isn’t enough. There is no magical number of Terrorists On The Internet that experts can compare the number of deletions to. But it’s also important to judge the companies’ efforts within the realm of what is actually possible.

“The argument is that because Facebook and Twitter are very good at taking down copyright claims they should be better at tackling extremism,” says Jamie Bartlett, Director of the Centre for the Analysis of Social Media at Demos.

“But in those cases you are given a hashed file by the copyright holder and they say: ‘Find this file on your database and remove it please’. This is very different from extremism. You’re talking about complicated nuanced linguistic patterns each of which are usually unique, and are very hard for an algorithm to determine.”
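Bartlett’s distinction can be made concrete with a toy sketch. The snippet below is purely illustrative, not any platform’s actual system: it shows why copyright-style takedown scales so well. Matching a hash supplied by a rights holder is a one-line set lookup with no judgement involved; the file contents and blocklist here are invented for the example.

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Return the hex SHA-256 digest of a file's bytes."""
    return hashlib.sha256(data).hexdigest()

# Hashes supplied by a rights holder: "find this exact file and remove it".
# (Contents invented for illustration.)
blocklist = {sha256_of(b"infringing-video-bytes")}

def should_remove(uploaded: bytes) -> bool:
    # Exact-match lookup: fast, unambiguous, nothing to interpret.
    return sha256_of(uploaded) in blocklist

print(should_remove(b"infringing-video-bytes"))  # exact copy: True
print(should_remove(b"original-home-movie"))     # anything else: False
```

Extremist speech offers no such fingerprint: each post is unique, so there is no precomputed hash to look up, and the problem becomes classification rather than matching.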

Bartlett explains that a large team of people would have to work on building this algorithm by trawling through cases of extremist language, which, as Thangam Debbonaire learned this month, even humans can struggle to identify.

“The problem is when you’re dealing with linguistic patterns even the best algorithms work at 70 per cent accuracy. You’d have so many false positives, and you’d end up needing to have another huge team of people that would be checking all of it. It’s such a much harder task than people think.”
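Bartlett’s false-positive worry is easy to see with back-of-envelope arithmetic. All the figures below are illustrative assumptions, not numbers from the report: even a classifier that is right 70 per cent of the time, run over a stream that is overwhelmingly innocent, flags vastly more innocent posts than extremist ones.

```python
# Base-rate arithmetic with made-up but plausible-scale numbers.
posts = 1_000_000_000      # posts scanned in a day (assumed)
extremist_rate = 1e-6      # suppose 1 in a million is actually extremist
accuracy = 0.70            # classifier correct 70% of the time, either way

extremist = posts * extremist_rate
innocent = posts - extremist

true_positives = extremist * accuracy        # extremist posts caught
false_positives = innocent * (1 - accuracy)  # innocent posts wrongly flagged

print(f"caught: {true_positives:,.0f}")          # 700
print(f"wrongly flagged: {false_positives:,.0f}")  # roughly 300 million
```

Hundreds of millions of wrongly flagged posts for a few hundred genuine catches is exactly why, as Bartlett says, “you’d end up needing to have another huge team of people” to check the output.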

Finding and deleting terrorist content is also only half of the battle. When it comes to videos and images, thousands of people could have downloaded them before they were deleted. During his research, Bartlett has also discovered that when one extremist account is deleted, another inevitably pops up in its place.

“Censorship is close to impossible,” he wrote in a Medium post in February. “I’ve been taking a look at how ISIL are using Twitter. I found one user name, @xcxcx162, who had no less than twenty-one versions of his name, all lined up and ready to use (@xcxcx1627; @xcxcx1628, @xcxcx1629, and so on).”

Beneath all this, there might be another, more fundamental flaw in the report’s assumptions. Demos argue that there is no firm evidence that online material actually radicalises people, and that much of the material extremists view and share comes from mainstream news outlets.

But even if total censorship were possible, that doesn’t necessarily make it desirable. Bartlett argues that deleting extreme content would diminish our critical faculties, and that exposing people to it allows them to see for themselves that terrorists are “narcissistic, murderous, thuggish, irreligious brutes”. Complete censorship would also ruin social media for innocent people.

“All the big social media platforms operate on a very important principle, which is that they are not responsible for the content that is placed on their platforms,” he says. “It rests with the user because if they were legally responsible for everything that’s on their platform – and this is a legal ruling in the US – they would have to check every single thing before it was posted. Given that Facebook deals with billions of posts a day that would be the end of the entire social media infrastructure.

“That’s the kind of trade off we’d be talking about here. The benefits of those platforms are considerable and you’d be punishing a lot of innocent people.”

No one is denying that social media companies should do as much as they can to tackle terrorism. Bartlett thinks that platforms can do more to remove information under warrant or hand over data when the police require it, and making online policing 24/7 is an important development “because terrorists do not work 9 to 5”. At the end of the day, however, it’s important for the government to accept technological limitations.

“Censorship of the internet is only going to get harder and harder,” he says. “Our best hope is that people are critical and discerning and that is where I would like the effort to be.” 

Amelia Tait is a technology and digital culture writer at the New Statesman.