Ed Smith: Why you should give up your smartphone

Silicon Valley has us hooked on digital dope. A “dumb phone” is one way to break that addiction.

“Pushers” have contempt for “users”. They rely on them for profit but interpret the relationship as the justified exploitation of stupidity by intelligence. Drugs are an obvious example, captured during a classic exchange in the US television series The Wire.

When one user asks his dealer for a hit, the pusher replies: “It ain’t even nine, and you fiending on it . . . He’s a goddam drug addict.” Separately, when a drug lord thinks a junior colleague is losing his judgement, he asks furiously: “You ain’t using, are you?” Using is for other people: using is the business, refraining is the lifestyle.

I thought of The Wire as I was chatting with a senior executive of a soft drinks company at a corporate function. Slim and elegant, she waved away the sugary or alcoholic drinks, favouring sparkling water. Sugar was for other people.

To drugs and sugar, I’d add a third: digital dope, the rampant addiction of our time.

My favourite story from Silicon Valley relates to Jack Dorsey, the Twitter co-founder. How does Dorsey start his day? With a big hit of social media, a chunky dose of the gear he pushes to us, a fat line of digital chemicals applied to his bleary brain?

You must be joking. Dorsey kicks off every day, unfailingly, with meditation followed by a workout. He avoids checking emails until the evening. The man has got to think, you know. Dorsey, a devotee of austere food diets, retains his discipline in digital life. Like a host wandering around a party foisting doughnuts on his guests, he is privately sipping super-juices.

Protected from distractions, Dorsey dedicates his talents to finding ways to hook users into spending more time on his platform – monetising those goddam addicts.

Well, I am pushing back at the pushers and their push-notifications. There is nothing cool about the values being pushed to us by Silicon Valley. Under the pretence of joining something liberating and modern, we are giving up information and control to a class of secretive billionaires. This dichotomy will define the next generation: disciplined people creating addictions for ill-disciplined ones, and then profiting from that dependency.

The preposterous share offering by Snap (its Snapchat app allows users to send naked pictures that automatically “dissolve”) is a classic of the genre. Devise a product, whether or not it makes the world better; inflate a bubble; never make a profit; finally, having flogged the gear to stupid users, then flog it to even stupider investors by selling them shares that don’t even give them a stake in the running of the company.

Just before Christmas, I accidentally smashed the screen of my smartphone. It came back from repair with all the apps deleted. I liked app-free life and kept the phone clean. But my email was still on my smartphone. And, like most people, when I know that my emails are in my pocket I find it hard not to check them.

No longer. I’ve acquired a “dumb phone”, made by the Swiss company Punkt. This phone does two things: calls and texts. It has zero digital capacity. Emails and apps are impossible. If I want to check my emails, it must be a conscious decision, not a habit.

I’ve kept my smartphone. If I’m travelling and likely to need email access, I swap the sim card back into the smartphone. But evenings, weekends and writing days are now mostly free from emails and apps.

Since the dumb phone arrived, I’ve noticed two opposite but connected developments. First, it has helped me sustain concentration and escape into my writer’s world. Second, I’ve found it easier to switch completely into work-free family life. Days are more productive and less stressful. That’s because the addictive drug of mildly stimulating but usually meaningless curiosity – spreading through emails, tweets and posts – is a barrier to entering a more interesting psychological space.

Cognitive reasoning relies just as much on empty space as it does on stimulation. Ideas may be initiated by information, but to grow and develop they rely on spare capacity. Fed constant information, we have no hunger to think deeply; dosed up on sweets, we’re giving up proper meals. The people pushing the pings are flogging self-medicated paralysis.

The smartphone is ten years old. A decade – that is our evolutionary exposure to living in the digital sweetshop, our hands just a moment away from another fluffy hit of sugary vapidity. We are not doing a great job of saying no.

That was the central insight of Petter Neby, Punkt’s founder. After a series of losing battles with his teenage stepdaughter over smartphone addiction, Neby looked at himself. Checking his emails late at night, he was taking those problems to bed with him. “Technology is not the problem,” he explained to me last week. “We’re benefiting from it in this conversation now. But we must be the master of technology, not it the master of us.

“I could have written a book about it, but I’m an entrepreneur, not a writer,” Neby said. “I had to find my way to do something I’m in disagreement with.”

The shocking thing about the ubiquity of digital life is the paucity of our ambition. Can’t we choose a more exciting lifestyle model than fiddling with a small rectangle? Go back to square one – how do I want to live? – and ask yourself, “Who would advocate smartphone addiction?”

Intermittent fasting, we now know, is far healthier than any degree of “healthy eating”. It’s the same with our digital diet. Need inspiration? Think of the Silicon Valley mogul – post-meditation and post-workout – laughing at you fumbling for your phone first thing in the morning. “Ain’t even six,” he says to his masseuse, “and they’re fiending on it. Goddam addicts!”

Ed Smith is a journalist and author, most recently of Luck. He is a former professional cricketer and played for both Middlesex and England.

This article first appeared in the 09 February 2016 issue of the New Statesman, The May Doctrine

YouTube announces new measures against extremism – but where do they leave the far right?

Videos by alt-right commentators have arguably radicalised many online. Will Google's latest policies do anything to change this?

Within hours of the terrorist attack in Finsbury Park, Tommy Robinson was trending on Twitter. The former leader of the English Defence League accused the Finsbury Park mosque of “creating terrorists” in a series of tweets on his personal account.

More than 17,400 people have now tweeted about the 34-year-old, with many theorising he could have radicalised the attacker who allegedly shouted “I’m going to kill all Muslims” at the scene. At present, there is no evidence that the man arrested by police on suspicion of attempted murder is a fan of Robinson.

“People are saying I’m inciting hate,” said Robinson in a video uploaded to Twitter and YouTube after the attack. “I just tell the facts and the truth and I’m not going to apologise for that…

“If giving you quotes from the Quran that incite murder and war against us is inciting hate, I’m guilty. If telling you all the problematic problems that come from the teachings and scriptures of Islam, I’m guilty. But these are just facts.”

After describing the country as being at “war”, he goes on to say: “Please one person, just one, give me one example of me inciting hate.”

When we talk about radicalisation and terrorism, we are finally coming to understand that they extend beyond the work of Isis.

Just over a year ago, Labour MP Jo Cox was murdered by a white supremacist. This morning, Harry Potter author JK Rowling used Twitter to accuse columnist Katie Hopkins of contributing to radicalisation. The New Statesman’s own Media Mole notes how right-wing tabloids incite hate.

In particular, it is now evident how the far right radicalises online. In December 2016, a man fired three shots in a Washington DC pizza parlour that the alt-right (on 4Chan and YouTube) had accused of being at the centre of a paedophile ring.

The internet arguably allowed Anders Breivik, the Norwegian far-right white supremacist who killed 77 people in 2011, to cultivate his extreme views. Alexandre Bissonnette, the white nationalist who murdered six men at a Québec City mosque in January, was described by many as an “internet troll”.

Earlier this year, a report by the Commons home affairs committee accused social media giants of not doing enough to tackle terrorism online. In response to this – and following a series of high-profile brands pulling their advertising from YouTube after their adverts appeared alongside terrorism-related videos – Google, which owns the video-sharing site, has now announced four steps it is taking to fight online terror. But do these reflect the reality that there are many forms of extremism?

Google’s new guidelines speak of “terrorism” and “extremism” in broad terms. This means that videos glorifying or inciting terrorism will be treated the same whether they are from the far right, far left, or pro-Isis organisations.

Google’s four steps for tackling such videos include: using machine learning to identify videos glorifying violence, using a team of human flaggers to identify problematic videos, and using a "redirect method" to send potential Isis recruits towards anti-terror videos. Each of these steps is concerned with content that either breaks the law or violates YouTube’s policies.

The fourth step (or rather the third, as it is ordered in Google’s blogpost) is focused on content that is neither illegal nor in violation of YouTube’s policies. For example, this could include videos that don’t directly incite terrorism, but arguably incite hate, such as those denying the Holocaust.

According to Kent Walker, Google’s general counsel, these could also be “videos that contain inflammatory religious or supremacist content”. Rather than being removed like the other offending videos, these will be hidden behind a warning, not have adverts on them (therefore preventing their creators from making money), and will not be eligible for comments. Essentially, as Walker writes, “that means these videos will have less engagement and be harder to find”.

It remains to be seen whether – or how – this will apply to the content of Tommy Robinson. YouTube’s steps will be taken on a video-by-video basis, meaning no far-right commentator will be banned outright. Instead, YouTube simply won’t promote any offending videos, meaning they will not appear in their subscribers’ recommended feeds and will be difficult to find on the site.

In this way, Google has remained committed to free speech while doing more to tackle extremism on YouTube. Those like Robinson who claim to just “tell the facts” could arguably now be held to account for their actions. Many on the far right are careful to not explicitly advocate violence. Nevertheless, the loaded language used in their videos could arguably incite hate.

Paul Joseph Watson, a right-wing conspiracy theorist YouTuber with nearly one million subscribers, has never advocated terrorism, but has videos entitled “Islam is NOT a Religion of Peace” and “Chuck Johnson: Muslim Migrants Will Cause Collapse of Europe”.

In the past I have argued that allowing Google and YouTube to censor us in the name of “extremism” and “terrorism” is a troubling trend, but with these new promises, the company has walked the delicate line between the law and free speech. By allowing hateful, but not illegal, content to be hosted on its site and yet restricted from a wider audience, YouTube is taking a stand against extremists of all kinds.

Amelia Tait is a technology and digital culture writer at the New Statesman.