
Head in the cloud

As we download ever more of our lives on to electronic devices, are we destroying our own internal memory?

I do not remember my husband’s telephone number, or my best friend’s address. I have forgotten my cousin’s birthday, my seven times table, the date my grandfather died. When I write, I keep at least a dozen internet tabs open to look up names and facts I should easily be able to recall. There are so many things I no longer know, simple things that matter to me in practical and personal ways, yet I usually get by just fine. Apart from the few occasions when my phone has run out of battery at a crucial moment, or the day I accidentally plunged it into hot tea, or the evening my handbag was stolen, it hasn’t seemed to matter that I have downloaded most of my working memory on to electronic devices. It feels a small inconvenience, given that I can access information equivalent to tens of billions of books on a gadget that fits into my back pocket.

For thousands of years, human beings have relied on stone tablets, scrolls, books or Post-it notes to remember things that their minds cannot retain, but there is something profoundly different about the way we remember and forget in the internet age. It is not only our memory of facts that is changing. Our episodic memory, the mind’s ability to relive past experiences – the surprising sting of an old humiliation revisited, the thrill and discomfort of a first kiss, those seemingly endless childhood summers – is affected, too. The average Briton now spends almost nine hours a day staring at their phone, computer or television, and when more of our lives are lived on screen, more of our memories will be formed there. We are recording more about ourselves and our experiences than ever before, and though in the past this required deliberate effort, such as sitting down to write a diary, or filing away a letter, or posing for a portrait, today this process can be effortless, even unintentional. Never before have people had access to such comprehensive and accurate personal histories – and so little power to rewrite them.

My internet history faithfully documents my desktop meanderings, even when I resurface from hours of browsing with little memory of where I have been or what I have read. My Gmail account now contains over 35,000 emails received since 2005. It has preserved the banal – long-expired special offers, obsolete arrangements for post-work drinks – alongside the life-changing. Loves and break-ups are chronicled here; jobs, births and weddings are announced; deaths are grieved. My Facebook profile page has developed into a crowdsourced, if assiduously edited, photo album of my social life over the past decade. My phone is a museum of quick-fire text exchanges. With a few clicks, I can retrieve, in mind-numbing detail, information about my previous movements, thoughts and feelings. So could someone else. Even my most private digital memories are not mine alone. They have become data to be restructured, repackaged, aggregated, copied, deleted, monetised or sold by internet firms. Our digital memories extend far beyond our reach.

In the late 1990s the philosophers Andy Clark and David Chalmers coined the term “the extended mind” to describe how, when we use pen and paper, calculators or laptops to help us think or remember, these external objects are incorporated into our cognitive processes. “The technology we use becomes part of our minds, extending our minds and indeed our selves into the world,” Chalmers said in a 2011 TED talk. Our iPhones have not been physically implanted into our brains, he explained, but it’s as if they have been. There’s a big difference between offloading memory on to a notepad and doing it on to a smartphone. One is a passive receptacle, the other is active. A notebook won’t reorganise the information you give it or ping you an alert; its layout and functions won’t change overnight; its contents aren’t part-owned by the stationery firm that made it. The more we extend our minds online, the harder it is becoming to keep control of our digital pasts, or to tell where our memories begin or end. And, while society’s collective memory is expanding at an astonishing rate, our internal, individual memories are shrinking.

***

Our brains are lazy; we are reluctant to remember things when we can in effect delegate the task to someone or something else. You can observe this by listening to couples, who often consult one another’s memories: “What was the name of that nice Chinese restaurant we went to the other day?” Subconsciously, partners distribute responsibility for remembering information according to each other’s strengths. I ask my husband for directions, he consults me on people’s names.

In one study conducted in 1991, psychologists assigned a series of memory exercises to pairs of students, some of whom had been dating for at least three months and some of whom did not know one another. The dating couples remembered more than the non-dating pairs. They also remembered more unique information; when a fact fell into their partner’s area of expertise, they were more likely to forget it.

In a similar way, when we know that a computer can remember something for us, we are less likely to remember it ourselves. For a study published in the journal Science in 2011, people were asked to type some trivia facts into a computer. Those who believed the facts would be saved at the end of the experiment remembered less than those who thought they would be deleted – even when they were explicitly asked to memorise them. In an era when technology is doing ever more remembering, it is unsurprising that we are more inclined to forget.

It is sometimes suggested that in time the worry that the internet is making us forgetful will sound as silly as early fears that books would do the same. But the internet is not an incremental step in the progression of written culture; it is revolutionising the way we consume information. When you pull an encyclopaedia down from a library shelf, it is obvious that you are retrieving a fact you have forgotten, or never knew. Google is so fast and easy to use that we can forget we have consulted it at all: we are at risk of confusing the internet’s memory with our own. A Harvard University project in 2013 found that when people were allowed to use Google to check their answers to trivia questions they rated their own intelligence and memories more highly – even if they were given artificially low test results. Students were more likely to believe that Google was confirming a fact they already knew than that it was providing them with new information.

This changed when Adrian Ward, now an assistant professor at the University of Texas at Austin, who designed the study as part of his PhD research, mimicked a slow internet connection so that students were forced to wait 25 seconds to read the answer to a Google query. The delay, he noted, stripped them of the “feeling of knowing” because they became more aware that they were consulting an external source. In the internet age, Ward writes, people “may offload more and more information while losing sight of the distinction between information stored in their minds and information stored online”.

By blurring the distinction between our personal and our digital memories, modern technology could encourage intellectual complacency, making people less curious about new information because they feel they already know it, and less likely to pay attention to detail because they know their computers are remembering it. What if the same could be said for our own lives: are we less attentive to our experiences because we know that computers will record them for us?

An experiment by the American psychologist Linda Henkel suggests this could be the case; she has found that when people take photographs at museums they are more likely to forget details of what they have seen. To some extent, we’re all tourists exploring the world from behind a camera, too distracted by our digital memories to inhabit our analogue lives fully. Relying on computers to remember telephone numbers or trivia does not seem to deprive our internal memories of too much – provided you can remember where you’ve stored it, factual information is fairly straightforward to retrieve. Yet a digital memory is a poor substitute for the richness of a personal experience revisited, and our autobiographical memories cannot be “retrieved” by opening the relevant online file.

Our relationship with the past is capricious. Sometimes an old photograph can appear completely unfamiliar, while at other times the faintest hint – the smell of an ex-lover’s perfume on a crowded Tube carriage – can induce overwhelming nostalgia. Remembering is in part a feeling, of recognition, of having been there, of reinhabiting a former self. This feeling is misleading; we often imagine memories offer an authentic insight into our past. They do not.

Memory is closely linked to self-identity, but it is a poor personal record. Remembering is a creative act, closely allied to imagining. When people suffer from dementia they are often robbed not only of the past but also of the future; without memory it is hard to construct an idea of future events. We often mistakenly convert our imaginings into memories – scientists call the process “imagination inflation”. This puts biological memories at odds with digital ones. While memories stored online can be retrieved intact, our internal memories are constantly changing and evolving. Each time we relive a memory, we reconfigure it to suit our present needs and world-view. In his book Pieces of Light, an exploration of the new science of memory, the neuroscientist Charles Fernyhough compares the construction of memory to storytelling. To impose meaning on to our chaotic, complex lives we need to know which sections to abridge and which details can be ignored. “We are all natural born storytellers. We are constantly editing and remaking our memory stories as our knowledge and emotions change. They may be fictions, but they are our fictions,” Fernyhough writes.

We do not write these stories alone. The human mind is suggestible. In 2013, scientists at MIT made international headlines when they said they had successfully implanted a false memory into a mouse using a form of light stimulation, but human beings implant false memories into each other all the time, using more low-tech methods. Friends and family members are forever distorting one another’s memories. I remember distinctly being teased for my Dutch accent at school and indignantly telling my mother when I arrived home that, “It’s pronounced one, two, three. Not one, two, tree.” My brother is sure it was him. The anecdote is tightly woven into the story of our pasts, but one of us must be wrong. When we record our personal memories online we open up new possibilities for their verification but we also create different opportunities for their distortion. In subtle ways, internet firms are manipulating our digital memories all the time – and we are often dangerously unaware of it.

***

Facebook occasionally gives me a reminder of Mahmoud Tlissy, the caretaker at my former office in Libya who died quietly of pancreatic cancer in 2011 while the civil war was raging. Every so often he sends me a picture of a multicoloured heart via a free app that outlived him. Mahmoud was a kind man with a sardonic sense of humour, a deep smoker’s laugh and a fondness for recounting his wild days as a student in Prague. I am always pleased to be reminded of him, but I feel uncomfortable because I doubt he would have chosen such a naff way to communicate with me after death. Our digital lives will survive us, sending out e-hearts and populating databases long after we have dropped off the census. When we deposit our life memories online, they start to develop lives of their own.

Those who want to limit the extent to which their lives are recorded digitally are swimming against the tide. Internet firms have a commercial interest in encouraging us not only to offload more personal information online, but also to use digital technology to reflect on our lives. Take Facebook, which was developed as a means of communicating but is becoming a tool for remembering and memorialising, too. The first Facebook users, who were university students in 2004, are mostly in their thirties now. Their graduations, first jobs, first loves, marriages and first children are likely to be recorded on the site; friends who have died young are likely to be mourned on it. The website understands that nostalgia is a powerful marketing tool, and so it has released gimmicky tools, such as automated videos, to help people “look back”.

These new online forms of remembrance are becoming popular. On Instagram and Twitter it is common for users to post sentimental old snaps under the hashtag #tbt, which stands for “Throwback Thursday”. Every day, seven million people check Timehop, an app that says it “helps you see the best moments of your past” by showing you old tweets, photos and online messages. Such tools are presented as a way of enriching our ability to relive the past but they are limiting. We can use them to tell stories about our lives, but the pace and structure of the narrative is defined for us. Remembering is an imaginative act, but internet firms are selling nostalgia by algorithm – and we’re buying it.

At their most ambitious, tech companies are offering the possibility of objective and complete insight into our pasts. In the future, “digital memories” could “[enhance] personal reflection in much the same way as the internet has aided scientific investigations”, the computer scientists Gordon Bell and Jim Gemmell wrote in the magazine Scientific American in 2006. The assumption is that our complex, emotional autobiographical memories can be captured as data to be ordered, quantified and analysed – and that computer programs could make better sense of them than our own, flawed brains. The pair have been collaborating on a Microsoft Research “life-logging” project since 2001, in which Bell logs everything he has said, written, seen and heard into a specially designed database.

Bell understood that the greatest challenge would be finding a way to make digital archives usable. Without a program to help us extract information, digital memories are virtually useless: imagine trying to retrieve a telephone number from a month’s worth of continuous video footage. In our increasingly life-logged future, we will all depend on powerful computer programs to index, analyse, repackage and retrieve our digital memories for us. The act of remembering will become automated. We will no longer make our “own fictions”.

This might sound like a distant sci-fi fantasy, but we are already a long way there. Billions of people share their news and views by email or on social media daily, and unwittingly leave digital trails as they browse the web. The use of tracking devices to measure and record sleep, diet, exercise patterns, health and even mood is increasing. In the future, these comprehensive databases could prove very useful. When you go to the doctor, you might be able to provide details of your precise diet, exercise and sleep patterns. When a relationship breaks down, you could be left with many gigabytes of digital memory to explore and make sense of. Did you really always put him down? Should you have broken up four years ago? In a few years’ time there could be an app for that.

Our reliance on digital memories is self-perpetuating: the more we depend on computer memories to provide us with detailed personal data, the more inadequate our own minds seem. Yet the fallibility of the human memory isn’t a design flaw, it is one of its best features. Recently, I typed the name of an ex-boyfriend into my Gmail search bar. This wasn’t like opening a box of old letters. For a start, I could access both sides of our email correspondence. Second, I could browse dozens of G-chats, instant messaging conversations so mundane and spontaneous that reading them can feel more like eavesdropping on a former self, or a stranger. The messages surprised me. I had remembered the relationship as short-lived and volatile but lacking any depth of feeling. So why had I sent those long, late-night emails? And what could explain his shorter, no less dramatic replies, “Will u ever speak to me again? You will ignore this I suspect but I love you.” Did he love me? Was I really so hurt? I barely recognise myself as the author of my messages; the feelings seem to belong to someone else.

My digital archives will offer a very different narrative from the half-truths and lies I tell myself, but I am more at home with my fictions. The “me” at the centre of my own memories is constantly evolving, but my digital identity is frozen in time. I feel a different person now; my computer suggests otherwise. Practically, this can pose problems (many of us are in possession of teenage social media posts we hope will never be made public) and psychologically it matters, too. To a greater or lesser extent, we all want to shed our former selves – but digital memories keep us firmly connected to our past. Forgetting and misremembering is a source of freedom: the freedom to reinvent oneself, to move on, to rewrite our stories. It means that old wounds need not hurt for ever, that love can be allowed to fade, that people can change.

With every passing year, we are shackling ourselves more tightly to our digital legacies, and relying more heavily on computer programs to narrate our personal histories for us. It is becoming ever harder to escape the past, or remake the future. Years from now, our digitally enhanced memories could allow us to have near-perfect recall, but who would want to live with their head in the cloud?

Sophie McBain is a freelance writer based in Cairo and was previously an assistant editor at the New Statesman. This article was a runner-up in the 2015 Bodley Head FT Essay Prize.

This article first appeared in the 18 February 2016 issue of the New Statesman, A storm is coming


Tweeting terror: what social media reveals about how we respond to tragedy

From sharing graphic images to posting a selfie, what compels online behaviours that can often outwardly seem improper?

Why did they post that? Why did they share a traumatising image? Why did they tell a joke? Why are they making this about themselves? Did they… just post a selfie? Why are they spreading fake news?

These are questions social media users almost inevitably ask themselves in the immediate aftermath of a tragedy such as Wednesday’s Westminster attack. Yet we ask them not out of genuine curiosity, but out of shock and judgement, provoked by what we see as the wrong way to respond online. But these are still questions worth answering. What drives the behaviours we see time and again on social media in the wake of a disaster?

The fake image

“I really didn't think it was going to become a big deal,” says Dr Ranj Singh. “I shared it just because I thought it was very pertinent, I didn't expect it to be picked up by so many people.”

Singh was one of the first people to share a fake Tube sign on Twitter that was later read out in Parliament and on BBC Radio 4. The TfL sign – a board in stations which normally provides service information but can often feature an inspiring quote – read: “All terrorists are politely reminded that THIS IS LONDON and whatever you do to us we will drink tea and jolly well carry on thank you.”

Singh found it on the Facebook page of a man called John (who later explained to me why he created the fake image) and posted it on his own Twitter account, which has over 40,000 followers. After it went viral, many began pointing out that the sign was faked.

“At a time like this, is it really helpful to point out that it’s fake?” asks Singh – who believes it is the message, not the medium, that matters most. “The sentiment is real and that’s what’s important.”

Singh tells me that he first shared the sign because he found it to be profound and was then pleased with the initial “sense of solidarity” that the first retweets brought. “I don't think you can fact-check sentiments,” he says, explaining why he didn’t delete the tweet.

Dr Grainne Kirwan, a cyberpsychology lecturer and author, explains that much of the behaviour we see on social media in the aftermath of an attack can be explained by this desire for solidarity. “It is part of a mechanism called social processing,” she says. “By discussing a sudden event of such negative impact it helps the individual to come to terms with it… When shocked, scared, horrified, or appalled by an event we search for evidence that others have similar reactions so that our response is validated.”

The selfies and the self-involved

Yet often, the most maligned social media behaviour in these situations seems less about solidarity and more about selfishness. Why did YouTuber Jack Jones post a since-deleted selfie with the words “The outmost [sic] respect to our public services”? Why did your friend, who works nowhere near Westminster, mark themselves as “Safe” using Facebook’s Safety Check feature? Why did New Statesman writer Laurie Penny say in a tweet that her “atheist prayers” were with the victims?

“It was the thought of a moment, and not a considered statement,” says Penny. The rushed nature of social media posts during times of crisis can often lead to misunderstandings. “My atheism is not a political statement, or something I'm particularly proud of, it just is.”

Penny received backlash on the site for her tweet, with one user gaining 836 likes on a tweet that read: “No need to shout 'I'm an atheist!' while trying to offer solidarity”. She explains that she posted her tweet due to the “nonsensical” belief that holding others in her heart makes a difference at tragic times, and was “shocked” when people became angry at her.

“I was shouted at for making it all about me, which is hard to avoid at the best of times on your own Twitter feed,” she says. “Over the years I've learned that 'making it about you' and 'attention seeking' are familiar accusations for any woman who has any sort of public profile – the problem seems to be not with what we do but with who we are.”

Penny raises a valid point that social media is inherently self-involved, and Dr Kirwan explains that in emotionally-charged situations it is easy to say things that are unclear, or can in hindsight seem callous or insincere.

“Our online society may make it feel like we need to show a response to events quickly to demonstrate solidarity or disdain for the individuals or parties directly involved in the incident, and so we put into writing and make publicly available something which we wrote in haste and without full knowledge of the circumstances.”

The joke

Arguably the most condemned behaviour in the aftermath of a tragedy is the sharing of an ill-timed joke. Julia Fraustino, a research affiliate at the National Consortium for the Study of Terrorism and Responses to Terrorism (START), reflects on this often seemingly inexplicable behaviour. “There’s research dating back to the US 9/11 terror attacks that shows lower rates of disaster-related depression and anxiety for people who evoke positive emotions before, during and after tragic events,” she says, stating that humour can be a coping mechanism.

“The offensiveness or appropriateness of humor seems, at least in part, to be tied to people’s perceived severity of the crisis,” she adds. “An analysis of tweets during a health pandemic showed that humorous posts rose and fell along with the seriousness of the situation, with more perceived seriousness resulting in fewer humour-based posts.”

The silence

If you can’t say anything nice, why say anything at all? This piece of wisdom from Thumper, Bambi’s best friend, might be behind the silence we see from some social media users. But silence is not necessarily a sign of not caring: there are factors which can predict whether someone will be active or passive on social media after a disaster, notes Fraustino.

“A couple of areas that factor into whether a person will post on social media during a disaster are issue-involvement and self-involvement,” she says. “When people perceive that the disaster is important and they believe they can or should do something about it, they may be more likely to share others’ posts or create their own content. Combine issue-involvement with self-involvement, which in this context refers to a desire for self-confirmation such as through gaining attention by being perceived as a story pioneer or thought leader, and the likelihood goes up that this person will create or curate disaster-related content on social media.”

“I just don’t like to make it about me,” one anonymous social media user tells me when asked why he doesn’t post anything himself – but instead shares or retweets posts – during disasters. “I feel like people just want likes and retweets and aren’t really being sincere, and I would hate to do that. Instead I just share stuff from important people, or stuff that needs to be said – like reminders not to share graphic images.”

The graphic image

The sharing of graphic and explicit images is often widely condemned, as many see this as both pointless and potentially psychologically damaging. After the attack, BBC Newsbeat collated dozens of tweets by people angry that passersby took pictures instead of helping, with multiple users branding it “absolutely disgusting”.

Dr Kirwan explains that those near the scene may feel a “social responsibility” to share their knowledge, particularly in situations where there is a fear of media bias. It is also important to remember that shock and panic can make us behave differently than we normally would.

Yet the reason this behaviour often jars is that we all know what motivates most of us to post on social media: attention. It is well documented that Likes and Shares give us a psychological boost, so it is hard to believe that this disappears in tragic circumstances. If we imagine someone is somehow “profiting” from posting traumatic images, this can inspire disgust. Fraustino even notes that posts with an image are significantly more likely to be clicked on, liked or shared.

Yet, as Dr Kirwan explains, Likes don’t simply make us happy on such occasions; they actually make us feel less alone. “In situations where people are sharing terrible information we may still appreciate likes, retweets, [and] shares as it helps to reinforce and validate our beliefs and position on the situation,” she says. “It tells us that others feel the same way, and so it is okay for us to feel this way.”

Fraustino also argues that these posts can be valuable, as they “can break through the noise and clutter and grab attention” and thereby bring awareness to a disaster issue. “As positive effects, emotion-evoking images can potentially increase empathy and motivation to contribute to relief efforts.”

The judgement

The common thread isn’t simply the accusation that such social media behaviours are “insensitive”; it is that there is an abundance of people ready to point the finger and criticise others, even – and especially – at a time when they should focus on their own grief. VICE writer Joel Golby sarcastically summed it up best in a single tweet: “please look out for my essay, 'Why Everyone's Reaction to the News is Imperfect (But My Own)', filed just now up this afternoon”.

“When already emotional other users see something which they don't perceive as quite right, they may use that opportunity to vent anger or frustration,” says Dr Kirwan, explaining that we are especially quick to judge the posts of people we don’t personally know. “We can be very quick to form opinions of others using very little information, and if our only information about a person is a post which we feel is inappropriate we will tend to form a stereotyped opinion of this individual as holding negative personality traits.

“This stereotype makes it easier to target them with hateful speech. When strong emotions are present, we frequently neglect to consider if we may have misinterpreted the content, or if the person's apparently negative tone was intentional or not.”

Fraustino agrees that people are attempting to reduce their own uncertainty or anxiety when assigning blame. “In a terror attack setting where emotions are high, uncertainty is high, and anxiety is high, blaming or scapegoating can relieve some of those negative emotions for some people.”

Amelia Tait is a technology and digital culture writer at the New Statesman.