
Hacking the heart: the psychology of scams

Online deception can be a threat to people’s mental and physical health, warns Professor Monica Whitty, cyberpsychologist at the University of Warwick.

 

Con artists have been scamming victims for centuries. However, because the internet allows criminals to target many more victims, in the last 10 years we have witnessed scams on a global scale. In the UK in 2016, the National Crime Survey reported that citizens are 10 times more likely to be robbed while at their computer by a criminal based overseas than to fall victim to physical theft (Office for National Statistics, 2016). In my work at the Cyber Security Centre at WMG, University of Warwick, I have been leading interdisciplinary projects that attempt to understand the psychology of scams and to find effective methods to detect and prevent them. In particular, we have focused on mass-marketing frauds (MMFs).

Not all readers will be familiar with the term MMF; however, most will have encountered at least one of these scams in their lifetime. MMF is a serious, complex and organised crime. Examples include foreign lotteries and sweepstakes (in which the victim believes they have won money from a lottery and are told to pay a fee in order to release the funds); ‘419’ scams (advance-fee fraud, in which victims believe that for a small amount of money they will make a large fortune); and romance scams (in which the victim, taken in by a fake online dating persona, sends the fraudster money). Some MMFs are low-value, one-off scams on large numbers of victims, whilst others involve developing a relationship (e.g. romantic, business, friendship) through which money is defrauded over time, again with simultaneous or sequential victims.

Victims of MMF suffer both financial losses and psychological impacts, with the latter sometimes outweighing the former, even when large sums of money are lost. One of our motivations for investigating this particular cyber crime is the severity of this psychological harm – in some cases victims have been known to commit suicide. Common reactions to being scammed include shame, guilt, embarrassment, depression, grief, anxiety and loss of trust.

Catching and prosecuting MMF criminals is difficult, for three main reasons. Firstly, the criminals often live in a different country to the victims. Secondly, the methods they use make them difficult to trace, and thirdly, prosecution is very time-consuming, owing to the large amounts of online data that need to be analysed to establish evidence.

Although disruption tactics are important, we have taken a more victim-oriented approach to protecting users from MMF. Our work has involved interviewing victims of MMF to gain a greater understanding of why they believe they were tricked and persuaded to give money to fraudsters, as well as to map out the anatomy of these scams.

In my work, I have argued that criminals are able to exploit the media within which they communicate to develop hyper-personal relationships with victims (especially victims of romance scams). Communicating in online spaces can potentially isolate victims from friends and family, allowing the criminal to become the dominant person in the victim’s life. Asynchronous, long-distance communication in the form of emails, texts and instant messages allows criminals to be very strategic in the stories they create and the messages they send, creating the perfect online lover. In fact, many of the victims of romance scams that I have spoken to find it difficult to delete messages and photographs sent by the criminal, even after it has been revealed to them that they have been deceived.

We have also researched the victimology of different types of scams, considering demographic as well as psychological factors. Our research is finding that different types of people are susceptible to different types of scams. Many romance scam victims, for instance, have been found to be middle-aged, educated women who score highly on psychological measures such as impulsivity, addictive disposition and trustworthiness.

One of the novel methods we are currently researching to detect and prevent MMF involves a team of computer scientists (Professor Rashid and Dr Stringhini), an expert in human-computer interaction (Professor Sasse), a criminologist (Professor Levi) and a philosopher (Professor Sorell). The work we are undertaking involves developing a proof-of-concept automated agent to identify communication with a potential scammer, ideally before the ‘sting’ takes place. The agent will need to make decisions about the probability that a user is communicating with a scammer by drawing upon the user’s personal data.
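To make this concrete, here is a minimal, illustrative sketch of how such a probability might be produced. It is not the project’s actual agent: the signal names and weights are invented for the example. Observable warning signs in a conversation are combined through a logistic function into a single risk score.

```python
import math

# Hypothetical warning signs with hand-picked weights; a real agent would
# learn these from labelled conversations rather than use fixed values.
SIGNAL_WEIGHTS = {
    "requests_money": 2.5,        # the contact has asked for money
    "refuses_video_call": 1.2,    # avoids any live, verifiable contact
    "profile_photo_reused": 1.8,  # the photo appears on unrelated profiles
    "rapid_escalation": 1.0,      # declarations of love within days
}
BIAS = -3.0  # baseline: most conversations are not scams


def scam_probability(signals: dict) -> float:
    """Combine the observed signals into a probability via a logistic function."""
    score = BIAS + sum(w for name, w in SIGNAL_WEIGHTS.items() if signals.get(name))
    return 1 / (1 + math.exp(-score))


if __name__ == "__main__":
    conversation = {"requests_money": True, "refuses_video_call": True}
    print(f"Estimated scam probability: {scam_probability(conversation):.2f}")
```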

One of the challenges in our research is the human element. MMFs, unlike phishing or even spear phishing, are especially challenging to detect because they typically involve communication with another person rather than a bot – this means that scripts can vary and are more complex. Often the criminal develops a relationship that appears authentic to the user (romantic, friendship, working relationship) over a long period of time before asking for money, and can vary the communication when a user demonstrates a lack of trust. They can also use multiple media channels to communicate with the user.

The research being undertaken in our project draws on psychology, media and communications, criminology and linguistics to help identify deceptive and persuasive communication, and evidence of the “grooming” often found in MMFs. We are also interested in identifying the online identities, other communications and online behaviours typical of scammers as well as of victims. By examining socio-technical features, such as the use of the same profile photographs or descriptions across multiple profiles and patterns of interaction and contact with other users (e.g. login times), we can help to spot MMF earlier.
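One of those socio-technical signals – the same profile photograph appearing on several profiles – is simple to illustrate. The sketch below is an illustration rather than code from the project: it groups photographs by a content hash and reports any image reused byte-for-byte across profiles (a perceptual hash, such as those in the imagehash library, would be needed to catch resized or re-encoded copies).

```python
import hashlib
from collections import defaultdict
from pathlib import Path


def find_reused_photos(photo_dir: str) -> dict:
    """Group profile photos by content hash; any group with more than one
    file is the same image reused across different profiles."""
    groups = defaultdict(list)
    for path in Path(photo_dir).glob("*.jpg"):
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        groups[digest].append(path.name)
    return {h: files for h, files in groups.items() if len(files) > 1}


if __name__ == "__main__":
    # "profile_photos" is a hypothetical directory of downloaded profile images.
    for digest, files in find_reused_photos("profile_photos").items():
        print(f"Same image used by: {', '.join(files)}")
```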

Importantly, we will be considering the ethical and social challenges associated with detecting and preventing MMF. For example, we question the ethics of drawing on personal data from both genuine and disingenuous users to assist in decisions about the identity and authenticity of another user. Moreover, we are interested in the ethics of how we ought to treat victims who cross the line and knowingly become “money mules” in order to recoup their losses. Should they also be treated as criminals?

As we produce papers we will present our latest findings on the DAPM website, and we are looking for volunteers to help us with our research. If we’re successful, we hope to prevent some of the most damaging and upsetting cyber crime.


Inside the National Cyber Security Centre

The new chief executive of the National Cyber Security Centre, Ciaran Martin, and other senior members of NCSC staff give their take on a more open, more outgoing arm of GCHQ.

The GCHQ base in Cheltenham is a building the size of Wembley stadium, bristling with security cameras, patrolled by armed guards and surrounded by tall fences that are topped with razor wire. The organisation’s new London headquarters, however – the National Cyber Security Centre – occupies two floors of a glass-walled office building in Victoria. It’s a very smart, new office building, but there is a distinct lack of razor wire, and none of the receptionists appear to be carrying automatic weapons.

The NCSC’s open environment is illustrative of its approach, particularly where businesses are concerned. While much of its operational work will remain classified, the NCSC will invite people from the private sector to train within its walls. Following an official opening by the Queen, Philip Hammond delivers a speech in which the digital economy is mentioned before national security, and in more detail.

“The private sector is piling in extensively here today,” agrees Ciaran Martin, the NCSC’s chief executive. “We’re getting 100 private sector people in to work here,” he adds, referring to the Industry 100 initiative, which will “embed” 100 workers from across the private sector in the NCSC to share expertise. “It’s not one of those areas where the private sector is telling the government to back off – they’re asking to work with us, and we’ve got plenty to learn from them.”

The NCSC will also be heavily involved in securing the public sector, helping to co-ordinate cyber defences across bodies from the MoD to the smallest local council. “Local government is a major concern for the NCSC,” says Martin, “but let me be nice to local government. They are under significant financial pressure, they’ve got all sorts of obligations, and this can be quite complex stuff. There are 380-odd local authorities in Great Britain. Some of them, like Birmingham, are the size of decent-sized companies, and some of them are very small. If you’re a small local authority, I think that in the past, organisations like mine have been slightly too lecturing towards you about what you’re not doing right, and not sympathetic enough to the fact that if you’re trying to run, for example, a small rural local authority, you’ve got lots of citizen data but you’ve got lots of other responsibilities, and it’s quite hard to get the right people and the right tools in place. It’s quite hard to even know where you can look for help.”

Martin aims to change that by introducing simple, effective tools that will help public bodies of all sizes secure themselves. “One of the things that we’re proudest of, which we’ll be rolling out later this year – and which has been exhibited in front of the Queen today – is WebCHeck. What WebCHeck does is, it scans websites for vulnerabilities and it says ‘here’s where you’re good, here’s where you’re bad, here’s where your certificates are out of date’. It gives you a report that’s automatically generated, and it tells you how to fix it. We’re giving that to local government for free.”
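The certificate check is the easiest part of such a report to picture. The following sketch is an illustration rather than NCSC code: it fetches a site’s TLS certificate and reports how many days remain before it expires, one of the checks a WebCHeck-style scan would include (the domain shown is a placeholder).

```python
import socket
import ssl
from datetime import datetime, timezone


def days_until_cert_expiry(hostname: str, port: int = 443) -> float:
    """Fetch the host's TLS certificate and return days until it expires."""
    context = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    expires = datetime.fromtimestamp(ssl.cert_time_to_seconds(cert["notAfter"]),
                                     tz=timezone.utc)
    return (expires - datetime.now(timezone.utc)).total_seconds() / 86400


if __name__ == "__main__":
    site = "council.example.gov.uk"  # placeholder local-authority domain
    print(f"{site}: certificate expires in {days_until_cert_expiry(site):.0f} days")
```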

These NCSC-developed tools will also become available to small businesses. The centre recently built a tool to eliminate spoof emails that appeared to be from HMRC: “The code that we used to stop HMRC spoofing, we’re making freely available today. That means that if you run a small business with an internet domain address, you can work out who, if anybody, is spoofing you and what you might be able to do to thwart them. We’re trying to do things that make it that little bit simpler for people who may not have the resources and time of a larger government or private sector organisation, just to make it a little bit easier to take sensible, risk-based decisions and make the improvements that will help. Because every little helps, in cyberspace – if you raise the bar a little bit, attackers can go elsewhere.”
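The article does not name the mechanism, but spoofed email of this kind is usually tackled with SPF and DMARC policies published in DNS. As a rough sketch of the first step in working out who, if anybody, is spoofing you, the code below checks whether a domain publishes those records; it relies on the third-party dnspython package, and the domain is a placeholder.

```python
# Requires the third-party dnspython package: pip install dnspython
import dns.resolver


def get_txt_records(name: str) -> list:
    """Return the TXT records for a DNS name, or an empty list if none exist."""
    try:
        answers = dns.resolver.resolve(name, "TXT")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return []
    return [b"".join(r.strings).decode() for r in answers]


def check_anti_spoofing(domain: str) -> None:
    """Report whether a domain publishes SPF and DMARC anti-spoofing policies."""
    spf = [r for r in get_txt_records(domain) if r.startswith("v=spf1")]
    dmarc = [r for r in get_txt_records(f"_dmarc.{domain}") if r.startswith("v=DMARC1")]
    print(f"{domain}: SPF {'present' if spf else 'missing'}, "
          f"DMARC {'present' if dmarc else 'missing'}")


if __name__ == "__main__":
    check_anti_spoofing("example.com")  # placeholder domain
```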

The NCSC’s technical director, Dr Ian Levy, says blunt instruments are still too effective in cyberspace. “It’s important to differentiate the sophistication of the attack with the level of the impact. The two are not correlated; you can have a really, really simple attack that causes a lot of national impact. Take TalkTalk as an example – a very, very simple attack had a huge effect across a large number of people. Whether it should have done is another discussion, but it did. It changed the public consciousness; a lot of the very sophisticated attacks don’t have that same sort of impact on a large number of people. Some of them are not about disclosing large amounts of personal data, or stealing, or making money – they’re about traditional statecraft, and that has a much lower impact on your average population. It can have a national security impact, but one of the things we need to change the narrative of is the difference between the sophistication of an attack and the impact of that attack.”

State-level attacks

While much of the NCSC’s work will be in making the UK a “hard target”, as Martin describes it, for cybercriminals of all kinds, the centre remains a part of GCHQ. Its work will also encompass the new possibilities digital technology has opened up for espionage, diplomacy and war. At the centre of one of the exhibits shown to the Queen and other visitors on the opening day is a grey box, about the size of a biscuit tin, a few lights blinking on its front. Easily ignored by the passing dignitaries, this box is of particular significance in security circles. It is a programmable logic controller, or PLC. These controllers are found everywhere moving parts need to be automated and controlled – in factories, power stations, aeroplanes, trains, and automatic doors. In 2010, a mysterious and highly sophisticated piece of malware appeared that targeted one specific model of PLC, in a very specific configuration, causing it to malfunction and seriously damaging the equipment it controlled. The equipment it targeted was later identified as the enrichment technology used in the Iranian nuclear programme.

The display also contains a laptop. Tap a button, execute a command through the malware on it, and a light on the PLC changes from green to amber. In December 2015, an unknown hacker tapped just such a button. Moments later, the lights in 230,000 Ukrainian homes went off.

A member of NCSC staff who declined to be named said that his greatest worry with regard to this type of attack was that it could be used on the gas grid. “If the gas network was depressurised,” he told me, “it could take up to a year to get it back.” These are the more worrying scenarios the NCSC must imagine and plan for; a winter without central heating would bring the NHS to its knees, at the very least.

Jacqui Chard, the NCSC’s Deputy Director for Defence and National Security, says that a national security level cyber incident could take many forms. “It’s about the impact across government or across citizens,” she explains, adding that at the most serious level, the NCSC helps to plan against and prevent attacks that would cause “serious damage, loss or disruption of critical services or systems for the nation – which could be critical national infrastructure, the parliamentary system, defence, our finance institutions, or our transport system.”

“From a defence point of view,” Chard says, the most serious type of cyberattack would be one that looks like an enemy preparing the battlefield, that “impacts on the strategic planning for our military forces. Or, if we were subject to attacks on our soil, how we’re going to co-ordinate – so, if communications between government at the highest level were affected. That’s where we’re focusing for the biggest risks for the country at the moment.”

While attacks of this type are fortunately still mostly theoretical, it does look increasingly as if cyberweapons are capable of causing loss of life on a similar scale to the kinds of weapons that are bound by international treaties. Steps in this direction were taken in 2015, when the Chinese government agreed with the US and UK “not to conduct or support cyber-enabled theft of intellectual property, trade secrets or confidential business information” (in the words of the China-UK statement). Asked why she thinks this agreement did not include a statement on national security, Chard replies that “The business agreements that we’ve made are a matter of national security. They’re for our prosperity as a country, so we absolutely see those as part of that.”

The new diplomacy

With the growing power of cyberattacks to cause devastating consequences across borders comes the thorny issue of determining where an attack has originated, who ordered it, and if a government was involved. It is likely that the difficulty of attribution will have profound effects on diplomacy in the future, and a key role for the NCSC will be to provide evidence of the involvement of other nation states.

Both Ciaran Martin and Michael Fallon have spoken publicly about a “step change” in Russian cyber aggression, but Martin says certainty is still hard to come by. “Attribution can be very difficult, and a lot of the detection work on state attacks is in the classified area of where we work, even though we work a lot in the open. But in general terms, in my three years of looking at these [incidents], sometimes you have direct evidence of named individuals with pictures, and sometimes you have very little clue as to even what country an attack might be coming from.” Furthermore, “attacks could be coming from within a particular country, but that’s not necessarily the same thing as being sponsored by that country, or even tolerated by the government of that country.”

What makes international relations even more complex is that increasingly, and especially with regard to Russia, technology allows other “actors” to expose secrets and disseminate lies at scale. This is particularly effective when it comes to elections. The extent to which Russia may have been able to influence the US presidential election is the subject of furious debate, but the UK’s political system is not immune to intervention either. Last year, GCHQ revealed that it had tracked and thwarted what Martin calls “activity” with regard to Whitehall servers. “There was activity we noticed,” he says, “because we notice activity all the time, that was in and around institutions that may or may not be related to the possibility of an attack on the election.”

Governments and political parties are going to have to recognise the threat this “activity” represents. Martin says no formal requests have been made by specific parties for help, but that he expects these requests to be made.

Ultimately, he advises that to safeguard British politics, “you need to look at the system as a whole, all the way through from government institutions to parliament, to institutions that are influential in political life, like the media, like think tanks – way beyond political parties, even to high-profile individuals whose views are of interest. It’s about the totality of that. So we’ll publish data and recommendations about how to mitigate these sorts of attacks, and we’ll look at the most aggressive actors and try to find out what they’re targeting. That’s probably better than trying to predict the precise route of attack on the British political system.”

Will Dunn is the New Statesman's Special Projects Editor. 
