Hard Evidence: Is the teenage brain wired for addiction?

The younger you are when you have your first alcoholic drink, the more likely you are to develop problems later on in life.

As a nation, we are drinking much more than we used to, which is partly attributable to alcohol being cheaper and more available than ever. Many British teenagers get into the habit early, although recent trends suggest this situation is improving (alcohol consumption among teenagers is slightly lower than it was ten years ago).

Nonetheless, drinking alcohol during adolescence is not a good idea, because the younger you are when you have your first alcoholic drink, the more likely you are to develop problems later on in life. The same is true for cigarette smoking and the use of illicit drugs such as cannabis and cocaine.

Rates of teenage drinking are dropping. NatCen


Arrested development

Why are adolescents particularly vulnerable to addiction? A large part of the answer comes from our understanding of the neurobiology of brain development during adolescence. The brain does not reach maturity until fairly late in life, with new connections between brain cells being formed right up until people are in their mid-20s.

Importantly, the brain does not mature at a uniform rate. The more primitive regions, including the reward system and other subcortical areas such as those that process emotions, reach maturity relatively early (when people are in their early teens).

The prefrontal cortex is a late bloomer. National Institutes of Health

The more “advanced” parts of the brain, such as the prefrontal cortex, are not fully developed until much later. In behavioural terms this means adolescents are particularly sensitive to their emotions and to things that are novel and motivationally appealing, but they are relatively unable to control their behaviour and plan for the future.

Taking risks

My research suggests this can explain why some adolescents drink more than others: teenagers who were relatively poor at exerting self-control, or who took more risks on a computerised risk-taking task, were more likely to go on to drink heavily.

This creates perfect conditions for vulnerability to addiction during adolescence, because the motivational “pull” of alcohol and other drugs is very strong, whereas the ability to control behaviour is relatively weak. Many scientists think that frequent heavy drinking during adolescence might cause long-lasting changes in the way the brain is organised, which can make it very difficult to stop drinking.

We certainly see changes in the brains of people with alcohol problems compared with people without them, but it can be difficult to work out whether alcohol caused those changes, or whether those people had subtly different brains before they started drinking, differences that may have led them to start drinking in the first place.

Starting early carries greater risk. NatCen


Addiction and behaviour

In principle, adolescent brains could be vulnerable to “behavioural” addictions as well as alcohol and drug addiction, for exactly the same reason. However, only one behavioural addiction, gambling, is officially recognised by psychiatrists and psychologists at the moment.

The Channel 4 documentary Porn on the Brain, shown this week, asked whether pornography is addictive, and whether adolescents could be getting hooked. As shown in the programme, a minority of adolescents who use pornography do seem to exhibit some of the characteristic features of addiction, such as feeling unable to control their use of porn and losing interest in other activities.

Their patterns of brain activity when viewing porn seem to be similar to those seen in people with alcohol and drug addictions when they look at pictures of alcohol and other drugs. It remains to be seen whether addiction to porn will eventually be recognised as a psychological disorder, but it is clear that it can create problems for some adolescents and young adults who use it.

What can be done? It may sound obvious, but parents should do what they can to prevent their children from experimenting with alcohol, smoking and other drugs for as long as possible. The same applies to other things that might eventually be considered “addictive”. School-based prevention programmes can also be successful, including a recent programme, tailored to different personality types, that has shown some promise at reducing alcohol consumption in teenagers.

Hard Evidence is a series of articles in which academics use research evidence to tackle the trickiest public policy questions.

Matt Field receives funding from the Medical Research Council, Economic and Social Research Council, Wellcome Trust, British Academy and Alcohol Research UK. He is affiliated with the UK Centre for Tobacco and Alcohol Studies.

This article was originally published at The Conversation. Read the original article.


Matt Field is Professor of Experimental Addiction Research at the University of Liverpool.


The rise of the racist robots

Advances in artificial intelligence are at risk of being held back by human ignorance.

As far as dystopian visions of the future go, you can’t get much worse than the words “Nazi robots”. But while films, TV and games might conjure up visions of ten-foot titanium automatons with swastikas for eyes, the real racist robots – much like their human counterparts – are often much more subtle.

Last night, Stephen Hawking warned us all about artificial intelligence. Speaking at the opening of the Leverhulme Centre for the Future of Intelligence (LCFI) at Cambridge University, Hawking labelled AI “either the best, or the worst thing, ever to happen to humanity.” It’s an intelligent warning – many experts are already worried that AI will destroy the world of work – but it homogenises humans. What is the “best thing” for some may be the “worst thing” for others, and nowhere is this clearer than in the issue of race.

It started with the Nikon Coolpix S630. In 2009, Joz Wang, a Taiwanese-American, bought the camera for her mother, and was shocked when a message on the screen asked “Did someone blink?” after she took a picture of herself. In July 2015, Google Photos came under fire after its image recognition software tagged Jacky Alciné and his friend, both of whom are black, as “Gorillas”. In September of the same year, a video showing an automatic soap dispenser refusing to respond to a black hand went viral. You might dismiss these examples as harmless bugs or honest mistakes, but they still tell us a lot about the way the technology industry tests its products – and therefore, which customers it values.

But then it got worse. This year alone, the first beauty contest judged by AI had only one dark-skinned winner out of 44, Princeton academics discovered that a popular language-processing algorithm found “black” names unpleasant, and software used in America to predict future criminals rated black people as higher risk. And who can forget Microsoft’s ill-fated chatbot Tay? The bot – which was taught to converse by mimicking other Twitter users’ speech – was taken offline after 16 hours because it began spouting sexist and racist messages.
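Tay’s failure mode is easy to reproduce in miniature. The sketch below is a purely hypothetical Python toy, not Microsoft’s actual design: a bot that “learns” by storing whatever users say and replying with a random sample of it. Everything here, the EchoBot class and its messages, is invented for illustration.

```python
# A hypothetical toy: a bot that "learns" to converse purely by mimicking
# its users. Not Microsoft's design; the class and messages are invented.
import random

random.seed(1)

class EchoBot:
    def __init__(self):
        self.memory = []  # everything users have ever said to it

    def learn(self, message):
        # No filter and no notion of acceptable content: store everything.
        self.memory.append(message)

    def reply(self):
        # Replies are sampled from whatever users have taught it.
        return random.choice(self.memory) if self.memory else "hello!"

bot = EchoBot()
bot.learn("nice to meet you")
bot.learn("humans are lovely")
print(bot.reply())  # benign, because its inputs so far are benign

bot.learn("<an abusive message>")
# An abusive reply is now as likely as any other learned phrase.
print(bot.reply())
```

The point is structural: a system that optimises for sounding like its users will reproduce whatever its users supply, at whatever scale they supply it.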

We could sit here and debate whether an AI can truly be considered racist, but it wouldn’t change the outcome of events. These algorithms and machines aren’t explicitly programmed to be racist, and their designers usually aren’t prejudiced themselves, but that doesn’t change the consequences of their use. The more dominant AI becomes in our world, the more problematic this will become. Imagine the consequences of racial bias in AI job-screening tools, dating sites, mortgage advisers, insurance companies, and so on.

“Bias in AI systems is a vital issue,” says Calum Chace, the best-selling author of Surviving AI and a speaker on how the technology will affect our future. “We humans are deplorably biased – even the best of us. AIs can do better, and we need them to, but we have to ensure their inputs are unbiased.”

To do this, Chace explains, we need to figure out the root of the “racism”. Pretty much no one is deliberately designing their AI to be racist – Google’s chief social architect, Yonatan Zunger, responded quickly to the “Gorillas” incident, tweeting “This is 100% Not OK.” But the fact that only two per cent of Google employees are black is perceived as part of the problem, as in many of these instances the technology was designed with white people in mind. “The chief technology officer of the company that ran the beauty contest explained that its database had a lot more white people than Indian people and that it was ‘possible’ that because of that their algorithm was biased,” says Chace.

There are also technical solutions. Chace explains that machine learning systems work best when they are fed huge quantities of data. “It is likely that the system was trained on too few images – a mere 6,000, compared with the many millions of images used in more successful machine learning systems. As a senior Googler said, machine learning becomes ‘unreasonably effective’ when trained on huge quantities of data. Six thousand images are probably just not enough.”
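Both of Chace’s points, a skewed mix of training examples and too few of them overall, can be seen in a toy model. The following sketch is a hypothetical Python illustration, not how any real face-analysis system works: it reduces each face to a single invented “appearance feature” and shows how a model’s learned sense of “normal” ends up centred on the over-represented group.

```python
# A hypothetical toy: what a model learns to call "normal" when one group
# dominates the training data. All features and numbers are invented.
import random
import statistics

random.seed(0)

# Pretend each training image reduces to one numeric "appearance feature":
# 6,000 faces from a majority group, only 60 from a minority group.
majority = [random.gauss(0.0, 1.0) for _ in range(6000)]
minority = [random.gauss(3.0, 1.0) for _ in range(60)]
training = majority + minority

mean = statistics.fmean(training)
spread = statistics.pstdev(training)

def looks_normal(feature, z_cutoff=2.0):
    # Flag anything far from the training mean as "unusual": the kind of
    # crude rule that produces "Did someone blink?" style errors.
    return abs(feature - mean) / spread <= z_cutoff

print(looks_normal(0.0))  # True: a majority-typical face passes
print(looks_normal(3.0))  # False: a minority-typical face is flagged as odd
```

Rerun the sketch with 6,000 faces from each group and both calls return True: the bias lives in the composition of the data, not in any explicitly prejudiced line of code.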

Now more than ever, it is important to straighten out these issues before AI becomes even more powerful and prevalent in our world. It is one thing for intelligent machines to drive our cars and provide our insurance quotes, and another thing for them to decide who lives and dies when it comes to war. “Lethal autonomous weapons systems (LAWS) will increasingly be deployed in war, and in other forms of conflict,” says Chace. “This will happen whether we like it or not, because machines can make decisions better and faster than humans. A drone which has to continually ‘phone home’ before engaging its opponent will quickly be destroyed by a fully autonomous rival. If we get it right, these deadly machines will reduce the collateral damage imposed by conflict. But there is plenty of work to be done before we get to that enviable position.”

Whether or not this vision of the future comes to fruition in 10, 20, or 100 years, it is important to prepare for it – and for other possibilities. Despite constant advances in technology, there is still no such thing as a “conscious” robot that thinks and feels as humans do. An AI cannot, in itself, be racist. To solve this problem, then, it is humans who need fixing.

Amelia Tait is a technology and digital culture writer at the New Statesman.