
Why is Google working with the Pentagon?

Given its “Don’t be evil” motto, Google’s decision to work with the world’s most powerful military may seem surprising.

The tech giants have faced unprecedented scrutiny lately, with commissions, boards and committees around the world decrying the effect that they have had on society, from influencing elections to facilitating disinformation. But one area where their algorithms have the potential for huge impact has until recently flown mostly under the radar. 

Google’s involvement in the Pentagon’s Algorithmic Warfare Cross-Functional Team, known as Project Maven, was leaked to Gizmodo this week by current Google employees. Google provided TensorFlow APIs (application programming interfaces) – the interfaces through which other software can use its open-source machine learning framework to build and run models, such as systems that recognise objects in images.

These systems enable the military to sift through the vast quantities of footage collected by drones, identify objects of interest and flag them for a trained human analyst to review. Google has stated that the partnership is a pilot and that its tools are solely for non-offensive use, though that seems difficult to prove. Reports indicate that the technology has already been used in operations against the Islamic State since December 2017.
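Google has not published the code it supplied to the Pentagon, but the workflow described above – scan frames, score possible detections and flag the confident ones for a human analyst – is the standard object-detection pattern in TensorFlow. The sketch below is purely illustrative, using a publicly available pre-trained detector from TensorFlow Hub; the model choice, threshold and function names are assumptions made for the example, not anything taken from Project Maven.

```python
# Illustrative sketch only: flag video frames containing high-confidence
# detections for human review, using an off-the-shelf TensorFlow Hub model.
# None of this reflects the actual Project Maven code, which is not public.
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

# Publicly available SSD MobileNet detector (an arbitrary choice for this sketch).
detector = hub.load("https://tfhub.dev/tensorflow/ssd_mobilenet_v2/2")

def flag_frames_for_review(frames, score_threshold=0.7):
    """Return the indices of frames with at least one detection above the threshold."""
    flagged = []
    for i, frame in enumerate(frames):
        # The model expects a uint8 batch of shape [1, height, width, 3].
        batch = tf.convert_to_tensor(frame[np.newaxis, ...], dtype=tf.uint8)
        result = detector(batch)
        scores = result["detection_scores"].numpy()[0]
        if scores.max() >= score_threshold:
            flagged.append(i)  # hand this frame to a human analyst
    return flagged
```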

While the news has generated controversy because it is rare for this new breed of consumer tech companies to collaborate directly with the military, the armed forces have always played an important role in developing novel technologies, even those with civilian applications. The internet itself was, of course, the result of a research project at DARPA, part of the US Department of Defense, intended to give military personnel a way to communicate without needing a phone.

But times have changed, leaving governmental agencies scrambling to keep up. Ulrike Franke, an expert on drones and the military at the University of Oxford, points out that “cutting-edge research on new technologies is no longer done primarily in the military, in the US through DARPA, but at Google and the like.” This is for a variety of reasons, but primarily because the last few decades have seen an acceleration in technical innovation that a company free of bureaucratic restraints – and without national security issues to consider – can pursue and develop far better than a governmental agency.

"One of the key issues with UAVs is that they produce a lot more data than human beings can assess, given the resource constraints that militaries and intelligence agencies have to operate with," says Dr.Jack McDonald, a lecturer in War Studies at King's College London who specialises in the relationship between the ethics of war and new technology. "Artificial intelligence, or rather knowledge discovery by computer systems, could enable the same number of human beings to leverage these volumes of data."

In 2017, the Pentagon invested $7.4bn directly into developing artificial intelligence and machine learning capacities. A Department of Defense memo establishing Project Maven highlights that the integration of machine learning and artificial intelligence is crucial for the department to maintain "advantages over increasingly capable adversaries and competitors".

Given Google's pioneering work in the field, as well as the links that many of the executive board members of its parent company, Alphabet, have with the defense community, Google is well placed to inform the development of this specific program.

Executives at both Google and at Google's artificial intelligence arm, DeepMind, were among the first to speak about setting up ethical advisory boards to investigate the social implications of artificial intelligence. These are meant to stave off the danger of algorithms wreaking havoc on society with impunity.

Within the US in particular, there is a long history of private companies, such as Lockheed Martin, carrying out contracted work for military agencies. Government contracts are usually lucrative and long-term, and arguably provide an opportunity for a company like Google to take a greater role in developing artificial intelligence for military use, independent of other companies – a sufficient incentive in itself.

It could be argued that tech companies will inevitably have to find a place in the military ecosystem, given how many of the new kinds of battles being waged are digital. As Dr McDonald points out, "computer systems and technologies associated with AI are fundamentally dual-use – they could easily be repurposed for military use."

Others argue that if Google is able to refine the surveillance tools that the military will inevitably use, then it has a moral responsibility to be involved, given that it can’t fundamentally alter the course of US foreign policy, but can reduce the number of unnecessary casualties.

And yet the Google employees leaked the internal emails to Gizmodo because they were outraged that their work would be used for surveillance technologies. For a company with “Don’t be evil” as one of its mottos, Google’s decision to work with the world’s most powerful military should be surprising, even to the people who work there.

Google's previous track record also shows a reluctance to be involved with military projects, even indirectly. When Google acquired Skybox, a satellite-imaging company, in 2014, it ended some of Skybox’s military contracts, and its robotics teams have not entered competitions run by the Pentagon, in contrast to many other companies with similar (or even lesser) capabilities. Given its previous stance, Google's choice to work in such an unregulated field at the intersection of developing technologies – both drones and artificial intelligence – seems like a change of course.

Artificial intelligence poses significant challenges outside of a warzone, let alone in combination with weapons. It could remove accountability for operations, reducing the death of civilians to no more than a computer error. The US government's deployment of drones has been deeply flawed, and this is unlikely to change with simply more accurate identification of "objects". And if those "objects" are terrorists, a huge range of other issues arises, such as what data the system is trained on and how you even classify a terrorist, as detailed in this Ars Technica article.

Given what’s at stake, Google’s involvement in Project Maven – and by extension, in the military industrial complex – adds a particularly thorny extra dimension to those debates raging about what role technology and the firms that create it play in our society.


We should overcome our instinct to mock Jake Paul’s school shooting video

The urge to mock the ex-Disney star diminishes the victims he speaks to and ignores the good YouTubers can do.  

It’s very “darkest timeline”. Ex-Disney star Jake Paul (brother of vlogger Logan Paul, who infamously filmed the dead body of a suicide victim) has created a 22-minute documentary about the Parkland school shooting in which he greets Florida senator Marco Rubio with the words “Hey, what’s up man?” and doesn’t mention gun control once. 

Paul – who has previously made headlines for setting fire to a swimming pool – goes on to ask the politician: “I think like a lot of people think passing laws is super easy, can you explain some of the struggles around, uh, passing laws?”

It’s hard not to balk immediately at the documentary, which was released yesterday and has since been widely mocked by the press and individual journalists. Critics note that Paul doesn’t mention gun reform within the YouTube video, and many mock his conduct towards Rubio. Others accuse the video of being an insincere PR move, particularly as Paul has previously fetishised guns on his YouTube channel – and has a tattoo of a gun on his thigh.

The 21-year-old Jake Paul talks and conducts himself like a child, which is what makes the video immediately jarring (“I just wanna become homies with them and just be there for them,” he says of the Parkland survivors he is about to meet). There is a vacant – almost dumb – expression on his face when he speaks with Rubio, leading the viewer to question just how much the YouTube star understands. But this is precisely the value of the video. Paul is a child talking to an audience of children – and talking to them on their terms.

YouTube doesn’t disclose the exact demographics of a YouTuber’s audience, but fan videos and Paul’s comment section reveal that most of his 14 million subscribers are young children and teens. Paul is introducing these children to a politician, and the video is edited so that Rubio’s claims don’t go unchecked – with footage of the senator being criticised by Parkland survivors playing in between shots of Paul and Rubio’s chat.

Paul (admittedly unintentionally) asks the senator questions a child might ask, such as “Is there anything that people can look forward to? Is there anything new that you’re working on?”. Although this might be jarring for adults to watch, the comment section of Paul’s video reveals it is already positively affecting his young audience.

“Definitely going to speak out now,” writes one. Another: “I shared this to my Mum and asked her to show the head teacher so everyone do that as well.” Childishness is still transparently at play – one commenter writes “Plzzz Stop the Guns… it hurts my feeling I’m crying… 1 like = 10 Pray to Florida” – but this too shows that Paul has introduced new concepts to kids previously more concerned with online pranks and viral fame.

Of course, it’s easy to see how this might be a cynical move on Paul’s part. Yet how can we demand more from YouTubers and then criticise them when they deliver it? Paul’s video is far from perfect, but engaging children in genuine discussions about current affairs is a commendable move, one far superior to his prior acts. (Paul previously caused controversy by telling a fan from Kazakhstan that he “sounds like you’re just going to blow someone up”, and his diss track “It’s Everyday Bro” is the third most disliked video on YouTube.) Like it or not, Paul has an incredible influence over young people – at least he is finally using it for good.

Paul’s video has also undeniably helped at least one teen. “It’s just easier to talk about what’s going on with someone like you than a doctor or someone,” Jonathan Blank – a Parkland survivor – tells the YouTuber in the video. Later, his mother praises Paul through her tears. “It was the best therapy for my son,” she says, “You didn’t have an agenda, you cared.”

Other Parkland survivors are angry at the media’s response to the video. Kyle Kashuv – also interviewed in the documentary – has tweeted multiple times since the video’s release. “Media has the utter audacity to mock my classmates and Senator Rubio for doing the interview ON MY REQUEST AND THE REQUEST OF TWO OTHER STUDENTS,” he wrote.  

“If you mock a video where my classmates, that witnessed their friends get murdered in cold blood, are crying and putting their hearts on their sleeve, be prepared to be hit back twice as hard.”

Kashuv differs from the most famous group of Parkland survivors in that he supports the STOP School Violence Act over national gun reform. Yet his politics do not make his thoughts or feelings less valid, or his voice less important in the conversation. While critics note Paul spoke little of gun reform in his video (instead he suggested that schools have bulletproof glass and that Instagram should flag pro-gun posts), the YouTuber later tweeted to clarify his stance.

“Gun Reform changes we need in my opinion,” he wrote. Paul went on to suggest that anyone who wants to buy a gun should be 21, go through a six month training course, and have a mental health evaluation. He also tweeted that gun shows should be banned and there should be a “30 day wait period after purchase to receive firearm”.

This isn’t to say, of course, that Paul is right, or has all the answers, or is even equipped to discuss this topic sensitively. Yet his promise to pay for buses to the March for Our Lives demonstration in Washington DC, alongside the fact he didn’t monetise his YouTube documentary, speaks of someone at least trying to do some good. “We all want the same thing and that’s to make schools safe,” he says in the video. Although he gives Rubio and the STOP School Violence Act a platform, he is dismissive of their impact.

“Kind of why I wanted to make this video in the first place is to activate parents and kids within their own schools and communities, that’s the way things are going to get done the fastest. We don’t want to wait for hundreds of people in Washington DC to pass the laws,” he says.

Though the description of Paul’s video was most likely written by a far more savvy PR, it’s hard to disagree with. “I vow to be part of the solution and utilise my platform to raise awareness and action across the board, but we cannot focus on one issue, we must actively discuss and make progress on them all,” it reads.

The criticism of Paul smacks of the old media sneering at the new media, galled and appalled that a 21-year-old YouTuber would dare wade into politics and do so less than perfectly. Concerns about propriety and morality are a veil to disguise a pervasive distaste for YouTube stars. Criticisms that his suggested solutions are stupid ignore the fact that it’s not his job to reform society. It’s like having a go at Sesame Street for not criticising Theresa May.

YouTubers might not be the idols that adults wish teenagers had, but we can’t change that. What we can do is encourage viral stars to engage with important issues, and not mock them when they do so less than brilliantly. Jake Paul may not be a good person – it might even be a stretch to describe the video as “good”. But the YouTuber made an effort that should be commended, not mocked. 

Amelia Tait is a technology and digital culture writer at the New Statesman.