The field of AI is changing rapidly – and George Farrer has made it his business to keep up. It isn’t easy. He spends around an hour each day browsing press releases, trawling through academic papers and scrolling Twitter, the better to sift and scrutinise the latest developments for the benefit of his colleagues in Westminster. It might not be the most glamorous aspect of his role as Head of Parliament Relations at the think tank Big Innovation Centre, but it’s central to his promotion of the technology’s potential — the bread and butter of his weekly discussions with MPs. “AI is sort of on everyone’s lips at the moment,” says Farrer, “especially within parliament.”
Farrer manages the All-Party Parliamentary Group on AI. APPGs are informal groups representing MPs’ and peers’ interests, covering everything from vaping and veterans to the visual arts and vaccinations. Although they have to be chaired by MPs and Lords – in the case of the AI group, that’s the Rt. Hon. Stephen Metcalfe (Con) and Lord Clement-Jones (Lib Dem) – they’re often managed or funded by private organisations and think-tanks like the Big Innovation Centre. They hold no official status within Parliament, but advocates say they’re helpful for drawing attention to under-discussed issues and building cross-party campaigns for policy change — even though some transparency campaigners say they’re insufficiently regulated.
The debate surrounding AI is getting increasingly heated, and lawmakers are struggling to respond to constituents’ fears over job losses and so-called “existential risk.” Controversial regulations are also in the pipeline — ranging from the UK’s “light-touch” approach to the comprehensive, risk-based EU AI Act. Amid this fraught political moment, Farrer is one of a growing number of young researchers and industry leaders who can be found traipsing the halls of power in Westminster and Brussels, seeking to promote discussion of the mighty promises of AI — and the equally mighty risks.
As the major interlocutors between industry and lawmakers, think-tanks like Big Innovation Centre and trade associations like techUK are among the voices working to help shape AI policymaking from behind the scenes in Westminster. European start-ups, meanwhile, have found their own specialist lobbying group in the European AI Forum (EAIF). What does this work — as the middlemen in discussions of a contentious new technology — actually involve?
In Farrer’s case, most of it revolves around arranging the APPG’s “evidence sessions”, which take place around six or seven times annually on different topics. Farrer researches relevant themes, speakers, and potential invitees and then uses the sessions to promote the group and generate key policy recommendations. “I don’t do as much within the evidence sessions [themselves],” says Farrer. “As long as I’ve briefed the Chair well and briefed the speakers well, that’s 90 minutes where I can relax a bit.”
Farrer says he’s visited the Palace of Westminster at least twice a month for evidence sessions, seminars, and private meetings this year. “Hopefully we’ll be in there more from September,” he says, predicting that political discussion of AI will grow exponentially after the summer recess. From Big Innovation Centre’s office in the City, meanwhile, he maintains weekly contact with Metcalfe and Clement-Jones, the co-chairs of the APPG.
Farrer tends to see a lot of the same faces at the group’s events, but says that, despite long-standing stereotypes of MPs’ technophobia, broader interest in emerging technologies is growing rapidly in Westminster — especially among newer MPs keen to make their mark. Big Innovation Centre recently organised a seminar titled ‘What is AI?’, intended to introduce the subject to fresh-faced parliamentarians outside of the industry-specific APPG. MPs elected in 2019, such as Dr Luke Evans and Angela Richardson, were particularly notable among the twenty-or-so attendees, says Farrer.
It’s been a lot easier to get people to pay attention to AI ever since the rise of ChatGPT, says Farrer, who’s been working in the field since 2021. “Parliamentarians are getting more engaged,” he says. They need to know about AI — not only because their constituents are asking about it, but also because it might play a role in their own political fates, especially amid mounting fears over deepfakes and misinformation potentially undermining democratic elections.
Like Farrer, Katherine Holden spends a lot of time in meetings. As Head of Data Analytics, AI and Digital ID at digital technology trade association techUK, Holden says her schedule has been particularly hectic since the launch of ChatGPT last November. “AI definitely feels like it’s having its moment right now,” she says. “A lot of our members — but also the public as well — are quite overwhelmed.”
Her workdays involve everything from organising industry working groups to discussing social media strategy and emailing MPs and policymakers. Holden’s outreach has particularly targeted the newly minted Department for Science, Innovation and Technology (DSIT), a flagship initiative of Rishi Sunak’s premiership. DSIT has faced some teething problems and scepticism, but it’s also earned big fans in the tech industry, including Holden, who says the department’s policymakers have been very open to discussion with industry representatives on AI. “It’s been really refreshing,” she says. “It’s quite a new way of policymaking: iterating and kind of creating policy in the open and recognising they may not have all the answers yet.”
For lobbyists and trade leaders, keeping up with the rapid pace of AI development is a mammoth task in itself, explains Holden. “I recognise now that you can’t always keep on top of absolutely everything,” she says. For her part, Holden makes a point of spending much of her free time monitoring the conversation about the conversation on AI, whether that’s in the news, social media discourse, or podcasts like Nina Schick’s “Pioneers.” “Sometimes actually what people are saying around a story is just as interesting as the story itself,” says Holden.
Similar conversations are happening across the Channel. Policymakers in the EU have been agog at recent developments taking place across the AI field, events that threaten to overtake the nascent EU AI Act and keep German-born lobbyist Daniel Abbou busy in Brussels. “I know the best places to buy […] French fries — or Belgian fries — and waffles,” he says, not a little conspiratorially.
While big tech companies notoriously shell out huge sums for lobbyists and trade associations, smaller European startups sometimes struggle to make their voices heard in Brussels, explains Abbou. He’s the manager of both the German AI Association and the European AI Forum (EAIF), which brings together nine national industry groups from across Europe — aiming to give a platform in Brussels to more than 2,000 start-ups and scale-ups.
Beyond the local snacks, lobbying in Brussels is pretty mundane, says Abbou. It’s mainly about writing position papers and, most crucially, angling for one-on-one conversations with representatives from the European Parliament and Commission. “It’s like an appointment for a normal meeting,” he says. “You talk to them, say: Hey, I’m in Brussels. Can we talk about X, Y, Z? Then some of them say ‘No’ and some say, ‘Yeah! Come and have a coffee.’”
Abbou says his greatest weapon for securing engagement from European parliamentarians is championing homegrown European businesses. That can help him stand out among better-funded international representatives. “Our USP is to say […] we have a European agenda,” says Abbou. That helps them win a hearing — and respect — from those in the European Parliament and the Council of the European Union.
The debate around AI heated up in Brussels in anticipation of the EU’s draft AI Act — the world’s first comprehensive artificial intelligence legislation — which some fear could stunt innovation on the continent, says Abbou. The legislation, which takes a largely risk-based approach to AI, will have sizable compliance consequences for developers and start-ups across the EU. “What triggers me on the EU AI Act,” says Abbou, is that leading pioneers like Sam Altman are now supporting tight regulations that they didn’t have to face themselves. “Yeah, because they could do their freaking models without any regulations! […] They have their advantage, which European companies can’t get up to because they have to start under the regulation.”
Nevertheless, education remains a central component of tech lobbyists’ and think-tanks’ work. “In politics, you see very few people with a background in technology […] so the first step is to engage politicians and policymakers in what is really happening,” says Kees van der Klauw, manager of the Netherlands AI Coalition, a member of the EAIF. “I think it would be a mistake to say, ‘Okay, I’m only talking to the people that understand it.’ That would be an underestimation of the impact of AI.”
Abbou, meanwhile, is openly frustrated about the fraught discourse that sometimes surrounds generative AI. In his interactions with stakeholders and policymakers, he says he’s heard plenty of people spout the plotlines of Hollywood movies. “Of course we have to talk about risk,” he says, “but can we stop talking about Terminators and Matrixes?” AI, in Abbou’s clearly exasperated terms, is just “a freaking tool.”
Ironically, he’s also afraid that some policymakers — both at home in Germany and in Brussels — remain almost completely closed off to what he believes is the vast economic potential of AI. “To be honest,” says Abbou, “there are people out there who still want their mechanical typewriter back.”
This article originally appeared on Tech Monitor on 17 July