
Generative AI is advancing at an unprecedented pace, offering new creative possibilities while also raising urgent ethical questions. The challenge isn’t just about managing risks – it’s about ensuring that people can trust the digital content they interact with every day.
Generative AI has revolutionised content creation and productivity, but it has also made deceptive content easier to produce and, through the reach of social media platforms, easier to spread. Deepfakes in particular blur the line between truth and fabrication, with potential consequences for democracy and public trust. AI can drive economic growth and innovation, but without responsible development and regulatory oversight, it could also undermine credibility in online spaces. Addressing these risks demands concrete action from governments, businesses, and citizens alike.
A recent New Statesman podcast, sponsored by Adobe, tackled these concerns. A panel of experts debated AI’s societal impact and what’s needed to ensure responsible development. They explored the push-and-pull between AI’s potential and its dangers, the effectiveness of regulation, and how to equip people to navigate a digital landscape where deception is increasingly sophisticated.
Stefanie Valdés-Scott, Head of Policy and Government Relations for EMEA at Adobe, emphasised AI’s power to unlock creativity. “It’s easier to use the products, and it can really provide productivity gains,” she said. At the same time, she acknowledged the darker side: the same tools that enable innovation also make it easier to produce misleading content at scale, eroding trust in online media and even democratic processes. “The broader implication is actually that you begin to doubt everything that you see or hear online – even if it’s true.”
Labour MP Kanishka Narayan argued that Britain is at a crossroads in the AI space. “We have an opportunity to reassert ourselves as a global tech leader,” he said. “But without public trust, that advantage evaporates.” He stressed that AI’s challenges can’t be left to any one group – governments, tech companies, and citizens must work together to create safeguards that keep pace with the technology.
Henry Ajder, an expert on AI and deepfakes, offered a pragmatic perspective. While deepfakes are often portrayed as an existential threat, he pointed out that humans struggle to identify them accurately. “We’re not much better than a coin flip at spotting AI-generated content,” he noted. Instead of relying on people to become digital detectives, he advocated for stronger verification tools – embedding authentication measures into digital platforms so users don’t have to second-guess everything they see.
One proposed solution is a new technical standard called Content Credentials, which Valdés-Scott calls “a nutrition label for digital content.” The initiative, led by the Coalition for Content Provenance and Authenticity (C2PA), aims to provide users with provenance data for digital assets, showing where a piece of content came from and how it was made.
Ajder says initiatives like this will become increasingly important as AI-generated content becomes indistinguishable even to the most expert eye. “There will come a point,” he says, “where our senses basically end up being redundant.”
The conversation made one thing clear: solving AI’s trust problem isn’t just a technological challenge – it’s a societal one. Regulation, digital literacy, and greater transparency all have a role to play. As Ajder put it, “We need to stop debating hypotheticals and start implementing real protections. The risks are already here, and so are the solutions.”
If AI is to be a force for progress, the focus must shift from merely acknowledging the dangers to actively mitigating them. That means building AI models that empower creators and protect their intellectual property, whilst ensuring citizens have the transparency tools to spot deceptive content, so that confidence wins out over scepticism. The time for abstract discussions is over – what matters now is action.