We live in a world that, thanks to technology, is better connected and has greater access to information than ever before. Accelerated by the proliferation of smart devices and social media, this revolution has changed the way we live. However, its purpose so far has been to hold our attention, rather than to create a better society.
As a result, people are now both better informed and more easily misled: empowered, but also more vulnerable. Yet we now recognise that, if we want to keep the benefits of the central role technology plays in our lives, we do not have to accept its negative aspects as inevitable. The power and wealth that the big tech companies have created for themselves must come with greater responsibility for the safety of their users.
The UK is trying to strike a balance with world-leading online safety legislation, and as chair of the Joint Committee on the Online Safety Bill, I'm working with an expert group of MPs and peers to help us achieve that aim. It is often argued that the equilibrium can be found simply by requiring platforms to work with the police to better enforce existing laws.

However, we need to remember that social media companies are not just the hosts of content posted by their users. The algorithms they have designed to keep us online for as long as possible are also used to promote legal but ultimately harmful content in users' feeds on an unprecedented scale. These tools have played an active role in the virulent spread of, for example, anti-vaccine conspiracy theories, and in the incitement of criminal events such as the Capitol Hill riot in Washington DC at the start of this year.
We live in a liberal democracy, which is why it's right that our elected representatives should establish the principles that regulate the online world. But we also live in a liberal economy, and the private sector, of course, has a role to play too. Beyond parliament, digital companies themselves are helping to find the right balance.
On my Infotagion Podcast I've been discussing this with a number of entrepreneurs and innovators who are looking beyond the attention-based business models of the personalised, ad-tech-driven giants. Instead, they are creating digital services that seek to improve the user experience online and promote safety, in a way that remains privacy-focused. Crisp Thinking, Factmata, Logically and SafeToNet are all UK companies harnessing the power of artificial intelligence to analyse, classify and mitigate the impacts of abuse and disinformation on mainstream platforms.
That isn’t to say the big beasts of tech don’t have a role to play too. Adobe’s Content Authenticity Initiative is helping to clearly identify the origin of online content and whether it’s been edited. Sir Tim Berners-Lee’s Inrupt is building tools to allow users to keep their personal data on independent servers, rather than on those run by Facebook, Google and Twitter. And Washington state, following calls from two of its most successful companies, Microsoft and Amazon, has passed a law requiring independent testing of new application programming interfaces (APIs) for facial recognition technologies.
I'm convinced that the Online Safety Bill, combined with measures to encourage competition in digital markets, is ushering in a new era of private innovation. This time it will promote safety by design, rather than as an afterthought.