Deidre Brock
MP for Edinburgh North and Leith

Deepfake: The edge of a precipice


In 2024, over two billion people around the world will go to the polls. There are elections this year in the United States, India, Mexico, Venezuela, the EU parliament, and (almost certainly) the UK, to name a few.

This major meeting of political cycles comes just as fears are ramping up about the spread of disinformation and misinformation, and the dangers they pose to democracy.

These are issues I’ve raised at Westminster for many years, especially when it comes to campaign spending and transparency, and the loopholes which allow shady vested interests to influence our politics. My SNP colleagues and I also pushed for tougher measures in the Online Safety Act - we can’t just leave it up to big tech to police themselves, especially when they can rake in profits from fake news.

Taking proactive steps is difficult though when the technology is evolving at such a rapid rate. Tools like DALL-E, ChatGPT, and Bard have made it easier than ever for anyone to create deepfake images, audio, and videos. It’s no surprise these have quickly become rife in the political arena.

In November, a clip did the rounds falsely depicting the Mayor of London Sadiq Khan suggesting Remembrance Sunday should be delayed, sparking a violent backlash among those convinced of the need to “protect the Cenotaph”. And across the Atlantic, thousands of voters in New Hampshire received a surprise call apparently from President Biden, discouraging them from voting.

These are just the tip of a very big iceberg, and we can expect to see lots more in the months ahead. Polly Curtis of the think tank Demos warns we’re standing on a precipice, with generative AI potentially as disruptive as the internet and social media.

It’s argued the ongoing transformation creates opportunities to reach new audiences with personalised content. There’s also growing awareness, however, of the risks of this technology, which I hope will focus minds and make us more vigilant. Data will undoubtedly be used in ways that spread falsehoods, bias and discrimination, and there’s a real danger fake news and content become so widespread that folk mistrust everything.

Most manipulated or fabricated material comes from the wild west beyond the political mainstream, but there are state-sponsored threats too. A report by the Canadian Government found at least a quarter of national elections globally have been targeted by some kind of foreign interference. China and Russia were seen as especially active, using increasingly sophisticated ways to influence elections. The report worryingly found it ‘very likely that the capacity to generate deepfakes exceeds our ability to detect them’.

We’re also seeing AI and fake news playing significant roles in conflict zones. Propaganda videos by Russia have slipped past barriers set by governments and tech companies, pushing Kremlin conspiracy theories blaming Ukraine for civilian casualties and claiming that people in annexed areas have welcomed their occupiers.

Indeed, I was recently at an OSCE Parliamentary Assembly debate on AI’s hybrid security threats, where a Ukrainian rep spoke about the huge resources they’re piling into counter-disinformation. Another key message was that no government can tackle these challenges alone - civil society is absolutely crucial.

Finland is leading the way here with a coordinated drive involving public institutions, the private sector, NGOs, civil society, and citizens, to tackle disinformation and misinformation. The Finns have integrated media literacy into their core national curriculum as well as wider public programmes, and the country has repeatedly topped the Media Literacy Index in recent years.

The Scottish Government is taking similar steps where it can, integrating digital and media literacy into the education curriculum, for example. Most of the powers to regulate AI are reserved, though, and I’m concerned the UK government’s approach is too hands-off and out of step with other countries.

Last month, for example, the US Federal Trade Commission, led by the impressive Chair Lina Khan, started consulting on plans to ban impersonation of individuals via AI tools. Surely we need to be looking at similar action here?

The UK’s AI strategy should also be revised with a more collaborative, ‘whole-society’ focus, and I support the idea of a commissioner with the authority to carry out these much-needed measures.

In the meantime, politicians need to make sure we’re not contributing to the problem. Demos’ new Generating Democracy report includes recommendations that all political parties should get behind, including being transparent and clear about their use of AI in the election, and not re-sharing anything they suspect to be false.

This should be the bare minimum as we brace for such a consequential year, both politically and technologically. ■

Twitter: @DeidreBrock


