AI technologies and their influence on the 2024 elections

Published
2025-09-17T14:06:00.731+02:00 04 July 2024
Business Digital Intelligence
Location United Kingdom
The scope of Artificial Intelligence capabilities has increased significant over the last 18 months. Unfortunately, this applies to malicious use cases as much as legitimate ones.

This is leading to concerns that AI tooling could invite new AI-enabled attack techniquesIn the context of elections, the acceleration in quality and accessibility of generative AI (GenAI) makes it important to understand and prepare for its misuse. However, this is the first major election cycle since GenAI began to markedly proliferate, meaning the ways in which threat actors may use AI tools for interference purposes is unclear.

Given the scale of the global 2024 election cycle, we’ve taken a closer look at the current and possible applications of AI technologies and assessed their possible impact.

Download our report AI risks to the 2024 election cycle outlining current and possible applications of AI technologies and assessing their possible impact

Is AI an election threat?

From a malicious activity perspective, AI tooling offers three core advantages to attackers:

  • Speed: increasing the speed at which operational activities can be carried out
  • Scalability: enabling malicious activities to be amplified and operate at scale
  • Accessibility: lowering the barrier to entry for attackers

 

But, with 2024 being the global year of elections, the key question is how significant will the impact of GenAI be?

Public assessments argue that GenAI capabilities will likely not introduce new risks for the 2024 election cycle. AI has been used in some malicious operations, but these activities are unlikely to drastically impact voter preferences or alter vote counts.

However, AI may amplify existing risks to the election process – primarily through election interference and delivering tactical benefits in the following areas:

 

Improved social engineering

The greatest concerns surrounding GenAI relate to synthetic content creation and its impact on voters. As AI-generated content has proliferated, distinguishing between factual and false information has become increasingly difficult.

Numerous countries with elections this year have reported at least one political deepfake in 2024, and several have warned that "various threat actors will likely attempt to use AI-produced content to influence and sow discord". This proliferation of inauthentic, misleading material risks degrading the information environment and eroding public trust.

To complicate this, 2024 has seen an uptick in political campaigns using AI-generated content to bolster their cause. This adds to the blurring of boundaries between legitimate campaign content and disinformation.

Enhanced cyber operations

Several threat actors have been observed using AI in their general cyber operations. However, at the time of writing there have been no reports of actors using AI in cyber operations targeting election infrastructure specifically.

Most obviously, AI technologies can enhance reconnaissance-stage techniques. This includes improved information gathering to developing social engineering materials, as well as vulnerability scanning.

AI has also been described as removing the operational burden, or 'grunt work', associated with cyber activity. This includes using ChatGPT to debug code for malicious websites and conduct more general research such as how to apply for developer accounts on social media or asking for summaries of social media posts.

 

Attacks on AI systems

GenAI ecosystems themselves may offer an attack vector for threat actors. In 2024, two techniques have been particularly cited with reference to election security.

The first of these is Prompt Injection. This involves threat actors crafting malicious prompts that make Large Language Models (LLMs) act in unintended ways – such as leaking sensitive data or spreading disinformation – and includes vulnerabilities in widely used chatbots.

The second technique is Data Poisoning, which involves attackers compromising an LLM's training dataset in order to influence or manipulate its operation. These activities can introduce bias, create erroneous outputs, introduce vulnerabilities, and influence the decision-making or predictive capabilities of a model in other ways.

 

What’s the impact of AI on election integrity?

Although AI can present a threat to elections in 2024, the gains are likely to be mainly tactical – meaning the speed and scalability of existing techniques may be 'AI-enhanced'.

It should be cautioned that AI is a tool, and the threat it poses to elections depends on which threat actors are using it and for what purpose. Evidence of AI use in connection with election interference from 'high-end' actors remains sparse, with current evidence pointing to rudimentary applications by lower-capability threat actors.

It is unclear how the threat AI poses to elections and democracies more generally will evolve in the coming years, highlighting that the problem is more complex than any single organisation is capable of solving alone. Effective collaboration between private industry, government, media and the wider public will therefore be required to counter it.

Download our new report AI and the 2024 election cycle to learn more about the AI risks to the 2024 election cycle.

AI technologies and their influence on the 2024 elections icon image

Take a look at our new report AI risks to the 2024 election cycle outlining current and possible applications of AI technologies and assessing their possible impact.

Explore our Threat Intelligence Insights

Understand the evolving threat landscape is a key part of maintaining robust defences. BAE Systems' Threat Intelligence team generate original insights through research and collaboration with customers and partners

Get in touch
BAE Systems Team