Disinformation and artificial intelligence (AI) are two topics that are quickly becoming interlinked. Jeremy Fleming, Director of the UK’s Government Communications Headquarters (GCHQ), recently commented that AI could lead to an ‘explosion’ in disinformation.

Another concerning observation came from Gordon Crovitz, co-chief executive of NewsGuard, an organisation that tracks disinformation. He described one prominent AI tool as ‘the most powerful tool for spreading misinformation that has ever been on the internet’.

Former Google employee Geoffrey Hinton, the so-called godfather of AI, recently took these claims a step further, commenting that AI technology ‘will allow authoritarian leaders to manipulate their electorates, things like that’. Although we do not necessarily disagree with Dr Hinton’s statement, we are more focused on the challenging question behind all these portentous predictions: how exactly would AI be used to spread disinformation?

Inside the Machine of AI-Powered Disinformation 

Large language model (LLM)-based AI tools such as ChatGPT are not going to spread disinformation themselves, any more than Google’s search engine would do so autonomously. AI-based software is undoubtedly powerful, but much of the debate around AI’s potential to affect disinformation is fundamentally limited by its narrow focus on content generation.

To understand AI tools’ potential impact, we need an analytical framework. The Disinformation Kill Chain, shown below, is apt for this purpose. 

Figure 1: Disinformation Kill Chain as envisaged by MITRE Corporation

For those unfamiliar, the seven-step kill-chain model breaks a disinformation campaign down into discrete steps that can be analysed individually. So, what insight can we draw from combining the Disinformation Kill Chain with concerns about the influence of AI? Shown below is our analysis of some of the most obvious ways AI could be applied across the seven phases of an influence operation.

Figure 2: Possible applications of AI corresponding with the Disinformation Kill Chain

An Autopsy of the AI Disinformation Kill Chain

Although Figure 2 is a high-level example and subject to variation, there are four conclusions we can draw:

Multiple tools are needed. The seven stages outlined in Figure 2 demand seven distinct AI tools, each accepting its own data inputs and producing very different outputs. Only one phase (Seed: Generate large volumes of multimedia…) is recognisably linked to LLM tools such as ChatGPT and image generators such as Midjourney.

The level of technical challenge varies. Some of the seven procedures outlined in Figure 2 are more technically challenging to achieve than others. Although all AI is complex, some of these concepts are already demonstrated in technology we commonly use, while many others remain largely conceptual, such as the highly subjective tools performing gap-analysis tasks in the Recon and Effect stages. Such tools would need to mirror a far wider set of human decision-making processes, which are challenging to model.

The human is crucial to the chain. The human disinformation peddler remains firmly in the loop (and, arguably, still does most of the heavy lifting) in the model shown in Figure 2; the sketch after Figure 3 below illustrates this division of labour. The lion’s share of this effort goes toward the transactional handling of output from one phase into input for the next. However, there is still one key stage of a disinformation operation that is not represented in the Kill Chain in Figure 2: the human-dependent conceptualisation of the mission. (Read on!)

The Mission Conceptualisation phase is missing. The Disinformation Kill Chain is useful, but it does not represent the very human decision to undertake the operation in the first place. This ‘should I/can I?’ phase, so integral to an information operation, is the most difficult to develop an AI model for; it would require sizeable developments within the field to turn concepts into reality.

Figure 3: The complete Disinformation Kill Chain
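
To make that division of labour concrete, here is a minimal sketch (in Python) of the pipeline implied by Figure 2. The phase names other than Recon, Seed and Effect are placeholders of our own, and the ‘tools’ are trivial stand-ins; the point is the shape of the chain: distinct tools with distinct input/output contracts, a human operator gluing each output to the next input, and a human-conceived mission brief that no current AI supplies.

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Phase:
        name: str
        tool: Callable[[str], str]  # each phase demands its own, distinct AI tool

    def operator_review(phase_name: str, output: str) -> str:
        # Stand-in for the human 'heavy lifting': vetting one tool's output
        # and reshaping it into the next tool's input.
        print(f"[operator] reviewing output of {phase_name}")
        return output

    def run_campaign(phases: list[Phase], mission_brief: str) -> str:
        # The mission brief itself comes from the human 'should I/can I?'
        # phase, which sits outside anything current AI tooling can supply.
        artefact = mission_brief
        for phase in phases:
            artefact = operator_review(phase.name, phase.tool(artefact))
        return artefact

    # Illustrative phases: only Recon, Seed and Effect are named in the text above.
    phases = [
        Phase("Recon", lambda x: f"audience gap analysis for: {x}"),
        Phase("Seed", lambda x: f"generated multimedia about: {x}"),
        Phase("Effect", lambda x: f"impact assessment of: {x}"),
    ]

    print(run_campaign(phases, "human-conceived mission"))

Note that nothing in this loop automates itself away: remove the operator and the chain simply stops, which is the third conclusion above in miniature.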

Don’t Worry, Be Wary

To paraphrase some denizens of the professional networking site LinkedIn: You will not lose your job to AI, but you might lose your job to someone using AI. In our view, this also accurately sums up the current state of applying AI to the propagation of disinformation. The output from current LLMs, although impressive, is confined to just one of the eight defined steps of the Disinformation Kill Chain.

This situation may change, but we believe that, for the time being, AI’s full impact on disinformation has yet to be realised.