APT 28, APT 1, Conti, Lazarus Group and Anonymous are all names that have become ‘poster children’ for cyber threats over the past decade or so. Identifying, describing, and monitoring threat actors/groups is a cornerstone of modern Cyber Threat Intelligence (CTI), and many security service providers differentiate themselves by the number of threat groups they are able to successfully track. Given all that attention, it is reasonable to wonder: why are we not studying disinformation in the same way?
When it comes to the field of threat and vulnerability analysis, disinformation is the new kid on the block. It sits at the intersection of CTI, physical security, and geopolitical analysis. As the concept develops, there is an ongoing effort to transfer threat models from related fields, such as CTI, to the study of disinformation. (Check out the table at the end of this blog for a handy list.)
But even with useful borrowed models at hand, they remain strangely absent from applied disinformation analysis. Where is the focus on categorising specific threat actors/groups?
Cyber threats 101: Categorisation of Actors
At the most basic level, cyber threat actors can be grouped into three master categories: cyber-criminals, nation-states, and cyber-activists.
Figure 1: The ‘classic’ Big 3 categories of malicious cyber-actors
Although simplistic, this model provides a useful tool for analysts, because it comes with inherent truths, such as:
- Nation-state threat actors (e.g. APT 28, APT 1) are more sophisticated than cyber-criminals (e.g. Conti), who are, in turn, more sophisticated than cyber-activists (e.g. Anonymous).
- Nation-state actors are motivated to fulfil the requirements of their patron state, whereas cyber-criminals are motivated by financial gain, and cyber-activists by political causes.
- Some crossover exists among these groups (e.g. some cyber-criminals act on behalf of nation states, such as Icefog).
While there are of course exceptions to the three ‘rules’ outlined above, the general consensus is that they provide a base of truth for more complex CTI analysis.
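The three categories and their inherent ‘truths’ are simple enough to encode directly. Here is a minimal Python sketch of the Big 3 model (the class names and fields are our own illustrative choices, not a standard CTI schema):

```python
from dataclasses import dataclass
from enum import IntEnum

class Sophistication(IntEnum):
    """Relative capability implied by the Big 3 model (higher = more capable)."""
    CYBER_ACTIVIST = 1
    CYBER_CRIMINAL = 2
    NATION_STATE = 3

@dataclass(frozen=True)
class ActorCategory:
    name: str                       # category label, e.g. "nation-state"
    motivation: str                 # the category's inherent motivation
    sophistication: Sophistication  # relative capability ranking

# The Big 3 master categories with their inherent 'truths'
BIG3 = [
    ActorCategory("nation-state", "patron-state requirements", Sophistication.NATION_STATE),
    ActorCategory("cyber-criminal", "financial gain", Sophistication.CYBER_CRIMINAL),
    ActorCategory("cyber-activist", "political causes", Sophistication.CYBER_ACTIVIST),
]

# Rule 1 encoded: nation-states > cyber-criminals > cyber-activists
ranked = sorted(BIG3, key=lambda c: c.sophistication, reverse=True)
assert [c.name for c in ranked] == ["nation-state", "cyber-criminal", "cyber-activist"]
```

The point is not the code itself, but that the model is structured enough to be machine-readable — which is exactly the property disinformation analysis currently lacks.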
At Elemendar, we feel that this level of clarity is missing from the field of disinformation. Analysts should be using such well-defined threat-actor categories to analyse the spread of disinformation.
Super Spreaders: Disinformation Analysis
We can certainly map certain disinformation-spreading threat actors or groups to the Big 3 categories outlined above. Check out Figure 2 for six examples.
Figure 2: Six disinformation groups mapped to the Big 3 threat-actor categories
But many more threat actors and groups are actively involved in disinformation. They are clearly differentiated from each other, and they also conform to the three basic ‘rules’ outlined earlier. So, isn’t it time to let disinformation analysis live up to its full potential by routinely applying such a model to the spreaders?
Deeper Insight and a Pathway Forward
The benefits of the Big 3 model (or others like it) are immediately apparent to a decision maker facing a disinformation campaign, who gains rapid insight from the model’s simple truths. For example:
‘A state-backed group, like GhostWriter, presents more of a threat than a cyber-criminal group, like Doctor Zhivago. Therefore, I should allocate more resources to countering GhostWriter than Doctor Zhivago.’
This type of logic, although it may seem incredibly basic, tethers analysis to proven tenets of the threat landscape. And that kind of quick reasoning pays off in a fast-moving crisis.
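That resource-allocation heuristic can be sketched in a few lines of Python (the numeric ranking is our own illustrative encoding of the Big 3 ordering, not an established scoring system):

```python
# Big 3 ranking used for triage (higher = more sophisticated, per the model)
SOPHISTICATION = {"nation-state": 3, "cyber-criminal": 2, "cyber-activist": 1}

def prioritise(actors: list[tuple[str, str]]) -> list[tuple[str, str]]:
    """Order (group, category) pairs so the most sophisticated threats come first."""
    return sorted(actors, key=lambda actor: SOPHISTICATION[actor[1]], reverse=True)

# The two groups from the example above
threats = [("Doctor Zhivago", "cyber-criminal"), ("GhostWriter", "nation-state")]
print(prioritise(threats))  # GhostWriter is ranked above Doctor Zhivago
```

A real triage process would weigh far more than category membership, but even this crude ordering gives a defensible first cut at where to spend countermeasure resources.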
It also paves the way for deeper analysis. Take, for example, the application of Charity Wright’s Diamond Model for Influence Operations to Team Jorge, shown in Figure 3.
Figure 3: Diamond Model applied to the Team Jorge disinformation group
The analysis shown above is only marginally more specific than the Big 3, but it does offer deeper insight into the Team Jorge group.
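To show what such a record looks like in structured form, here is a minimal Python sketch built on the Diamond Model’s four classic vertices (adversary, capability, infrastructure, victim). The Team Jorge field values below are illustrative placeholders, not the analysis from Figure 3:

```python
from dataclasses import dataclass, field

@dataclass
class InfluenceDiamond:
    """Four-vertex Diamond Model record for an influence operation."""
    adversary: str                                           # who runs the operation
    capabilities: list[str] = field(default_factory=list)    # tools and techniques
    infrastructure: list[str] = field(default_factory=list)  # delivery channels
    victim: str = ""                                         # the targeted audience

# Placeholder record -- hypothetical example values, not Figure 3's content
team_jorge = InfluenceDiamond(
    adversary="Team Jorge",
    capabilities=["fake social-media personas (hypothetical example)"],
    infrastructure=["social-media platforms (hypothetical example)"],
    victim="targeted publics (hypothetical example)",
)
print(team_jorge)
```

Once records like this exist for multiple groups, analysts can pivot between vertices — comparing infrastructure across campaigns, for instance — just as CTI analysts do with intrusion sets.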
Two key points follow from applying the Big 3 model to disinformation.
Firstly, structure is king; the basic building blocks of such a model can easily unlock deeper levels of insight. Secondly, the Big 3 is just the beginning; as shown in the table below, many other models used by CTI and allied fields can be directly applied to the field of disinformation analysis. Let’s give due attention to the new kid on the block.
Quick Reference guide to CTI concepts mapped to equivalent disinformation analysis frameworks:
| Cyber Threat Intelligence Framework | Disinformation Analysis Framework |
| --- | --- |
| Diamond Model for Intrusion Analysis | Diamond Model for Influence Operations |
| Cyber Kill Chain | Disinformation Kill Chain |
| MITRE ATT&CK Framework | DISARM Framework |