“There are known knowns. These are things we know that we know. There are known unknowns. That is to say, there are things that we know we don’t know. But there are also unknown unknowns. There are things we don’t know we don’t know.” Donald Rumsfeld
The above quotation has become the stuff of legend over the past few decades, equally derided for its convoluted phrasing and celebrated within the intelligence community as a neat description of the intelligence planning process. Let’s break it down…
| # | Rumsfeld’s phrase | Intelligence term |
|---|---|---|
| 1 | There are known knowns | Established intelligence reporting |
| 2 | There are known unknowns | Intelligence Requirements |
| 3 | Unknown unknowns | Intelligence Gaps |
It’s the 3rd category – the unknown unknowns – that keeps people working within the field of intelligence up at night. It’s within this category that the 9/11 attacks on the World Trade Centre and the Stuxnet infection of the Iranian nuclear program reside.
With the flurry of interest in Artificial Intelligence, an obvious application of AI would be reducing the unknown unknowns. So, what could be a workable strategy for applying AI in this area?
What is the Limiting Factor around our Understanding of unknown unknowns?
Before we situate AI, it’s worth considering why unknown unknowns are such a challenge for the field of intelligence. While it’s obvious that unknown unknowns are undesirable, why do they exist in the first place? We would propose that two factors create the unknown unknowns zone, namely:
- Lack of creativity to explore the unknown unknowns space: At their core, unknown unknowns are simply adversary courses of action (CoA) that have not been conceptualised.
- Lack of resources to validate unknown unknowns: Once a CoA has been conceptualised, it needs to be validated and ranked on some scale, most obviously from totally implausible to highly likely in terms of its percentage chance of occurrence.
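As a toy illustration of that second step (the CoAs and probability scores below are entirely invented for the example), ranking conceptualised courses of action by an analyst-assessed chance of occurrence might look like:

```python
# Hypothetical adversary courses of action (CoAs), each paired with an
# analyst-assessed probability of occurrence
# (0.0 = totally implausible, 1.0 = near certain).
coas = [
    ("Phishing campaign against suppliers", 0.65),
    ("Hijacked airliner used as a weapon", 0.02),
    ("Insider exfiltrates design documents", 0.30),
]

# Rank from most to least plausible, so scarce validation resources
# can be applied to the top of the list first.
ranked = sorted(coas, key=lambda coa: coa[1], reverse=True)

for name, probability in ranked:
    print(f"{probability:.0%}  {name}")
```

Generating the list is cheap; the expensive part, as argued below, is the human effort needed to assign and defend those scores.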
We strongly suggest that it’s the second factor outlined above – the lack of resources to validate unknown unknowns – that is the main limiting factor for enumerating the unknown unknowns zone.
To validate this assessment, let’s take for example the 9/11 attacks on the World Trade Centre by Al Qaeda. This is often held up as the example of an unpredictable event that came from the unknown unknowns zone. However, it was predicted in detail by right-wing conspiracy lunatic Alex Jones.
In July 2001, Jones described a detailed plot for a false flag event in which the CIA would frame Osama Bin Laden for an attack on the World Trade Centre involving hijacked airliners. And he wasn’t the only one… in March 2001 the X-Files spin-off The Lone Gunmen featured another plot to attack the World Trade Centre with a hijacked airliner.
So, what can we take from this within the context of unknown unknowns? Were Jones and The Lone Gunmen creator Chris Carter “in” on a 9/11 conspiracy? Or are they secret intelligence geniuses who should really be on the payroll of one of the US intelligence agencies? Is Alex Jones right? No, to all of the above.
What the 9/11 example shows is the true limiting factor in enumerating unknown unknowns: the time and person-power required to investigate them, rather than the creativity required to generate them. In plain terms, it’s easy enough to generate a huge number of possible unknown unknowns, but the real challenge comes from applying the appropriate human resources to risk-assess them.
Implications for the Application of AI to unknown unknowns
It’s often been proposed that AI could be used to identify previously overlooked adversary courses of action. This poses a number of technical challenges, as the output of AI models such as Large Language Models (LLMs) is derived from the data that has been fed into the model. As such, the results may be new to you, but they are not truly innovative, given the nature of the data being sourced.
But creativity in generating ideas to fill the unknown unknowns zone is not really the challenge here. The challenge is freeing up the human resources needed to devote time to this activity at an organisational level.
This is the area where AI technology could really shine, specifically in structuring and correlating data. This structuring and correlation of data is one of the main activities of any intelligence organisation, and it’s usually a labour-intensive but vital activity.
AI could be used to do much of this “heavy lifting” of data-driven intelligence work, freeing the human analyst to allocate more time to conceptualising and validating a larger number of theories to fill the unknown unknowns zone. The limiting factor – validation – could then be better serviced by AI and other data tools.
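As a minimal sketch of what that structuring-and-correlation “heavy lifting” looks like in practice (the report IDs and indicators below are invented for illustration), a tool might group otherwise unconnected reports by the indicators they share:

```python
from collections import defaultdict

# Invented example: reports already structured into sets of extracted
# indicators (domains, file hashes, IP addresses), as an AI-assisted
# structuring step might produce from free-text reporting.
reports = {
    "report-001": {"evil.example.com", "abc123hash"},
    "report-002": {"abc123hash", "198.51.100.7"},
    "report-003": {"203.0.113.9"},
}

# Invert the mapping: which reports mention each indicator?
by_indicator = defaultdict(set)
for report_id, indicators in reports.items():
    for indicator in indicators:
        by_indicator[indicator].add(report_id)

# Indicators appearing in more than one report correlate those reports,
# surfacing connections a human analyst can then investigate.
correlated = {ind: ids for ind, ids in by_indicator.items() if len(ids) > 1}
# Here, "abc123hash" links report-001 and report-002.
```

At real-world scale this kind of cross-referencing over millions of reports is exactly the labour-intensive work that machines handle far better than people.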
Humans are great at storytelling – Alex Jones is a living testament to one man’s ability to spin an endless…endless…raft of stories about the possible futures that we might (but probably won’t) be living in. This type of human creativity is what is needed to fill the unknown unknowns zone. So why train an AI to do a job that carbon-based life forms are much better at? Instead, we should be using tools such as Elemendar’s READ. to do the heavy lifting of intelligence work, such as correlation over big data, that humans are not so great at.