For cyber defenders, generative AI holds a tantalising promise. Who wouldn’t relish the thought of a tool that can make intelligence workflows effortless and streamlined? 

Take threat intelligence platforms (TIPs), which are currently essential for connecting the data, the analyst, and the workflow the analyst needs to follow. But while powerful in the hands of an expert, TIPs generally aren’t user-friendly. You need to be trained as an analyst. You need to load in all the reports and data, and then spend extra time querying that data to get the answers you want.

Potentially, generative AI could simplify this phase of the intelligence cycle. If you have the information from original sources, and a structured and verifiable way to retrieve it, then why do you need a TIP? If you can just ask a natural-language question and get an answer based on your sources, then you don’t need to transform the information into an explicit structure, such as STIX, to work with it.
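To make that concrete, here is a minimal sketch of the retrieve-then-ask pattern. Everything in it is an illustrative assumption: the sample reports, the naive keyword-overlap retrieval, and the prompt format. A real pipeline would use embedding-based retrieval and an actual model call where indicated.

```python
# Minimal sketch of the "ask your sources directly" idea: retrieve the most
# relevant raw report snippets for a question, then hand them to a generative
# model as grounding. Reports and scoring are illustrative placeholders.

REPORTS = [
    "2024-03-01: Phishing campaign delivering AgentTesla via ISO attachments.",
    "2024-03-04: New C2 infrastructure observed at 203.0.113.7 (TLS on 8443).",
    "2024-03-09: Ransomware affiliate exploiting CVE-2023-4966 for initial access.",
]

def retrieve(question: str, reports: list[str], k: int = 2) -> list[str]:
    """Rank reports by naive keyword overlap with the question."""
    q_terms = set(question.lower().split())
    scored = sorted(reports,
                    key=lambda r: len(q_terms & set(r.lower().split())),
                    reverse=True)
    return scored[:k]

question = "What initial access techniques have we seen this month?"
context = "\n".join(retrieve(question, REPORTS))

# The retrieved snippets become the grounding for a natural-language answer;
# this is where a generative model would be invoked with context + question.
prompt = f"Answer using only these reports:\n{context}\n\nQuestion: {question}"
print(prompt)
```

The point of the sketch is that the natural-language interface sits directly on the raw sources; the structure lives in the retrieval step rather than in a hand-maintained STIX graph.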

However, generative AI by itself doesn’t yet offer polished, seamless solutions; its weak spots make such promises more hype than reality. But if aimed in the right direction, with those weak spots mitigated, it can already be a valuable tool for today’s cyber defenders.

Generative AI and Explainability

The big challenge right now is that although generative AI can work with a format such as STIX, it can’t provide explainability. Because it doesn’t reason about or understand the meaning of what it generates, it relies on statistical patterns and probabilities to produce the most likely output, and that leads to errors, biases, and inconsistencies.

Even if we use guardrails and documentation to monitor and control the training process, we still can’t guarantee that the model will behave as expected in every context. 
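The same limitation shows up at inference time. As a hedged illustration (the indicator format and the checks below are assumptions for this sketch, not any standard), a mechanical guardrail can reject malformed model output, but it cannot verify that a syntactically valid claim is actually true:

```python
# Minimal output-guardrail sketch: before a model-generated indicator enters
# a workflow, validate the parts we *can* check mechanically. These checks
# catch malformed output; they cannot guarantee the claim itself is accurate.

import ipaddress

def validate_indicator(candidate: dict) -> list[str]:
    """Return a list of mechanical problems with a model-generated indicator."""
    problems = []
    if candidate.get("type") != "indicator":
        problems.append("unexpected object type")
    value = candidate.get("value", "")
    try:
        ipaddress.ip_address(value)
    except ValueError:
        problems.append(f"not a valid IP address: {value!r}")
    if not candidate.get("source_report"):
        problems.append("no source report cited")  # flag unverifiable claims
    return problems

# Example model output: syntactically fine, so the guardrail passes it even
# if the underlying attribution turns out to be hallucinated.
generated = {"type": "indicator", "value": "203.0.113.7",
             "source_report": "2024-03-04 C2 infrastructure report"}
print(validate_indicator(generated) or "passes mechanical checks")
```

The guardrail happily passes a well-formed indicator even when the claim behind it is fabricated, which is exactly the gap between controlling form and guaranteeing behaviour.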

Control Issues

Another challenge is that generative AI is expensive in every way, so it’s developed and hosted by large corporations or other organisations with access to massive amounts of data and resources. This raises some delicate issues: the centralisation and sovereignty of the models powering generative AI, who owns and controls those models, and how their users’ privacy and security are protected.

Consider that every commercial generative AI leader is a US company with assets in many countries, including the UK and wider Europe. How do they comply with the different regulations and standards of each region? How do they ensure that their models are fair and transparent to their customers? And how do they handle potential conflicts of interest or misuse of their models? Every enterprise will have different approaches and tolerance levels, and mass-market deployment models can’t cater to them all.

Finding the Use Case for Cyber Defence

Here’s a suggestion for our industry: Instead of fawning over the tech (and I’m saying this as a techie!), we should home in on the use case and ask some hard questions, in a curious and open-minded way. What value, what benefits, can generative AI give us that our current AI, which has better explainability features, cannot? If we need generative AI to be more than just a distraction, in what circumstances can it outshine current AI?

By focusing on those use cases, we can already be tapping into the potential of this evolving tool, one that may become a permanent part of the cyber defender’s arsenal.