Elemendar has spent the past three years developing and training an artificial intelligence to read (and write) human-created cyber threat intelligence reports and content. We’ve done this to drastically reduce the time organisations are vulnerable, and we have learned a lot along the way.

I thought it might be useful and interesting to explain why we are doing it and what we’ve learned from training our AI to read (and write) threat intel.

Why we are doing it

Despite the great strides in cybersecurity (from good practice to sophisticated defences), attacks are still succeeding. One reason is complexity, and one answer to that complexity was standing up SOCs. As SOCs evolved from logs, to SIEMs, to people watching SIEMs, to orchestration tools, the common theme was overload and burnout. Did I hear anyone say “alert fatigue”?

Today there’s still too much for SOCs to handle and, as if the cybersecurity talent shortage wasn’t enough, we now have an issue with burnout and retention. As a former chairman of RSA has said, “there are too many things happening – too much data, too many attackers, too much of an attack surface to defend.”

CISOs are telling us that everyone knows SOCs are overwhelmed. From board questions to admin requests to incident response, it’s no wonder they can’t keep up with the threat intel that leads to actionable insight and, ultimately, security. Elemendar’s own mentor from the UK’s National Cyber Security Centre, in whose Accelerator we started our journey, showed us exactly how this problem of Too Much Information manifests in Threat Intelligence.

There’s a well-established model for this: the intelligence cycle. As the volume and complexity of threat intel increase, the process gets stuck at four points in the human threat analyst’s workflow, all during the cycle’s middle three stages:

[Figure: the intelligence cycle]

  1. Processing the incoming raw intel into a structured form that they can manipulate during analysis (a naive sketch of this step follows the list)
  2. Correlating this new piece of intel with the context of the organisation’s technology, history of security events, and broader threat landscape
  3. Reasoning to connect the dots on what this new intel, now in context, means (eg you can ignore it or you have an insight that you can implement)
  4. Starting to orchestrate the organisation’s response by communicating that insight to the internal intelligence customer (eg SOC).
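
To make the first of those points concrete, here’s a deliberately naive sketch of what “processing raw intel into a structured form” can involve at its simplest: pulling indicators out of prose with regular expressions. The sample report and patterns are invented for illustration, and this is emphatically not how our AI works; it’s the baseline that breaks down as reports get longer and messier.

```python
import re

# A deliberately naive illustration of the "Processing" step: turning
# prose into structured indicators with regular expressions. Real
# reports bury indicators in tables, footnotes and defanged notation
# ("hxxp", "[.]"), which is exactly where simple regexes fall over.

REPORT = """
The actor staged payloads at hxxp://203.0.113[.]7/update.bin and used
the dropper 8f14e45fceea167a5a36dedd4bea2543 to establish persistence.
"""

PATTERNS = {
    "ipv4": r"\b(?:\d{1,3}\.){3}\d{1,3}\b",
    "md5": r"\b[a-fA-F0-9]{32}\b",
}

def extract_iocs(text: str) -> dict[str, list[str]]:
    """Return a minimal structured view of the indicators in `text`."""
    # Undo the common defanging conventions before matching.
    refanged = text.replace("hxxp", "http").replace("[.]", ".")
    return {name: re.findall(pat, refanged) for name, pat in PATTERNS.items()}

print(extract_iocs(REPORT))
# {'ipv4': ['203.0.113.7'], 'md5': ['8f14e45fceea167a5a36dedd4bea2543']}
```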

When you’re depending entirely on people for this process, the delays at each of the four points add up. The time from Collection, when you get a threat intel report, to Communication, when you know what to do, can stretch to several weeks, depending on how the delays interact.

Paul Midian (CISO at easyJet) calls this the worry gap. It may be a vulnerability gap, but you don’t actually know whether you’re vulnerable until the Communication point, where the threat analyst shares their findings, so you worry. Maybe it’s the uncertainty gap. Or the Rumsfeld gap, because it’s a known unknown.

Whatever you want to call it, it’s a problem. Going full machine isn’t an option, so you could address it with an army of analysts whose numbers would need to grow linearly with the volume of intel – but that’s not scalable.

Or, you could use our AI to help solve the problem. With AI reading your threat intel, the whatever gap can shrink from weeks to hours. You worry less. You’re not vulnerable as long. And your team isn’t burning out.

Instead, we clear the backlog and process your threat intel so you get:

  1. Understanding: What is the nature of the threat?
  2. Context: Am I vulnerable?
  3. Magnitude: If so, how vulnerable?
  4. Priority: If seriously vulnerable,
    a) how does this compare to everything else that’s going on, and
    b) what’s the kill chain?
  5. Action: Of the response options, which ones kill all branches of the attack tree, are the fastest to deploy, and have the least impact on the business?

And because Elemendar writes the digested threat intel in a machine-readable format, it can go directly into your threat intel platform and/or SOAR/SIEM. That’s what makes the insight actionable, quickly.
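
For a sense of what machine-readable means here, the sketch below shows a hand-written STIX 2.1 indicator, the kind of object a TIP or SIEM/SOAR can ingest without a human in the loop. It’s a generic example of the format, not actual Elemendar output; the ID, hash and dates are made up.

```python
import json

# An illustrative STIX 2.1 indicator: structured, typed intel that a
# TIP or SIEM/SOAR can ingest automatically. The ID, hash and dates
# below are placeholders invented for the example.
indicator = {
    "type": "indicator",
    "spec_version": "2.1",
    "id": "indicator--11111111-1111-4111-8111-111111111111",
    "created": "2021-06-01T00:00:00.000Z",
    "modified": "2021-06-01T00:00:00.000Z",
    "name": "Malicious dropper observed in phishing campaign",
    "pattern": "[file:hashes.'SHA-256' = "
               "'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855']",
    "pattern_type": "stix",
    "valid_from": "2021-06-01T00:00:00Z",
}

print(json.dumps(indicator, indent=2))
```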

Not only are you protected sooner, but your people are also free to do the important work that AI can’t do. They have time to explore kill chains, assess impact to the business, determine how best to coordinate with ops or IT, and so on.

What we’ve learned from training our AI to read (and write) threat intel

A number of people we talked to said that what we learned was very interesting; it may also be helpful to those of you who run, or are building up, a threat intel capability (whether you’re training people, machines, or both).

Speaking of people: like many of us, English wasn’t the first language I learnt. But most commercial threat intel is written in English, so you’d think that learning English would help our AI to read it, just like it helped me work in the security field. That’s not what happened, though – a big surprise.

We discovered that, for an AI that’s just learning how to read, analysts write threat intel in English in three ways which make it not actually English:

  1. They use more words. A sentence is 22 words long on average, compared to 12-17 in typical English.
  2. They keep adding new words. Emotet, Zerologon and REvil aren’t in the Oxford dictionary, for example, and every day researchers coin more such new, meaningful words.
  3. They change the meaning of words, creating homonyms (for the linguists here). In English the words ‘Fancy Bear’ align with fashion and the zoo – but in threat intel they mean an APT and Russia (see the toy sketch after this list).
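
Here’s that toy sketch. The vocabulary and sentence are invented for illustration; a real model works with a far larger vocabulary, but the failure mode is the same.

```python
# Toy illustration only: a tiny "general English" vocabulary and an
# invented threat-intel sentence. A real model's vocabulary is far
# larger, but the failure mode is identical.
GENERAL_ENGLISH = {
    "the", "fancy", "bear", "deployed", "a", "new", "variant",
    "against", "targets", "using", "and", "of", "group",
}

SENTENCE = "Fancy Bear deployed a new Zerologon variant against targets using Emotet"

tokens = SENTENCE.lower().split()
unknown = [t for t in tokens if t not in GENERAL_ENGLISH]

print(f"{len(tokens)} tokens, {len(unknown)} unknown: {unknown}")
# 11 tokens, 2 unknown: ['zerologon', 'emotet']
# Worse: "fancy" and "bear" each look like ordinary English, so
# nothing signals that together they name a threat actor.
```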

This meant that the machine was better off learning cyber first and English second. That’s good news for threat intel teams in non-English-speaking places everywhere!

Another interesting thing happened after about a year of training. The AI, a not-very-bright but very diligent student, started finding mistakes in the learning material made by its much smarter, but humanly fallible, teachers. This taught us some important lessons about better human-machine teaming and the effective use of threat feeds.

You won’t be surprised to hear that the human-machine teaming lesson fits with Bruce Schneier’s model of Security Orchestration and Incident Response, in which “certainty demands automation” and “uncertainty demands initiative”. For example, mislabelling a hash type is a mistake a machine won’t make, but it’s a trivially easy human error, and one which can poison the AI’s learning in serious ways. There isn’t enough threat intel data for machine learning to afford such errors, so it’s important to play to the different strengths of humans and machines.
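
Here’s what “certainty demands automation” looks like in practice. A hash type is verifiable from the value itself, so checking labels before data reaches training is deterministic work a machine can own. A minimal sketch, with invented example digests:

```python
import re

# Hash types are distinguishable by length alone, so checking a label
# against its value is deterministic work a machine never gets wrong.
# Lengths are in hex characters; the digests below are examples only.
HEX_LENGTHS = {"MD5": 32, "SHA-1": 40, "SHA-256": 64}

def label_is_consistent(label: str, value: str) -> bool:
    """True if `value` could plausibly be a hash of type `label`."""
    expected = HEX_LENGTHS.get(label)
    return (expected is not None
            and len(value) == expected
            and re.fullmatch(r"[a-fA-F0-9]+", value) is not None)

# A human labelled this 40-character digest "MD5": trivially wrong,
# trivially caught, and kept out of the training data.
print(label_is_consistent("MD5", "da39a3ee5e6b4b0d3255bfef95601890afd80709"))    # False
print(label_is_consistent("SHA-1", "da39a3ee5e6b4b0d3255bfef95601890afd80709"))  # True
```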

The same goes for the effective use of feeds. You have enterprise security tools, so you are purchasing threat intel whether you use it or not. Your tools can use threat intel effectively when it’s high quality. But it’s really hard for us humans to understand what makes intel high quality for a machine to use, and there are important aspects of quality which don’t appear on a feed’s product data sheet.

It’s not about how many sources, objects, or threats a feed covers; it’s about how much of that coverage another machine can use automatically. For example, PDF reports are common: beautiful typography for humans to read, but big problems for a machine trying to work out how the pictures, tables and footnotes fit into the text. Or, if you work with STIX2, think of actionable detail left in free-text description strings, or sparse sub-labelling – quality problems that are both bad and hard to spot. So when you select your threat intel sources, consider that both your machines and your people need to read them.
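
To make “hard to spot” concrete, here’s a rough sketch of the kind of automated usability check we mean. The objects are invented, STIX-flavoured examples, not a real feed:

```python
# A rough sketch of a machine-usability check over a STIX-flavoured
# feed: flag objects whose actionable detail lives only in free text,
# or whose hashes arrive without a proper algorithm sub-label.
# The sample objects below are invented for illustration.
KNOWN_HASH_KEYS = {"MD5", "SHA-1", "SHA-256", "SHA-512"}

def quality_issues(obj: dict) -> list[str]:
    """Return human-readable machine-usability problems for one object."""
    issues = []
    if obj.get("type") == "indicator" and not obj.get("pattern"):
        issues.append("no machine-readable pattern: the intel is prose only")
    if obj.get("description") and not obj.get("pattern"):
        issues.append("actionable detail may be buried in the description string")
    for key in obj.get("hashes", {}):
        if key not in KNOWN_HASH_KEYS:
            issues.append(f"sparse sub-labelling: unrecognised hash key '{key}'")
    return issues

feed = [
    {"type": "indicator", "description": "C2 at 203.0.113.7, drops evil.exe"},
    {"type": "file", "hashes": {"hash": "8f14e45fceea167a5a36dedd4bea2543"}},
]

for obj in feed:
    print(obj["type"], "->", quality_issues(obj))
# indicator -> ['no machine-readable pattern: the intel is prose only',
#               'actionable detail may be buried in the description string']
# file -> ["sparse sub-labelling: unrecognised hash key 'hash'"]
```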