The role of ‘Novelty’ and ‘Behaviour’ in Computer Forensics & Detection Engineering
This is just another quick blog post that could not fit in a tweet, and one that will hopefully inspire all Detection Engineering teams out there.
First off, let's get the definitions loud and clear:
Novelty: the quality of being new, original, or unusual.
Behaviour: the way in which one acts or conducts oneself.
Those are perhaps the main traits we should consider when designing or engineering a detection system.
One action, multiple traces
I used to say that Forensics happens after the fact, while Detection should happen right after the fact, and both are really challenging!
Let alone Prevention, which should happen before the fact! But that's a story for another post…
Just like in Forensics, Locard's exchange principle applies here: every contact leaves a trace. Attackers will both bring something with them and take something from the system when they leave.
Our challenge is to find out which artifacts tie to that principle.
That brings us to the next point.
🧪 What's the BEST Detection lab?
The answer is pretty simple. The same as in Computer Forensics: the crime scene! (The Who starts playing)
Despite all the Attack Emulation tools, there's nothing better than analyzing logs from a real incident — in your network! Some organizations leverage that practice as part of an Incident Debrief.
In these debriefing sessions the idea is to digest and review what has happened, what was the impact, how the system was restored, lessons learned, etc.
In some cases the computer or network devices affected are part of a Computer or Digital Forensics investigation.
What is usually missing? Why not leverage these instances to spot new detection use cases? Isn't that enough of a driver to prioritize a new detection design and implementation?
🧠 What's the mindset for this to work?
Besides engaging with Incident Response (and sometimes Service/Helpdesk) teams, once you get hold of the information you need to determine a quick strategy, especially when you are dealing with large volumes of data.
Curiosity goes without saying, but after some years in the field I believe the main soft skill needed is opportunism.
We need to find a sweet spot, just like attackers do: operating without tripping any detection or, if one does fire, blending into normal operations so that SecOps dismisses it as just another legitimate, benign event.
A detection engineer must find a single trace, or a subset of traces, that distinguishes the activity from normal operations, trusting that the attacker is not counting on those traces being brought to anyone's attention as a red flag.
📘 A few questions to ask yourself before designing a new detection from an incident debrief
When crafting a query or a rule (assuming log-based detection here), and given that you are starting from an incident's logs, here's a quick list:
What is likely happening for the first time here?
Is there any new:
- Parent/child process relationship? (baseline, start with LOLBins)
- External Web resource accessed? (build your own ‘Alexa’ rank)
- Archives, Executables dropped?
- Service or Task created?
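All of the novelty questions above reduce to first-seen detection against a baseline. Here is a minimal sketch of that idea for parent/child process pairs, in Python; the event shape and field names (`parent`, `child`) are illustrative assumptions, not any specific SIEM schema:

```python
# First-seen detector sketch: flag parent/child process pairs never
# observed during a baseline window. Field names are hypothetical.

def build_baseline(events):
    """Collect every parent/child pair seen during normal operations."""
    return {(e["parent"], e["child"]) for e in events}

def novel_pairs(events, baseline):
    """Return events whose parent/child pair was never baselined."""
    return [e for e in events if (e["parent"], e["child"]) not in baseline]

# Baseline built from "known good" activity (tiny toy sample).
baseline = build_baseline([
    {"parent": "explorer.exe", "child": "chrome.exe"},
    {"parent": "services.exe", "child": "svchost.exe"},
])

# An Office app spawning PowerShell was never baselined, so it surfaces.
alerts = novel_pairs(
    [{"parent": "winword.exe", "child": "powershell.exe"},
     {"parent": "explorer.exe", "child": "chrome.exe"}],
    baseline,
)
```

The same first-seen pattern applies to the other bullets: swap the key for a destination domain, a dropped file hash, or a service name, and keep a rolling baseline instead of a static set.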
Which notable behaviors are seen?
- Registry manipulation
- File copy (split by system/temp/user dirs)
- Service/Task manipulation
- Post-exploitation commands
- Lateral movement commands
- Is the command part of an elevated (admin) session?
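The behavioral questions above can be approached by matching patterns of activity rather than static indicators. A rough sketch, assuming normalized command-line events; the pattern names and regexes are illustrative examples, not a real rule set:

```python
import re

# Illustrative behavioral patterns (hypothetical rule names); a real
# rule set would be far richer and mapped to ATT&CK techniques.
BEHAVIORS = {
    "remote_service_creation": re.compile(r"sc\s+\\\\\S+\s+create", re.I),
    "remote_exec_wmic": re.compile(r"wmic\s+/node:", re.I),
    "shadow_copy_deletion": re.compile(r"vssadmin\s+delete\s+shadows", re.I),
}

def match_behaviors(cmdline, elevated=False):
    """Return behavior labels matched by a command line; session
    elevation is used here only to annotate the hits."""
    hits = [name for name, pat in BEHAVIORS.items() if pat.search(cmdline)]
    if elevated:
        hits = [f"{h} (elevated)" for h in hits]
    return hits

print(match_behaviors(r"sc \\FILESRV01 create badsvc binpath= C:\evil.exe",
                      elevated=True))
```

Note that none of these patterns depend on a file hash or domain: the same rule keeps firing even when the attacker swaps the payload, which is exactly the point of behavior matching over IOCs.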
The first part is pretty much about anomaly detection and the ability to baseline your systems and users. The second part is about prioritizing behavior matching over static pattern matching (IOCs).
Both lists are endless! This is just a glimpse of what is possible. Most, if not all, of them map to an ATT&CK technique at some point.
While it's better to work on top of real threats targeting the network you are defending, leveraging sandboxes and malware samples also works here.
And since I am walking the talk: depending on the data sources at your disposal, you can leverage an approach I refer to as SIEM 'Hyper Queries'. You can read the first part here.