The idea of this post came after a Slack chat with Ryan Long, a Sr. Security Analyst who had asked this very question highlighted in the blog title.
Ryan, like many others, is realizing there’s sometimes a very thin line between analysis (operations) and engineering when it comes to threat detection.
Update: a fresher view on this topic is linked below:
Why is Threat Detection so trendy now?
Because the demand is higher? Because Cyber is becoming specialized? I wrote a blog post touching on this topic a few years ago. Let's start with some context first.
Machine Data & Modern Tooling
Logs, logs everywhere! Today we have so much data at our disposal that we need to design data pipelines with routing, filtering and other pre-processing stages before consuming it at the other end.
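To make the routing/filtering idea concrete, here is a minimal sketch of such a pipeline. All of the names (`drop_noise`, `normalize`, `route`, the event IDs) are illustrative assumptions, not any specific product's API:

```python
import json

# Example of high-volume, low-value Windows event IDs to drop early (assumed).
NOISY_EVENT_IDS = {"4662", "5156"}

def drop_noise(event: dict) -> bool:
    """Filter stage: discard noisy events before they cost storage or compute."""
    return event.get("event_id") not in NOISY_EVENT_IDS

def normalize(event: dict) -> dict:
    """Pre-processing stage: map vendor fields onto one common schema."""
    return {
        "timestamp": event.get("ts") or event.get("@timestamp"),
        "host": event.get("hostname", "unknown"),
        "event_id": event.get("event_id"),
        "message": event.get("msg", ""),
    }

def route(event: dict) -> str:
    """Routing stage: detection-relevant events go to the SIEM, the rest to cheap storage."""
    return "siem" if event.get("event_id") in {"4624", "4625"} else "data_lake"

def pipeline(raw_lines):
    """Consume raw JSON log lines and yield (destination, normalized event) pairs."""
    for line in raw_lines:
        event = json.loads(line)
        if not drop_noise(event):
            continue
        yield route(event), normalize(event)
```

The exact stages vary per shop, but the shape is the same: drop, reshape, then fan out to different destinations by value.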
I remember working as an architect for a bank in the early 2000s, when one item in my product assessments was "Does it provide logs?" Today, we ask "Does it ship JSON logs?", "Does it provide an API to extract logs?"
Any enterprise-grade system today must generate logs.
Besides AVs, what was actually detecting threats — as they happened?
Some years ago, data for detection was basically coming from the wire (NIDS, PCAP analysis) and Syslog — that was it! Going beyond that meant you were on your own, building a sort of log aggregator (shivers!) or custom agents.
The main reason (use case) for log collection used to be Forensics or Compliance. Who was proactively checking Windows Eventlogs 15 years ago? You can probably name a few of your heroes here.
Besides the lack of abundant log data, the ability to query it was also limited to general-purpose programming languages or SQL.
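As a sketch of what that SQL-era querying looked like, here is a hunting-style question asked of a hypothetical log table (using SQLite in memory purely for illustration; the table layout and data are assumptions):

```python
import sqlite3

# Hypothetical flat log table -- before purpose-built query languages,
# analysts often fell back on plain SQL like this to question their logs.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE logs (ts TEXT, host TEXT, event_id TEXT, msg TEXT)")
conn.executemany(
    "INSERT INTO logs VALUES (?, ?, ?, ?)",
    [
        ("2024-01-01T00:00:01Z", "dc01", "4625", "failed logon"),
        ("2024-01-01T00:00:02Z", "dc01", "4625", "failed logon"),
        ("2024-01-01T00:00:03Z", "ws07", "4624", "logon"),
    ],
)

# Count failed logons (Windows event 4625) per host -- a typical
# brute-force hunting query.
rows = conn.execute(
    "SELECT host, COUNT(*) FROM logs WHERE event_id = '4625' GROUP BY host"
).fetchall()
print(rows)  # [('dc01', 2)]
```

Workable, but every question meant hand-writing schema-specific SQL — a far cry from the interactive search experience modern platforms offer.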
Even some big names in the SIEM industry failed to provide easy, fast ways to question the data, offering only very rudimentary interfaces.
I believe that's another big shift. IMO, platforms such as Elastic, Splunk and Microsoft's…