As a former SOC analyst, I used to spend a fair amount of my time figuring out what to learn next while the screens filled up with yet another unappealing event.
This was back when RSS feeds were a big deal and reading blog posts or papers was still part of my weekly, sometimes daily routine (quite sporadic now). I am sure many of you have been, or perhaps still find yourselves, in the same situation.
But isn't it wiser to use the shift's downtime to improve your skills or learn new things rather than clicking away yet another false positive (FP)?
Well, you may not admit it in public, but that's what happens when a curious, eager analyst is confronted with a screen full of poor alerts to triage.
He or she is probably asking: "What the hell am I doing here?", "What did the developer have in mind when throwing this alert at me?".
Reminds me of the following:
You take the blue pill, the story ends. You wake up in your bed and believe whatever you want to believe. You take the red pill, you stay in Wonderland, and I show you how deep the rabbit hole goes.
Good analysts are averse to repetitive, manual tasks, especially ones that are not optimized and are prone to the same (negative) outcome, over and over. The same goes for security alert handling.
So what has changed?
Hopefully, that scenario has shifted: Security Operations is getting access to better tooling, and organizations are starting to realize that the Security Monitoring practice lies at the core of any Security Program.
More importantly, machine data (log) availability is now the norm. New technology in? New logs out. Making sure those logs are ingested by the Big Data platform ranks high on the Security Architect's checklist.
Even though it's challenging, the good news is that there are better opportunities to trigger interesting, appealing security alerts.
As long as your SIEM engineer (or should I say developer?) is able to craft a good rule or query, extracting value or insight from the logs is clearly easier than before. And your grep or regex chops still count, a lot.
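To illustrate why those regex chops still matter, here's a minimal Python sketch of pulling structured fields out of a raw log line. The log format and field names are assumptions made up for the example, not any real data source or vendor schema:

```python
import re

# Hypothetical sshd-style log line (format is an assumption for the sketch).
LOG_LINE = (
    "Jan 12 03:14:07 host1 sshd[2212]: "
    "Failed password for root from 10.0.0.5 port 52211 ssh2"
)

# Capture the user and source IP from failed-password events.
FAILED_LOGIN = re.compile(
    r"Failed password for (?P<user>\S+) "
    r"from (?P<src_ip>\d{1,3}(?:\.\d{1,3}){3})"
)

def parse_failed_login(line):
    """Return {'user': ..., 'src_ip': ...} for a failed login, else None."""
    m = FAILED_LOGIN.search(line)
    return m.groupdict() if m else None

print(parse_failed_login(LOG_LINE))
# → {'user': 'root', 'src_ip': '10.0.0.5'}
```

The same named-group idea carries over to most SIEM field-extraction configs, which are ultimately just managed regexes.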
Apart from alerting, putting "Threat Hunting" skills into practice is also easier, as long as your hunter can work from good hypotheses, build interactive dashboards, and become fluent in the SIEM's query language.
SIEM Content = Code
IMO, there's no way to build a Security Monitoring capability without putting a great deal of effort into developing new, custom content (use cases: rules, interactive dashboards, etc.). And to get there, it starts with research, just as vendors do, or at least try.
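To make "content = code" concrete, here's a toy detection rule expressed as a plain Python function. The event shape, field names, and threshold are all assumptions for the sketch; a real rule would live in your SIEM's query language, but the logic is the same:

```python
from collections import defaultdict

def brute_force_rule(events, threshold=5):
    """Toy rule: return source IPs with at least `threshold` failed logins.

    Event schema ({'action': ..., 'src_ip': ...}) is made up for this sketch.
    """
    counts = defaultdict(int)
    for event in events:
        if event.get("action") == "failed_login":
            counts[event["src_ip"]] += 1
    return sorted(ip for ip, n in counts.items() if n >= threshold)

events = (
    [{"action": "failed_login", "src_ip": "10.0.0.5"}] * 6
    + [{"action": "failed_login", "src_ip": "10.0.0.9"}] * 2
    + [{"action": "success_login", "src_ip": "10.0.0.5"}]
)
print(brute_force_rule(events))  # → ['10.0.0.5']
```

Once a rule is this explicit, it can be reviewed, versioned, and tuned like any other piece of code, which is exactly the point.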
When content is poor, you are not only under-utilizing the platform ($); the full chain of people involved is under-utilized too ($$): from the analyst triaging an alert to the customer (internal or external) receiving an escalated, low-quality alert.
As a side note, the more your SIEM or Big Data platform embraces the power of customization, the better your chances of success. Here's my hat tip to platforms like Splunk or ELK, with full development appeal, enabling easy customization on top of the default content.
No wonder labeling those platforms as SIEMs feels like an understatement. But that's another topic…
There are many ways to minimize the chances of poor alerts, from the definition of goals and scope (use cases) to the process of handing new content over to the SOC (paving the way for good feedback and new ideas).
Standardization of processes (use-case demand intake, code development) is needed, as in any other practice that involves continuous delivery (or did you think yesterday's use cases would suffice?).
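If detection content really is code on a continuous-delivery pipeline, it deserves regression tests before being handed over to the SOC. A minimal sketch, with a made-up rule and event shape purely for illustration:

```python
def suspicious_port_rule(events, watched_ports=frozenset({4444, 31337})):
    """Toy rule: flag events whose destination port is on a watch list.

    The port list and event schema are assumptions for this sketch.
    """
    return [e for e in events if e.get("dest_port") in watched_ports]

def test_suspicious_port_rule():
    # Pin down the expected hits so future edits can't silently
    # change the rule's behavior before it ships.
    events = [{"dest_port": 443}, {"dest_port": 4444}]
    assert suspicious_port_rule(events) == [{"dest_port": 4444}]
    assert suspicious_port_rule([{"dest_port": 80}]) == []

test_suspicious_port_rule()
print("rule regression test passed")
```

Wiring checks like this into the intake-to-handover process is what turns "we wrote a rule" into a repeatable delivery practice.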
Coaching and enabling the team to be fluent in the SIEM's language is crucial; it works as a force multiplier and needs to be part of your strategy.
If analysts are still not engaging with alerts (a major input), perhaps it's worth reviewing your approach. Besides failing to deliver customer/org visibility, chances are those analysts are about to become part of the statistics (SOC turnover).