False Positives and Tuning

Summary

This note explains what false positives are, why they matter, and how tuning improves detection quality over time. The goal is to understand that good blue-team work is not only about creating alerts, but also about making them useful.

Why this matters

  • too many noisy alerts make real triage harder
  • weak tuning damages trust in detections and in the SIEM itself
  • for beginners, blue-team work improves most through better alert quality, not higher alert volume

Environment / Scope

Item                   | Value
Topic                  | detection quality and tuning
Best use for this note | understanding alert noise and signal quality
Main focus             | false positives, thresholds, exclusions, refinement
Safe to practise?      | yes

Key concepts

  • False positive - an alert that fires even though the activity is benign or not actually relevant to the detection goal
  • Tuning - improving the rule, threshold, exclusions, or supporting logic so alerts become more useful
  • Signal-to-noise - how much useful value an alert provides compared to how much distraction it creates
  • Threshold - a condition such as count, volume, or frequency that influences whether an alert fires
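The threshold concept can be sketched as a tiny rule: a hypothetical failed-logon detection that only fires for a host once a count is crossed. The event field names and the threshold of 5 are illustrative assumptions, not taken from any specific SIEM.

```python
# Hypothetical sketch: a failed-logon rule that fires per source host
# only when a count threshold is crossed. The field names ("outcome",
# "source_host") and the threshold value are assumptions to be tuned.
FAILED_LOGON_THRESHOLD = 5

def failed_logon_alert(events, threshold=FAILED_LOGON_THRESHOLD):
    """Return the set of source hosts whose failed-logon count
    meets or exceeds the threshold."""
    counts = {}
    for event in events:
        if event.get("outcome") == "failure":
            host = event.get("source_host", "unknown")
            counts[host] = counts.get(host, 0) + 1
    return {host for host, n in counts.items() if n >= threshold}

events = (
    [{"source_host": "lab-01", "outcome": "failure"}] * 6
    + [{"source_host": "ws-07", "outcome": "failure"}] * 2
)
print(failed_logon_alert(events))  # only lab-01 crosses the threshold
```

Raising or lowering `threshold` is the simplest tuning lever: it changes when the alert fires without touching what the alert looks for.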

Mental model

An alert can be:

  • technically correct
  • but still operationally unhelpful

That usually means the detection is not wrong in a strict sense, but it is not tuned for the real environment.

Think about the flow like this:

telemetry -> detection rule -> alert -> triage outcome -> tuning decision

Tuning uses the triage outcome to improve future alerts.
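The last step of that flow can be sketched as a small function that turns triage outcomes into a tuning decision: if almost every alert from a rule is triaged as benign, the rule is a tuning candidate. The field names ("rule", "verdict") and the 80% benign-rate limit are illustrative assumptions.

```python
# Hypothetical sketch: using triage outcomes to drive a tuning decision.
# A rule whose alerts are overwhelmingly triaged as benign is flagged
# for refinement, not automatically deleted.
def rules_needing_tuning(triage_outcomes, benign_rate_limit=0.8):
    """Return rules whose benign-verdict rate meets the limit."""
    totals, benign = {}, {}
    for outcome in triage_outcomes:
        rule = outcome["rule"]
        totals[rule] = totals.get(rule, 0) + 1
        if outcome["verdict"] == "benign":
            benign[rule] = benign.get(rule, 0) + 1
    return [rule for rule in totals
            if benign.get(rule, 0) / totals[rule] >= benign_rate_limit]

outcomes = (
    [{"rule": "ps-encoded-cmd", "verdict": "benign"}] * 9
    + [{"rule": "ps-encoded-cmd", "verdict": "malicious"}]
    + [{"rule": "new-admin-user", "verdict": "malicious"}] * 3
)
print(rules_needing_tuning(outcomes))  # ps-encoded-cmd is 9/10 benign
```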

Everyday examples

Situation                                            | Likely tuning thought
admin script triggers PowerShell alert every day     | add context or exclusions for known activity
many failed logons from a lab test host              | adjust threshold or source-specific handling
one rule fires on many harmless background processes | narrow command-line or parent-process logic
alert has too little context to explain quickly      | improve fields or enrichment rather than only changing severity
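The first situation, a known admin script firing daily, can be sketched as an exclusion added to the rule rather than a deleted rule. Every field name, username, and script name here is hypothetical.

```python
# Hypothetical sketch: tuning a PowerShell rule with a reviewed
# exclusion list instead of deleting the rule. Names are assumptions.
KNOWN_BENIGN = {
    # (user, script) pairs reviewed during triage and approved
    ("backup-admin", "nightly-backup.ps1"),
}

def powershell_alert(event):
    """Fire on PowerShell execution unless the (user, script) pair
    is on the reviewed exclusion list."""
    if event["process"].lower() != "powershell.exe":
        return False
    return (event["user"], event["script"]) not in KNOWN_BENIGN

benign = {"process": "powershell.exe",
          "user": "backup-admin", "script": "nightly-backup.ps1"}
suspect = {"process": "powershell.exe",
           "user": "jdoe", "script": "unknown.ps1"}
print(powershell_alert(benign), powershell_alert(suspect))
# the known admin script is suppressed; everything else still fires
```

Keeping the exclusion narrow (user plus script, not just "any PowerShell") is the point: the rule stays intact for every case the triage review did not cover.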

Common misunderstandings

Misunderstanding                                            | Better explanation
"A false positive means the rule is useless"                | often it means the rule needs refinement, not deletion
"More alerts means better security"                         | more low-quality alerts often make real detection worse
"Tuning means lowering severity until the alert disappears" | tuning should improve relevance, not hide problems blindly
"If one alert was noisy once, delete it"                    | first understand whether the rule, source, or threshold is the real issue

Verification

Check                        | Expected result
Alert logic is understood    | you can explain why it fired
Benign pattern is identified | noisy cause is clear enough to tune safely
Rule change is tested        | alert behaviour changes in the expected way
Signal improves              | fewer noisy alerts without losing the useful ones
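The "rule change is tested" check can be sketched as a small regression test: saved triage cases are run against the rule before and after tuning, so the tuned version demonstrably drops the benign pattern without losing the real hit. The events, command lines, and rule logic are illustrative assumptions.

```python
# Hypothetical sketch: regression-testing a rule change with saved
# triage cases before rolling it out. All events are illustrative.
def rule_v1(event):
    # original rule: too broad, fires on any PowerShell process
    return event["process"] == "powershell.exe"

def rule_v2(event):
    # tuned rule: additionally requires an encoded command
    return (event["process"] == "powershell.exe"
            and "-EncodedCommand" in event["command_line"])

# (event, expected verdict from past triage)
test_cases = [
    ({"process": "powershell.exe",
      "command_line": "powershell.exe -EncodedCommand ..."}, True),
    ({"process": "powershell.exe",
      "command_line": "powershell.exe .\\inventory.ps1"}, False),
    ({"process": "notepad.exe", "command_line": "notepad.exe"}, False),
]

for event, expected in test_cases:
    assert rule_v2(event) == expected  # tuned rule matches triage history
print("all test cases pass")
```

Note that `rule_v1` would still fire on the second case, the benign inventory script, which is exactly the over-broad behaviour the tuning removes.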

Pitfalls / Troubleshooting

Problem                                           | Likely cause                   | What to check
Alert keeps firing on normal activity             | rule too broad                 | thresholds, exclusions, source context
Tuning removes useful detections too aggressively | over-correction                | test cases before and after tuning
Team no longer trusts the alert                   | long-term noise                | recent triage outcomes, recurring benign patterns
Same issue keeps returning                        | tuning was local, not systemic | source quality, parsing, broader rule design

Key takeaways

  • false positives are normal, but they should lead to tuning rather than alert fatigue
  • detection quality matters more than alert volume
  • tuning is part of the detection lifecycle, not an optional extra