Log Sources and Telemetry

Summary

This note explains what security telemetry really means and why different log sources matter. The goal is to understand where visibility comes from before worrying about detections and alert quality.

[Screenshot: Microsoft Sentinel data connectors view, from the official documentation, showing the data area where telemetry sources are connected and reviewed.]

Why this matters

  • you cannot detect what you do not collect
  • endpoint, network, authentication, and infrastructure sources all show different parts of the story
  • better investigations start with better telemetry coverage, not just more rules

Environment / Scope

Item | Value
Topic | log sources and telemetry
Best use for this note | understanding visibility coverage
Main focus | endpoint, network, auth, infrastructure logs
Safe to practise? | yes

Key concepts

  • Telemetry - technical data collected from systems, services, and network activity
  • Log source - a system or service that produces data for monitoring
  • Coverage - how much useful visibility you actually have across the environment
  • Context - the extra detail that makes a log useful during investigation
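The difference between a bare log line and one with context can be shown with a small, entirely hypothetical pair of records (field names like host and src_ip are illustrative, not tied to any specific product schema):

```python
# Hypothetical records: the same sign-in event with and without context.
bare = {"event": "logon"}
enriched = {
    "event": "logon",
    "host": "ws01",
    "user": "alice",
    "src_ip": "10.0.0.5",
    "time": "2024-05-01T10:00:02",
}

# The extra fields are what let an analyst answer who, where, and when.
context_fields = set(enriched) - set(bare)
print(sorted(context_fields))  # ['host', 'src_ip', 'time', 'user']
```

The bare record confirms that something happened; only the enriched one supports an investigation.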

Mental model

Different sources answer different questions:

Source type | What it helps answer
Endpoint logs | what happened on the host?
Authentication logs | who tried to sign in, and from where?
Network telemetry | what connections and traffic patterns existed?
Infrastructure or platform logs | what did the service, appliance, or platform do?

One alert often becomes much stronger when multiple sources support the same story.
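A minimal sketch of that corroboration idea, assuming simplified event dictionaries (the field names and the 60-second window are illustrative choices, not a standard):

```python
from datetime import datetime, timedelta

# Hypothetical events from two different sources describing the same host.
endpoint_event = {"host": "ws01", "time": "2024-05-01T10:00:05", "process": "powershell.exe"}
auth_event = {"host": "ws01", "time": "2024-05-01T10:00:02", "user": "alice", "result": "success"}

def corroborates(a, b, window_seconds=60):
    """Return True when two events share a host and fall inside a time window."""
    ta = datetime.fromisoformat(a["time"])
    tb = datetime.fromisoformat(b["time"])
    return a["host"] == b["host"] and abs(ta - tb) <= timedelta(seconds=window_seconds)

print(corroborates(endpoint_event, auth_event))  # True: same host, 3 seconds apart
```

When an endpoint event and an authentication event corroborate each other like this, the combined story is stronger than either alert alone.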

Everyday examples

Source | Example value
Endpoint telemetry | Sysmon events from Windows
Linux logs | auth logs, process logs, service logs
Network telemetry | Zeek logs or firewall logs
Infrastructure logs | UniFi syslog, cloud platform logs
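To make the network-telemetry row concrete, here is a sketch of parsing a single Zeek conn.log-style record. The line and the short field list are simplified for illustration; a real Zeek conn.log carries many more columns:

```python
# A single, simplified Zeek conn.log-style record: tab-separated fields.
zeek_line = "1715000000.123\tCxyz1\t10.0.0.5\t49152\t93.184.216.34\t443\ttcp"
fields = ["ts", "uid", "orig_h", "orig_p", "resp_h", "resp_p", "proto"]

# Pair each field name with its value to get a usable record.
record = dict(zip(fields, zeek_line.split("\t")))
print(record["resp_h"], record["proto"])  # 93.184.216.34 tcp
```

Even this tiny record already answers a network question: which host talked to which destination, on which port and protocol.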

Common misunderstandings

Misunderstanding | Better explanation
"One good log source is enough" | strong investigations often need multiple viewpoints
"More logs always means more security" | visibility improves only if the data is relevant and usable
"If the tool collects data, the field quality must be good" | collection and good parsing are different things
"Detection problems always mean bad rules" | sometimes the real issue is weak or missing telemetry

Verification

Check | Expected result
Sources are connected | logs arrive from expected systems
Timestamps are usable | event times are consistent enough to correlate
Fields are meaningful | data contains host, user, process, or network context
Coverage matches use case | the environment has telemetry for the detections you care about
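The middle two checks can be automated with a small quality gate. This is a sketch under assumed field names (host, time, user); real pipelines would check against their own schema:

```python
from datetime import datetime

# Assumed minimum field set for a useful authentication event.
REQUIRED = {"host", "time", "user"}

def verify(event):
    """Basic telemetry quality checks: required fields present, timestamp parseable."""
    missing = REQUIRED - event.keys()
    if missing:
        return f"missing fields: {sorted(missing)}"
    try:
        datetime.fromisoformat(event["time"])
    except ValueError:
        return "unparseable timestamp"
    return "ok"

print(verify({"host": "ws01", "time": "2024-05-01T10:00:02", "user": "alice"}))  # ok
```

Running checks like this at ingestion time catches weak telemetry before it silently degrades detections.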

Pitfalls / Troubleshooting

Problem | Likely cause | What to check
Alerts feel blind or weak | poor source quality | field quality, missing context
Correlation fails | timestamps or host identity are inconsistent | time sync, naming, identifiers
Detections never fire | source missing or wrong parser | ingestion, mapping, rule fields
Investigations feel shallow | too little telemetry diversity | endpoint plus network plus auth coverage
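One common version of the correlation pitfall is sources logging in different time zones. Normalising everything to UTC before comparing is the usual fix; a minimal sketch with two made-up timestamps for the same moment:

```python
from datetime import datetime, timezone

# Hypothetical timestamps for the same moment, logged with different offsets.
a = datetime.fromisoformat("2024-05-01T12:00:00+02:00")
b = datetime.fromisoformat("2024-05-01T10:00:00+00:00")

# Converting both to UTC makes the records directly comparable.
print(a.astimezone(timezone.utc) == b.astimezone(timezone.utc))  # True
```

If sources log naive local times with no offset at all, no amount of conversion helps; that is a collection problem, not a parsing one.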

Key takeaways

  • telemetry quality matters as much as detection quality
  • different log sources answer different investigation questions
  • coverage and context are what turn raw logs into useful security evidence

Official documentation