Azure Log Analytics pricing: ingestion, retention, and query costs

Log Analytics costs scale with volume. The safest estimate is a simple pipeline: events * size -> GB ingested -> retention storage -> query scans. When an estimate is wrong, the usual cause is a single log source that is much larger than you assumed.

0) Inventory log sources (do not blend)

List sources and estimate them separately. Blending "average log size" across everything hides the real drivers.

  • Ingress/proxy/access: extremely high volume, moderate size.
  • Firewall/WAF: can be high volume and high size (rule metadata).
  • Audit: often lower volume but required retention.
  • Application logs: volume depends on log level and request volume.

1) Ingestion (GB)

Start with an event rate and average payload size. Multiply to get bytes/day, then convert to GB/day and GB/month. If you have multiple sources, estimate each source with its own size and event rate.

Tool: Log ingestion cost calculator.

  • If you only have RPS, map requests to log events (e.g., 1 access log per request, plus error logs).
  • Keep a peak scenario: incidents can multiply logs (retries, errors, verbose logging).
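The per-source multiplication above can be sketched in a few lines. This is a minimal illustration: the source names, event rates, and byte sizes are placeholder assumptions, not measured values; replace them with sampled payloads.

```python
# Hedged sketch: per-source ingestion estimate. All event rates and
# bytes/event below are illustrative assumptions, not measured values.

GB = 1024 ** 3  # bytes per GiB (use 1e9 if your pricing sheet bills decimal GB)

sources = {
    # name: (events per day, average bytes per event)
    "ingress_access": (50_000_000, 400),    # ~1 access log per request
    "firewall_waf":   (10_000_000, 1_200),  # rule metadata inflates size
    "audit":          (500_000, 800),
    "app_logs":       (5_000_000, 600),
}

def gb_ingested_per_day(sources):
    """Sum bytes/day across sources, then convert to GB/day."""
    return sum(events * size for events, size in sources.values()) / GB

daily = gb_ingested_per_day(sources)
monthly = daily * 30  # baseline; run the peak scenario separately
print(f"{daily:.1f} GB/day, {monthly:.0f} GB/month")
```

Keeping each source as its own entry makes the dominant one (here, ingress) visible instead of hiding it in a blended average.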

2) Retention (GB-month)

Retention is a storage multiplier. As a rule of thumb, if you ingest X GB/day and retain N days, you hold roughly X * N GB at steady state (once daily expiry balances daily ingestion). Long retention on high-volume sources is the fastest way to grow cost.

Tool: Retention storage calculator.

  • Use short retention for "debug noise" and long retention only where policy requires it.
  • Split retention by table/source if possible (audit vs access logs).
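The split-by-source advice above can be sketched as a small retention plan. The GB/day figures and retention windows here are assumptions for illustration only.

```python
# Hedged sketch of the rule of thumb: steady-state retained GB
# ~= GB/day * retention_days, computed per source, not globally.

def retained_gb(gb_per_day, retention_days):
    """Average GB held at steady state for one source."""
    return gb_per_day * retention_days

# Illustrative plan: (source, GB/day, retention days). Assumptions only.
plan = [
    ("access_logs", 18.6, 30),   # short window for high-volume noise
    ("audit",        0.4, 365),  # long window only where policy requires
]

total = sum(retained_gb(gb, days) for _, gb, days in plan)
print(f"{total:.0f} GB retained at steady state")
```

Note how the tiny audit source still contributes meaningfully because of its 365-day window; this is the trade-off the per-source split exposes.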

3) Query and scan costs (dashboards + alerts)

Query cost depends on how much data you scan and how often. Dashboards and alert rules can scan far more data than you expect, especially with wide time windows and frequent refresh.

Tool: Log search/scan calculator.

  • Prefer queries that narrow early (filter by service, status, or route) instead of scanning everything.
  • Validate refresh intervals: a dashboard refreshing every minute is 1,440 scans/day.
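The refresh-interval point above is easy to quantify. The per-scan sizes below are illustrative assumptions; the scan counts follow directly from the refresh rates.

```python
# Hedged sketch: monthly scanned GB from dashboards and alert rules.
# GB-per-scan values are placeholder assumptions.

def scanned_gb_per_month(scans_per_day, gb_per_scan, days=30):
    """Total GB scanned per month for one dashboard panel or alert rule."""
    return scans_per_day * gb_per_scan * days

# A dashboard refreshing every minute runs 1,440 scans/day.
every_minute = scanned_gb_per_month(scans_per_day=1_440, gb_per_scan=0.5)

# The same panel refreshed every 15 minutes runs 96 scans/day: 1/15 the cost.
every_15_min = scanned_gb_per_month(scans_per_day=96, gb_per_scan=0.5)
```

Narrowing the query (smaller GB/scan) and slowing the refresh (fewer scans/day) multiply together, so fixing both compounds the savings.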

Worked estimate template (copy/paste)

  • GB ingested/day = sum over sources(events/day * bytes/event)
  • GB ingested/month = GB/day * 30 (baseline + peak)
  • Avg retained GB ~= GB/day * retention_days (split by source)
  • GB scanned/month = scans/day * GB/scan * 30 (dashboards + alerts)
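The four template lines above can be combined into one function. This is a sketch under the same assumptions as the template (30-day month, steady-state retention); every number passed in is a placeholder you replace with sampled values.

```python
# Hedged sketch: the worked-estimate template as one function.
# All inputs are assumptions to be replaced with measured values.

def estimate(sources, retention_days, scans):
    """sources: {name: (events/day, bytes/event)}
    retention_days: {name: days}
    scans: [(scans/day, GB/scan), ...]  (dashboards + alerts)
    Returns (GB ingested/month, avg retained GB, GB scanned/month)."""
    GB = 1024 ** 3
    gb_day = {n: e * b / GB for n, (e, b) in sources.items()}
    ingested_month = sum(gb_day.values()) * 30
    retained = sum(gb_day[n] * retention_days[n] for n in gb_day)
    scanned_month = sum(s * g * 30 for s, g in scans)
    return ingested_month, retained, scanned_month

# Illustrative usage with a single assumed source:
ingested, retained, scanned = estimate(
    {"access": (50_000_000, 400)},  # placeholder rate and size
    {"access": 30},
    [(1_440, 0.5)],                 # one dashboard refreshing every minute
)
```

Keeping the three outputs separate matches how ingestion, retention, and query scans are typically priced as distinct meters.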

Common pitfalls

  • Turning on high-volume sources without sampling real payload size first.
  • Using one blended event size that hides a single dominant source (ingress/firewall).
  • Keeping long retention everywhere by default.
  • Dashboards/alerts scanning wide windows repeatedly (query cost explosion).
  • Verbose logging during incidents (peak ingestion dominates the month).

How to validate

  • Sample real log payloads for top sources and compute bytes/event.
  • Validate which diagnostic sources are enabled (ingress/firewall/audit) and estimate them separately.
  • Validate retention windows per source (short for noise, long for policy-required logs).
  • Audit dashboard refresh intervals and alert query windows (reduce repeated wide scans).

FAQ

What usually drives Log Analytics cost?
Log ingestion volume (GB) is usually the biggest driver, followed by retention. Query cost matters when you scan lots of data frequently.
How do I estimate quickly?
Estimate events per second, average event size, and retention days. Convert to GB/day and then to GB/month.
What is the most common mistake?
Turning on high-volume diagnostic sources (ingress/firewall/audit) without modeling their payload size, then keeping long retention by default.
How do I validate?
Sample real log payloads, validate which sources are enabled, and validate dashboard/alert query windows and refresh rates.

Last updated: 2026-01-27