What this product actually is
Unified monitoring platform combining infrastructure metrics ($15/host/mo), APM ($31/host/mo), and log management ($0.10/GB/day) with 750+ integrations. Breadth is the selling point; cost compounds as you add modules.
Pricing behavior (not a price list)
These points describe when users typically pay more, what actions trigger upgrades, and the mechanics of how costs escalate.
Actions that trigger upgrades
- Team adds APM on top of infrastructure monitoring — cost doubles from $15 to $46/host/month
- Log volume exceeds the included retention — log management bills can exceed infrastructure costs for log-heavy applications
- Security monitoring add-on ($23/host/month) required for compliance — adds another pricing tier on top of existing stack
When costs usually spike
- Custom metrics beyond the included 100/host are billed at $0.05/metric/month — high-cardinality instrumentation can generate thousands of custom metrics
- Log retention defaults to 15 days; extending to 30+ days doubles the storage cost per GB
- Indexed logs (searchable) cost more than archived logs — teams often discover they need indexed logs after setting up archival-only pipelines
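The overage mechanics above are easy to model. A minimal sketch, using the list prices quoted on this page (the function names are illustrative, not a Datadog API, and actual metering may differ):

```python
# Rough custom-metric and log-retention overage model.
# Rates are the list prices quoted above; verify against current pricing.

INCLUDED_METRICS_PER_HOST = 100
CUSTOM_METRIC_RATE = 0.05   # $/metric/month beyond the included allotment
LOG_INGEST_RATE = 0.10      # $/GB ingested

def custom_metric_overage(hosts: int, metrics_per_host: int) -> float:
    """Monthly overage for custom metrics beyond the included 100/host."""
    billable = max(0, metrics_per_host - INCLUDED_METRICS_PER_HOST)
    return hosts * billable * CUSTOM_METRIC_RATE

def log_cost(gb_per_day: float, retention_days: int, days_in_month: int = 30) -> float:
    """Monthly log cost; extending retention past 15 days doubles the per-GB rate."""
    rate = LOG_INGEST_RATE * (2 if retention_days > 15 else 1)
    return gb_per_day * days_in_month * rate

# 20 hosts each emitting 500 custom metrics: 20 * 400 * $0.05 = $400/month
print(custom_metric_overage(20, 500))   # 400.0
# 50 GB/day at 30-day retention: 50 * 30 * $0.20 = $300/month
print(log_cost(50, 30))                 # 300.0
```

Note how the custom-metric term scales with both fleet size and label cardinality: the same instrumentation on a larger fleet multiplies the overage.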
Plans and variants (structural only)
Grouped by type to show structure, not to rank or recommend specific SKUs.
Plans
- Verify current pricing on the official website.
Costs and limitations
Common limits
- Cost compounds quickly: infrastructure ($15/host) + APM ($31/host) + logs ($0.10/GB) + synthetics + security = $80-150+/host/month fully instrumented
- Per-host pricing penalizes auto-scaling environments — a fleet that scales from 10 to 100 hosts during peaks costs 10x more
- Log management pricing at $0.10/GB ingested per day makes high-volume logging expensive compared to Grafana Loki or self-hosted ELK
- Vendor lock-in is real: custom metrics, dashboards, and monitors don't export cleanly to other platforms
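The compounding described above can be sketched as a per-host calculator. This is a simplification under the module prices quoted on this page (log volume per host is an assumed input, and real invoices add synthetics, security nuances, and indexing tiers):

```python
# Per-host monthly cost as modules are layered on.
# Prices are the list rates quoted above; verify against current pricing.

MODULE_RATES = {
    "infrastructure": 15.0,  # $/host/month
    "apm": 31.0,
    "security": 23.0,
}
LOG_INGEST_RATE = 0.10       # $/GB ingested

def monthly_cost_per_host(modules, log_gb_per_host_per_day=0.0, days=30):
    """Sum the selected module rates plus log ingestion for one host."""
    base = sum(MODULE_RATES[m] for m in modules)
    logs = log_gb_per_host_per_day * days * LOG_INGEST_RATE
    return base + logs

# Infrastructure only:
print(monthly_cost_per_host(["infrastructure"]))                          # 15.0
# Infrastructure + APM + security, plus 2 GB/day of logs per host:
print(monthly_cost_per_host(["infrastructure", "apm", "security"], 2.0))  # 75.0
```

The jump from $15 to $75 per host in this toy example is the compounding pattern the bullets above describe; log-heavy hosts push it higher still.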
What breaks first
- Monthly bill exceeds budget when team enables APM + logs + security across all hosts — typical for teams that start with infrastructure-only and expand
- Auto-scaling cost spikes during peak traffic when host count triples and per-host billing follows
- Custom metric cardinality explosion when developers instrument application-specific metrics without governance on label dimensions
- Log ingestion costs spike when a verbose application or debug logging is left enabled in production
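The auto-scaling failure mode can be quantified with a worst-case comparison. This sketch assumes billing tracks the peak host count; Datadog's actual metering may average usage or use high-water marks, so treat it as an upper bound:

```python
# Compare monthly fleet cost at baseline vs auto-scaled peak host counts.
PER_HOST_RATE = 46.0  # infrastructure + APM, as quoted above

def fleet_cost(host_count: int, rate: float = PER_HOST_RATE) -> float:
    """Monthly cost for a fleet billed per host at the given rate."""
    return host_count * rate

baseline = fleet_cost(10)    # steady-state fleet
peak = fleet_cost(100)       # fleet during auto-scaled traffic spikes
print(baseline, peak, peak / baseline)   # 460.0 4600.0 10.0
```

If your fleet regularly scales an order of magnitude during peaks, this is the gap a consumption-based model (per-GB or per-event) is meant to close.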
Decision checklist
Use these checks to validate fit for Datadog before you commit to an architecture or contract.
- Unified platform vs best-of-breed tools: How many signal types do you need today (metrics, traces, logs, errors)?
- Cost model (per-host vs per-GB vs per-event): Is your host count stable, or does it scale 3-10x during peaks?
- Data portability vs vendor convenience: How important is it that your dashboards and alerts survive a vendor change?
- Upgrade trigger: adding APM on top of infrastructure monitoring takes the per-host cost from $15 to $46/month. Budget for the step, not just the starting rate.
- What breaks first: the monthly bill exceeds budget once APM, logs, and security are enabled across all hosts, a common path for teams that start infrastructure-only and expand.
Implementation & evaluation notes
These are the practical "gotchas" and questions that usually decide whether Datadog fits your team and workflow.
Questions to ask before you buy
- Which actions or usage metrics trigger an upgrade (e.g., adding APM on top of infrastructure monitoring, which raises the per-host cost from $15 to $46/month)?
- Under what usage shape do costs or limits show up first (e.g., custom metrics beyond the included 100/host, billed at $0.05/metric/month, which high-cardinality instrumentation can multiply into the thousands)?
- What breaks first in production (e.g., the monthly bill exceeding budget once APM, logs, and security are enabled across all hosts), and what is the workaround?
- Validate the unified-platform question: how many signal types do you actually need today (metrics, traces, logs, errors)?
- Validate the cost model (per-host vs per-GB vs per-event): is your host count stable, or does it scale 3-10x during peaks?
Fit assessment
Good fit
- Cloud-native teams running 50-500 hosts that want a single vendor for infrastructure, APM, and logs without stitching together open-source tools.
- Organizations where the engineering team values pre-built integrations and fast setup over cost optimization and data portability.
- Teams running Kubernetes workloads that need container-aware monitoring with auto-discovery and orchestrator-level visibility.
Poor fit
- Your host count fluctuates heavily with auto-scaling; per-host pricing makes cost unpredictable during traffic spikes.
- You generate more than 100GB/day of logs; Grafana Loki or self-hosted ELK will cost a fraction of Datadog's log management pricing.
- You need data portability; Datadog's proprietary formats and query languages create switching costs that grow with adoption.
Trade-offs
Every design choice has a cost. Here are the explicit trade-offs:
- Unified platform convenience → vendor lock-in on dashboards, alerts, and query languages
- 750+ pre-built integrations → per-host pricing that compounds across modules
- Fast initial setup → cost surprises as instrumentation expands across the stack
- Rich Kubernetes monitoring → per-container billing complexity in ephemeral workloads
Common alternatives people evaluate next
These are common “next shortlists” — same tier, step-down, step-sideways, or step-up — with a quick reason why.
- New Relic — Same tier / consumption-based alternative. Comparable full-stack coverage, but with per-GB pricing instead of per-host; better for teams with many small services or unpredictable host counts.
- Grafana Cloud — Step-sideways / open-source managed alternative. Metrics, logs, and traces on open-source foundations (Prometheus, Loki, Tempo); better data portability at the cost of less pre-built integration polish.
- Honeycomb — Step-sideways / high-cardinality debugging. The alternative when your primary need is debugging complex distributed systems with high-cardinality data; exploratory queries rather than pre-built dashboards.
- Sentry — Complement / application error tracking. Handles application error tracking and code-level debugging better than Datadog's error tracking; most teams run both for complete coverage.
Sources & verification
Pricing and behavioral information comes from public documentation and structured research. When information is incomplete or volatile, we prefer to say so rather than guess.
Something outdated or wrong? Pricing, features, and product scope change. If you spot an error or have a source that updates this page, send us a correction. We prioritize vendor-verified updates and linkable sources.