Quick signals
What this product actually is
Application error tracking and performance monitoring focused on code-level debugging. Stack traces, release health, session replay. Developer plan free; Team $26/mo; Business $80/mo.
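To make the code-level focus concrete, here is a minimal, hedged sketch of a Python SDK setup (the DSN, release string, and sample rate are placeholder assumptions; verify options against Sentry's current docs):

```python
import sentry_sdk

# A minimal sketch: error tracking plus release health. The DSN and
# release values are placeholders, not real project credentials.
sentry_sdk.init(
    dsn="https://examplePublicKey@o0.ingest.sentry.io/0",
    release="my-app@1.2.3",    # tags events with a release for release health
    environment="production",
    traces_sample_rate=0.1,    # sample 10% of transactions for performance monitoring
)

# Unhandled exceptions are reported automatically; handled ones can be
# sent explicitly.
try:
    1 / 0
except ZeroDivisionError as err:
    sentry_sdk.capture_exception(err)
```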
Pricing behavior (not a price list)
These points describe when users typically pay more, what actions trigger upgrades, and the mechanics of how costs escalate.
Actions that trigger upgrades
- Error volume exceeds 5K events/month free tier — Team plan at $26/month covers 50K events
- Team needs session replay to debug frontend issues — available on Team plan and above
- Organization requires SSO and advanced access controls — Business plan at $80/month
When costs usually spike
- Event quota is shared across all projects — one noisy service can consume quota intended for critical applications (a client-side filtering sketch follows this list)
- Performance monitoring transaction quota is separate from error quota — both need monitoring to avoid overages
- Data retention is 90 days on Team plan — historical analysis beyond 90 days requires Business plan or data export
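Both quota pressures above are usually managed client-side, before events ever count against the plan. A hedged sketch of the common Python SDK knobs (the "noisy-worker" logger name is a hypothetical example, not a recommendation):

```python
import sentry_sdk

def drop_noisy_events(event, hint):
    # Discard events from a known-noisy service so it cannot consume
    # quota meant for critical applications. The logger name
    # "noisy-worker" is a hypothetical placeholder.
    if event.get("logger") == "noisy-worker":
        return None  # returning None drops the event client-side
    return event

sentry_sdk.init(
    dsn="https://examplePublicKey@o0.ingest.sentry.io/0",
    sample_rate=0.5,          # keep 50% of error events (draws on the error quota)
    traces_sample_rate=0.05,  # transactions draw on a separate quota
    before_send=drop_noisy_events,
)
```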
Plans and variants (structural only)
Grouped by type to show structure, not to rank or recommend specific SKUs.
Plans
- Verify current pricing on the official website.
Costs and limitations
Common limits
- Not a general-purpose monitoring platform — no infrastructure metrics, no log management, no server monitoring
- Performance monitoring (tracing) is useful but shallow compared to dedicated APM tools like Datadog or New Relic
- Event volume pricing can surprise teams — a single bug loop can generate thousands of events and consume quota quickly
- Issue grouping sometimes merges distinct bugs or splits related ones — requires manual merging/splitting for accuracy
What breaks first
- Event quota consumed by a single bug loop or noisy service, blocking error capture for other critical applications
- Error grouping accuracy degrades with complex async stack traces — teams spend time manually managing issue groups
- Performance monitoring transaction limits hit before error limits when teams instrument all API endpoints (SDK-side mitigations for both of these points are sketched after this list)
- Alert fatigue from default notification settings — teams need to configure alert rules and routing to avoid noise
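Two of these failure modes have SDK-side mitigations: custom fingerprints can stabilize grouping across async stack traces, and a traces_sampler can keep heavily instrumented endpoints from exhausting the transaction quota first. A rough sketch under stated assumptions (the TimeoutError check and the "/health" path are illustrative, not general rules):

```python
import sentry_sdk

def stabilize_grouping(event, hint):
    # Force related timeout errors into one issue per transaction,
    # instead of letting grouping split them across async stack traces.
    # The TimeoutError check is an illustrative assumption.
    exc_info = hint.get("exc_info")
    if exc_info and isinstance(exc_info[1], TimeoutError):
        event["fingerprint"] = ["timeout-error", event.get("transaction", "unknown")]
    return event

def sample_transactions(sampling_context):
    # Sample health checks at zero and everything else lightly, so
    # instrumenting all endpoints does not hit transaction limits first.
    # The "/health" path is a hypothetical example.
    name = sampling_context.get("transaction_context", {}).get("name", "")
    if name.endswith("/health"):
        return 0.0
    return 0.05

sentry_sdk.init(
    dsn="https://examplePublicKey@o0.ingest.sentry.io/0",
    before_send=stabilize_grouping,
    traces_sampler=sample_transactions,
)
```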
Decision checklist
Use these checks to validate fit for Sentry before you commit to an architecture or contract.
- Unified platform vs best-of-breed tools: how many signal types do you need today (metrics, traces, logs, errors)?
- Cost model (per-host vs per-GB vs per-event): is your host count stable, or does it scale 3-10x during peaks?
- Data portability vs vendor convenience: how important is it that your dashboards and alerts survive a vendor change?
- Upgrade trigger: Error volume exceeds 5K events/month free tier — Team plan at $26/month covers 50K events
- What breaks first: Event quota consumed by a single bug loop or noisy service, blocking error capture for other critical applications
Implementation & evaluation notes
These are the practical "gotchas" and questions that usually decide whether Sentry fits your team and workflow.
Implementation gotchas
- Data retention is 90 days on Team plan — historical analysis beyond 90 days requires Business plan or data export (a rough export sketch follows this list)
- Fast SDK setup (15-30 minutes) → shallow performance monitoring compared to dedicated APM
- Performance monitoring (tracing) is useful but shallow compared to dedicated APM tools like Datadog or New Relic
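If the 90-day retention is the binding constraint, one workaround is periodically pulling issues out through Sentry's REST API before they age out. A rough sketch, assuming a valid auth token and org/project slugs; verify the endpoint path and pagination behavior against the current API docs:

```python
import requests

# Placeholders: supply a real auth token and your org/project slugs.
TOKEN = "YOUR_AUTH_TOKEN"
ORG, PROJECT = "my-org", "my-project"

def export_issues():
    # Page through project issues using Sentry's cursor-based Link
    # header pagination, yielding each issue for archival elsewhere.
    url = f"https://sentry.io/api/0/projects/{ORG}/{PROJECT}/issues/"
    headers = {"Authorization": f"Bearer {TOKEN}"}
    while url:
        resp = requests.get(url, headers=headers, timeout=30)
        resp.raise_for_status()
        yield from resp.json()
        next_link = resp.links.get("next", {})
        # Sentry marks the end of results with results="false".
        url = next_link.get("url") if next_link.get("results") == "true" else None

for issue in export_issues():
    print(issue["id"], issue.get("title"))
```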
Questions to ask before you buy
- Which actions or usage metrics trigger an upgrade? (e.g., error volume exceeding the 5K events/month free tier, where the Team plan at $26/month covers 50K events)
- Under what usage shape do costs or limits show up first? (e.g., the event quota is shared across all projects, so one noisy service can consume quota intended for critical applications)
- What breaks first in production, and what is the workaround? (e.g., a single bug loop or noisy service consuming the event quota and blocking error capture for other critical applications)
- Validate the unified-platform vs best-of-breed question: how many signal types do you need today (metrics, traces, logs, errors)?
- Validate the cost model (per-host vs per-GB vs per-event): is your host count stable, or does it scale 3-10x during peaks?
Fit assessment
Good fit
- Any development team that needs application error tracking — Sentry is the default choice regardless of what infrastructure monitoring you use.
- Frontend teams building React, Vue, or mobile apps that need session replay and user-impact analysis alongside error tracking.
- Teams shipping frequently (daily or multiple times per day) that need release health tracking to catch regressions immediately.
Poor fit
- You need infrastructure monitoring, server metrics, or log management — Sentry doesn't cover these. Pair it with Datadog, Grafana, or New Relic.
- You need deep distributed tracing with service maps and latency analysis — Sentry's performance monitoring is a complement to, not a replacement for, dedicated APM.
- Your error volume is extremely high (millions of events/day) without sampling — costs scale with events and can exceed budget without rate limiting.
Trade-offs
Every design choice has a cost. Here are the explicit trade-offs:
- Best error tracking in category → not a general monitoring platform (no infra, no logs)
- Fast SDK setup (15-30 minutes) → shallow performance monitoring compared to dedicated APM
- Session replay for frontend debugging → additional event quota consumption and privacy considerations
- Free Developer plan → event volume limits push teams to paid tiers quickly in production
Common alternatives people evaluate next
These are common “next shortlists” — same tier, step-down, step-sideways, or step-up — with a quick reason why.
- Datadog — Step-up / full-stack monitoring: Datadog provides error tracking as part of a full monitoring stack — the alternative when you want one vendor for everything, though Datadog's error tracking alone is less mature than Sentry's.
- New Relic — Step-up / full-stack with error tracking: New Relic includes error tracking within its full-stack platform — better when you want APM + errors + infrastructure in one tool and don't need Sentry's specialized error grouping.
- Honeycomb — Step-sideways / debugging distributed systems: Honeycomb focuses on debugging distributed system behavior through high-cardinality event exploration — a different approach from Sentry's error tracking, but it answers similar 'why did this break' questions.
Sources & verification
Pricing and behavioral information comes from public documentation and structured research. When information is incomplete or volatile, we prefer to say so rather than guess.
Something outdated or wrong? Pricing, features, and product scope change. If you spot an error or have a source that updates this page, send us a correction. We prioritize vendor-verified updates and linkable sources.