Paul Trebe
February 12, 2026

Let’s start with the uncomfortable truth.

Plenty of companies are migrating off Datadog and saving 30 to 50 percent. On paper, that looks like a win. The budget drops. Finance is satisfied. Leadership moves on.

But I think most teams are solving the wrong problem.

If your telemetry grows 50× or 100× over the next few years, and in the AI era it will, saving 50 percent on today’s bill does not fix the economics. You have negotiated a better price on a model that is structurally misaligned with where your architecture is headed.

The Pattern I See Repeating

Here is what typically happens.

A company reaches scale and observability costs cross into the millions. They are sending only 60 percent of their logs. Indexing just 2 to 5 percent of their traces. Lower environments are turned off. Sampling rules are aggressive. Every feature discussion quietly includes cost impact.

Leadership decides to find a cheaper Datadog.

They evaluate Coralogix, Chronosphere, Dash0, Observe. These platforms are modern and often OpenTelemetry native. They position themselves as more efficient alternatives.

And often they are cheaper on day one.

But the pricing still revolves around volume. You pay per GB ingested, per span generated, per metric data point, per indexed event, per retention window.

Different packaging. Same growth linkage.

If your AI workloads generate 10×, 50×, or 100× more traces and logs, your bill scales accordingly. Saving 50 percent today while telemetry grows 100× tomorrow is not a long-term strategy. It is temporary relief.
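The arithmetic behind this claim is worth making concrete. Here is a minimal sketch, using entirely hypothetical figures (the $100k/month starting bill and the 50 percent discount are illustrative assumptions, not real vendor pricing), of how a negotiated discount fares against volume growth:

```python
# Toy model with hypothetical numbers: a 50 percent discount on a
# volume-priced bill does not survive large telemetry growth, because
# the bill still scales linearly with every GB ingested.

current_monthly_bill = 100_000  # hypothetical: $100k/month today
discount = 0.50                 # hypothetical: the "cheaper Datadog" deal

for growth in (1, 10, 50, 100):
    discounted = current_monthly_bill * (1 - discount) * growth
    print(f"{growth:>4}x telemetry: discounted bill = ${discounted:,.0f}/month")
```

At 100× growth, the discounted bill is still 50× larger than today's undiscounted one. The discount shifts the intercept; the slope is untouched.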

The Hidden Tax on Engineering

The invoice is not the only cost.

Volume-based pricing changes behavior. When engineers know every span and log line increases cost, they begin optimizing for bill control instead of clarity.

They sample more than they want to.
They shorten retention windows.
They hesitate to enable deeper instrumentation.

I have seen teams avoid turning on visibility because they did not want to trigger a five-figure overage. That is not alignment between platform and engineering. It is a tax on building.
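As one concrete example of that behavior, here is a sketch of the kind of hash-based trace sampler teams bolt on to control ingest cost. The 5 percent keep rate is an arbitrary assumption for illustration; the hashing approach mirrors common ratio-based samplers:

```python
import hashlib

KEEP_RATE = 0.05  # hypothetical: keep only 5% of traces to control the bill

def keep_trace(trace_id: str, keep_rate: float = KEEP_RATE) -> bool:
    """Deterministic hash-based sampling: the same trace_id always gets
    the same decision, so all spans of a trace are kept or dropped together."""
    bucket = int(hashlib.sha256(trace_id.encode()).hexdigest(), 16) % 10_000
    return bucket < keep_rate * 10_000

# Roughly 5% of traces survive; the other 95% of context is discarded
# before anyone knows whether they will need it.
kept = sum(keep_trace(f"trace-{i}") for i in range(100_000))
print(f"kept {kept} of 100,000 traces")
```

The sampler itself is sound engineering. The problem is why it exists: the keep rate is set by the invoice, not by what the team actually needs to debug.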

AI systems amplify this dynamic.

LLMs in production generate payload-heavy inference logs. Agent workflows create nested spans. Vector database calls stack up. Retry storms multiply events. Streaming outputs expand trace volume quickly.

AI systems do not produce incremental telemetry growth. They produce step changes.

If your pricing model charges per GB, per span, or per indexed event, your bill grows in direct proportion to innovation. The more context you capture, the more you pay.

That is structurally misaligned.

The Structural Flaw

Traditional SaaS observability models assumed telemetry would grow predictably. Infrastructure expanded, data followed, costs increased steadily.

AI breaks that assumption.

Telemetry growth in 2026 and beyond is not linear. A single AI feature launch can multiply trace volume in weeks. Model drift analysis demands longer retention. Agent-based architectures increase call depth and frequency.

When vendors monetize ingestion and retention, they are monetizing growth itself.

Even the modern alternatives still shape and monetize volume. Some give you better controls. Some help you drop data more intelligently.

But you are still shaping telemetry to manage cost.

You are still engineering around your pricing model.

The Architectural Shift

The only sustainable approach is to decouple cost from telemetry volume.

That requires a different architecture.

In a fully managed Bring Your Own Cloud model, the backend runs in your cloud account. You pay the actual infrastructure cost for storage and compute. There is no SaaS markup layered on top, and no charge per GB ingested, per span generated, or per indexed event.

Cost scales with infrastructure footprint, not with every additional signal your AI system produces.

That changes incentives immediately.

Engineers can send all the data. They can retain context for model evaluation. They can instrument deeply without worrying about an overage surprise. Economics stop constraining visibility.

Context improves as well.

OpenTelemetry is necessary, and we support it fully. But it is not a complete strategy. Manual instrumentation requires developers to predict where they will need visibility and maintain that instrumentation over time.

An eBPF-based sensor operates at the kernel level. It sees network calls, system calls, application calls, and database interactions. It captures request and response context automatically. It generates deep traces without configuration.

Deploy a single Helm chart in Kubernetes and you have logs, metrics, and traces with immediate depth.

In a volume-priced SaaS model, that level of telemetry would be financially risky. In a fully managed BYOC architecture without ingestion taxes, it becomes practical.

Open Source Compatible. Not Locked-In.

This shift does not mean abandoning open standards. In fact, we support Prometheus and OpenTelemetry, and we use OTTL inside our pipelines.

We do not fight the ecosystem. We extend it.

There are environments where external telemetry pipelines and SaaS control layers will continue to make sense. Large legacy enterprises anchored to older stacks often rely on those tools to bridge gaps in DevOps maturity.

But modern builders who are AI-native, Kubernetes-first, and cost-aware are increasingly questioning why they are paying a SaaS markup for telemetry plumbing.

When a fully managed BYOC architecture eliminates ingestion taxes, removes volume pricing, supports open standards, and delivers deep eBPF context out of the box, the economics change.

Incentives align.

And the observability tax starts to look like what it is: a model built for a pre-AI world.

A Fair Perspective

If you are a smaller organization with predictable telemetry growth and limited AI workloads, volume-based pricing may not hurt you yet. Traditional SaaS observability can still function in that environment.

There is also a switching cost. Migration requires effort and coordination.

But if you are building AI-native systems, operating Kubernetes at meaningful scale, and expecting telemetry to expand significantly over the next few years, the inflection point arrives quickly.

When it does, incremental discounts will not solve it.

The Evaluation Lens

This is not about finding a vendor that is 30 percent cheaper this quarter.

It is about asking a structural question.

If our telemetry grows 50× over the next three years, does our pricing model survive without forcing us to sample more, retain less, or limit visibility?
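For a sense of scale, here is the rough arithmetic behind that question (the $1M/year starting bill is a hypothetical figure; the growth is the 50×-over-three-years scenario from the question above):

```python
# 50x over three years compounds to roughly 3.7x per year.
# On a volume-priced model, the bill tracks that curve exactly.

growth_total = 50
years = 3
annual_growth = growth_total ** (1 / years)   # about 3.68x per year
print(f"annual telemetry growth: {annual_growth:.2f}x")

bill = 1_000_000  # hypothetical: $1M/year on a volume-priced model today
for year in range(1, years + 1):
    bill *= annual_growth
    print(f"year {year}: projected volume-priced bill = ${bill:,.0f}")
```

By year three, the projected bill is $50M. No sampling policy short of throwing away most of the data closes that gap.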

If the answer depends on controlling volume, then you are still paying an observability tax.

In the AI era, observability has to scale with innovation. Not penalize it.
