
Metric Cardinality Explained: Why High Cardinality Impacts Observability

October 26, 2025
Groundcover Team

When you set up monitoring for your systems, metrics quickly become one of the most important tools for understanding what is happening. They help you see how different parts of your system behave in real time and make it possible to respond when something goes wrong. But as your infrastructure scales, the amount of metric data grows fast, and with it comes a new challenge: handling the number of unique time series behind those metrics.

High metric cardinality is one of the most common reasons monitoring systems slow down, cost more, and lose visibility when you need it most. In some cloud-native environments, the number of unique series can reach into the millions, which can make queries slower and dashboards less reliable. Understanding why this happens and how to manage it is critical for building stable observability systems.

What is Metric Cardinality?

Metric cardinality is the number of unique time series that a single metric creates. A time series is a sequence of data points collected over time for one specific combination of a metric and its labels. These labels, such as region, endpoint, or status code, add context to a metric and make the data more meaningful and easier to analyze.

When a metric has no labels, it produces only one time series. But when labels are added, each unique combination of label values creates a separate series. To calculate metric cardinality, use the following formula:

Metric Cardinality = Number of unique values for Label 1 × Number of unique values for Label 2 × ... × Number of unique values for Label N

For example, a metric tracking HTTP requests with the labels method and status will generate one series for each pair of values. If the method has two values (GET and POST) and the status has three values (200, 404, 500), that results in 2 × 3 = 6 unique time series.

Time series breakdown of HTTP requests, categorized by method and status codes
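Here is a minimal sketch of that same example using the Python prometheus_client library (an assumed choice; any Prometheus-style client behaves the same way). Each distinct (method, status) pair the code touches becomes its own series:

```python
from prometheus_client import Counter, generate_latest

# One metric, two labels: each unique (method, status) pair becomes its own time series.
http_requests = Counter("http_requests", "Total HTTP requests", ["method", "status"])

# Simulate traffic across 2 methods x 3 status codes = 6 label combinations.
for method in ("GET", "POST"):
    for status in ("200", "404", "500"):
        http_requests.labels(method=method, status=status).inc()

# The exposition output now contains six http_requests_total series,
# one per unique label combination.
print(generate_latest().decode())
```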

As more labels are introduced, this number increases quickly.

Cardinality gives you a practical way to see how metrics multiply as your system grows. It also sets the stage for the challenges and decisions that come with collecting, storing, and analyzing large volumes of monitoring data.

Metric Cardinality Examples in Monitoring and Observability

Once you start collecting metrics in a real system, cardinality becomes more visible. A single metric may look manageable on its own, but when labels reflect the structure of your infrastructure, the number of time series increases quickly. Here are a few common scenarios:

Web Applications and Distributed Systems

In web applications, it is common to track HTTP request metrics and label them by endpoint, service, and region. Monitoring 20 endpoints across 5 services and 300 virtual machines can create around 30,000 unique time series. The rapid growth comes from how labels multiply as the system expands, not from the number of metrics themselves.

This becomes even more pronounced in globally distributed architectures such as auto-scaling microservices, Kubernetes clusters, or serverless platforms. Dynamic scaling can generate unique labels for each container, pod, function, or region. In a serverless setup processing millions of requests, frequent deployments may add new function_version labels, pushing series counts into the millions.
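A rough back-of-the-envelope estimate of this multiplication only requires the number of unique values each label can take. The figures below mirror the web application scenario above and are purely illustrative:

```python
# Back-of-the-envelope cardinality estimate (hypothetical label dimensions;
# adjust the counts to match your own metrics).
label_dimensions = {
    "endpoint": 20,   # distinct endpoint values
    "service": 5,     # distinct service values
    "instance": 300,  # distinct VM / instance values
}

cardinality = 1
for label, unique_values in label_dimensions.items():
    cardinality *= unique_values

print(cardinality)  # 20 * 5 * 300 = 30,000 series for a single metric
```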

IoT Deployments and Edge Networks

In IoT systems, it is common to label metrics like temperature or battery level with a unique device_id. Ten thousand sensors can create ten thousand unique time series from a single metric. Large-scale edge networks can intensify this problem because devices frequently connect and disconnect, which increases the variety of labels. 

Machine Learning and Multi-Tenant Platforms

Machine learning workloads often rely on labels such as model_version, experiment_id, or training_job_id. A single training run can produce thousands of time series, especially during large hyperparameter tuning experiments. Without control, these metrics can overwhelm monitoring backends. 

Multi-tenant SaaS platforms face similar issues. Labeling metrics like API latency with tenant_id for 100,000 tenants can lead to 100,000 time series from one metric alone. 

These examples show that the number of unique time series is shaped less by how many metrics you collect and more by how you label them. Controlling labels is therefore just as important as deciding which metrics to track.

Why Metric Cardinality Matters: Key Benefits

Even though high cardinality can create challenges, it also gives you more ways to understand what is happening inside your system. Each additional label provides another lens for analyzing performance, behavior, or reliability. When used with purpose, it can make your monitoring far more useful.

  1. More precise visibility
    Labels allow you to break metrics down into meaningful segments. Instead of seeing only the total number of requests, you can view them by region, service, or endpoint. This makes it easier to detect issues that affect a specific part of your system without losing the bigger picture.
  2. Faster problem detection
    With more detailed metrics, you can spot patterns that would otherwise stay hidden. For example, a rise in error rates in one region might not change the global average much, but it becomes clear when metrics are labeled by region.
  3. Better performance analysis
    High-cardinality data lets you measure how different parts of your system behave under real conditions. You can analyze latency by endpoint, request volume by service, or resource usage by pod. This level of granularity makes it easier to understand where slowdowns or failures begin.
  4. Targeted alerts and dashboards
    Labels make it possible to build alerts and dashboards that focus on specific parts of your system. Instead of alerting on the overall error rate, you can trigger alerts only for the affected service or region, which helps reduce noise and speeds up response.

Take a look at the following visualization and notice how a low-cardinality metric (the global error rate) hides a regional anomaly, while high-cardinality segmentation reveals the EU’s 10% error rate.

Chart showing how a low-cardinality metric (global error rate) hides a regional anomaly
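The same effect is easy to reproduce in a few lines of Python; the traffic numbers below are made up purely for illustration:

```python
# Illustrative (made-up) request and error counts per region.
traffic = {
    "us-east": {"requests": 50_000, "errors": 500},   # 1% errors
    "us-west": {"requests": 45_000, "errors": 450},   # 1% errors
    "eu":      {"requests": 5_000,  "errors": 500},   # 10% errors
}

# Low cardinality: one global series hides the regional problem.
total_requests = sum(r["requests"] for r in traffic.values())
total_errors = sum(r["errors"] for r in traffic.values())
print(f"global error rate: {total_errors / total_requests:.2%}")  # ~1.45%

# High cardinality: a region label makes the EU anomaly obvious.
for region, r in traffic.items():
    print(f"{region}: {r['errors'] / r['requests']:.2%}")
```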

Cardinality gives you flexibility. When used well, it turns raw metric data into something actionable. But the same flexibility that brings these benefits also leads to problems if left unmanaged.

Common Causes of High Metric Cardinality in Observability Data

Some labels add useful detail, while others introduce unbounded growth that can overwhelm a monitoring system. High cardinality usually starts with a few small design choices that compound as your infrastructure scales. Here are some of the most common causes:

  • Dynamic or unbounded labels
    Labels that take on many unique values, such as user_id, session_id, IP addresses, or timestamps, can generate thousands or even millions of series from a single metric (see the sketch after this list). These values change frequently, which steadily pushes the total cardinality upward.
  • Container and microservice environments
    Labels like container_id or pod_name change whenever new instances are created. In platforms that scale automatically, new labels appear constantly, which makes cardinality grow even if the underlying metrics stay the same.
  • Debugging and temporary labels
    Labels added during development or troubleshooting sometimes stay in production. Fields like trace IDs or build numbers may seem harmless, but can expand the number of series without adding long-term value.
  • Full or raw values as labels
    Using raw data in labels, like a complete URL path or filename, creates a new series for each distinct value. Normalizing these values, for example, replacing /users/12345/profile with /users/{user_id}/profile, keeps the number of series predictable.
  • User-defined or uncontrolled tagging
    When teams or tools can freely add custom labels, the total number of unique combinations can grow without anyone noticing. This often leads to unexpected spikes in cardinality.
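To make the first cause concrete, here is a hedged sketch (again using the Python prometheus_client, with hypothetical metric names) contrasting an unbounded label with a bounded alternative:

```python
from prometheus_client import Counter

# Unbounded label (hypothetical): every distinct user_id creates a new series.
login_by_user = Counter("logins", "Logins", ["user_id"])
for user_id in range(100_000):
    login_by_user.labels(user_id=str(user_id)).inc()   # ~100,000 series

# Bounded alternative: label by a small, fixed set of values instead,
# and keep per-user detail in logs or traces.
login_by_method = Counter("logins_by_method", "Logins", ["auth_method"])
for method in ("password", "sso", "api_key"):
    login_by_method.labels(auth_method=method).inc()   # 3 series
```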

These causes usually appear gradually and can turn into a major problem when infrastructure or traffic grows. 

Challenges of High Metric Cardinality

High metric cardinality doesn’t just make your data harder to manage. It can directly affect the performance, cost, and reliability of your observability system. Here are some of the challenges it brings:

  • Heavier storage and indexing
    Every unique series must be stored and indexed. As the number grows, the system needs more disk space and memory to keep up. Even with compression, indexing millions of series consumes a large amount of resources, which can strain your monitoring stack.
  • Slower data ingestion
    When new labels appear, the time series database has to create and index new series. If this happens at scale, ingestion slows down, and in some cases, data may be dropped when the system can’t keep up with the load.
  • Longer query times
    Queries that have to scan or aggregate across many series take longer to run. Dashboards that were once responsive can become sluggish, making it harder to detect and respond to incidents quickly.
  • Increased costs
    Many monitoring platforms charge based on data volume or the number of active series. High cardinality can lead to significantly higher bills without delivering more actionable insight.
  • Operational complexity
    Managing millions of series adds overhead. Teams have to spend time tuning the database, pruning labels, and fixing dashboards instead of focusing on improving the system itself.
  • Data gaps and reduced visibility
    Some platforms enforce strict limits on the number of active series. When those limits are reached, new data may be dropped or rolled up, which leads to blind spots in monitoring and makes investigations harder.

As you have seen, these challenges grow gradually and can be easy to overlook until they begin to affect performance or reliability. That’s why having a clear strategy to manage cardinality is just as important as collecting the right metrics in the first place.

How to Manage Metric Cardinality Effectively

You can’t avoid high cardinality entirely, but you can control it with a clear strategy. Managing it isn’t about collecting fewer metrics. It’s about structuring them in a way that gives you useful insights without creating unnecessary series that slow down your monitoring stack or increase cost.

Be deliberate with labels

The first step is deciding which labels truly matter. Labels that help you detect and resolve issues should stay, while those that add little value should be avoided. A small change in labeling decisions can significantly reduce the number of time series over time.

Aggregate where it makes sense

Instead of labeling metrics with highly specific values, group them into categories. Tracking traffic by region, for example, is far more efficient than labeling by individual IP address. This still gives useful visibility while keeping the number of unique series manageable.
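One way to do this is to map raw values onto a small, fixed set of categories before they ever become label values. The sketch below uses a hypothetical network-to-region mapping:

```python
import ipaddress

# Hypothetical mapping from network ranges to a small set of region labels.
REGION_BY_NETWORK = {
    ipaddress.ip_network("10.0.0.0/16"): "us-east",
    ipaddress.ip_network("10.1.0.0/16"): "eu-west",
}

def region_for(ip: str) -> str:
    addr = ipaddress.ip_address(ip)
    for network, region in REGION_BY_NETWORK.items():
        if addr in network:
            return region
    return "other"   # bounded fallback instead of a per-IP label

print(region_for("10.1.42.7"))  # eu-west
```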

Use buckets for continuous values

Metrics like latency or payload size can generate endless unique values. Turn these into buckets, such as 0–100 ms or 100–200 ms. This keeps the series count predictable and still shows performance trends clearly.
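With a Prometheus-style client, this is what histogram buckets are for. The sketch below (bucket boundaries are illustrative) records arbitrary latency values into a fixed set of series:

```python
from prometheus_client import Histogram

# Bucketed latency: a fixed set of buckets keeps the series count predictable,
# no matter how many distinct latency values are observed.
request_latency = Histogram(
    "request_latency_seconds",
    "HTTP request latency",
    buckets=(0.1, 0.2, 0.5, 1.0, 2.0),  # 0-100 ms, 100-200 ms, ...
)

for observed in (0.032, 0.157, 0.481, 1.734):
    request_latency.observe(observed)
```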

Monitor your cardinality itself

Keep an eye on the number of active series to help you catch problems early. A sudden spike usually points to a new label or a change in how metrics are being generated, making it easier to correct the issue before it becomes expensive.
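If your backend exposes a Prometheus-compatible query API, a small script can report the top series counts per metric. The endpoint URL and query below are assumptions to adapt to your setup, and this kind of query can be expensive on very large installations:

```python
import requests

# Hypothetical Prometheus-compatible endpoint; adjust to your environment.
PROM_URL = "http://prometheus:9090/api/v1/query"

# Count active series per metric name so a sudden spike stands out.
query = 'topk(10, count by (__name__) ({__name__=~".+"}))'

resp = requests.get(PROM_URL, params={"query": query}, timeout=10)
resp.raise_for_status()

for result in resp.json()["data"]["result"]:
    metric_name = result["metric"].get("__name__", "<unknown>")
    series_count = result["value"][1]
    print(f"{metric_name}: {series_count} active series")
```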

Limit label sources

Restrict where labels can come from to help prevent uncontrolled growth. For example, blocking user-defined or auto-generated labels from going directly into production metrics ensures that only reviewed and useful labels are used.

Use sampling or rollups when needed

For high-volume metrics, sampling a portion of the data or rolling it up at a higher level can ease the load on your monitoring system. This approach keeps the most important information while reducing storage and processing overhead.

Managing cardinality effectively comes down to being intentional. With a few structured practices in place, your monitoring system remains reliable, fast to query, and cost-efficient even as your infrastructure grows.

Best Practices for Reducing and Controlling Metric Cardinality

Managing cardinality works best when it becomes part of your monitoring design, not just a reaction to problems after they appear. By following consistent practices, you can keep your system stable, your queries fast, and your costs predictable as you scale.

Avoid unique identifiers in labels

Labels like user_id, session_id, or container IDs create massive numbers of unique series with little long-term value. These identifiers are better stored in logs or traces, which are designed to handle that level of detail.

Normalize dynamic values

When a label contains unique or changing values, replace those specifics with a pattern. For example, /users/12345/profile can be normalized to /users/{user_id}/profile. This keeps the data structured and the series count predictable.
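A small normalization helper, applied before the value is used as a label, is usually enough. The patterns below are hypothetical examples:

```python
import re

# Hypothetical normalization rules: collapse high-cardinality path segments
# into stable placeholders before they are used as label values.
NORMALIZATION_RULES = [
    (re.compile(r"/users/\d+"), "/users/{user_id}"),
    (re.compile(r"/orders/[0-9a-f-]{36}"), "/orders/{order_id}"),
]

def normalize_path(path: str) -> str:
    for pattern, replacement in NORMALIZATION_RULES:
        path = pattern.sub(replacement, path)
    return path

print(normalize_path("/users/12345/profile"))  # /users/{user_id}/profile
```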

Keep labels consistent

Small inconsistencies like us-west versus uswest create separate series unnecessarily. Defining clear naming rules for labels helps avoid unintentional growth caused by inconsistent data.

Monitor and audit regularly

Regularly check how many series exist for each metric to help catch problems early. If a single metric has an unexpectedly large number of series, it’s a signal to revisit how it’s labeled.

These practices work together to keep metric data structured and manageable. By applying them early, you reduce the risk of performance slowdowns and unplanned cost spikes while still preserving the detail needed for meaningful monitoring.

How groundcover Simplifies Metric Cardinality Management

The strategies covered so far give you practical ways to manage metric cardinality, but groundcover is designed to handle high cardinality at its core. Its approach makes it possible to collect detailed metrics without constantly worrying about label explosions, unpredictable costs, or query slowdowns as your systems scale.

No hard limits on cardinality

groundcover doesn’t impose a fixed limit on how many active series you can store. When your infrastructure grows and new labels appear, the system continues to ingest and process the data without discarding series or rolling them up. This matters because high cardinality is not always a sign of poor metric design. In some cases, you need that level of detail to properly understand system behavior. By removing this ceiling, groundcover lets you keep the visibility you need without compromise.

Fixed, predictable pricing

In most observability setups, cardinality growth directly impacts cost, since the more series you generate, the more you pay. groundcover’s pricing model is tied to the number of nodes rather than the number of series or the volume of data stored. This makes costs stable even as your system scales or as you add new labels to your metrics. You can keep the data you need without having to reduce cardinality just to stay within budget.

eBPF-based data collection

groundcover uses eBPF (Extended Berkeley Packet Filter) technology to collect telemetry directly from the kernel. This approach doesn’t require you to change application code or insert manual instrumentation. By gathering metrics at the kernel level, groundcover can observe system activity across your environment in real time. This not only captures detailed data but also reduces the operational overhead of managing how data is collected.

Stream processing for efficiency

A key part of groundcover’s architecture is how it processes data before it reaches long-term storage. Instead of writing every raw data point to a database, it uses stream processing to handle metric ingestion. This allows it to process and organize large volumes of high-cardinality data while keeping queries responsive. Even with millions of series, the system remains stable because the heavy lifting happens during ingestion, not during queries.

Full visibility without pruning

In many monitoring setups, engineers have to reduce labels or trim metrics to keep systems performant. groundcover removes that pressure. You can maintain detailed labels such as region, endpoint, service, or pod without risking performance degradation or hidden data gaps. This ensures that when you investigate an issue, the level of detail you need is always available.

groundcover’s model shifts how you think about cardinality. Instead of focusing on how to restrict it, you can focus on how to use it. This gives you the flexibility to build richer monitoring setups while keeping the system stable, predictable, and easy to operate.

FAQs

1. What is the relationship between metric cardinality and query performance?

Query performance is directly affected by how many unique time series a system has to scan. When cardinality is high, each query needs to process more data, which can make dashboards slower and increase response times. Keeping cardinality manageable means the system can return results faster and stay responsive under load.

2. How can engineering teams detect unexpected spikes in metric cardinality?

The most reliable way is to monitor the number of active time series over time. A sudden jump usually points to new labels or changes in how metrics are generated. Setting up alerts on series counts or using built-in monitoring tools to track cardinality trends helps teams catch problems early, before they affect performance or cost.

3. What are the trade-offs between retaining detailed metrics and reducing cardinality?

Keeping detailed metrics gives deeper visibility, but it increases storage and processing demands. Reducing cardinality makes systems faster and cheaper to operate, but can limit the level of detail available during investigations. The best approach balances the two: keep detail where it adds value, and simplify where it doesn’t.

Conclusion

Metric cardinality plays a central role in how your monitoring system scales, shaping performance, cost, and the depth of insight you can gain from your data. Thoughtful labeling and structure help prevent problems before they grow too large to manage. As for groundcover, its architecture removes many of the usual constraints around cardinality, allowing you to keep detailed metrics without losing visibility or stability. This gives you the space to focus less on managing limits and more on understanding and improving your systems.

Make observability yours

Stop renting visibility. With groundcover, you get full fidelity, flat cost, and total control — all inside your cloud.