Shahar Azulay
March 16, 2026

There's a specific kind of pain that every engineering team knows. Your Slack lights up. A customer screenshot. A user flow that's been broken for who knows how long, and you're the last one to find out about it.

That's the problem with reactive monitoring. You instrument your services, set up dashboards, wire up alerts, and then you wait. You wait for real user traffic to hit a broken endpoint, generate a failed trace, and trip a monitor. Only then do you know something is wrong.

The gap between when something breaks and when you find out is what I wanted to close. That's why we built groundcover Synthetic Performance Monitoring.

The blind spots in reactive monitoring

Most observability stacks are built on an assumption: user traffic is your signal. Metrics spike, traces pile up, errors appear, and that's how you know to look. It works under load. But there are entire classes of problems it just misses.

  • Low-traffic internal APIs that sit quiet for hours at a time  
  • Off-hours failures at 3 a.m. when nobody is hitting your app  
  • Subtle regressions where the status code is 200 but the JSON body is broken  
  • Brand-new endpoints that nobody has hit in production yet

None of these should be acceptable blind spots for teams running serious infrastructure. You shouldn't need a user to discover a problem before you do.

What it actually does

Synthetic Performance Monitoring runs scheduled checks that simulate real requests against your endpoints and alert you the moment something goes wrong, no user traffic needed.

The concept is simple: tell groundcover what to probe, how often, and what a correct response looks like. From there it continuously validates your services on your behalf. If a check fails, you know immediately. When it recovers, you know that too.
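The core loop is easy to picture. Here's a minimal sketch of the idea in Python, not groundcover's implementation: a check is a probe plus an expectation, run on a schedule. The `probe` callable and the field names are illustrative assumptions.

```python
import time

def run_check(probe, expected_status=200, max_latency=1.0):
    """Run one synthetic check: probe the endpoint, decide pass/fail.

    `probe` is any callable returning (status_code, elapsed_seconds);
    in a real system it would issue the actual HTTP request.
    """
    status, elapsed = probe()
    return status == expected_status and elapsed <= max_latency

def check_loop(probe, interval_seconds, iterations):
    """Scheduled checks: probe repeatedly, collecting pass/fail results."""
    results = []
    for _ in range(iterations):
        results.append(run_check(probe))
        time.sleep(interval_seconds)
    return results
```

A failing probe (wrong status, or a response that's too slow) flips the result, which is what would trip the bound monitor.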

But the implementation has a few decisions baked in that I think make this genuinely different from other synthetic tools. I want to be clear about what those choices are.

Why checks run from inside your cloud

Most synthetic monitoring tools run probes from external data centers distributed around the world. That works fine for public-facing APIs. But it creates real problems for anything that isn't reachable from outside your network, which for most serious infrastructure is a lot of things.

groundcover Synthetic Performance Monitoring runs from within your own BYOC backend. The probe originates inside your environment, reaches internal services the same way a real microservice does, and your request data never leaves your infrastructure. Three things this unlocks immediately:

  • You can monitor endpoints that are completely private and unreachable from the public internet  
  • Checks simulate traffic as it actually flows through your network instead of arriving from an external IP  
  • Request payloads, headers, and credentials stay in your cloud

We had customers tell us that external synthetic tools were essentially useless for half their stack because everything was locked down inside a VPC. That constraint goes away here.

A 200 OK is not a passing test

It never has been. Real production failures often look like a service returning 200 with a broken body, a missing JSON field, or a response time that has quietly crept past any reasonable threshold. Status-only checks miss all of that.

So we built multi-layered assertion logic. For any test you can assert on:

  • Status code (equals, not equals, is one of)
  • Response headers (exists, contains, matches regex)
  • JSON body fields, including deeply nested values
  • Raw body text
  • Response time

You can attach multiple assertions per test, and if any one fails, the check is marked failed. You can also import from cURL to auto-populate the request config in seconds. When you're setting up tests across a lot of endpoints, that matters.
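To make the "any one fails, the check fails" semantics concrete, here's a sketch of a multi-layered assertion evaluator in Python. The assertion schema (`kind`, `path`, `equals`, etc.) is made up for illustration and is not groundcover's actual test format; the point is that a 200 status alone never decides the result.

```python
import json
import re

def get_nested(body, path):
    """Fetch a deeply nested JSON field, e.g. path 'data.user.id'."""
    value = body
    for key in path.split("."):
        value = value[key]
    return value

def evaluate(response, assertions):
    """Return True only if every assertion passes; any failure fails the check."""
    for a in assertions:
        kind = a["kind"]
        if kind == "status":
            ok = response["status"] == a["equals"]
        elif kind == "header":
            ok = re.search(a["matches"], response["headers"].get(a["name"], "")) is not None
        elif kind == "json_field":
            try:
                ok = get_nested(json.loads(response["body"]), a["path"]) == a["equals"]
            except (KeyError, TypeError, ValueError):
                ok = False  # missing field or broken JSON is a failure, even on a 200
        elif kind == "body_contains":
            ok = a["text"] in response["body"]
        elif kind == "response_time":
            ok = response["elapsed_ms"] <= a["max_ms"]
        else:
            ok = False
        if not ok:
            return False
    return True
```

With this shape, a response that returns 200 but is missing a nested JSON field, or that has quietly gotten slow, fails the check exactly like a 500 would.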

Monitoring that runs itself

When you create a Synthetic Test in groundcover, a Monitor is automatically created and permanently bound to it. You don't configure alerting separately. You don't maintain two things that need to stay in sync. There's no configuration drift.

The monitor manages its own lifecycle: Pending, Firing, Resolved. Test starts failing, monitor fires. Test recovers, monitor resolves. You update the test (new target URL, different assertion, adjusted interval) and the monitor reflects it automatically. No double maintenance.
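The lifecycle described above can be sketched as a tiny state machine. The state names come from the post; the exact transition rules here are an assumption for illustration, not groundcover's internal logic.

```python
from enum import Enum

class State(Enum):
    PENDING = "pending"
    FIRING = "firing"
    RESOLVED = "resolved"

def next_state(current, check_passed):
    """Advance the bound monitor based on the latest check result."""
    if not check_passed:
        return State.FIRING           # test starts failing, monitor fires
    if current == State.FIRING:
        return State.RESOLVED         # test recovers, monitor resolves
    return current                    # healthy check, no transition
```

The key property is that the monitor has no configuration of its own to drift: its state is purely a function of the test's results.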

Route alerts to PagerDuty, Slack, Opsgenie, wherever you already route them. The goal was zero to alerting on a new endpoint in under two minutes.

Synthetic traces alongside your real ones

This is the part I care most about, and honestly what I think no other synthetic tool gets right.

Every synthetic check generates a full distributed trace. That trace lives in the groundcover Traces Explorer right alongside your real application traces. Not in a separate dashboard, not a siloed product, not behind a different login. It's the same explorer, the same filtering, and the same drill-down experience. You can isolate synthetic traffic with source:synthetics, attach custom labels, and correlate failures with backend logs and metrics instantly.

When a check fails, you're not looking at a red dot and a status code. You're looking at a trace. You see exactly what happened in your backend and you start debugging. That's what unified observability actually means, and synthetic checks shouldn't be an afterthought bolted on top of it.

Pricing

Synthetic Performance Monitoring is available on Pro, Enterprise, and On-Prem plans. Adding synthetic tests does not increase your license cost. groundcover prices on nodes, not on checks or monitors. Set up as many tests as you need.

Get started

If you're already on groundcover, navigate to Monitors > Synthetics and create your first test. It takes only a couple of minutes.

If you're new, start free. Your first synthetic check can be running within minutes of installation, no external probes, no separate alerting setup, no gap between your synthetic data and the rest of your observability stack.

We built this to close the gap between when something breaks and when you find out. I hope it helps.
