Escaping Datadog: How We Built an Automated Observability Migration Tool
Legacy observability tools make migration slow, costly, and risky. Learn how groundcover built an AI-powered Migration Tool to fix it—for good.
We just closed Q3 with the strongest pipeline in groundcover's history. Over 500% ARR growth. Hundreds of enterprise customers. A $35M Series B in April. By every measure, we're winning.
But we also lost three enterprise deals in the last 90 days.
Same story every time: excited buyer, clear ROI, executive sign-off. Then radio silence.
When I finally got them on the phone, the answer was always a version of the same thing: "We want to switch. The math makes sense. But we have 400 dashboards and 1,200 monitors. Our team estimates 6 months to migrate everything. We can't take that risk right now."
They stayed on Datadog. Paying $2 million per year. Because the fear of breaking observability during migration outweighed the pain of overpaying.
This isn't a sales problem. It's a product problem.
And today, we're announcing the solution: the groundcover Migration Tool. Fully automated, AI-powered, self-service migration from legacy observability vendors to groundcover. What used to take 6 months and $200K in consulting now takes minutes and costs nothing.
But before I explain what we built, I need to explain why migration became the observability industry's dirtiest secret.
The Hidden Moat Around Legacy Observability
Here's what enterprise migration actually looks like today.
You start by manually exporting dashboards from Datadog. Not through an API that gives you clean JSON. Through the UI, one at a time, or by reverse-engineering their internal formats. Then you inventory your monitors. Your integrations. Your custom log pipelines. Your alert workflows. Your SLO definitions.
Next, you need to map metric names and other properties. Datadog calls something kubernetes.cpu.usage. Your new platform calls it kube_pod_container_resource_usage. Or maybe container_cpu_usage_seconds_total. There's no standard. You build a spreadsheet with hundreds of rows mapping old metric names to new ones.
Then you rebuild. Every dashboard from scratch. Every monitor. Every integration. You test in staging. You compare side-by-side with production. You find the 47 things you missed. You rebuild those too. You argue with your team about whether the new dashboard "looks right." You discover that one critical alert was using a Datadog-specific function that doesn't exist in the new platform.
Six months later, if you're disciplined and lucky, you're done.
But most teams aren't lucky. They hit problems:
Platform differences. No two observability platforms are identical, even when they claim feature parity. Datadog's dashboard widgets don't map cleanly to Grafana panels. New Relic's query language isn't PromQL. Subtle differences in how platforms calculate percentiles or handle missing data points create silent failures that take weeks to debug.
Integration complexity. You need to reconnect every data source. AWS CloudWatch. GCP monitoring. Azure metrics. Third-party APIs. Each one has its own authentication flow, its own quirks, its own failure modes.
Schema mismatches. Your old platform stored labels as env:production. Your new platform wants environment=production. You have 10,000 monitors. Good luck updating them all without breaking something.
Legacy cruft. Over the years, your team built hundreds of dashboards. Half of them are abandoned. A quarter reference metrics that don't exist anymore. But you don't know which ones are critical until someone pages you at 2am because the dashboard they rely on is blank.
So teams hire consultants. Datadog's partners offer "Custom Implementation & Migration Services" at $250-$400 per hour. A typical enterprise migration runs $50K to $200K in consulting fees alone. That's on top of the 6 months of internal engineering time.
Or teams just... don't migrate. They stay put. Even when they're unhappy. Even when there's a better option.
Grafana's documentation is admirably honest about this. Their migration guide states that the process "requires technical knowledge of Grafana's HTTP API and time-consuming manual processes." Even with their enterprise Datadog converter tool, customers still need to "make manual adjustments to dashboards and configurations."
New Relic's migration tutorial suggests users "try New Relic without needing to fully migrate your stack" — corporate speak for "this is going to hurt, so maybe just run both systems in parallel indefinitely."
The quiet part no one says: this isn't a technical limitation. It's a feature.
If migration is slow, expensive, and risky, customers won't leave. Even if your product is overpriced. Even if your volume-based pricing model forces them to sample their data or disable logs entirely. Even if they're paying $3 million a year for observability they could get for $300K.
The friction is the moat.
What We Built
The groundcover Migration Tool automates the entire migration process end-to-end. It's not a converter script or a professional services engagement. It's a feature of our product. Self-service, AI-powered, and included free for every groundcover customer.
Here's how it works.
Step 1: Automated Discovery
You connect your Datadog account via a read-only API key. Our system scans your entire environment: every dashboard, every monitor, every integration, every custom metric, every alert workflow. It builds a complete dependency graph of your observability stack.
This isn't just pulling JSON files. We're parsing widget configurations, identifying metric dependencies, mapping data sources, and understanding the relationships between dashboards and the alerts that reference them.
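For the curious, here's roughly what that first pass looks like against Datadog's public REST API. This is a simplified sketch, not our actual crawler: the helper names are illustrative, and it assumes you've exported read-only API and application keys as environment variables.

```python
import os
import requests

DD_SITE = "https://api.datadoghq.com"
HEADERS = {
    "DD-API-KEY": os.environ["DD_API_KEY"],          # read-only credentials
    "DD-APPLICATION-KEY": os.environ["DD_APP_KEY"],
}

def list_dashboards():
    """Enumerate every dashboard, then fetch its full widget definitions."""
    resp = requests.get(f"{DD_SITE}/api/v1/dashboard", headers=HEADERS, timeout=30)
    resp.raise_for_status()
    for summary in resp.json()["dashboards"]:
        detail = requests.get(f"{DD_SITE}/api/v1/dashboard/{summary['id']}",
                              headers=HEADERS, timeout=30)
        detail.raise_for_status()
        yield detail.json()  # includes layout_type, widgets, template_variables

def list_monitors():
    """Enumerate monitors, including their queries and notification settings."""
    resp = requests.get(f"{DD_SITE}/api/v1/monitor", headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return resp.json()

# A hypothetical inventory structure; the real tool builds a full dependency graph.
inventory = {
    "dashboards": list(list_dashboards()),
    "monitors": list_monitors(),
}
```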
Step 2: Intelligent Mapping
The hard part of migration is metric mapping. Datadog uses its own naming conventions. groundcover uses standard Prometheus/OpenTelemetry conventions. The two don't align.
We built a mapping engine with 300+ predefined metric translations. When you're collecting kubernetes.cpu.usage in Datadog, we know you need container_cpu_usage_seconds_total in groundcover. When you're tracking trace.express.request.duration, we map it to our equivalent trace metric.
For custom metrics or unique schemas, you can define your own mappings. The system learns from your corrections and applies them across all dashboards and monitors.
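To make that concrete, here's a toy slice of how the mapping layer behaves. The Prometheus-style names on the right are examples rather than a spec of our schema (the trace metric target in particular is a placeholder), and the real table has hundreds of entries.

```python
# Illustrative only: a tiny slice of a predefined mapping table, plus
# user-supplied overrides for custom metrics.
PREDEFINED_MAPPINGS = {
    "kubernetes.cpu.usage": "container_cpu_usage_seconds_total",
    "kubernetes.memory.usage": "container_memory_usage_bytes",        # illustrative
    "trace.express.request.duration": "trace_request_duration_seconds",  # placeholder target name
}

def translate_metric(datadog_metric: str, custom_mappings: dict[str, str]) -> str:
    """User-defined (corrected) mappings win over the predefined table."""
    if datadog_metric in custom_mappings:
        return custom_mappings[datadog_metric]
    if datadog_metric in PREDEFINED_MAPPINGS:
        return PREDEFINED_MAPPINGS[datadog_metric]
    raise KeyError(f"No mapping for {datadog_metric!r}; flag for manual review")

# Example: a team's own override, applied across all dashboards and monitors.
custom = {"myapp.checkout.latency": "myapp_checkout_latency_seconds"}
print(translate_metric("kubernetes.cpu.usage", custom))  # container_cpu_usage_seconds_total
```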
Step 3: Pixel-Perfect Recreation
Once mapping is complete, we rebuild your dashboards automatically. Not "close enough" versions. Pixel-perfect recreations.
We rewrote our dashboard rendering engine specifically for this. We added support for floating layouts, sections, and data widgets to match Datadog's structure. Your three-column dashboard with a header section and grouped visualizations renders identically in groundcover.
If a widget type isn't fully supported, we flag it clearly. We preserve the layout and show you exactly what needs manual adjustment. No silent failures. No surprises three weeks later when someone notices a critical graph is missing.
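In spirit, the per-widget conversion works something like this. The Datadog widget shape is simplified and the target panel schema is invented for illustration; the point is that layout is preserved and unsupported widgets are flagged, never dropped.

```python
# Sketch of the per-widget conversion idea: keep the layout, translate the
# query, and flag anything unsupported instead of silently dropping it.
SUPPORTED_TYPES = {"timeseries", "query_value", "toplist"}

def convert_widget(widget: dict, translate_query) -> dict:
    definition = widget["definition"]
    panel = {
        "title": definition.get("title", ""),
        "layout": widget.get("layout", {}),   # x / y / width / height carried over as-is
        "needs_manual_review": False,
    }
    if definition["type"] not in SUPPORTED_TYPES:
        # Unsupported widget type: keep its spot on the dashboard, flag it for review.
        panel["needs_manual_review"] = True
        panel["original_type"] = definition["type"]
        return panel
    panel["type"] = definition["type"]
    panel["queries"] = [translate_query(req["q"])
                        for req in definition.get("requests", []) if "q" in req]
    return panel

# Example with a simplified Datadog timeseries widget:
widget = {
    "definition": {"type": "timeseries", "title": "CPU by pod",
                   "requests": [{"q": "avg:kubernetes.cpu.usage{env:production} by {pod_name}"}]},
    "layout": {"x": 0, "y": 0, "width": 4, "height": 2},
}
print(convert_widget(widget, translate_query=lambda q: q))  # identity translation for the demo
```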
Step 4: Side-by-Side Validation
Throughout the migration, you see your old Datadog dashboards next to the new groundcover versions. You can compare them query by query, panel by panel. You can validate that the data matches before you switch over.
This is the part that eliminates risk. You're not flying blind. You're not hoping you got it right. You're verifying everything works before you cut over.
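If you want to script that comparison yourself, the idea is simple: pull the same time window from both sides and compare within a tolerance. The sketch below uses Datadog's timeseries query API and assumes a Prometheus-compatible query_range endpoint on the groundcover side; the endpoint URL is a placeholder, and exact rollup behavior will differ between platforms.

```python
import os
import time
import requests

DD_HEADERS = {"DD-API-KEY": os.environ["DD_API_KEY"],
              "DD-APPLICATION-KEY": os.environ["DD_APP_KEY"]}

def datadog_series(query, start, end):
    # Datadog's timeseries query API returns points as [timestamp_ms, value] pairs.
    r = requests.get("https://api.datadoghq.com/api/v1/query", headers=DD_HEADERS,
                     params={"query": query, "from": start, "to": end}, timeout=30)
    r.raise_for_status()
    series = r.json().get("series", [])
    return [v for _, v in series[0]["pointlist"] if v is not None] if series else []

def promql_series(base_url, query, start, end, step=60):
    # Assumes a Prometheus-compatible query_range endpoint on the new side.
    r = requests.get(f"{base_url}/api/v1/query_range",
                     params={"query": query, "start": start, "end": end, "step": step},
                     timeout=30)
    r.raise_for_status()
    result = r.json()["data"]["result"]
    return [float(v) for _, v in result[0]["values"]] if result else []

def roughly_equal(a, b, tolerance=0.05):
    # Different rollup windows mean exact equality is rare; compare within a band.
    pairs = list(zip(a, b))
    return bool(pairs) and all(abs(x - y) <= tolerance * max(abs(x), abs(y), 1e-9)
                               for x, y in pairs)

end = int(time.time())
start = end - 3600
old = datadog_series("avg:kubernetes.memory.usage{env:production}", start, end)
new = promql_series("https://YOUR-GROUNDCOVER-ENDPOINT",   # placeholder URL
                    'avg(container_memory_usage_bytes{env="production"})', start, end)
print("match" if roughly_equal(old, new) else "needs review")
```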
Step 5: Self-Service Integration
The final piece is reconnecting your data sources. Traditionally, this means opening support tickets, waiting for vendor engineers, going back and forth on Slack for weeks.
We built a self-service integration center. You can connect AWS, GCP, and Azure directly through the UI. You configure scopes, tags, and scraping intervals yourself. You can use Terraform, API calls, or point-and-click, whatever fits your workflow.
The system detects what integrations you had in Datadog and prompts you to set up the equivalents. It shows you what's missing and creates action items. If you need AWS CloudWatch metrics, it walks you through the IAM role setup. If you need GCP monitoring, it generates the service account config.
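The detection piece isn't magic, either. It reads the integrations you've configured in Datadog and turns them into action items. Here's a minimal sketch using Datadog's AWS integration endpoint; the action-item wording is illustrative, and field names may vary slightly across API versions.

```python
import os
import requests

DD_HEADERS = {"DD-API-KEY": os.environ["DD_API_KEY"],
              "DD-APPLICATION-KEY": os.environ["DD_APP_KEY"]}

def aws_accounts_in_datadog():
    # Datadog's AWS integration API lists every connected AWS account.
    # (GCP and Azure have analogous /integration/gcp and /integration/azure endpoints.)
    r = requests.get("https://api.datadoghq.com/api/v1/integration/aws",
                     headers=DD_HEADERS, timeout=30)
    r.raise_for_status()
    return r.json().get("accounts", [])

for account in aws_accounts_in_datadog():
    print(f"Action item: reconnect AWS account {account.get('account_id')} "
          f"(CloudWatch metrics) in the new integration center.")
```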
No support tickets. No waiting. No dependency on our team.
What We Had to Build to Make This Real
Shipping the Migration Tool required building significant new infrastructure.
300+ new metrics. We analyzed the most common Datadog metrics and added hundreds of new metrics to groundcover to ensure coverage. CPU throttling metrics. Network connection states. Disk I/O breakdowns. If you're monitoring it in Datadog, we made sure we collect it.
groundcoverQL query engine. We built a unified query language that works across logs, metrics, and traces. This lets us translate Datadog queries automatically without losing functionality (there's a rough sketch of what that translation looks like just below).
New dashboard widgets. We added raw data widgets that display logs and traces directly in dashboards. We added code mode for advanced users who want to hand-tune queries beyond what the visual builder supports.
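To give a feel for what "translate Datadog queries automatically" means, here's a toy translation of one common query shape. groundcoverQL itself isn't shown; PromQL stands in as the target, the mapping table is a one-line example, and the regex only covers the simplest case of Datadog's syntax.

```python
# Purely illustrative: translate "agg:metric{scope} by {group}" into PromQL-style syntax.
import re

METRIC_MAP = {"kubernetes.memory.usage": "container_memory_usage_bytes"}  # example entry

DD_QUERY = re.compile(
    r"^(?P<agg>\w+):(?P<metric>[\w.]+)\{(?P<scope>[^}]*)\}(?: by \{(?P<by>[^}]*)\})?$"
)

def translate(dd_query: str) -> str:
    m = DD_QUERY.match(dd_query.strip())
    if not m:
        raise ValueError(f"Unsupported query shape: {dd_query!r} (flag for manual review)")
    metric = METRIC_MAP.get(m["metric"], m["metric"].replace(".", "_"))
    # Assumes key:value scope tags; "*" means no label filter.
    labels = ",".join(
        '{}="{}"'.format(*part.split(":", 1))
        for part in m["scope"].split(",") if part and part != "*"
    )
    group = f' by ({m["by"]})' if m["by"] else ""
    return f'{m["agg"]}{group}({metric}{{{labels}}})'

print(translate("avg:kubernetes.memory.usage{env:production} by {pod_name}"))
# -> avg by (pod_name)(container_memory_usage_bytes{env="production"})
```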
This wasn't a three-month side project. It was a major engineering investment. But it's core to our thesis: observability should be easy to adopt, not hard to leave.
Why We Built It This Way
We could have built a migration service instead of a migration feature.
Hire a team of solution engineers. Charge $100K per engagement. Make it a profit center. That's the industry standard.
But that would contradict everything groundcover stands for.
Our entire business model is based on eliminating the tradeoffs that legacy vendors force on customers. Volume-based pricing makes you choose between visibility and cost. We use node-based pricing so you don't have to choose. SaaS models make you send your data to vendor infrastructure. We use BYOC so your data stays in your VPC.
Migration friction is the same kind of forced tradeoff. It makes you choose between staying with an overpriced vendor or taking on months of risk and disruption.
We don't think you should have to choose.
So we built migration as a product feature. Fully automated. Self-service. Included at no extra cost for every groundcover customer.
And yes, our margins are still great. Because we're not paying to store our customers' data, we can afford to invest in tools that eliminate adoption friction instead of maximizing lock-in.
What This Means for the Market
Observability has been stuck in the same pattern for a decade. Vendors compete on features and UI. They add more dashboards, more integrations, more ML-powered anomaly detection. But they all share the same broken business model: charge more when customers use the product more.
That model creates perverse incentives. Your legacy observability vendor profits when you send more data. They have no incentive to help you reduce costs. They sell you "data management" tools to sample your logs or drop low-value metrics, but those tools just help you pay slightly less while staying locked in.
The business model is the problem. And migration friction is what keeps it alive.
When switching vendors takes 6 months and $200K, customers tolerate bad pricing. They tolerate sampling. They tolerate turning off logs in production because the bill got too high. They tolerate the slow boil.
The Migration Tool eliminates that tolerance.
If you can validate a new observability platform in minutes instead of months, you don't have to tolerate overpriced vendors anymore. If you can migrate dashboards automatically instead of rebuilding them manually, the switching cost drops to nearly zero. If you can see side-by-side proof that your observability coverage is intact, the risk disappears.
Vendor lock-in stops working when the locks are gone.
I don't think this stays unique to groundcover for long. Eventually, someone will build migration tools for moving between any two observability platforms. The industry will standardize. OpenTelemetry is already pushing toward common data formats. The next step is common migration paths.
But we're shipping first. And we're proving that you can build migration as a competitive advantage instead of a cost center.
What's Next
We're announcing the private preview of the groundcover Migration Tool today at KubeCon Atlanta, where we'll be demoing it at our booth and taking early-access signups.
General availability comes in December at AWS re:Invent.
We're starting with Datadog because that's where most of our inbound comes from. But the architecture is extensible. Now that the mapping and integration layers exist, adding a new source platform is 10x easier, which puts migrations from other legacy vendors within reach.
This isn't just about winning deals we would have lost. It's about changing what's possible in observability. For ten years, migration has been a reason to stay with legacy vendors. Now it's a reason to leave.
If you're on Datadog today and paying $2 million a year for observability you could get for $300K, you have a real option now. The math works. The migration is automated. The risk is gone.
There's a different way to do this.
And nothing is stopping you anymore.
Private preview starts November 10. Sign up at https://www.groundcover.com/product/migration.