Shahar Azulay
December 1, 2025

Over the last two years I have had the same conversation with every engineering leader building with LLMs. The energy is incredible. Teams launch agentic workflows, stretch context windows, connect new tools, ship features at impossible speed, and push AI straight into production paths.

Then the same moment arrives for everyone. Features slow down. Workflows break in the middle. Token costs spike without warning. Outputs become inconsistent. Performance becomes unpredictable.

And when that moment hits, the first question is always the same: what’s actually going on under the hood?

Until now, the answer has been: nobody really knows.

Today, that changes.

I’m excited to announce that groundcover LLM Observability now supports Amazon Bedrock, giving engineering and platform teams the same real-time visibility we already deliver for OpenAI and Anthropic. And just like everything we build, it requires no SDKs, no code changes, no instrumentation, and runs fully inside your environment through our BYOC model. groundcover also supports Bedrock AgentCore through our OpenTelemetry support.

This is the level of transparency AI systems should have had from day one.

Why Bedrock Observability Matters Right Now

One of the strengths of Amazon Bedrock is the way it treats enterprise data. When you tune a model, AWS creates a dedicated copy that never touches the shared foundation model. With PrivateLink, traffic stays inside your VPC. The responsibility model is clean. Data ownership is clear. Nothing leaks outside of your boundary.

That philosophy matches everything we built groundcover to deliver.

If teams want real observability for LLMs, the data must stay where the AI workloads run. That is exactly what our BYOC architecture guarantees. Telemetry never leaves your cloud. Sensitive prompts, responses, parameters, and tool inputs stay in your account. You control masking, retention, and access under your own policies.

But keeping data inside your boundary is not enough. Teams need complete visibility without asking developers to rewrite or wrap anything. This is where eBPF becomes essential. Running at the kernel layer gives us a direct view of every Bedrock interaction in real time. There are no agents in your code, no SDK drift, and no blind spots when services or pipelines evolve.
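
To make the zero-friction point concrete, here is the kind of call that becomes visible automatically. The sketch below is an ordinary Bedrock request through boto3's Converse API; the region, model ID, and prompt are illustrative, and nothing in it is groundcover-specific, because the capture happens at the kernel rather than in your code.

```python
# An ordinary Bedrock call with nothing groundcover-specific in it.
# The region, model ID, and prompt are illustrative.
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",
    messages=[{"role": "user", "content": [{"text": "Summarize our on-call runbook."}]}],
)

# eBPF observes the request and response at the kernel layer, so this
# code never changes when observability is added or upgraded.
print(response["output"]["message"]["content"][0]["text"])
```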

Bedrock gives you ownership of your data. BYOC gives you ownership of your visibility. eBPF gives you full coverage with zero friction. Together they create an architecture that supports AI in production with confidence, not guesswork.

AgentCore: Seeing the Logic Behind the Workflow

AgentCore isn’t just another way to call a model. It’s the layer where AI behaves like a system, evaluating branches, invoking tools, carrying state, and making decisions that impact downstream services. That shift creates a different kind of visibility problem. It’s no longer about the payload going into a model. It’s about understanding the reasoning path the agent followed.

That’s where groundcover extends the picture.

Through our OpenTelemetry-based support, we capture the structure of an AgentCore workflow as it executes. You see the sequence of steps, the tools invoked, the branches taken or skipped, and where the workflow paused, retried, or failed. Instead of trying to interpret a log trail, you get a clear timeline that reflects the actual logic the agent applied.
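
For a sense of what that looks like, here is a minimal sketch of an agent loop instrumented with the standard OpenTelemetry Python SDK. The collector endpoint, span names, and attributes are illustrative assumptions, not a fixed schema; point the exporter at whatever OTLP endpoint your deployment exposes.

```python
# A minimal sketch of emitting OpenTelemetry spans around agent steps.
# The OTLP endpoint and span/attribute names below are illustrative.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

provider = TracerProvider()
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://otel-collector:4317", insecure=True))
)
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("agent-workflow")

# Each step of the agent becomes a child span, so the branch taken,
# the tool invoked, and any retry or failure land on one timeline.
with tracer.start_as_current_span("agent.run") as run:
    run.set_attribute("agent.input", "find open incidents and draft a summary")

    with tracer.start_as_current_span("agent.plan"):
        pass  # model call that decides the next step

    with tracer.start_as_current_span("tool.search_incidents") as tool:
        tool.set_attribute("tool.name", "search_incidents")
        # tool invocation happens here

    with tracer.start_as_current_span("agent.respond"):
        pass  # final model call that produces the answer
```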

Bedrock gives teams a secure, scalable way to run agents. groundcover shows how those agents behave in practice, so you can operate them with confidence, not assumptions.

Where We’re Headed

LLM applications are shifting from single prompts into dynamic, agentic systems. They integrate with tools, data stores, product surfaces, and business logic. As these systems gain more autonomy, the need for deep visibility becomes critical. Teams cannot operate AI in production without understanding how it behaves, why it behaves that way, and what it costs.

Supporting Bedrock and AgentCore is another important milestone, proving that zero-instrumentation observability is the only architecture that truly supports modern AI. Our approach, pairing BYOC for unbreakable security and data residency with eBPF for 100% coverage and zero friction, brings us one step closer to the future of transparent AI systems. We are already working on more introspection, automation, and intelligence across all providers and workflows. The future of AI is not only stronger models, but full visibility into the systems built around them.

See it Live at AWS re:Invent

If you want to see Bedrock and AgentCore observability in action, join me at AWS re:Invent for my talk: “Tracing the Untraceable: Full-Stack Observability for LLMs and Agents” and visit us at booth 1732.

The black box era of AI is ending. And I’m excited to show you what’s next.
