# AI Observability

{% hint style="info" %}
AI Observability requires an up-to-date sensor and backend. See [Installation & Updating](https://docs.groundcover.com/getting-started/installation-and-updating).
{% endhint %}

## Overview

AI Observability gives you full visibility into how your services use AI — models, cost, latency, prompts, agent behavior, and tool execution. All data stays in your infrastructure with [BYOC (Bring Your Own Cloud)](https://docs.groundcover.com/architecture/byoc); groundcover never processes your AI data outside your environment.

groundcover captures AI telemetry from two sources: **eBPF auto-detection** for immediate per-call visibility with zero code changes, and **OpenTelemetry instrumentation** for the full agent-level picture.

If your sensor is up to date and your services call a supported provider, you already have data. Open [AI Observability](https://app.groundcover.com/ai-observability) in your console to see what's there.

For instrumentation setup, privacy controls, and troubleshooting, see [Using AI Observability](https://docs.groundcover.com/use-groundcover/ai-observability).

***

## Two Levels of Visibility

eBPF captures every AI API call automatically — model, tokens, cost, latency, and full prompt/response content, with zero code changes. The sensor auto-detects **OpenAI**, **Anthropic**, and **AWS Bedrock** traffic from anything running in your monitored environment: production services, staging, CI/CD pipelines, development pods. If a process runs on a monitored node and makes an HTTPS call to a supported provider, groundcover captures it.

{% hint style="info" %}
For providers not yet auto-detected, use [SDK instrumentation](https://docs.groundcover.com/use-groundcover/ai-observability#sdk-instrumentation) to send GenAI traces directly. [Contact us](https://www.groundcover.com/contact) if you need a specific provider — eBPF support is extended based on customer requests.
{% endhint %}
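To sketch what an SDK-emitted GenAI trace carries, the attribute keys below follow the OpenTelemetry GenAI semantic conventions (`gen_ai.*`); the helper function and example values are illustrative assumptions, not part of groundcover's API:

```python
# Illustrative only: assembles the attribute set an OpenTelemetry GenAI
# span would carry for one LLM call. The gen_ai.* keys follow the OTel
# GenAI semantic conventions; the helper name and values are hypothetical.

def genai_span_attributes(provider, model, input_tokens, output_tokens):
    """Build gen_ai.* span attributes for a single LLM call."""
    return {
        "gen_ai.system": provider,                  # e.g. "openai", "anthropic"
        "gen_ai.operation.name": "chat",            # the GenAI operation type
        "gen_ai.request.model": model,              # model requested by the caller
        "gen_ai.usage.input_tokens": input_tokens,  # prompt-side token count
        "gen_ai.usage.output_tokens": output_tokens,
    }

attrs = genai_span_attributes("openai", "gpt-4o", 1200, 350)
```

Any OpenTelemetry SDK can attach attributes like these to a span and export them to your groundcover endpoint.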

eBPF gives you the calls. To see the full picture — which agent triggered which call, how tool results shaped the next prompt, how a multi-step workflow reasons from start to finish — add OpenTelemetry instrumentation. SDK spans give you trace trees, agent workflows, tool execution chains, and conversation threading: not just what your AI costs, but how it thinks.

When both are active, the same call may appear twice — once from eBPF and once from the SDK. Both are correct, and cost is not double-counted: when groundcover detects an SDK span for the same call, the eBPF span's cost and token data are excluded from aggregations.
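The de-duplication rule can be sketched as follows; the span fields and the `call_id` matching key are simplified assumptions for illustration, not groundcover internals:

```python
# Hypothetical sketch of the dedup rule: when an SDK span exists for the
# same call, the eBPF copy's cost is excluded from cost aggregations.

def total_cost(spans):
    """Sum cost across spans, skipping eBPF duplicates of SDK-traced calls."""
    sdk_calls = {s["call_id"] for s in spans if s["source"] == "sdk"}
    return sum(
        s["cost_usd"]
        for s in spans
        if not (s["source"] == "ebpf" and s["call_id"] in sdk_calls)
    )

spans = [
    {"call_id": "a1", "source": "ebpf", "cost_usd": 0.012},  # duplicate, skipped
    {"call_id": "a1", "source": "sdk",  "cost_usd": 0.012},  # kept
    {"call_id": "b2", "source": "ebpf", "cost_usd": 0.004},  # eBPF-only, kept
]
```

Only the SDK copy of call `a1` contributes to the total, so the aggregate is 0.016 rather than 0.028.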

See [Using AI Observability](https://docs.groundcover.com/use-groundcover/ai-observability) for setup instructions.

***

## Cost Tracking

groundcover calculates cost per AI span at ingestion time using a maintained model pricing table that updates as providers release new models. Each span includes input tokens, output tokens, and cached tokens.
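A simplified version of that per-span calculation might look like the sketch below; the table layout and per-million-token rates are placeholders, not groundcover's actual pricing data:

```python
# Hypothetical per-span cost calculation at ingestion time.
# Rates are illustrative USD-per-million-token placeholders.

PRICING = {
    "gpt-4o": {"input": 2.50, "output": 10.00, "cached": 1.25},
}

def span_cost_usd(model, input_tokens, output_tokens, cached_tokens=0):
    """Price one AI span; cached input tokens bill at the cached rate."""
    rates = PRICING[model]
    uncached = input_tokens - cached_tokens
    return (
        uncached * rates["input"]
        + cached_tokens * rates["cached"]
        + output_tokens * rates["output"]
    ) / 1_000_000
```

For example, a call with 1,000 input tokens (400 of them cached) and 200 output tokens prices out as (600 × 2.50 + 400 × 1.25 + 200 × 10.00) / 1,000,000 = $0.004.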

By default, every AI span is stored — no sampling, no dropping. GenAI calls are high-value and low-volume compared to typical service traffic. Every call matters for cost analysis and debugging. If you need to disable GenAI storage entirely, see [Privacy Controls](https://docs.groundcover.com/use-groundcover/ai-observability/privacy-controls#disable-genai-data-collection).

Cost and token data is available for filtering and sorting in the span list — find your most expensive calls by model, service, or agent. See [Example Queries](https://docs.groundcover.com/use-groundcover/ai-observability/example-queries) for cost analysis patterns.
