
Log and Trace Correlation

Last updated 20 days ago

groundcover enables seamless correlation between logs and traces, giving you deep observability across your applications and infrastructure. This powerful link lets you jump from a trace to the exact logs emitted during its execution—or from a log line back to the root trace context—so you can understand issues faster and troubleshoot more effectively.

Why It Matters

Correlating logs and traces enables:

  • Faster root cause analysis: Quickly identify where and why something went wrong.

  • Streamlined debugging: No need to align timestamps or track request paths manually.

  • Actionable alerts: Jump directly from an alert to relevant logs and traces in a single click.

How It Works

groundcover does not automatically inject trace context into logs. It relies on your application to include the trace information—most commonly the trace_id—as part of your log payload.

Once logs and traces are ingested, groundcover correlates them using the shared trace_id field. This allows you to:

  • View logs emitted during a specific trace execution.

  • Navigate from a log line to its originating trace, assuming a shared trace context exists.
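As a minimal sketch, this is roughly what a correlatable structured log payload looks like. The field values and the surrounding service are illustrative; the only requirement is that the log carries the same trace_id as the trace it belongs to:

```python
import json

# Hypothetical JSON log line emitted by an instrumented service.
# groundcover matches it to the trace sharing the same trace_id.
log_line = {
    "timestamp": "2025-01-01T12:00:00Z",
    "level": "error",
    "message": "payment failed",
    "trace_id": "4bf92f3577b34da6a3ce929d0e0e4736",  # 32 hex chars (W3C trace ID)
    "span_id": "00f067aa0ba902b7",
}
print(json.dumps(log_line))
```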

Log - Trace Correlation in Practice

When viewing a log that contains a trace_id (see screenshot), you can enable correlation by toggling "Correlation by Trace ID".

If a matching trace is found, the relevant spans will be displayed.

Keep in mind that trace sampling can impact correlation. If a particular trace wasn’t sampled and therefore not ingested into groundcover, it won’t be available for correlation—even if the log contains the correct trace_id.

To dive deeper into a trace, click “View in Traces” to open the Waterfall view on the Traces page.

From the trace view, you can also jump back to the relevant logs by switching to the “Logs” tab.

Prerequisites

  1. Instrument your services with the OpenTelemetry or Datadog SDK to generate distributed traces.

  2. Configure your logging framework to extract the current trace context and include it in each log line.

  3. Prefer structured logging formats like JSON for clean parsing and visualization.

  4. Standardize field names like trace_id and span_id across services to simplify correlation. See below for the valid field names you can use to include the trace_id in your logs.
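The steps above can be sketched with Python's standard library alone. This is an illustrative example, not groundcover's implementation: the checkout logger name is hypothetical, and the contextvars plumbing stands in for a real tracing SDK (OpenTelemetry or Datadog), which would populate the current trace context for you.

```python
import contextvars
import json
import logging
import sys

# Hypothetical context variable; in a real service your tracing SDK
# sets the active trace context when a span starts.
current_trace_id = contextvars.ContextVar("current_trace_id", default="")

class JsonFormatter(logging.Formatter):
    """Emit each log record as a JSON line that includes trace_id."""

    def format(self, record):
        payload = {
            "level": record.levelname,
            "message": record.getMessage(),
            "trace_id": current_trace_id.get(),  # shared field used for correlation
        }
        return json.dumps(payload)

logger = logging.getLogger("checkout")  # hypothetical service name
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# Simulate the tracing SDK having started a span:
current_trace_id.set("4bf92f3577b34da6a3ce929d0e0e4736")
logger.info("order placed")
```

Every log line this logger emits now carries the active trace_id, so groundcover can link it to the matching trace once both are ingested.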

Valid trace_id Fields in Logs

You can use the following suffixes for your trace_id field names: trace_id, trace-id, traceid, trace.id.

This means any field name that ends with one of these suffixes, such as x_trace_id or my-traceid, will be recognized as a valid trace_id field in your logs.

Field names are case-insensitive, so something like CaPsLock-Trace.iD is also considered valid.
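As a quick sketch of the matching rule described above (case-insensitive, suffix-based), a check equivalent to it could look like this; the function name is illustrative, not part of groundcover's API:

```python
# Recognized trace_id suffixes, per the list above.
VALID_SUFFIXES = ("trace_id", "trace-id", "traceid", "trace.id")

def is_trace_id_field(name: str) -> bool:
    """Return True if a log field name would be treated as a trace_id field."""
    # Matching is case-insensitive and applies to the end of the field name.
    return name.lower().endswith(VALID_SUFFIXES)

print(is_trace_id_field("x_trace_id"))         # True
print(is_trace_id_field("my-traceid"))         # True
print(is_trace_id_field("CaPsLock-Trace.iD"))  # True
print(is_trace_id_field("span_id"))            # False
```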

Log - Trace Correlation in groundcover
Waterfall view of a trace correlated from a log