
Traces

Last updated 15 days ago

groundcover can fully ingest traces generated by DataDog APM, displaying them natively in the platform. The result is a seamless experience that combines eBPF and DataDog traces for even deeper insight into your applications.

Once ingested, DataDog traces will natively appear as Distributed Traces in the platform.

There are two ways to ingest DataDog traces into groundcover:

  1. Dual Shipping from the DataDog agent - to DataDog + groundcover's endpoint

  2. Redirecting the DataDog SDK to send traces to groundcover's endpoint

Finding the groundcover sensor service endpoint

Both methods - Dual Shipping and SDK redirection - require locating the groundcover endpoint to ship the traces to. Use the instructions here to locate the endpoint for the sensor service, referenced below as {GROUNDCOVER_SENSOR_ENDPOINT}.

Dual Shipping from the DataDog agent

This method sends traces to both DataDog and groundcover, and relies on a running DataDog agent.

It's mostly useful when you wish to see how DataDog traces look in groundcover, while still sending them to DataDog as well.

Configure the DataDog agent for Dual Shipping using the following environment variable. Note that "apikey" should be left as is, since ingestion into groundcover does not require an API key.

datadog:
  env:
    - name: "DD_APM_ADDITIONAL_ENDPOINTS"
      value: "{\"http://{GROUNDCOVER_SENSOR_ENDPOINT}:8126\": [\"apikey\"]}"

Or, as a plain environment variable on the agent process:

DD_APM_ADDITIONAL_ENDPOINTS='{"http://{GROUNDCOVER_SENSOR_ENDPOINT}:8126": ["apikey"]}'
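The value of DD_APM_ADDITIONAL_ENDPOINTS is a JSON object mapping each additional intake URL to a list of API keys. As a minimal sketch (the sensor hostname below is a hypothetical placeholder), you can build and sanity-check a correctly escaped value like this:

```python
import json

# Hypothetical placeholder - replace with your actual sensor service endpoint.
sensor_endpoint = "groundcover-sensor.groundcover.svc.cluster.local"

# Map each additional intake URL to a list of API keys; groundcover
# ignores the key, so the literal string "apikey" is enough.
additional_endpoints = {f"http://{sensor_endpoint}:8126": ["apikey"]}

# Serialize to the JSON string the DataDog agent expects.
value = json.dumps(additional_endpoints)
print(value)

# Round-trip to confirm the string is valid JSON for the agent to parse.
assert json.loads(value) == additional_endpoints
```

This mirrors the quoting in the Helm values above: the inner quotes must survive into the final environment variable value, which is why they are escaped there.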

Redirecting the SDKs traces endpoint

This method redirects the DataDog SDKs to send traces to groundcover directly, without requiring a running DataDog agent.

It allows taking advantage of existing instrumentation without the need for maintaining a running DataDog agent.

Apply the following environment variables to your deployment to redirect the traffic to groundcover's endpoint for ingestion:

env:
  - name: DD_TRACE_AGENT_URL
    value: "http://{GROUNDCOVER_SENSOR_ENDPOINT}:8126"

Keep in mind: using this method stops sending traces to the DataDog agent, meaning they will no longer appear in DataDog's platform.
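Once DD_TRACE_AGENT_URL is redirected, the SDKs ship spans to that address using the DataDog agent's trace intake. As an illustrative sketch only (not groundcover's or DataDog's official client code; the endpoint hostname is a hypothetical placeholder, and it assumes the agent's v0.4 JSON trace API), here is roughly what a single span on that wire format looks like:

```python
import json
import time
import urllib.request

# Hypothetical placeholder - this is where DD_TRACE_AGENT_URL would point.
AGENT_URL = "http://groundcover-sensor.groundcover.svc.cluster.local:8126"

def build_span(service: str, name: str, duration_ns: int) -> dict:
    """Build a minimal span dict in the DataDog v0.4 trace wire shape."""
    now_ns = time.time_ns()
    return {
        "trace_id": 12345,          # illustrative IDs only
        "span_id": 67890,
        "service": service,
        "name": name,
        "resource": name,
        "start": now_ns - duration_ns,  # start time in nanoseconds
        "duration": duration_ns,        # duration in nanoseconds
        "error": 0,
        "meta": {},
        "metrics": {},
        "type": "web",
    }

def send_trace(span: dict):
    """PUT a single-span trace (a list of traces, each a list of spans)."""
    body = json.dumps([[span]]).encode()
    req = urllib.request.Request(
        f"{AGENT_URL}/v0.4/traces",
        data=body,
        method="PUT",
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)

span = build_span("checkout", "GET /cart", 5_000_000)
# send_trace(span)  # uncomment inside the cluster to actually ship the span
```

In practice the SDKs handle all of this for you; the point is only that redirecting DD_TRACE_AGENT_URL changes the destination host, not the payload format.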

Sampling Incoming Traces

As of December 1st, 2024, the default sampling rate is 5%. See below for how to control this value.

If sampling is not done by the DataDog SDK, it can be convenient to sample a ratio of the incoming traces in groundcover.

groundcover's sampling does not take into account sampling done in earlier stages (e.g. the SDK or collectors), so it's recommended to sample at a single point in the pipeline.

To configure sampling, use the following values:

agent:
  sensor:
    apmIngestor:
      dataDog:
        samplingRatio: 0.05

The samplingRatio field is a fraction in the range 0-1. For example, 0.1 means 10% of the incoming traces will be sampled and stored in groundcover.
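This is not groundcover's actual implementation, but ratio-based head sampling can be sketched as a deterministic decision per trace ID: hash the ID into [0, 1) and keep the trace when it falls below the configured ratio.

```python
import hashlib

def should_sample(trace_id: int, sampling_ratio: float) -> bool:
    """Deterministic head sampling: hash the trace ID into [0, 1)
    and keep the trace when it falls below the configured ratio."""
    digest = hashlib.sha256(str(trace_id).encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") / 2**64
    return bucket < sampling_ratio

# samplingRatio: 1 keeps everything, 0 drops everything.
assert should_sample(42, 1.0)
assert not should_sample(42, 0.0)

# With samplingRatio 0.05, roughly 5% of trace IDs are kept.
kept = sum(should_sample(tid, 0.05) for tid in range(100_000))
print(kept / 100_000)  # approximately 0.05
```

A deterministic, ID-based decision has the nice property that all spans of the same trace get the same verdict, which keeps sampled traces complete.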

Configuring 100% Sampling Ratio (No Sampling)

Use the values below to disable sampling and ingest 100% of the incoming traces.

agent:
  sensor:
    apmIngestor:
      dataDog:
        samplingRatio: 1
