Log Parsing with OpenTelemetry Pipelines

Configure custom log transformations in groundcover using OpenTelemetry Transformation Language (OTTL). Tailor your logs with structured pipelines for parsing, filtering, and enriching data before ingestion.

Overview

groundcover supports the configuration of log pipelines using OpenTelemetry Transformation Language (OTTL) to process and customize your logs. With OTTL, you gain full flexibility to transform data as it flows into the platform.

Transforming Data with OTTL

groundcover uses OTTL to enrich and shape log data inside your monitored environments. OTTL pipelines give you a structured way to parse, filter, and modify logs before ingestion.

Each pipeline is made up of transformation steps—each step defines a specific operation (like parsing JSON, extracting key-value pairs, or modifying attributes). You can configure these transformations directly in your groundcover deployment.
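
For illustration, a minimal step sequence might parse a JSON body and add a static attribute. The attribute names and values here are hypothetical, not part of the product:

statements:
  - 'merge_maps(attributes, ParseJSON(body), "insert")'   # parse the JSON body into top-level attributes
  - 'set(attributes["team"], "payments")'                 # add a static attribute (hypothetical value)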

To test your logic before going live, we recommend using our Parsing Playground (click the top right corner when viewing a specific log).

Required Attributes

To define an OTTL pipeline, make sure to include the following fields:

  • statements – List of transformations to apply.

  • conditions – Logic for when the rule should trigger.

  • statementsErrorMode – How to handle errors (e.g., skip, fail).

  • conditionLogicOperator – Used when you define multiple conditions.
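
Putting the four fields together, a single rule might look like the sketch below. The rule name and workload are illustrative, and the "or" and "ignore" values are assumptions; confirm the supported values for conditionLogicOperator and statementsErrorMode in your deployment.

ottlRules:
  - ruleName: "mask-gateway-logs"        # hypothetical rule name
    conditions:
      - 'workload == "api-gateway"'      # hypothetical workload
      - 'level == "info"'
    conditionLogicOperator: "or"         # assumed value; combines the two conditions above
    statementsErrorMode: "ignore"        # assumed value; ignore statement errors and continue
    statements:
      - 'set(attributes["source"], "gateway")'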

Deploying OTTL in groundcover

Rules are defined as a list of steps that are executed in order. To configure them, define your rules, add them to your values file, and redeploy.

Each rule must have a unique ruleName.

Example Structure

The rules are nested under logs.client.ottlRules in your values file:

logs:
  client:
    ottlRules:
      - ruleName: "rule1"
        conditions:
          - 'workload == "service1" or workload == "service2"'
        statements:
          - statement1
          - statement2
      - ruleName: "rule2"
        conditions:
          - 'level == "debug" or container_name == "test"'
        statements:
          - statement1
          - statement2

Setting Conditions

Use conditions to apply transformations only when specific attributes match. This ensures your pipeline runs efficiently and only on relevant logs.

Common fields you can use:

  • workload – Name of the service or app.

  • container_name – Container where the log originated.

  • level – Log severity (e.g., info, error).

  • format – Log format (e.g., JSON, CLF, unknown).
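
For example, a rule can be limited to error logs coming from a single workload. The workload name below is hypothetical, and the "and" value is an assumption about how conditionLogicOperator combines multiple conditions:

conditions:
  - 'workload == "checkout"'
  - 'level == "error"'
conditionLogicOperator: "and"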

Writing OTTL Statements

Some commonly used functions in groundcover:

  • ExtractGrokPatterns

  • ParseJSON

  • replace_pattern

  • delete_key

  • ToLowerCase

  • Concat

  • ParseKeyValue
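
As a sketch of how these functions combine (the attribute names are hypothetical), a statement list can mask a value, build a combined label, and drop a bulky field:

statements:
  - 'replace_pattern(attributes["email"], "@.*", "@***")'                               # mask the email domain
  - 'set(attributes["owner"], Concat([attributes["team"], attributes["user"]], "/"))'   # build a combined label
  - 'delete_key(attributes, "raw_payload")'                                             # drop a field we do not need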

Examples

Simple GROK Pattern Extraction

Log

{
  "body": "2025-03-23 10:30:45 INFO User login attempt from 192.168.1.100"
}

Statements

- 'set(cache, ExtractGrokPatterns(body, "^%{TIMESTAMP_ISO8601:timestamp}%{SPACE}%{LOGLEVEL:level}%{SPACE}User login attempt from %{IP:source_ip}"))'
- 'merge_maps(attributes, cache, "insert")'

Results

{
  "timestamp": "2025-03-23 10:30:45",
  "level": "INFO",
  "source_ip": "192.168.1.100"
}
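
Wrapped into a complete rule in the values file, the same statements can be scoped to the workload that emits these logs (the name auth-service is hypothetical):

logs:
  client:
    ottlRules:
      - ruleName: "parse-login-attempts"
        conditions:
          - 'workload == "auth-service"'
        statements:
          - 'set(cache, ExtractGrokPatterns(body, "^%{TIMESTAMP_ISO8601:timestamp}%{SPACE}%{LOGLEVEL:level}%{SPACE}User login attempt from %{IP:source_ip}"))'
          - 'merge_maps(attributes, cache, "insert")'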

Grok + Replace + ParseKeyValue

Log

{
  "body": "2025-03-23 15:20:12,512 - EventProcessor - DEBUG - Completed event processing [analyzer_name=disk-space-check] [node_id=7f5e9aa8412d4c0003a7b2c5] [service_id=813dd10298f77700029d54e3] [sensor_id=3] [tracking_code=19fd5b6e72c7e94088a9ff3d] [log_id=b'67acfe0c92d43000'] [instance_id=microservice-7894563210]"
}

Statements

- 'set(cache, ExtractGrokPatterns(body, "^%{TIMESTAMP_ISO8601:timestamp}%{SPACE}-%{SPACE}%{NOTSPACE}%{SPACE}%{NOTSPACE}%{SPACE}%{LOGLEVEL:level}%{DATA}(?<kv>\\[%{GREEDYDATA})"))'
- 'replace_pattern(cache["kv"], "[\\[\\]]", "")'
- 'merge_maps(attributes, ParseKeyValue(cache["kv"]), "insert")'
- 'set(attributes["timestamp"], cache["timestamp"])'

Results

{
  "instance_id": "microservice-7894563210",
  "analyzer_name": "disk-space-check",
  "node_id": "7f5e9aa8412d4c0003a7b2c5",
  "service_id": "813dd10298f77700029d54e3",
  "sensor_id": "3",
  "tracking_code": "19fd5b6e72c7e94088a9ff3d",
  "log_id": "b67acfe0c92d43000",
  "timestamp": "2025-03-23 15:20:12,512"
}

Grok + ToLowerCase + ParseJSON

Log

{
  "body": "2025-03-23 14:55:12,456 ERROR {\"event\":\"user_login\",\"user_id\":12345,\"status\":\"failed\",\"ip\":\"192.168.1.10\"}"
}

Statements

- 'set(cache, ExtractGrokPatterns(body, "^%{TIMESTAMP_ISO8601:timestamp}%{SPACE}%{LOGLEVEL:level}%{SPACE}(?<json_body>\\{.*\\})"))'
- 'set(attributes["timestamp"], cache["timestamp"])'
- 'set(attributes["level"], ToLowerCase(cache["level"]))'
- 'set(cache["parsed_json"], ParseJSON(cache["json_body"]))'
- 'merge_maps(attributes, cache["parsed_json"], "insert")'

Results

{
  "timestamp": "2025-03-23 14:55:12,456",
  "level": "error",
  "event": "user_login",
  "user_id": 12345,
  "status": "failed",
  "ip": "192.168.1.10"
}