
Log Patterns

Log Patterns help you cut through log noise by grouping similar logs based on structure. Instead of digging through thousands of raw lines, you get a clean, high-level view of what’s actually going on.


Overview

Log Patterns in groundcover help you make sense of massive log volumes by grouping logs with similar structure. Instead of showing every log line, the platform automatically extracts the static skeleton of each log and replaces dynamic values like timestamps, user IDs, or error codes with smart tokens.

This lets you:

  • Cut through the noise

  • Spot recurring behaviors

  • Investigate anomalies faster

How It Works

groundcover automatically detects the variable parts of each log line and replaces them with placeholders to surface the repeating structure.

| Placeholder | Description | Example |
| --- | --- | --- |
| <TS> | Timestamp | 2025-03-31T17:00:00Z |
| <N> | Number | 404, 123 |
| <IP4> | IPv4 address | 192.168.0.1 |
| <*> | Wildcard (text, path, etc.) | /api/v1/users/42 |
| <V> | Version | v0.32.0 |
| <TM> | Time measure | 5.5ms |

Requirements

Log Patterns are collected directly on the sensor.

To see patterns in your environment, make sure your groundcover sensor is upgraded to version 1.9.251 or later.

Example

Raw log:

192.168.0.1 - - [30/Mar/2025:12:00:01 +0000] "GET /api/v1/users/123 HTTP/1.1" 200

Patterned:

<IP4> - - [<TS>] "<*> HTTP/<N>.<N>" <N>
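Conceptually, this tokenization can be sketched as a series of ordered regex substitutions. The rules below are an illustrative approximation only, not groundcover's actual sensor logic:

```python
import re

# Ordered substitution rules approximating the placeholder table above.
# These regexes are illustrative assumptions, not groundcover's parser.
RULES = [
    # [30/Mar/2025:12:00:01 +0000] -> [<TS>]
    (re.compile(r'\[\d{2}/\w{3}/\d{4}:\d{2}:\d{2}:\d{2} [+-]\d{4}\]'), '[<TS>]'),
    # 192.168.0.1 -> <IP4>
    (re.compile(r'\b\d{1,3}(?:\.\d{1,3}){3}\b'), '<IP4>'),
    # "GET /api/v1/users/123 HTTP/1.1" -> "<*> HTTP/<N>.<N>"
    (re.compile(r'"[A-Z]+ \S+ HTTP/\d\.\d"'), '"<*> HTTP/<N>.<N>"'),
    # Remaining standalone numbers -> <N>
    (re.compile(r'\b\d+\b'), '<N>'),
]

def to_pattern(line: str) -> str:
    """Collapse a raw log line into its structural pattern."""
    for rx, token in RULES:
        line = rx.sub(token, line)
    return line

raw = '192.168.0.1 - - [30/Mar/2025:12:00:01 +0000] "GET /api/v1/users/123 HTTP/1.1" 200'
print(to_pattern(raw))
# <IP4> - - [<TS>] "<*> HTTP/<N>.<N>" <N>
```

Rule order matters: the timestamp and request-line rules run before the generic number rule so that digits inside those fields are not tokenized individually.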

Viewing Patterns

  1. Go to the Logs section.

  2. Switch from Records to Patterns using the toggle at the top.

  3. Patterns are grouped and sorted by frequency. You’ll see:

    • Log level (Error, Info, etc.)

    • Count and percentage of total logs

    • Pattern’s trend over time

    • Workload origin

    • The structured pattern itself

Value Distribution

You can hover over any token in a pattern to preview the distribution of values for that specific token. This provides a breakdown of sample values and their approximate frequency, based on sampled log data.

This is especially useful when investigating common IPs, error codes, user identifiers, or other dynamic fields, helping you understand which values dominate or stand out without drilling into individual logs.

For example, hovering over an <IP4> token will show a tooltip listing the most common IP addresses and their respective counts and percentages.
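Conceptually, the distribution in that tooltip is a frequency count over sampled token values. The sketch below illustrates the idea with a hypothetical sample; groundcover computes this internally from its own sampled data:

```python
import re
from collections import Counter

IP_RX = re.compile(r'\b\d{1,3}(?:\.\d{1,3}){3}\b')

# Hypothetical sampled log lines for illustration only.
sampled = [
    '10.0.0.1 GET /health 200',
    '10.0.0.1 GET /health 200',
    '10.0.0.2 GET /api 500',
]

def ip_distribution(lines):
    """Return each observed IP with its approximate share of samples, in percent."""
    counts = Counter(m.group() for line in lines for m in IP_RX.finditer(line))
    total = sum(counts.values())
    return {ip: round(100 * n / total, 1) for ip, n in counts.most_common()}

print(ip_distribution(sampled))
# {'10.0.0.1': 66.7, '10.0.0.2': 33.3}
```

Because the breakdown is computed over a sample rather than every log line, the percentages are approximate, which is usually sufficient for spotting dominant or outlier values.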

Investigating Patterns

  • Click a pattern: Filters the Logs view to only show matching entries.

  • Use filters: Narrow things down by workload, level, format, or custom fields.

  • Suppress patterns: Hide noisy templates like health checks to stay focused on what matters.

  • Export patterns: Use the three-dot menu to copy the pattern for further analysis or alert creation.