
May 2024


Customize data retention by specific log, trace, and K8s event criteria

Release Date: May 30, 2024

Affected Sections: Logs, Traces, K8s Events

Today we introduce the ability to customize your data retention policies for logs, traces, and Kubernetes events, based on specific properties.

This means that two datasets of the same kind can now have two different retention periods. As an example, you can choose to retain error logs from cluster X for 30 days, while retaining non-error logs from the same cluster for only 7 days.
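As a sketch of the idea, per-criteria retention can be modeled as an ordered list of rules where the first matching rule decides how long an entry is kept. The rule shapes, field names, and cluster names below are purely illustrative assumptions, not groundcover's actual retention configuration schema:

```python
from datetime import datetime, timedelta

# Illustrative sketch only: the matchers and field names here are
# hypothetical, not groundcover's actual retention configuration schema.
RETENTION_RULES = [
    # (matcher, retention period) - the first matching rule wins
    (lambda log: log["cluster"] == "cluster-x" and log["level"] == "error",
     timedelta(days=30)),
    (lambda log: log["cluster"] == "cluster-x",
     timedelta(days=7)),
]
DEFAULT_RETENTION = timedelta(days=14)

def retention_for(log: dict) -> timedelta:
    """Return the retention period of the first rule matching this log."""
    for matches, period in RETENTION_RULES:
        if matches(log):
            return period
    return DEFAULT_RETENTION

def should_keep(log: dict, now: datetime) -> bool:
    """Keep a log entry only while it is younger than its retention period."""
    return now - log["timestamp"] <= retention_for(log)
```

With these rules, a 20-day-old error log from cluster X is still kept, while a 20-day-old info log from the same cluster has already expired.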

This new capability offers complete flexibility and enhanced control over the management of your monitoring data.

Key Benefits

Cost optimization - gain full control over your data retention policies: adjust the retention periods of specific data at any time to reduce your storage costs.

Compliance requirements - align your data retention periods to your industry's requirements and/or organization's internal policy.

To learn more, check out our full details and configuration instructions.

These new capabilities are optimized for v1.7.98 or later. Learn how to update groundcover to the most recent version.

Consolidated view of all your data across multiple clusters

Release Date: May 27, 2024

Affected Sections: All

We are thrilled to announce a significant enhancement to our observability platform: the ability to select multiple clusters and investigate all their data in a single, consolidated view. This new capability allows you to simultaneously monitor as much of your environment as you choose, providing a more comprehensive picture and saving considerable time.

Key Benefits:

  1. Unified, holistic monitoring experience for better troubleshooting: Quickly identify and resolve issues that appear in multiple clusters, reducing mean time to resolution (MTTR) and ensuring smoother operations.

  2. Enhanced data analysis and insight correlation: Correlate data across clusters to surface patterns, dependencies, and performance bottlenecks that are not as easy to spot when monitoring each cluster individually.

Use cases:

  • Side-by-side comparison of DEV, STG, and PROD environments: Understand if an issue is present in one or more of your development phases by viewing your development, staging, and production environments all in one view.

  • Regional issue identification: Save time by quickly identifying issues that show up in multiple regions by monitoring regional clusters simultaneously.

Custom environment labels

In addition to selecting individual cluster names, you can now also create custom environment labels. These help you group together multiple clusters that share the same purpose or any other common denominator.

As an example, you could add the custom label "DEV" to all clusters that make up your dev environments.

Once custom environment labels are added, they appear in the drop-down menu. Selecting a label shows data for all the clusters that share that label.

Learn how to add custom environment labels.

Wildcard support and an optimized engine for logs search

Release Date: May 22, 2024

Affected Sections: Logs

We've made major enhancements to our Logs search engine that will improve your experience and shorten your investigation time!

Faster Searches

Our enhanced logs search engine retrieves log data faster and more efficiently, so you can pinpoint the information you need in a fraction of the time, boosting your productivity and streamlining your workflow.

Wildcard Support

This capability is available only to Enterprise users. Learn more about our paid plans.

We've introduced wildcard support in our search engine, making it easier than ever to find partial matches. This feature gives your searches more flexibility, helping you locate specific log entries even when you only have partial information. Whether you're troubleshooting an issue or analyzing patterns, wildcard queries are a powerful addition to your search toolbox. Read more about how to apply wildcards to your logs searches here.
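To illustrate what wildcard matching buys you, here is a minimal sketch using Python's standard fnmatch module, where `*` matches any run of characters; groundcover's own wildcard syntax may differ, and the log lines below are made up:

```python
from fnmatch import fnmatch

# Demonstration of wildcard-style matching over log lines; the sample
# lines are hypothetical and the syntax shown is Python's fnmatch,
# not necessarily groundcover's wildcard syntax.
logs = [
    "payment-service: connection timeout to db-primary",
    "payment-service: request completed in 12ms",
    "checkout-service: connection refused by cache",
]

def search(pattern: str, lines: list[str]) -> list[str]:
    """Return the lines matching the wildcard pattern ('*' = any chars)."""
    return [line for line in lines if fnmatch(line, pattern)]
```

A pattern like `*connection*` finds both connection-related lines above even though they come from different services, while `payment-service:*timeout*` narrows the match to a single service.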

Stay tuned for more exciting updates, and as always, we invite you to share your thoughts with us in our Slack community!

Log in to start using these new and improved logs search engine capabilities.

These new capabilities are optimized for v1.7.84 or later. Learn how to update groundcover to the most recent version.
