Log Management

Stream, store, and query your logs at any scale, for a fixed cost.

Overview

Our Log Management solution is built for high scale and fast query performance, so you can quickly and effectively analyze logs from all your cloud environments.

Gain context - Each log record is enriched with actionable context and correlated with relevant metrics and traces in a single view, so you can find what you’re looking for and troubleshoot faster.

Centralize to maximize - The groundcover platform can act as a limitless, centralized log management hub. Your subscription costs are completely unaffected by the volume of logs you choose to store or query; how much to keep is entirely up to you.

Collection

Seamless log collection with Alligator

groundcover ensures a seamless log collection experience with our proprietary eBPF sensor, Alligator, which automatically collects and aggregates logs in all formats - including JSON, plain text, NGINX logs, and more - without any configuration needed.

This sensor is deployed as a DaemonSet, running a single pod on each node within your Kubernetes cluster. This configuration enables the groundcover platform to automatically collect logs from all of your pods, across all namespaces in your cluster. This means that once you've installed groundcover, no further action is needed on your part for log collection. The logs collected by each Alligator instance are then channeled to the OTel Collector.
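
For illustration only, the general shape of a log-collecting DaemonSet looks like the sketch below. This is not groundcover's actual manifest - the names, image, and mounts are hypothetical - it only shows how the DaemonSet pattern places one log-reading pod on every node:

```yaml
# Illustrative DaemonSet sketch - NOT groundcover's actual manifest.
# The names, image, and volume mounts below are hypothetical.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-sensor
spec:
  selector:
    matchLabels:
      app: log-sensor
  template:
    metadata:
      labels:
        app: log-sensor
    spec:
      containers:
        - name: sensor
          image: example.io/log-sensor:latest   # hypothetical image
          volumeMounts:
            - name: varlogpods
              mountPath: /var/log/pods          # where the kubelet writes pod logs
              readOnly: true
      volumes:
        - name: varlogpods
          hostPath:
            path: /var/log/pods
```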

OTel Collector: A vendor-agnostic way to receive, process and export telemetry data.

Acting as the central processing hub, the OTel Collector is a vendor-agnostic tool that receives logs from various Alligator pods. It processes, enriches, and forwards the data into groundcover's ClickHouse database, where all log data from your cluster is securely stored.
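
As a rough sketch, a collector pipeline of this shape - receive, batch, export to ClickHouse - can be expressed in standard OTel Collector configuration as below. groundcover manages the real configuration for you, and the endpoint and table name here are hypothetical; the sketch only illustrates the receive-process-export flow:

```yaml
# Illustrative OTel Collector config - groundcover manages the real one.
# The endpoint and table name below are hypothetical.
receivers:
  otlp:
    protocols:
      grpc: {}
processors:
  batch: {}
exporters:
  clickhouse:
    endpoint: tcp://clickhouse:9000
    logs_table_name: otel_logs
service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [clickhouse]
```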

Logs Attributes

Logs Attributes enable advanced filtering capabilities and are currently supported for the following formats:

  • JSON

  • Common Log Format (CLF) - such as logs from NGINX and Kong

  • logfmt

groundcover automatically detects the format of these logs, extracting key:value pairs from the original log records as Attributes.

Each attribute can be added to your filters and search queries.

Example: filtering logs in a supported format by a request path field of "/status" looks as follows: @request.path:"/status". The full syntax can be found here.
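
To make this concrete, a JSON log line such as the following (field names invented for illustration) would yield one attribute per key, with nested keys flattened using dot notation:

```
{"level": "info", "request": {"path": "/status", "method": "GET"}}
```

This record could then be matched with attribute filters such as @request.path:"/status" or @request.method:"GET".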

Configuration

groundcover offers the flexibility to craft tailored collection filtering rules: you can set up filters and collect only the logs that are essential for your analysis, avoiding unnecessary data noise. For guidance on configuring your filters, explore our Customize Logs Collection section.
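
Since groundcover's pipelines build on Vector transforms (see Log Pipelines below), a collection filter can conceptually be expressed as a Vector filter transform like the sketch below. The input name and condition are hypothetical; the exact configuration is covered in the Customize Logs Collection section:

```yaml
# Conceptual sketch of a drop-filter as a Vector transform.
# The input name and condition here are hypothetical.
transforms:
  drop_debug_logs:
    type: filter
    inputs: ["collected_logs"]
    # Keep everything except debug-level records from a noisy namespace
    condition: '!(.level == "debug" && .namespace == "dev-sandbox")'
```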

You also have the option to define the retention period for your logs in the ClickHouse database. By default, logs are retained for 3 days. To adjust this period to your preferences, visit our Customize Retention section for instructions.
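
As a sketch only, in a Helm-based installation this typically takes the form of a values override along these lines - the key names below are hypothetical placeholders, and the Customize Retention section documents the exact keys:

```yaml
# Hypothetical values-override sketch - the real key names are in the
# Customize Retention section.
logs:
  retentionDays: 7   # default retention is 3 days
```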

Log Explorer

Once logs are collected and ingested, they are available within the groundcover platform in the Log Explorer, which is designed for quick searches and seamless exploration of your log data. Using the Log Explorer you can troubleshoot and explore your logs with advanced search capabilities and filters, all within a clear and fast interface.

Search and filter

The Log Explorer integrates dynamic filters and versatile search functionality that enable you to quickly and easily identify the right data. You can filter logs by one or more criteria, such as log level, workload, and namespace, and limit your search to a specific time range.
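
For example, combining the attribute syntax documented above with a free-text term might look like this (illustrative only; the search syntax guide linked below has the exact grammar):

```
@request.path:"/status" "timeout"
```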

Learn more about how to use our search syntax

Log Pipelines

groundcover natively supports setting up log pipelines using Vector transforms. This allows full flexibility in processing and manipulating collected logs - parsing additional patterns with regex, renaming attributes, and more.
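
For example, a Vector remap transform with a short VRL program can parse a pattern out of the raw message and rename an attribute. The input name and field names below are hypothetical; the guide linked below covers how pipelines plug into groundcover:

```yaml
# Illustrative Vector remap transform - input and field names are hypothetical.
transforms:
  enrich_latency:
    type: remap
    inputs: ["collected_logs"]
    source: |
      # Pull a latency value out of the raw message, e.g. "... latency=42 ..."
      parsed, err = parse_regex(.message, r'latency=(?P<latency_ms>\d+)')
      if err == null {
        .latency_ms = to_int(parsed.latency_ms) ?? 0
      }
      # Rename an attribute: move .wl into .workload_name
      .workload_name = del(.wl)
```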

Learn more about how to configure log pipelines
