Log Management

Stream, store, and query your logs at any scale, for a fixed cost.


Our Log Management solution is built for high scale and fast query performance so you can analyze logs quickly and effectively from all your cloud environments.

Gain context - Each log record is enriched with actionable context and correlated with relevant metrics and traces in a single view, so you can find what you're looking for and troubleshoot faster.

Centralize to maximize - The groundcover platform can act as a limitless, centralized log management hub. Your subscription costs are completely unaffected by the volume of logs you choose to store or query. It's entirely up to you to decide.


Seamless log collection with Alligator

groundcover ensures a seamless log collection experience with our proprietary eBPF sensor, Alligator, which automatically collects and aggregates logs in all formats - including JSON, plain text, NGINX logs, and more - all without any configuration needed.

This sensor is deployed as a DaemonSet, running a single pod on each node within your Kubernetes cluster. This configuration enables the groundcover platform to automatically collect logs from all of your pods, across all namespaces in your cluster. This means that once you've installed groundcover, no further action is needed on your part for log collection. The logs collected by each Alligator instance are then channeled to the OTel Collector.

OTel Collector: A vendor-agnostic way to receive, process and export telemetry data.

Acting as the central processing hub, the OTel Collector is a vendor-agnostic tool that receives logs from various Alligator pods. It processes, enriches, and forwards the data into groundcover's ClickHouse database, where all log data from your cluster is securely stored.

Log Attributes

Log Attributes enable advanced filtering capabilities and are currently supported for the following formats:

  • JSON

  • Common Log Format (CLF) - like those from NGINX and Kong

  • logfmt

groundcover automatically detects the format of these logs, extracting key:value pairs from the original log records as Attributes.

Each attribute can be added to your filters and search queries.

Example: filtering logs in a supported format by a request path field with the value "/status" looks as follows: @request.path:"/status". Syntax can be found here.
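To illustrate the idea, here is a minimal sketch of how format detection and key:value extraction could work. This is a hypothetical example, not groundcover's actual implementation; the function names are invented, and it handles only JSON and simple logfmt (quoted logfmt values containing spaces are ignored for brevity). Note how nested JSON keys are flattened with dots, matching the @request.path example above.

```python
import json

def flatten(record: dict, prefix: str = "") -> dict:
    """Flatten nested dicts into dot-separated keys, e.g. request.path."""
    out = {}
    for k, v in record.items():
        key = f"{prefix}.{k}" if prefix else str(k)
        if isinstance(v, dict):
            out.update(flatten(v, key))
        else:
            out[key] = v
    return out

def extract_attributes(line: str) -> dict:
    """Detect a log line's format and extract key:value attributes."""
    # Try JSON first: the whole line must parse as an object.
    try:
        parsed = json.loads(line)
        if isinstance(parsed, dict):
            return flatten(parsed)
    except ValueError:
        pass
    # Fall back to logfmt: space-separated key=value tokens.
    attrs = {}
    for token in line.split():
        if "=" in token:
            key, _, value = token.partition("=")
            attrs[key] = value.strip('"')
    return attrs

# A JSON record yields flattened attributes ...
print(extract_attributes('{"request": {"path": "/status"}, "level": "info"}'))
# -> {'request.path': '/status', 'level': 'info'}
# ... and a logfmt record yields its key=value pairs.
print(extract_attributes('level=error code=504'))
# -> {'level': 'error', 'code': '504'}
```

With attributes extracted this way, a query like @request.path:"/status" reduces to a simple lookup on the flattened key.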


groundcover offers the flexibility to craft tailored collection filtering rules: you can set up filters to collect only the logs that are essential for your analysis, avoiding unnecessary data noise. For guidance on configuring your filters, explore our Customize Logs Collection section.

You also have the option to define the retention period for your logs in the ClickHouse database. By default, logs are retained for 3 days. To adjust this period to your preferences, visit our Customize Retention section for instructions.

Log Explorer

Once logs are collected and ingested, they are available within the groundcover platform in the Log Explorer, which is designed for quick searches and seamless exploration of your log data. Using the Log Explorer you can troubleshoot and explore your logs with advanced search capabilities and filters, all within a clear and fast interface.

Search and filter

The Log Explorer integrates a dynamic filters panel and a versatile search bar so you can easily find the data you need. On the left, the filters panel lets you narrow logs down by selecting specific criteria, including log level, workload, namespace and more. The filters work in tandem with the search functionality, which supports both key:value pairs and free text search for comprehensive log exploration. You can set the desired time range using the time range picker in the top-right corner of the Log Explorer.

To search for logs, use the following syntax in the search bar:

Filters: Use key:value filters to narrow down your search. Note - for a single key, multiple filters act as an 'OR' condition, whereas different keys are combined with an 'AND' condition.

namespace:demo

Log Attribute: Include or exclude logs containing specific attributes for granular searches. Note - for a single key, multiple filters act as an 'OR' condition, whereas different keys are combined with an 'AND' condition.

@transaction.id:123 -@owner:main

Single term: A single word such as groundcover or POST. The exact term is matched; for instance, searching for ground will not match logs like: groundcover sensor is up.

GET

Wildcard: Search for partial matches by adding a wildcard to a single term.

GET ground*

Phrase search: Enclose terms within double quotes to find logs containing the exact phrase (case-insensitive). Note that wildcards are not supported in phrase search; if you use them, they are treated as literal asterisks.

"Execution started"

Exclusion: Add a - in front of any condition to apply exclusion logic (filter out) as opposed to inclusion logic; this applies to each distinct filter or text search.

-Key:Value -single_term -"free text"

-namespace:demo -"failed to fetch"
