
Architecture

groundcover is built the way we believe a modern APM should be built:
Fully distributed so it can scale efficiently.
Most data is processed as it travels through each node in the cluster, without being trucked out of the node or written to any storage.
Out-of-band, so it creates minimal overhead.
Our agent runs in separate pods, in a separate namespace, so it has no impact on the monitored applications and can be governed using native K8s primitives like resource limits and network policies.
Private and secure.
groundcover stores the data it collects in-cluster, inside your environment. Our default deployment is built so no data ever leaves your cluster.
Compatible with common Open Source solutions.
groundcover stores the metrics it creates in a Prometheus-compatible datasource. This datasource is deployed in-cluster, inside your environment and under your control, so it can be immediately added to your Grafana, enriching your current monitoring experience with highly granular insights about your applications and infrastructure.
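Because the metrics datasource speaks PromQL, you can query it with any standard Prometheus tooling, or point Grafana at the same URL as a Prometheus datasource. A minimal sketch, assuming groundcover is installed in a namespace called `groundcover` and the VictoriaMetrics service is named `victoria-metrics` on port 8428 (verify both with `kubectl get svc -n groundcover` before running):

```shell
# Forward the in-cluster VictoriaMetrics PromQL endpoint to localhost.
# Namespace, service name and port below are assumptions -- adjust to your install.
kubectl port-forward -n groundcover svc/victoria-metrics 8428:8428 &

# Issue a standard PromQL instant query, exactly as you would against Prometheus.
curl -s 'http://localhost:8428/api/v1/query?query=up'
```

The same `http://<service>:8428` address is what you would enter as the URL when adding a Prometheus-type datasource in Grafana.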

Overview

groundcover is built from these main components:
  • agent: groundcover's data collection and aggregation agent. The agent is deployed as a DaemonSet called alligator, running a single pod on each node in the cluster.
  • backend: groundcover's in-cluster data backend and its connector to the groundcover cloud. The backend contains the Pods that manage data aggregation, along with the data stores for the logs, metrics, traces and events created by the groundcover agent and stored in-cloud.
  • cloud: groundcover's control plane, responsible for routing data from the groundcover backend to the user's browser. The groundcover cloud also manages metadata around users, organizations, etc.
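The first two components live in your cluster, so after deploying you can inspect them with plain kubectl. A minimal sketch, assuming the release was installed into a namespace called `groundcover` (adjust `-n` if you chose a different one):

```shell
# The agent: the alligator DaemonSet schedules one pod per node.
kubectl get daemonset -n groundcover

# The in-cluster backend: aggregation and storage workloads.
kubectl get deployment,statefulset -n groundcover
```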

Detailed Architecture

Here's a detailed description of groundcover's unique in-cloud architecture:
(Diagram: groundcover's in-cloud architecture)
When deploying groundcover you will see the following components running:
  • alligator (DaemonSet): alligator is groundcover's agent. It is responsible for loading the eBPF program used to collect the data, aggregating the data (like creating high-cardinality metrics on the fly) and capturing raw data (like API traces) based on its internal logic.
  • shepherd (ReplicaSet): shepherd is responsible for the smart aggregation of all metrics reported by the different alligators in the cluster. shepherd controls the final metric cardinality before metrics are deposited into the time-series database, and manages mechanisms like series aging and report intervals.
  • k8s-watcher (ReplicaSet): k8s-watcher is responsible for gathering and clustering all Kubernetes metadata in the current cluster.
  • loki (StatefulSet): loki (Grafana Loki) is a log database, used for the persistent storage of all collected logs. loki supports LogQL as its query language.
  • tsdb (StatefulSet): timescale (TimescaleDB) is a PostgreSQL-based database, used for the persistent storage of all collected traces and events.
  • victoria-metrics (StatefulSet): VictoriaMetrics is a time-series database, used for the persistent storage of all collected metrics. VictoriaMetrics supports a PromQL query interface, so it can be treated as a Prometheus-compatible datasource.
  • victoria-metrics-agent (ReplicaSet): vmagent is a tiny agent that collects metrics from various sources. It is used to collect groundcover's internal metrics.
  • portal (ReplicaSet): portal is responsible for connecting the customer's cluster to groundcover's cloud backend, allowing permitted users to access their in-cloud data.
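Two quick checks against the components above: that the alligator DaemonSet has scheduled one pod per node, and that loki answers LogQL-style HTTP requests. A minimal sketch, assuming the `groundcover` namespace, an `app=alligator` pod label, and a `loki` service on port 3100 (verify the actual names with `kubectl get pods,svc -n groundcover --show-labels`):

```shell
# These two counts should match: nodes in the cluster vs. alligator pods.
kubectl get nodes --no-headers | wc -l
kubectl get pods -n groundcover -l app=alligator --no-headers | wc -l

# Loki exposes LogQL over HTTP; port-forward and list the indexed log labels.
kubectl port-forward -n groundcover svc/loki 3100:3100 &
curl -s 'http://localhost:3100/loki/api/v1/labels'
```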
Have any more questions? Join our Slack support channel.