Scraping Metrics in Kubernetes

Automatically scrape metrics from pods and other services

Scraping Pods using Prometheus Annotations

groundcover supports the standard Prometheus annotations that instruct scrapers to periodically fetch metrics from pods.

Our sensors automatically discover every pod carrying these annotations and periodically scrape the specified metrics path.

Each sensor scrapes metrics on its own node, reducing load and avoiding cross-AZ traffic.

Annotating Your Pods

Add the following annotations to Kubernetes pods that expose Prometheus metrics:

annotations:
  prometheus.io/scrape: "true"
  prometheus.io/port: "<port>"
  prometheus.io/path: "<metrics-path>" # Optional, defaults to /metrics
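
For example, in a Deployment the annotations must go on the pod template's metadata (not the Deployment's own metadata) so they land on the pods themselves. The app name, image, and port below are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app            # hypothetical workload name
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "8080"
        prometheus.io/path: "/metrics"
    spec:
      containers:
        - name: my-app
          image: my-app:latest
          ports:
            - containerPort: 8080
```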

Enable Custom Metrics Scraping

Enable the custom-metrics component in your helm values:

custom-metrics:
  enabled: true
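
Assuming the chart is installed as a helm release named groundcover in the groundcover namespace (release name, chart reference, and namespace are assumptions; adjust them to your installation), the override can be applied with:

```shell
# Apply the values override to an existing release.
# Release name, chart reference, namespace, and values file are placeholders.
helm upgrade groundcover groundcover/groundcover \
  --namespace groundcover \
  --reuse-values \
  -f custom-metrics-values.yaml
```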

Scraping Prometheus CRDs

groundcover supports scraping Prometheus metrics defined by CRDs such as PodMonitor and ServiceMonitor. This is achieved using the VictoriaMetrics Operator.

Enable VictoriaMetrics Operator

Deploy the victoria-metrics-operator before enabling the built-in vmagent.

Step 1 - Enable only the operator:

victoria-metrics-operator:
  enabled: true
# Do not enable builtinVMAgent at this stage.

Step 2 - Enable builtinVMAgent in a second rollout:

victoria-metrics-operator:
  enabled: true
  builtinVMAgent:
    enabled: true
    # By default, all ServiceMonitor, PodMonitor, PrometheusRule, and Probe CRDs
    # in all namespaces are scraped automatically.
    # To limit scope, override selectors as documented:
    # https://docs.victoriametrics.com/operator/resources/vmagent/#scraping
    # spec:
    #   selectAllByDefault: true
    #   serviceScrapeNamespaceSelector: {}
    #   podScrapeNamespaceSelector: {}
    #   podScrapeSelector: {}
    #   serviceScrapeSelector: {}
    #   nodeScrapeSelector: {}
    #   nodeScrapeNamespaceSelector: {}
    #   staticScrapeSelector: {}
    #   staticScrapeNamespaceSelector: {}
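
As a sketch of the scoped alternative, the selectors can restrict scraping to a single namespace. The exact nesting of spec under builtinVMAgent follows the commented example above, and the namespace name is a placeholder:

```yaml
victoria-metrics-operator:
  enabled: true
  builtinVMAgent:
    enabled: true
    spec:
      selectAllByDefault: false
      # Only match Pod/Service monitors in the "my-namespace" namespace
      podScrapeNamespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: my-namespace
      serviceScrapeNamespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: my-namespace
```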

ArgoCD Integration

If deploying monitoring CRDs via ArgoCD, add this override to prevent sync conflicts:

victoria-metrics-operator:
  operator:
    prometheus_converter_add_argocd_ignore_annotations: true

This instructs the operator to add ArgoCD ignore annotations for Prometheus CRDs.

Deploy Monitoring CRDs

The operator automatically discovers and scrapes any deployed Prometheus CRDs (ServiceMonitor, PodMonitor, PrometheusRule, Probe).

Example - PodMonitor

Create my-test-podmonitor.yaml:

apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: scrape-demo
spec:
  selector:
    matchLabels:
      # <your app labels>
  podMetricsEndpoints:
    - port: <metrics port>
  namespaceSelector:
    matchNames:
      - <namespace>
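
For instance, assuming an app labeled app: my-app that exposes a named port http-metrics in the default namespace (all three are hypothetical), a filled-in manifest would look like:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: scrape-demo
spec:
  selector:
    matchLabels:
      app: my-app          # hypothetical app label
  podMetricsEndpoints:
    - port: http-metrics   # must match a named containerPort on the pod
  namespaceSelector:
    matchNames:
      - default
```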

Apply it:

kubectl apply -n groundcover -f my-test-podmonitor.yaml

The vmagent will automatically reload and begin scraping the new target. Metrics should appear in groundcover’s Grafana dashboard soon after.
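
To check that the manifest was picked up, you can list both the applied CRD and its converted VictoriaMetrics counterpart (the converted resource name is an assumption based on the operator's conversion behavior):

```shell
# List the applied PodMonitor and the VMPodScrape the operator converts it into
kubectl get podmonitors -n groundcover
kubectl get vmpodscrapes -n groundcover
```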

VictoriaMetrics Operator also supports its own CRDs; see the VictoriaMetrics Operator documentation for details.

Setting up Additional Scrape Jobs

groundcover supports setting up additional Prometheus metric scraping via a built-in vmagent component (VictoriaMetrics). vmagent supports standard Prometheus scrape job configs.

The jobs are defined as an array under the custom-metrics section of the helm chart:

custom-metrics:
  enabled: true
  extraScrapeConfigs: []
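
As a minimal sketch, a static scrape job targeting a single in-cluster service looks like the following. The job name, service address, and port are placeholders:

```yaml
custom-metrics:
  enabled: true
  extraScrapeConfigs:
    - job_name: my-static-job      # hypothetical job name
      scrape_interval: 30s
      metrics_path: /metrics
      static_configs:
        - targets:
            - my-service.my-namespace.svc.cluster.local:9090
```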

Example - Scraping cAdvisor Metrics

By default, only key metrics are collected to limit metric cardinality. Edit the regex to include additional metrics as needed.

custom-metrics:
  enabled: true
  extraScrapeConfigs:
    - bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      job_name: kubernetes-nodes-cadvisor
      scrape_interval: 1m
      kubernetes_sd_configs:
        - role: node
      relabel_configs:
        - action: labelmap
          regex: __meta_kubernetes_node_label_(.+)
        - replacement: kubernetes.default.svc:443
          target_label: __address__
        - regex: (.+)
          replacement: /api/v1/nodes/$1/proxy/metrics/cadvisor
          source_labels:
            - __meta_kubernetes_node_name
          target_label: __metrics_path__
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        insecure_skip_verify: true
      metric_relabel_configs:
        - source_labels: [__name__]
          action: keep
          regex: '(container_cpu_usage_seconds_total|container_memory_working_set_bytes)'

Example - Scraping Full KSM Metrics

By default, groundcover exports a curated set of kube-state-metrics (KSM) metrics that power native dashboards and screens. These metrics are prefixed with groundcover_ to differentiate them from other KSM deployments.

To collect the complete set of kube-state-metrics (KSM) metrics in groundcover, follow these steps:

  1. Enable all KSM collectors in the ksm deployment.

  2. Enable custom metrics scraping.

  3. Add a scrape job for the KSM metrics.

kube-state-metrics:
  collectors:
    - certificatesigningrequests
    - configmaps
    - cronjobs
    - daemonsets
    - deployments
    - endpoints
    - horizontalpodautoscalers
    - ingresses
    - jobs
    - leases
    - limitranges
    - mutatingwebhookconfigurations
    - namespaces
    - networkpolicies
    - nodes
    - persistentvolumeclaims
    - persistentvolumes
    - poddisruptionbudgets
    - pods
    - replicasets
    - replicationcontrollers
    - resourcequotas
    - secrets
    - services
    - statefulsets
    - storageclasses
    - validatingwebhookconfigurations
    - volumeattachments

custom-metrics:
  enabled: true
  extraScrapeConfigs:
    - job_name: groundcover-kube-state-metrics
      honor_timestamps: true
      honor_labels: true
      scrape_interval: 1m
      scrape_timeout: 1m
      metrics_path: /metrics
      scheme: http
      static_configs:
      - targets:
        - groundcover-kube-state-metrics.groundcover.svc.cluster.local:8080

Metrics Cardinality Limits

groundcover limits the cardinality of the metrics collected from custom jobs.

Defaults:

remoteWrite.maxDailySeries: "5000000" # Max daily series churn
remoteWrite.maxHourlySeries: "1000000" # Max active series

To raise these limits:

custom-metrics:
  enabled: true
  extraArgs:
    remoteWrite.maxDailySeries: "<desired value>"
    remoteWrite.maxHourlySeries: "<desired value>"
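
For example, to double both defaults (the figures below are illustrative, not recommendations):

```yaml
custom-metrics:
  enabled: true
  extraArgs:
    remoteWrite.maxDailySeries: "10000000"  # daily series churn limit
    remoteWrite.maxHourlySeries: "2000000"  # active series limit
```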

Resource Configuration

Defaults:

custom-metrics:
  resources:
    limits:
      memory: 1024Mi
    requests:
      cpu: 100m
      memory: 256Mi

To override them:

custom-metrics:
  resources:
    limits:
      cpu: <>
      memory: <>
    requests:
      cpu: <>
      memory: <>
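
For example, to give the component more headroom than the defaults (figures are illustrative):

```yaml
custom-metrics:
  resources:
    limits:
      cpu: 500m
      memory: 2048Mi
    requests:
      cpu: 200m
      memory: 512Mi
```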

Common Questions

How often are my metrics scraped?

Metrics are scraped every 10 seconds unless a different interval is explicitly specified in the scrape job.
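
To use a different interval for a particular job, set scrape_interval in its scrape config (job name and target below are placeholders):

```yaml
custom-metrics:
  enabled: true
  extraScrapeConfigs:
    - job_name: slow-job         # hypothetical job name
      scrape_interval: 1m        # overrides the 10s default for this job only
      static_configs:
        - targets:
            - my-service.my-namespace.svc.cluster.local:9090
```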

Where can I find scraped metrics?

All metrics in the platform can be found in the metrics exploration page: https://app.groundcover.com/explore/data-explorer
