Multi-cluster deployment

By default, when you install groundcover on several clusters, each cluster will contain its own independent set of traces, metrics and logs databases (the groundcover backend). The following guide will walk you through how to install groundcover on multiple clusters while using a single, centralized instance of each of these databases.

Requirements

  • Helm and kubectl, configured against each target cluster
  • A groundcover API key (passed as global.groundcover_token in the commands below)
  • Network connectivity from the agent clusters to the ingresses exposed by the backend cluster

Architecture Overview

In this installation mode, we will deploy the following components separately:

  • Backend - a single, centralized installation that holds the traces, metrics and logs databases and exposes their ingestion endpoints through ingresses
  • Agent - a per-cluster installation that collects observability data and ships it to the centralized backend

Deploy the backend

  • Create the following backend-values.yaml file and fill the required values accordingly

global:
  agent:
    enabled: false

opentelemetry-collector:
  ingress:
    enabled: true
    annotations:
      ## ALB
      # kubernetes.io/ingress.class: alb
      # alb.ingress.kubernetes.io/healthcheck-port: 13133
      # alb.ingress.kubernetes.io/healthcheck-path: /health
      # alb.ingress.kubernetes.io/target-type: ip
      # alb.ingress.kubernetes.io/certificate-arn: {cert-arn}
    hosts:
      - host: {host}
        paths:
        - path: /health
          port: 13133
          pathType: Exact
        - path: /loki
          pathType: Prefix
          port: 3100
    additionalIngresses:
      - name: otlp-grpc
        annotations:
      ## ALB
      # kubernetes.io/ingress.class: alb
      # alb.ingress.kubernetes.io/healthcheck-port: 13133
      # alb.ingress.kubernetes.io/healthcheck-path: /health
      # alb.ingress.kubernetes.io/backend-protocol-version: GRPC
      # alb.ingress.kubernetes.io/target-type: ip
      # alb.ingress.kubernetes.io/certificate-arn: {cert-arn}
        hosts:
          - host: {host}
            paths:
              - path: /
                pathType: Prefix
                port: 4317

metrics-ingester:
  ingress:
    enabled: true
    annotations:
    ## ALB
    # kubernetes.io/ingress.class: alb
    # alb.ingress.kubernetes.io/healthcheck-path: /health
    # alb.ingress.kubernetes.io/target-type: ip
    # alb.ingress.kubernetes.io/certificate-arn: {cert-arn}
    hosts:
      - name: {host}
        path: /health
        port: http
      - name: {host}
        path: /api/v1/write
        port: http

  • Run the following installation command:

helm upgrade \
    -n groundcover \
    --create-namespace \
    -i groundcover-backend \
    groundcover/groundcover \
    --set clusterId=<cluster-name> \
    --set global.groundcover_token=<apikey> \
    -f backend-values.yaml

  • Obtain the addresses assigned to the ingresses you've just created:

kubectl get ingress -n groundcover

Now, make sure the ingresses you've just created are accessible from the clusters you intend to deploy the groundcover agent on.
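One way to verify this is to probe the /health paths configured on the ingresses from a shell with the same network access as the agent clusters. A minimal sketch, using hypothetical hostnames — substitute the addresses reported by kubectl get ingress:

```shell
#!/usr/bin/env sh
# Hypothetical hostnames -- replace with the addresses reported by
# `kubectl get ingress -n groundcover` on the backend cluster.
OTEL_HTTP_HOST="opentelemetry-collector-http-ingress.example.net"
METRICS_HOST="metrics-ingester-http-ingress.example.net"

check_health() {
  # Probe the /health path configured on the ingress; a successful HTTP
  # answer within the timeout means the ingress is routable from here.
  if curl -fsS --max-time 5 "https://$1/health" >/dev/null 2>&1; then
    echo "$1: reachable"
  else
    echo "$1: NOT reachable"
  fi
}

check_health "$OTEL_HTTP_HOST"
check_health "$METRICS_HOST"
```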

Deploy the agent

  • Create the following agent-values.yaml, and fill the required values accordingly

global:
  backend:
    enabled: false
  logs:
    # for example https://opentelemetry-collector-http-ingress.net
    overrideUrl: {opentelemetry-collector-http-ingress-endpoint}
  metrics:
    # for example https://metrics-ingester-http-ingress.net
    overrideUrl: {metrics-ingester-http-ingress-endpoint}
  datadogapm:
    overrideUrl: {opentelemetry-collector-http-ingress-endpoint}
  otlp:
    # for example opentelemetry-collector-grpc-ingress.net:443
    overrideGrpcURL: {opentelemetry-collector-grpc-ingress-endpoint}
    overrideHttpURL: {opentelemetry-collector-http-ingress-endpoint}

  • Run the following installation command:

helm upgrade \
    -n groundcover \
    --create-namespace \
    -i groundcover-agent \
    groundcover/groundcover \
    --set clusterId=<cluster-name> \
    --set global.groundcover_token=<apikey> \
    -f agent-values.yaml
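
When rolling the agent out to many clusters, it can help to generate agent-values.yaml from a script so the backend endpoints live in one place. A minimal sketch, using hypothetical endpoint values:

```shell
# Hypothetical endpoints -- substitute the ingress addresses obtained
# from the backend cluster.
OTEL_HTTP_URL="https://otel-http-ingress.example.net"
OTEL_GRPC_URL="otel-grpc-ingress.example.net:443"
METRICS_URL="https://metrics-ingester-ingress.example.net"

# Render the values file consumed by `helm upgrade -f agent-values.yaml`.
cat > agent-values.yaml <<EOF
global:
  backend:
    enabled: false
  logs:
    overrideUrl: ${OTEL_HTTP_URL}
  metrics:
    overrideUrl: ${METRICS_URL}
  datadogapm:
    overrideUrl: ${OTEL_HTTP_URL}
  otlp:
    overrideGrpcURL: ${OTEL_GRPC_URL}
    overrideHttpURL: ${OTEL_HTTP_URL}
EOF
```

The same script can then loop over kubectl contexts and run the installation command once per cluster, changing only clusterId.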

Optional - expose ClickHouse Ingress to your Grafana

In case you're interested in using groundcover's ClickHouse datasource in your own Grafana, follow these steps:

  • Add the following override to the backend values

clickhouse:
  ingress:
    enabled: true
    hostname: {host}
    ## ALB
    ingressClassName: "alb"
    path: /
    pathType: Prefix
    annotations:
    ## ALB
    # alb.ingress.kubernetes.io/healthcheck-path: /
    # alb.ingress.kubernetes.io/target-type: ip
    # alb.ingress.kubernetes.io/certificate-arn: {cert-arn}

  • Fetch the ClickHouse password needed for the datasource configuration (either from the pod's environment variables or from the injected secret)

  • Create a new Grafana datasource and fill in the required fields. Make sure the Server port and Skip TLS Verify settings match your ingress configuration.
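
The datasource can also be provisioned declaratively. A sketch of a Grafana provisioning file, assuming the grafana-clickhouse-datasource plugin — the exact jsonData field names (host, port, secure, tlsSkipVerify) vary between plugin versions, so verify them against the plugin's documentation:

```yaml
apiVersion: 1
datasources:
  - name: groundcover-clickhouse
    type: grafana-clickhouse-datasource
    jsonData:
      # Match these to your ClickHouse ingress configuration.
      host: {host}
      port: 443
      secure: true
      tlsSkipVerify: false
      username: {clickhouse-user}
    secureJsonData:
      password: {clickhouse-password}
```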
