# Multi-Cluster installation
By default, when you install groundcover on several clusters, each cluster contains its own independent set of traces, metrics, and logs databases (the groundcover backend).
This guide walks you through installing groundcover on multiple clusters while using a single, centralized instance of each of these databases.
For this setup you will need:

- Hostnames accessible from outside the cluster that will be used by the deployed ingresses for the following components (see groundcover backend):
  - shepherd
  - opentelemetry-collector
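For illustration, assuming you control the domain `example.com`, the hostnames might be laid out as follows (hypothetical values; use whatever naming fits your DNS setup):

```
shepherd-http.example.com   # shepherd HTTP ingest endpoint
shepherd-grpc.example.com   # shepherd gRPC ingest endpoint
otel-http.example.com       # opentelemetry-collector HTTP endpoint
otel-grpc.example.com       # opentelemetry-collector gRPC (OTLP) endpoint
```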
In this installation mode, we will deploy the following components separately: the groundcover backend, installed once on a single cluster, and the groundcover agent, installed on each cluster you want to monitor.

## Deploying the backend
1. Add the groundcover helm repository and/or make sure it's up to date:

   ```bash
   helm repo add groundcover https://helm.groundcover.com # only for the first time
   helm repo update groundcover
   ```
2. Create the following `backend-values.yaml` file and fill in the required values accordingly:

   ```yaml
   global:
     otlp:
       tls:
         enabled: false
     multipleClusterIds:
       # a list of the cluster names that will be monitored
       # - dev
       # - staging
       # - prod

   opentelemetry-collector:
     ingress:
       enabled: true
       annotations:
         ## ALB
         # kubernetes.io/ingress.class: alb
         # alb.ingress.kubernetes.io/target-type: ip
         # alb.ingress.kubernetes.io/certificate-arn: {cert-arn}
       hosts:
         - host: {host}
           paths:
             - path: /health
               port: 13133
               pathType: Exact
             - path: /loki
               pathType: Prefix
               port: 3100
     additionalIngresses:
       - name: otlp-grpc
         annotations:
           ## ALB
           # kubernetes.io/ingress.class: alb
           # alb.ingress.kubernetes.io/target-type: ip
           # alb.ingress.kubernetes.io/certificate-arn: {cert-arn}
         hosts:
           - host: {host}
             paths:
               - path: /
                 pathType: Prefix
                 port: 4317

   # optional - if custom-metrics is enabled
   victoria-metrics-single:
     server:
       ingress:
         enabled: true
         annotations:
           ## ALB
           # kubernetes.io/ingress.class: alb
           # alb.ingress.kubernetes.io/target-type: ip
           # alb.ingress.kubernetes.io/certificate-arn: {cert-arn}
         hosts:
           - name: {host}
             path: "/"
             port: http

   shepherd:
     ## ALB
     config:
       ingestor:
         TLSCertFile: ""
         TLSKeyFile: ""
     httpIngress:
       enabled: true
       annotations:
         ## ALB
         # kubernetes.io/ingress.class: alb
         # alb.ingress.kubernetes.io/target-type: ip
         # alb.ingress.kubernetes.io/certificate-arn: {cert-arn}
       hosts:
         - path: /
           # if the host is pre-known (e.g. not generated upon ingress creation)
           # host: {the shepherd host that will be used by the agents outside of the cluster}
     grpcIngress:
       enabled: true
       annotations:
         ## ALB
         # kubernetes.io/ingress.class: alb
         # alb.ingress.kubernetes.io/backend-protocol-version: GRPC
         # alb.ingress.kubernetes.io/target-type: ip
         # alb.ingress.kubernetes.io/certificate-arn: {cert-arn}
       hosts:
         - path: /
           # if the host is pre-known (e.g. not generated upon ingress creation)
           # host: {the shepherd host that will be used by the agents outside of the cluster}
   ```

3. Run the following installation command:

   ```bash
   helm upgrade -i groundcover-backend groundcover/groundcover \
     --set global.groundcover_token=<apikey>,clusterId={cluster id of backend cluster} \
     -f backend-values.yaml --create-namespace -n groundcover
   ```

Now, make sure the ingresses you've just created are accessible from the clusters on which you intend to deploy the groundcover agent. A quick sanity check is sketched below.
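For example, assuming `curl` access from a network the agent clusters can reach (the hostname below is hypothetical; the `/health` path and its port come from the opentelemetry-collector ingress defined above):

```bash
# List the ingresses the backend installation created, along with their addresses
kubectl get ingress -n groundcover

# Hypothetical host -- replace with your opentelemetry-collector HTTP ingress hostname.
# A successful response means the collector's health-check endpoint is reachable.
curl -i https://otel-http.example.com/health
```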
## Deploying the agents

1. Obtain the addresses that are used by the created ingresses:

   ```bash
   kubectl get ingress -n groundcover
   ```
2. Create the following `agent-values.yaml` file and fill in the required values accordingly:

   ```yaml
   global:
     logs:
       # for example https://opentelemetry-collector-http-ingress.net
       overrideUrl: {opentelemetry-collector-http-ingress-endpoint}
     otlp:
       # for example opentelemetry-collector-grpc-ingress.net:443
       overrideGrpcURL: {opentelemetry-collector-grpc-ingress-endpoint}
       overrideHttpURL: {opentelemetry-collector-http-ingress-endpoint}

   backend:
     enabled: false

   # optional - if custom-metrics is enabled
   custom-metrics:
     remoteWriteUrls:
       # for example https://victoria-metrics-ingress.net/api/v1/write
       - {victoria-metrics-ingress-endpoint}/api/v1/write

   shepherd:
     service:
       grpcPort: 443
     # for example https://shepherd-ingress.net
     overrideHttpURL: {shepherd-http-ingress-endpoint}
     # the OTLP endpoint is a gRPC server, so the format is {host}:{port}, e.g. shepherd-ingress.net:443
     overrideGrpcURL: {shepherd-grpc-ingress-endpoint}:443
   ```

3. Run the following installation command on each of the target clusters:

   ```bash
   helm upgrade -i groundcover groundcover/groundcover \
     --set global.groundcover_token=<apikey>,clusterId={cluster id} \
     -f agent-values.yaml --create-namespace -n groundcover
   ```

   Make sure the `clusterId` parameter corresponds to one of the cluster ids listed under `multipleClusterIds` in the backend deployment.
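As a worked example, here is a hypothetical rollout for the three cluster names used in the commented `multipleClusterIds` sample above (dev, staging, prod), with the backend hosted on prod; all names and the `<apikey>` placeholder are illustrative:

```bash
# On the backend cluster (prod); backend-values.yaml lists dev, staging and prod
# under global.multipleClusterIds
helm upgrade -i groundcover-backend groundcover/groundcover \
  --set global.groundcover_token=<apikey>,clusterId=prod \
  -f backend-values.yaml --create-namespace -n groundcover

# On each monitored cluster, e.g. dev (repeat with clusterId=staging)
helm upgrade -i groundcover groundcover/groundcover \
  --set global.groundcover_token=<apikey>,clusterId=dev \
  -f agent-values.yaml --create-namespace -n groundcover

# On each cluster, verify that the groundcover pods are running
kubectl get pods -n groundcover
```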