
Argo CD


Argo CD is a declarative, GitOps continuous delivery tool for Kubernetes. It keeps the deployment of groundcover in sync with the predefined configurations in your Git repository: any changes made to the deployment configuration are automatically applied to the cluster, streamlining updates and keeping all instances of groundcover consistent across environments.

Argo CD’s multi-environment support ensures that groundcover can be deployed consistently across various Kubernetes clusters, whether they are designated for development, testing, or production.

To deploy groundcover through Argo CD, follow the steps below.

The steps below require a user with admin permissions

Create the groundcover namespace

groundcover requires setting up several secrets in the installation namespace prior to creating the Argo CD Application. For that reason, start by creating the groundcover namespace:

kubectl create namespace groundcover

Create secrets

In the following steps you will create the following Kubernetes Secret objects:

  1. API Key secret

  2. ClickHouse password secret

Create the API Key secret

Step 1 - Fetch the API key

Start by fetching the API key associated with your workspace using the following CLI command:

groundcover auth print-api-key
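
If you prefer not to copy the key by hand, you can capture it in a shell variable first (a minimal sketch, assuming the command prints only the key to stdout):

GC_API_KEY=$(groundcover auth print-api-key)

Because the heredoc in the next step is unquoted, you can then write $GC_API_KEY in place of <apikey> and let the shell expand it.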

Step 2 - Create the spec file

Create a Secret spec file in the groundcover namespace using the following snippet:

Make sure to replace the <apikey> value below with the value fetched in the previous step

cat << EOF > groundcover_apikey_secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: groundcover-api-key
  namespace: groundcover
stringData:
  API_KEY: <apikey>
type: Opaque
EOF

Step 3 - Create the secret

Apply the spec file from above:

kubectl apply -f groundcover_apikey_secret.yaml
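
As an optional sanity check, confirm the secret now exists:

kubectl get secret groundcover-api-key -n groundcover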

Create the ClickHouse password secret

Step 1 - Create a random password

Start by generating a random password for ClickHouse, for example using openssl rand:

openssl is just one way to do it - you can use any random string you wish

openssl rand -hex 16 

Step 2 - Create the spec file

Create the spec file using the following snippet:

Make sure to replace the <password> value below with the result of the previous step

cat << EOF > groundcover_clickhouse_secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: groundcover-clickhouse
  namespace: groundcover
type: Opaque
stringData:
  admin-password: <password>
EOF

Step 3 - Create the secret

Apply the spec file from above:

kubectl apply -f groundcover_clickhouse_secret.yaml
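
If you later need to recover the generated password, you can read it back from the secret (the value is stored base64-encoded under the admin-password key):

kubectl get secret groundcover-clickhouse -n groundcover -o jsonpath='{.data.admin-password}' | base64 -d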

Create the Argo CD Application Manifest

Make sure to set the following values in the manifest:

  • <project-name> - set to the Argo CD project used in your environment

  • <groundcover-version> - set the deployment version: either a specific groundcover chart version, or a semver constraint such as ">= 1.0.0" for automatic upgrades. You can use the following commands to fetch the latest chart version (see the snippet after this list if the groundcover Helm repository is not yet added):

    • helm repo update

    • helm search repo groundcover/groundcover
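
If the groundcover Helm repository is not yet configured on your machine, add it first; the URL below matches the repoURL used in the manifest:

helm repo add groundcover https://helm.groundcover.com
helm repo update
helm search repo groundcover/groundcover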

The <cluster-name> value in the values section of the manifest can be any name you wish to assign to the cluster where the platform is installed. In multi-cluster installations, make sure to change it for each cluster being installed.

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: groundcover
  namespace: argocd
spec:
  project: <project-name>
  source:
    chart: groundcover
    repoURL: https://helm.groundcover.com
    targetRevision: <groundcover-version>
    helm:
      releaseName: groundcover
      values: |
        clusterId: <cluster-name>
        global:
          groundcoverPredefinedTokenSecret:
            secretKey: API_KEY
            secretName: groundcover-api-key
          clickhouse:
            auth:
              existingSecretKey: admin-password
              existingSecret: groundcover-clickhouse
  destination:
    server: "https://kubernetes.default.svc"
    namespace: groundcover
  # avoid OutOfSync state on secrets with random-data.
  # https://argo-cd.readthedocs.io/en/stable/user-guide/helm/#random-data
  syncPolicy:
    syncOptions:
    - CreateNamespace=true
    - RespectIgnoreDifferences=true
  ignoreDifferences:
  - kind: Secret
    jsonPointers:
    - /data
    - /stringData
  - group: apps
    kind: '*'
    jsonPointers:
    - /spec/template/metadata/annotations/checksum~1secret
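
Save the manifest to a file and apply it to register the Application with Argo CD (the file name here is just an example); alternatively, commit the manifest to the Git repository your Argo CD instance tracks, in line with GitOps practice:

kubectl apply -f groundcover-argocd-app.yaml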

Inspect the groundcover namespace

After the Application is created, the groundcover deployments will start spinning up in the namespace. When all pods are running, you can access the platform at app.groundcover.com to explore data from the new environment.
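
To watch the rollout progress:

kubectl get pods -n groundcover --watch

You can also check the Application's sync status in the Argo CD UI, or with the Argo CD CLI: argocd app get groundcover.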

If you encounter any issues during the installation, let us know over Slack.