Workflow Examples

Slack Webhook Message on Specific Monitor

This workflow is triggered by issues matching the filter alertname: Workload Pods Crashed Monitor, meaning only issues created by the monitor named "Workload Pods Crashed Monitor" will trigger it. In this example, the action sends a Slack message built from labels on the issue.

workflow: 
  id: slack-alert-on-crashed-pods
  description: Send a slack message on Workload Pods Crashed Monitor 
  triggers:
    - type: alert
      filters:
        - key: alertname
          value: Workload Pods Crashed Monitor
  actions:
    - name: trigger-slack
      provider:
        type: slack
        config: '{{ providers.slack_webhook }}'
        with:
          message: "Pod Crashed - Pod: {{ alert.labels.pod_name }} Container: {{ alert.labels.container }} Exit Code: {{ alert.labels.exit_code }} Reason: {{ alert.labels.reason }}"

Send Only Firing Alerts

In some cases, you may want to avoid sending resolved alerts to your integrations—this prevents incidents from being automatically marked as “resolved” in tools like PagerDuty.

To achieve this, you can add a condition to your action that ensures only firing alerts are sent. Here’s an example of how to configure it in your workflow:

workflow:
  id: send-pagerduty-only-firing
  description: ""
  triggers:
  - type: alert
  name: send-pagerduty-only-firing
  actions:
  - if: '{{ alert.status }} == "firing"'
    name: pagerduty-action
    provider:
      config: "{{ provider.pager_duty_prod }}"
      type: pagerduty
      with:
        title: "{{ alert.alertname }}"

This configuration uses the if condition on the action to check that the alert's status is firing before executing the PagerDuty action.
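
The same pattern works in the other direction. As a sketch, the workflow below pairs the firing-only PagerDuty action with a second action that runs only once the alert resolves, posting a resolution notice to Slack — the slack_webhook provider name here is a placeholder for your own integration:

```yaml
workflow:
  id: notify-firing-and-resolved
  description: Route firing and resolved alerts to different actions
  triggers:
  - type: alert
  name: notify-firing-and-resolved
  actions:
  # runs only while the alert is firing
  - if: '{{ alert.status }} == "firing"'
    name: pagerduty-action
    provider:
      config: "{{ providers.pager_duty_prod }}"
      type: pagerduty
      with:
        title: "{{ alert.alertname }}"
  # runs only once the alert resolves (provider name is illustrative)
  - if: '{{ alert.status }} == "resolved"'
    name: slack-resolved
    provider:
      config: "{{ providers.slack_webhook }}"
      type: slack
      with:
        message: ":white_check_mark: Resolved: {{ alert.alertname }}"
```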

Slack Webhook Message on Issue

This workflow is triggered by any issue and uses the slack_webhook integration to send a Slack message formatted with Block Kit. For more details, see Slack Block Kit.

workflow: 
  id: slack-webhook
  description: Send a slack message on alerts
  triggers:
    - type: alert
  actions:
    - name: trigger-slack
      provider:
        type: slack
        config: ' {{ providers.slack_webhook }} '
        with:
          blocks:
          - type: header
            text:
              type: plain_text
              text: ':rotating_light: {{ alert.labels.alertname }} :rotating_light:'
              emoji: true
          - type: divider
          - type: section
            fields:
            - type: mrkdwn
              text: |-
                *Cluster:*
                {{ alert.labels.cluster}}
            - type: mrkdwn
              text: |-
                *Namespace:*
                {{ alert.labels.namespace}}
            - type: mrkdwn
              text: |-
                *Workload:*
                {{ alert.labels.workload}}

Create Jira Ticket Using Webhook

See Jira Webhook Integration for setting up the integration.

In this workflow, replace <issue_id> and <project_id> with your own values, and replace <integration_name> with the name of the integration you created.

To find your Issue Type ID in Jira, see https://confluence.atlassian.com/jirasoftwarecloud/finding-the-issue-type-id-in-jira-cloud-1333825937.html. To find your project ID, see https://confluence.atlassian.com/jirakb/how-to-get-project-id-from-the-jira-user-interface-827341414.html.

workflow:
  id: jira_ticket_creation
  description: Create a Jira Ticket
  triggers:
  - type: alert
  consts:
    description: keep.dictget( {{ alert.annotations }}, "_gc_description", '')
    title: keep.dictget( {{ alert.annotations }}, "_gc_issue_header", "{{ alert.alertname }}")
  name: jira_ticket_creation
  actions:
  - name: webhook
    provider:
      config: ' {{ providers.<integration_name> }} '
      type: webhook
      with:
        body:
          fields:
            description: '{{ consts.description }}'
            issuetype:
              id: <issue_id>
            project:
              id: <project_id>
            summary: '{{ consts.title }}'
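
Assuming the webhook posts this body to Jira's create-issue endpoint, other standard Jira fields can be added under fields in the same way. As a sketch, tagging the created ticket with labels (a standard Jira field; the label values are illustrative):

```yaml
      with:
        body:
          fields:
            description: '{{ consts.description }}'
            issuetype:
              id: <issue_id>
            labels:
              - groundcover
              - auto-created
            project:
              id: <project_id>
            summary: '{{ consts.title }}'
```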
