Writing Remap Transforms

Below are the essentials for writing remap transforms in groundcover. Extended information can be found in Vector's documentation.

We support using all types of Vector transforms as pipeline steps.

For testing VRL before deployment, we recommend the VRL playground.

Working with fields and attributes

When processing Vector events, field names must be prefixed with a single period (.). For example, the content field of a log, representing the log body, is accessible as .content. Specifically in groundcover, attributes parsed from logs or associated with traces are stored under string_attributes for string values and under float_attributes for numerical values. Individual attributes are accessed by appending additional periods as needed. For example, a JSON log that looks like this:

{"name":"my-log","count":5} 

Will be translated into an event with the following attributes:

.string_attributes.name --> "my-log"
.float_attributes.count --> 5
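
A remap step can then read or modify these attributes directly. A minimal sketch (the derived attribute names below are hypothetical):

# Copy a parsed attribute into a new attribute
.string_attributes.app = .string_attributes.name

# exists() and del() are infallible helpers for checking and removing fields
if exists(.float_attributes.count) {
  .string_attributes.has_count = "true"
  del(.float_attributes.count)
}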

Fallible functions

Each of Vector's built-in functions is either fallible or infallible. Fallible functions can throw an error when called and require error handling, whereas infallible functions never throw an error.

When writing Vector transforms in VRL it's important to use error handling where needed. Below are the two ways error handling is possible in Vector - see these docs for more details.

VRL code without proper error handling will fail compilation, resulting in error logs in the Vector deployment.

Option #1 - Handling the error

Let's take a look at the following code.

.parsed, err = parse_json("{\"Hello\": \"World!\"}")
if err == null {
  # do something with .parsed
}

The code above can either succeed or fail in parsing the JSON. The err variable indicates the result: it is null on success and holds the error otherwise, so we can proceed accordingly.
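
In a real pipeline the input usually comes from the event itself rather than a string literal. A sketch of the same pattern applied to the log body, assuming (as above) that the body lives under .content:

parsed, err = parse_json(.content)
if err == null {
  # store a normalized copy of the parsed body (attribute name is hypothetical)
  .string_attributes.parsed_body = encode_json(parsed)
} else {
  # keep the event and record why parsing failed
  .string_attributes.parse_error = err
}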

Option #2 - Aborting on error

Let's take a look at this slightly different version of the code above:

parsed = parse_json!("{\"Hello\": \"World!\"}")

This time there is no explicit error handling; instead, ! was appended to the function name.

This method of error handling is called abort on error - it will fail the transform entirely if the function returns an error, and proceed normally otherwise.
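
Applied to the log body, a mandatory parsing step using this method could look as follows (a sketch, again assuming the body lives under .content):

# Fail the whole transform if the body is not valid JSON
.parsed = parse_json!(.content)

# Only reached when parsing succeeded (attribute name is hypothetical)
.string_attributes.parse_status = "ok"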

Choosing which method to use

Both methods above are valid VRL for handling errors, and you must choose one of them when calling fallible functions. However, they differ in one important way for pipelines in groundcover:

  • Transforms which use option #1 (error handling) will not stop the pipeline in case of error - the following steps will continue to execute normally. This is useful for optional enrichment steps that are allowed to fail.

  • Transforms which use option #2 (abort) will stop the pipeline in case of error - the event will not proceed to the remaining steps. This is mostly useful for mandatory steps, where an event that fails the step should not continue down the pipeline.

The default behavior above can be changed using the drop_on_error flag. When this flag is set to false, encountered errors will never stop the pipeline - both for method #1 and for method #2.

This is useful for writing simpler code with less explicit error handling, as can be seen in this log pipeline example.
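
For reference, this flag corresponds to the drop_on_error option of Vector's remap transform. A minimal sketch in Vector's native YAML terms (the step and input names are illustrative, and groundcover's pipeline configuration may wrap this structure differently):

transforms:
  parse_logs:                # illustrative step name
    type: remap
    inputs: ["logs"]         # illustrative input
    drop_on_error: false     # errors will no longer stop the pipeline
    source: |
      .parsed = parse_json!(.content)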
