Logs-to-Metrics

Overview

Transform your log data into queryable metrics for long-term monitoring, alerting, and cost-effective analysis. Logs-to-metrics parsing extracts numerical data from unstructured log messages and converts it into time-series metrics that can be visualized, alerted on, and retained at a fraction of the cost of raw logs.

Why Use Logs-to-Metrics?

Logs are perfect for debugging specific events, but they become expensive and unwieldy at scale. Metrics, on the other hand, are:

  • Cost-effective - Store aggregated data instead of every log line

  • Fast to query - Optimized for time-series analysis

  • Perfect for alerting - Track trends and thresholds over time

The Transformation

Think of it like converting sentences into spreadsheet rows:

A Log (sentence):

INFO: HTTP GET /api/users request completed in 55ms with status 200

Becomes Metrics (structured data):

| Timestamp | Metric Name | Value | Labels |
| --- | --- | --- | --- |
| [now] | http_requests_total | 1 | method:GET, endpoint:/api/users, status:200 |
| [now] | http_request_duration_ms | 55 | method:GET, endpoint:/api/users, status:200 |

You're essentially turning descriptive text into countable, measurable data points.

When to Use Logs-to-Metrics

Logs-to-metrics doesn't replace logs—it complements them. Use it for these scenarios:

1. Counting Business Events

Track how often specific business or application events occur.

Use cases:

  • User logins, registrations, or logouts

  • Payment transactions (successful, failed, pending)

  • Items added to cart or checkout completions

  • Feature usage or API endpoint calls

Example:

Create a metric user_logins_total by counting every log containing "User successfully authenticated."
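
A minimal sketch of such a rule, assuming OTTL-style statements (the IsMatch condition and the exact statement shape are illustrative — verify against your parsing reference):

```yaml
# Count every authentication log as one increment of user_logins_total.
# Applied only where IsMatch(body, "User successfully authenticated") holds.
- set(l2m["event"], "login")                # label stored on the l2m map
- log_to_metric_count("user_logins_total")  # emit one count per matching log
```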

2. Monitoring Error Rates

Track error frequency to understand system health and set up alerts.

Use cases:

  • Application ERROR or FATAL level logs

  • Failed database queries or timeout errors

  • Service degradation indicators

Example:

Create a metric requests_total with status-code labels, then calculate the error rate as failed requests divided by total requests.
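
For example, assuming the count metric lands as requests_total_count_gc_op with a status label (names are illustrative), a PromQL error-rate query might look like:

```promql
sum(rate(requests_total_count_gc_op{status=~"5.."}[5m]))
/
sum(rate(requests_total_count_gc_op[5m]))
```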

3. Monitoring Legacy or Third-Party Applications

Extract metrics from applications you can't modify to add instrumentation.

Use cases:

  • Legacy systems that only write to log files

  • Third-party tools without native metrics

  • Vendor applications without metric exporters

  • Containerized apps without metric endpoints

Example:

Parse log files from an old Java application to extract performance indicators like active connections or tasks processed.
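
A sketch of the two-step approach, assuming a GROK-style extraction rule runs before the metric rule (the pattern, field names, and statement shape are illustrative):

```yaml
# 1) Earlier rule: extract a numeric field from the raw line
- merge_maps(attributes, ExtractGrokPatterns(body, "Active connections: %{NUMBER:active_connections}"), "upsert")
# 2) Later rule: turn the extracted field into a metric
- set(l2m["app"], "legacy-java")
- log_to_metric_max("active_connections", Double(attributes["active_connections"]))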

How Logs-to-Metrics Works

Logs-to-metrics parsing uses the special l2m map to define what metrics to create and how to aggregate them.

Available Operations

groundcover supports four metric aggregation operations:

| Function | Description | Use Case |
| --- | --- | --- |
| log_to_metric_count | Count logs matching criteria | Request counts, event occurrences |
| log_to_metric_sum | Sum extracted values | Total bytes transferred, total duration |
| log_to_metric_max | Maximum value observed | Peak response time, largest payload |
| log_to_metric_min | Minimum value observed | Fastest response time, smallest payload |

Note: groundcover automatically appends the operation type plus a _gc_op suffix to generated metric names (e.g., _sum_gc_op, _min_gc_op, _max_gc_op, _count_gc_op).

Basic Structure
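
An l2m rule typically does two things: set labels on the l2m map, then call one or more aggregation functions. The sketch below assumes OTTL-style statements over already-parsed attributes (the statement shape is illustrative — verify against your pipeline reference):

```yaml
# 1) Set labels (dimensions) on the special l2m map
- set(l2m["method"], attributes["method"])
- set(l2m["endpoint"], attributes["endpoint"])
- set(l2m["status"], attributes["status"])
# 2) Emit metrics using the aggregation functions
- log_to_metric_count("http_requests_total")
- log_to_metric_sum("http_request_duration_ms", Double(attributes["duration"]))
```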

Best Practices

  1. Choose meaningful metric names - Use descriptive names that indicate what's being measured

    • Good: http_requests_total, payment_transactions_count

    • Bad: metric1, counter

  2. Use appropriate labels - Add dimensions that help you slice and dice the data

    • Common labels: endpoint, method, status, region, user_type

    • Avoid high-cardinality labels (user IDs, unique transaction IDs)

  3. Parse before creating metrics - Extract and clean data in earlier rules, then create metrics

    • Parse JSON or GROK patterns first

    • Create l2m metrics in a separate rule

  4. Combine operations - Use count, sum, min, and max together for comprehensive insights

    • Count requests + sum duration = average latency

    • Min/max provide performance bounds

  5. Use type conversion - Always convert values to Double() for sum/min/max operations

    • Double(attributes["duration"]) not attributes["duration"]

  6. Test with Parsing Playground - Verify your l2m rules extract the right data before deploying
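
The count + sum combination from the list above can be turned into an average with PromQL (metric names assume the automatic operation suffixes and are illustrative):

```promql
sum(rate(http_request_duration_ms_sum_gc_op[5m]))
/
sum(rate(http_requests_total_count_gc_op[5m]))
```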

Viewing Your Metrics

After creating logs-to-metrics rules:

  1. Metrics appear in the Metrics Explorer within minutes

  2. Use PromQL to query your custom metrics

  3. Create dashboards to visualize trends

  4. Set up monitors for alerting on thresholds

Common Use Cases

Tracking Request Duration

Monitor response times with min, max, and sum.

Input logs:
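
Hypothetical input lines (format is illustrative):

```
INFO: HTTP GET /api/users request completed in 55ms with status 200
INFO: HTTP GET /api/users request completed in 120ms with status 200
INFO: HTTP GET /api/users request completed in 30ms with status 200
```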

Output metrics:
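
Assuming input lines with durations of 55ms, 120ms, and 30ms and a rule that emits sum/min/max plus a count, the generated series (with the automatic operation suffixes) would be:

```
http_request_duration_ms_sum_gc_op  = 205   # 55 + 120 + 30
http_request_duration_ms_min_gc_op  = 30
http_request_duration_ms_max_gc_op  = 120
http_requests_total_count_gc_op     = 3
```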

Monitoring Data Transfer Volume

Track bytes sent/received.

Input logs:
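
Hypothetical input lines (format and field name are illustrative):

```
INFO: Response sent, bytes=512
INFO: Response sent, bytes=2048
```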

Output metrics:
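
Assuming input lines with byte counts of 512 and 2048 and a rule calling sum/min/max on the extracted value, the outputs would be:

```
bytes_sent_sum_gc_op = 2560   # 512 + 2048
bytes_sent_min_gc_op = 512
bytes_sent_max_gc_op = 2048
```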

Counting Business Events

Track user actions and business metrics.

Input logs:
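
Hypothetical input lines (format and user_type field are illustrative):

```
INFO: User successfully authenticated user_type=premium
INFO: User successfully authenticated user_type=free
```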

Output metrics:
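
Assuming a count rule that sets a user_type label from the parsed field, each distinct label value becomes its own series:

```
user_logins_total_count_gc_op{user_type="premium"} = 1
user_logins_total_count_gc_op{user_type="free"}    = 1
```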

Complete Example

Here's a comprehensive example parsing Kong access logs:
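
A sketch of a rule that could produce these metrics, assuming an earlier rule has already parsed the access-log line into attributes (field names and statement shape are illustrative):

```yaml
# Set a label, then emit one count metric and sum/min/max over request size
- set(l2m["service"], attributes["service"])
- log_to_metric_count("kong_access_log_metrics")
- log_to_metric_sum("kong_request_volume", Double(attributes["request_size"]))
- log_to_metric_min("kong_request_volume", Double(attributes["request_size"]))
- log_to_metric_max("kong_request_volume", Double(attributes["request_size"]))
```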

Input logs:
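
Hypothetical access-log lines, with request sizes (6, 189, 19) chosen to match the output table below:

```
10.0.0.1 - - "GET /health HTTP/1.1" 200 6
10.0.0.2 - - "POST /api/orders HTTP/1.1" 201 189
10.0.0.3 - - "GET /api/items HTTP/1.1" 200 19
```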

Output metrics:

| Metric | Value |
| --- | --- |
| kong_request_volume_sum_gc_op | 214 |
| kong_request_volume_min_gc_op | 6 |
| kong_request_volume_max_gc_op | 189 |
| kong_access_log_metrics_count_gc_op | 3 |

These metrics are now available in the Metrics Explorer for querying, visualization, and alerting!

Key Functions

l2m Map

The l2m map stores the labels (dimensions) for your metrics.
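
Labels are set as keys on the map before any metric function is called; for example (shape is illustrative):

```yaml
- set(l2m["method"], attributes["method"])
- set(l2m["status"], attributes["status"])
```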

log_to_metric_count

Counts the number of logs matching the rule.

Syntax:
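
A sketch of the expected shape (metric name is a placeholder; verify the exact signature in your parsing reference):

```yaml
- log_to_metric_count("user_logins_total")
```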

Use for: Request counts, event occurrences, error counts

log_to_metric_sum

Sums numerical values from logs.

Syntax:
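
Sketch, using a hypothetical bytes attribute and the Double() conversion recommended in Best Practices:

```yaml
- log_to_metric_sum("bytes_sent_total", Double(attributes["bytes"]))
```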

Use for: Total bytes, total duration, cumulative values

log_to_metric_max

Tracks the maximum value observed.

Syntax:
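
Sketch, assuming a previously parsed duration attribute (names are placeholders):

```yaml
- log_to_metric_max("http_request_duration_ms", Double(attributes["duration"]))
```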

Use for: Peak response times, largest payloads

log_to_metric_min

Tracks the minimum value observed.

Syntax:
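
Sketch, with the same placeholder attribute and Double() conversion as above:

```yaml
- log_to_metric_min("http_request_duration_ms", Double(attributes["duration"]))
```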

Use for: Fastest response times, smallest payloads
