JSON Logs


This feature is only available on the enterprise plan.

groundcover exposes a generic HTTP endpoint for ingesting logs in JSON format. This can be useful for integrating with 3rd-party services and additional collectors, or for importing log files manually.

Sending logs to the endpoint

URL

The URL should be https://{incloud-site}/json/logs, as described in these docs. Make sure to use the inCloud Managed site that fits your deployment.

Method

The HTTP method should be set to POST. No other methods are supported.

Authentication

To authenticate the request you can use any of the methods described here. The simplest method is using an apikey header with your API key, which can be fetched using the command groundcover auth print-api-key. See the example below for more details.
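
For instance, here is a minimal shell sketch that fetches the API key once via the CLI and reuses it in a request (the GC_API_KEY variable name is arbitrary, and test-site.com is the example site used later on this page):

# Fetch the API key via the groundcover CLI and store it for reuse
export GC_API_KEY="$(groundcover auth print-api-key)"

# Pass it in the apikey header of every request to the endpoint
curl -X POST "https://test-site.com/json/logs" \
-H "Content-Type: application/x-ndjson" \
-H "apikey: ${GC_API_KEY}" \
--data-binary '{"timestamp": "2025-01-01T10:45:32.123456799Z", "msg": "auth check"}'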

Content Type

One request can contain multiple logs, either by setting the payload as a JSON array (where every entry is a log line) or by using ndjson. Make sure to set the Content-Type header appropriately.

Format        Required Content-Type
JSON array    application/json
ndjson        application/x-ndjson

This endpoint only supports JSON formatted logs. For other supported formats, please check our datasources page.
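
For example, a minimal sketch of the JSON array format, reusing the example site and API key placeholders from the example below:

curl -X POST "https://test-site.com/json/logs" \
-H "Content-Type: application/json" \
-H "apikey: your-api-key" \
--data-binary '[{"timestamp": "2025-01-01T10:45:32.123456799Z", "msg": "first log line"}, {"timestamp": "2025-01-01T10:45:33.123456799Z", "msg": "second log line"}]'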

Structure

groundcover parses JSON logs using the following logic (see the example after this list):

  • Timestamp is extracted from any of the keys timestamp, time or ts. It is recommended to use the RFC 3339 Nano format.

  • Log message is extracted from any of the keys message, msg, log, body, content or text.

  • Additional fields are stored as log attributes.
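
For illustration, take this hypothetical log line:

{"ts": "2025-01-01T10:45:32.123456799Z", "log": "user signed in", "user_id": "42"}

Here the timestamp is extracted from ts, the log message from log, and user_id is stored as a log attribute.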

Example

This curl command sends two log lines to an example groundcover backend:

curl -X POST "https://test-site.com/json/logs" \
-H "Content-Type: application/x-ndjson" \
-H "apikey: your-api-key" \
--data-binary '{"timestamp": "2025-01-01T10:45:32.123456799Z", "msg": "sending log to groundcover", "extra-attribute": "groundcover is cool"}
{"timestamp": "2025-01-01T10:45:32.123457799Z", "msg": "more logs to groundocver", "key2": "value2", "key3": "value3"}'

Recommended Attributes

It is recommended to add the following attributes to your logs to enrich them, associating them with common groundcover concepts and making them easier to find and filter.

These attributes are optional, but we highly recommend adding them where possible.

Name             groundcover meaning
service.name     Workload
clusterId        Cluster
env_name         Env
gc_source_type   Source
namespace        Namespace

For example:

{
  "timestamp": "2025-01-01T10:45:32.123456799Z",
  "msg": "my awesome log line",
  "service.name": "my-workload",
  "clusterId": "my-cluster",
  "env_name": "my-env",
  "gc_source_type": "my-json-sender",
  "namespace": "my-namespace"
}
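
Putting it together, this is a sketch of sending that enriched log line to the endpoint, again using the example site and API key placeholders from above:

curl -X POST "https://test-site.com/json/logs" \
-H "Content-Type: application/x-ndjson" \
-H "apikey: your-api-key" \
--data-binary '{"timestamp": "2025-01-01T10:45:32.123456799Z", "msg": "my awesome log line", "service.name": "my-workload", "clusterId": "my-cluster", "env_name": "my-env", "gc_source_type": "my-json-sender", "namespace": "my-namespace"}'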