
Ingestion Endpoints


Our inCloud Managed backend supports ingestion of various standard formats for metrics, traces, and logs. It can be used to ingest telemetry from outside your Kubernetes clusters and display it natively inside the groundcover platform.

Prerequisites

Every ingestion endpoint below requires two things:

  1. inCloud site - the installation domain specific to your backend

  2. api-key - used to authenticate with your backend

Fetching the inCloud site

Your inCloud site is part of the configuration provided to you by groundcover when setting up the managed inCloud backend. It can be located in the installation values, marked below as {inCloud_Site}:

global:
  ingress:
    site: {inCloud_Site}

Can't find your inCloud site value? Let us know over Slack.

Fetching the api-key

The api-key used to ingest data into groundcover can be retrieved with our built-in CLI, using the following command:

groundcover auth print-api-key

The api-key printed will be referenced below as {api-key}.
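
If you plan to test the endpoints below from a shell, it can be convenient to capture the key into an environment variable first. This is a minimal sketch; it assumes the command prints only the key itself to stdout, and you can then substitute $API_KEY wherever {api-key} appears in the examples below:

# Store the api-key for reuse in the curl examples below
export API_KEY="$(groundcover auth print-api-key)"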

Supported authentication methods

We support several methods of authentication when pushing data into groundcover. These apply to all endpoints, formats, and protocols.

Header Key-Value

Add a header with one of the following keys, containing the {api-key} value:

  1. token

  2. apikey

  3. dd-api-key

  4. X-Amz-Firehose-Access-Key

  5. Authorization

Basic Authentication

When using basic authentication, use the following parameters:

Username: groundcover

Password: {api-key}
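
To illustrate both methods, here is how a push might look with curl. {endpoint-path} stands for any of the HTTP paths listed in the tables below, and payload.json is a placeholder for a request body in the matching format:

# Header key-value authentication: any of the header keys above works
curl -X POST "https://{inCloud_Site}/{endpoint-path}" \
  -H "apikey: {api-key}" \
  -H "Content-Type: application/json" \
  -d @payload.json

# Basic authentication: the username is always groundcover
curl -X POST "https://{inCloud_Site}/{endpoint-path}" \
  -u "groundcover:{api-key}" \
  -H "Content-Type: application/json" \
  -d @payload.json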

Supported endpoints

All endpoints are served over HTTPS or gRPC with TLS on port 443.

Prometheus

Name                         | Endpoint
Prometheus remote write      | https://{inCloud_Site}/api/v1/write
Prometheus exposition format | https://{inCloud_Site}/api/v1/import/prometheus
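
For example, a minimal remote_write section in prometheus.yml pointing at this endpoint with the basic authentication method above could look like this (a sketch, not a complete Prometheus configuration):

remote_write:
  - url: "https://{inCloud_Site}/api/v1/write"
    basic_auth:
      username: groundcover
      password: "{api-key}"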

OpenTelemetry

Name                         | Endpoint
gRPC (Logs, Traces, Metrics) | {inCloud_Site} (port 443)
HTTP Logs                    | https://{inCloud_Site}/v1/logs
HTTP Traces                  | https://{inCloud_Site}/v1/traces
HTTP Metrics                 | https://{inCloud_Site}/v1/metrics
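
As a sketch, an OpenTelemetry Collector could forward data to these endpoints with exporter settings along the following lines; the apikey header is one of the supported header keys listed above:

exporters:
  # OTLP over gRPC (logs, traces and metrics)
  otlp:
    endpoint: "{inCloud_Site}:443"
    headers:
      apikey: "{api-key}"
  # OTLP over HTTP; the collector appends /v1/logs, /v1/traces, /v1/metrics
  otlphttp:
    endpoint: "https://{inCloud_Site}"
    headers:
      apikey: "{api-key}"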

DataDog

Name        | Endpoint
Metrics V1  | https://{inCloud_Site}/datadog/api/v1/series
Metrics V2  | https://{inCloud_Site}/datadog/api/v2/series
Traces V0.3 | https://{inCloud_Site}/v0.3/traces
Traces V0.4 | https://{inCloud_Site}/v0.4/traces
Traces V0.5 | https://{inCloud_Site}/v0.5/traces
Traces V0.7 | https://{inCloud_Site}/v0.7/traces
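
To illustrate, a single gauge point in the DataDog v1 series format could be pushed to the Metrics V1 endpoint with curl; the metric name, tags, and timestamp below are placeholders:

# Push one gauge point; replace 1700000000 with the current epoch seconds
curl -X POST "https://{inCloud_Site}/datadog/api/v1/series" \
  -H "dd-api-key: {api-key}" \
  -H "Content-Type: application/json" \
  -d '{"series": [{"metric": "custom.smoke_test", "type": "gauge", "points": [[1700000000, 1]], "tags": ["source:curl"]}]}'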

AWS Firehose

Firehose ingestion requires setting up a public endpoint, which was not available by default for inCloud Managed deployments provisioned before July 31st, 2024. If your deployment is older, please contact us so we can provision the needed endpoint.

Name          | Endpoint
Firehose Logs | https://{inCloud_Site}/firehose/logs
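
On the AWS side, configure the delivery stream with an HTTP endpoint destination, set the URL above, and supply {api-key} as the access key (Firehose sends it in the X-Amz-Firehose-Access-Key header). As a rough smoke test, and assuming the endpoint follows the standard Firehose HTTP delivery request shape, a manual request could look like this:

# The record data is base64-encoded, per the Firehose HTTP endpoint spec;
# "eyJtZXNzYWdlIjoiaGVsbG8ifQ==" decodes to {"message":"hello"}
curl -X POST "https://{inCloud_Site}/firehose/logs" \
  -H "X-Amz-Firehose-Access-Key: {api-key}" \
  -H "Content-Type: application/json" \
  -d '{"requestId": "smoke-test-1", "timestamp": 1700000000000, "records": [{"data": "eyJtZXNzYWdlIjoiaGVsbG8ifQ=="}]}'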

JSON Logs

Name      | Endpoint
JSON Logs | https://{inCloud_Site}/json/logs
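
A quick end-to-end check is to post a single JSON log line with curl; the field names in the payload are placeholders, since the shape of your logs is up to you:

curl -X POST "https://{inCloud_Site}/json/logs" \
  -H "apikey: {api-key}" \
  -H "Content-Type: application/json" \
  -d '{"message": "hello from curl", "level": "info"}'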
