OpenTelemetry

Learn how to collect OTel data with groundcover
groundcover ships with an instance of the OpenTelemetry Collector. Traces and logs sent to this collector are ingested and stored by groundcover and displayed in groundcover's UI.
Traces collected via OpenTelemetry will have a matching source property.

Finding the groundcover OpenTelemetry collector endpoint

The collector is accessible via Kubernetes' Services DNS at one of the following addresses. Use the default form for installations that use the default namespace (groundcover) and default release name (groundcover); use the custom form if the namespace or release name was changed, substituting your own values.
Default release name and namespace:
groundcover-opentelemetry-collector.groundcover.svc.cluster.local
Custom release name / namespace:
{RELEASE-NAME}-opentelemetry-collector.{NAMESPACE}.svc.cluster.local
This value is referenced below as {GROUNDCOVER-OTEL-COLLECTOR-ENDPOINT}.
Modify your instrumented service to use this endpoint as the collector. In most cases, this can be done by setting the OTEL_COLLECTOR_NAME environment variable to the collector's address, as seen in the example below.

Setting up workload attributes for instrumented services

groundcover follows OpenTelemetry's Semantic Conventions to look for the attributes required to ingest the collected data and associate it with known resources in the environment - most importantly service.name, along with additional attributes such as pod and namespace that identify the workload.
Here is an example of how to modify a Kubernetes Deployment's environment variables to use groundcover's collector and set these attributes:
  • Make sure you replace {GROUNDCOVER-OTEL-COLLECTOR-ENDPOINT} with the collector's endpoint, as specified above
  • There's no need to modify the OTEL_EXPORTER_OTLP_ENDPOINT param - it references OTEL_COLLECTOR_NAME
env:
  - name: OTEL_COLLECTOR_NAME
    value: {GROUNDCOVER-OTEL-COLLECTOR-ENDPOINT}
  - name: OTEL_EXPORTER_OTLP_ENDPOINT
    value: http://$(OTEL_COLLECTOR_NAME):4317
  - name: OTEL_SERVICE_NAME
    valueFrom:
      fieldRef:
        apiVersion: v1
        fieldPath: metadata.labels['app.kubernetes.io/component']
  - name: OTEL_K8S_NAMESPACE
    valueFrom:
      fieldRef:
        apiVersion: v1
        fieldPath: metadata.namespace
  - name: OTEL_K8S_NODE_NAME
    valueFrom:
      fieldRef:
        apiVersion: v1
        fieldPath: spec.nodeName
  - name: OTEL_K8S_POD_NAME
    valueFrom:
      fieldRef:
        apiVersion: v1
        fieldPath: metadata.name
  - name: OTEL_K8S_POD_UID
    valueFrom:
      fieldRef:
        apiVersion: v1
        fieldPath: metadata.uid
  - name: OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE
    value: cumulative
  - name: OTEL_RESOURCE_ATTRIBUTES
    value: service.name=$(OTEL_SERVICE_NAME),service.instance.id=$(OTEL_K8S_POD_UID),service.namespace=$(OTEL_K8S_NAMESPACE),k8s.namespace.name=$(OTEL_K8S_NAMESPACE),k8s.node.name=$(OTEL_K8S_NODE_NAME),k8s.pod.name=$(OTEL_K8S_POD_NAME)
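For context, here is a minimal sketch of where this env block sits inside a Deployment manifest. The workload name (my-service), namespace (demo) and image are hypothetical placeholders - only the env entries and the app.kubernetes.io/component label (read by OTEL_SERVICE_NAME via the fieldRef above) matter:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service                # hypothetical workload name
  namespace: demo                 # hypothetical namespace
spec:
  selector:
    matchLabels:
      app.kubernetes.io/component: my-service
  template:
    metadata:
      labels:
        app.kubernetes.io/component: my-service   # read by OTEL_SERVICE_NAME via fieldRef
    spec:
      containers:
        - name: my-service
          image: my-registry/my-service:latest    # hypothetical image
          env:
            # ... the env entries shown above go here ...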

Forwarding traces from an existing OpenTelemetry collector

If your services are already configured to send data to an existing OpenTelemetry collector, you can forward the traces it collects to groundcover's collector using the OTLP exporter, specifying groundcover's collector as the endpoint.
For example, define the exporter in the existing collector configuration:
exporters:
  otlp/groundcover:
    endpoint: {GROUNDCOVER-OTEL-COLLECTOR-ENDPOINT}:4317
    tls:
      insecure: true
and reference it in the relevant traces pipeline:
pipelines:
  traces:
    exporters:
      - otlp/groundcover
In default installations, groundcover's OpenTelemetry Collector communicates with TLS turned off, which is why TLS is disabled in the configuration above. For more complex environments that ingest data from sources outside the cluster, contact us on Slack!
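Putting the two snippets together, a forwarding configuration might look like the following minimal sketch. The otlp receiver and batch processor are assumptions about your existing collector setup - keep whatever receivers and processors you already use:
receivers:
  otlp:                 # assumed existing receiver your services already send to
    protocols:
      grpc:
      http:

processors:
  batch: {}             # assumed existing processor; keep your current ones

exporters:
  otlp/groundcover:
    endpoint: {GROUNDCOVER-OTEL-COLLECTOR-ENDPOINT}:4317
    tls:
      insecure: true

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp/groundcover]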

What about forwarding logs?

groundcover collects all logs inside your k8s cluster out of the box, without the need for any additional setup. However, just like traces, we fully support forwarding logs from an existing OpenTelemetry collector. To do so, reference the exporter in the relevant logs pipeline:
pipelines:
  logs:
    exporters:
      - otlp/groundcover
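If you forward both signals, the service section of your existing collector might end up looking like this sketch (the otlp receiver and batch processor names are the same assumptions as above; substitute whatever actually feeds your logs):
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp/groundcover]
    logs:
      receivers: [otlp]      # or whichever receiver your existing setup uses for logs
      processors: [batch]
      exporters: [otlp/groundcover]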