
Role-Based Access Control (RBAC)


This capability is only available to organizations subscribed to our Enterprise plan.

Role-Based Access Control (RBAC) in groundcover gives you a flexible way to manage who can access certain features and data in the platform. By defining both default roles and policies, you ensure each team member only sees and does what their level of access permits. This approach strengthens security and simplifies onboarding, allowing administrators to confidently grant or limit access.

Policies

Policies are the foundational elements of groundcover’s RBAC. Each policy defines:

  1. A permission level – which actions the user can perform (Admin, Editor, or Viewer).

  2. A data scope – which clusters, environments, or namespaces the user can see.

By assigning one or more policies to a user, you can precisely control both what they can do and where they can do it.
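The permission/scope pairing can be pictured as a simple data structure. This is a minimal illustrative model only — the field names and shape are assumptions for the sketch, not groundcover's actual policy schema:

```python
from dataclasses import dataclass, field

@dataclass
class Policy:
    # Hypothetical model of a policy: what a user can do, and where.
    permission: str                              # "Admin", "Editor", or "Viewer"
    scope: dict = field(default_factory=dict)    # e.g. {"cluster": "Dev"}

# A Viewer restricted to the Dev cluster:
dev_viewer = Policy(permission="Viewer", scope={"cluster": "Dev"})
```

Assigning several such policies to one user is what drives the merging behavior described under Multiple Policies below.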

Default Policies

groundcover provides three default policies to simplify common use cases:

  1. Default Admin Policy

    • Permission: Admin

    • Data Scope: Full (no restrictions)

    • Behavior: Unlimited access to groundcover features and configurations.

  2. Default Editor Policy

    • Permission: Editor

    • Data Scope: Full (no restrictions)

    • Behavior: Full creative/editing capabilities on observability data, but no user or system management.

  3. Default Viewer Policy

    • Permission: Viewer

    • Data Scope: Full (no restrictions)

    • Behavior: Read-only access to all data in groundcover.

These default policies allow you to quickly onboard new users with typical Admin/Editor/Viewer capabilities. However, you can also create custom policies with narrower data scopes, if needed.

Policy Structure

A policy’s data scope can be defined in two modes: Simple or Advanced.

  1. Simple Mode

    • Uses AND logic across the specified conditions.

    • Applies the same scope to all entity types (e.g., logs, traces, events, workloads).

    • Example: “Cluster = Dev AND Environment = QA,” restricting all logs, traces, events, etc. to the Dev cluster and QA environment.

  2. Advanced Mode

    • Lets you define different scopes for each data entity (logs, traces, events, workloads, etc.).

    • Each scope can use OR logic among conditions, allowing more fine-grained control.

    • Example:

      • Logs: “Cluster = Dev OR Prod,”

      • Traces: “Namespace = abc123,”

      • Events: “Environment = Staging OR Prod.”

When creating or editing a policy, you select permission (Admin, Editor, or Viewer) and a data scope mode (Simple or Advanced).
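The two modes can be sketched as predicates over an item's labels — Simple requires every condition (AND), Advanced accepts any condition in that entity's list (OR). This is an illustrative sketch, not groundcover's evaluation engine, and the label keys are assumptions:

```python
def matches_simple(scope: dict, item: dict) -> bool:
    # Simple mode: AND logic -- every condition must hold, and the
    # same scope applies to all entity types (logs, traces, events, ...).
    return all(item.get(key) == value for key, value in scope.items())

def matches_advanced(conditions: list, item: dict) -> bool:
    # Advanced mode: OR logic -- any one condition grants access,
    # and each entity type gets its own condition list.
    return any(item.get(key) == value for key, value in conditions)

# Simple example from above: "Cluster = Dev AND Environment = QA"
simple_scope = {"cluster": "Dev", "environment": "QA"}

# Advanced example for logs: "Cluster = Dev OR Cluster = Prod"
logs_conditions = [("cluster", "Dev"), ("cluster", "Prod")]
```

Under this sketch, a log line labeled `{"cluster": "Prod"}` passes the Advanced logs scope but fails the Simple scope, since the latter also requires `environment = QA`.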

Multiple Policies

A user can be associated with multiple policies. When that occurs:

  1. Permission Merging

    • The user’s final permission level is the highest among all assigned policies.

    • Example: If one policy grants Editor and another grants Viewer, the user is effectively an Editor overall.

  2. Data Scope Merging

    • Data scopes merge via OR logic, broadening the user’s overall data access.

    • Example: Policy A grants “Cluster = A” and Policy B grants “Environment = B,” so the final scope is “Cluster = A OR Environment = B.”

  3. Metrics Exception

    • For metrics data only, groundcover uses a single policy’s scope (not a combination). This prevents creating an overly broad metrics view when multiple policies are assigned.

By combining multiple policies, you can support sophisticated permission setups—for example, granting Editor capabilities in certain clusters while restricting a user to Viewer in others. The user’s final access reflects the highest permission among their assigned policies and the union (OR) of scopes for all data types except metrics.
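The merging rules can be sketched as follows. This is an assumption-laden model for illustration only; in particular, it does not model the metrics exception, since how groundcover selects the single policy used for metrics is not specified here:

```python
PERMISSION_RANK = {"Viewer": 0, "Editor": 1, "Admin": 2}

def effective_permission(levels: list) -> str:
    # The user's final permission is the highest among all assigned policies.
    return max(levels, key=PERMISSION_RANK.__getitem__)

def scope_allows(scopes: list, item: dict) -> bool:
    # Non-metrics data: scopes merge via OR -- access is granted
    # if any single policy's scope matches the item.
    return any(
        all(item.get(key) == value for key, value in scope.items())
        for scope in scopes
    )

# Editor + Viewer -> Editor overall:
merged = effective_permission(["Editor", "Viewer"])

# Policy A: "Cluster = A", Policy B: "Environment = B"
merged_scopes = [{"cluster": "A"}, {"environment": "B"}]
```

An item in cluster A or environment B passes either scope, which reproduces the "Cluster = A OR Environment = B" union described above.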


In summary:

  • Policies define both permission (Admin, Editor, or Viewer) and data scope (clusters, environments, namespaces).

  • Default Policies (Admin, Editor, Viewer) provide no data restrictions, suitable for quick onboarding.

  • Custom Policies allow more granular restrictions, specifying exactly which entities a user can see or modify.

  • Multiple Policies can co-exist, merging permission levels and data scopes (with a special rule for metrics).

This flexible system gives you robust control over observability data in groundcover, ensuring each user has precisely the access they need.
