Managing Dashboards with Terraform
Create & manage dashboards with Terraform
Use Terraform to create, update, delete, and list groundcover dashboards as code. Managing dashboards with infrastructure‑as‑code (IaC) lets you version changes, review them in pull requests, promote the same definitions across environments, and detect drift between what’s applied and what’s running in your account.
Prerequisites
A groundcover account with permissions to create/edit Dashboards
A Terraform environment (groundcover provider > v1.1.1)
The groundcover Terraform provider configured with your API credentials
See also: groundcover Terraform provider reference for provider configuration and authentication details.
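A minimal provider setup might look like the sketch below. The source address and the api_key argument name are assumptions here, so copy the exact values from the groundcover Terraform provider reference:
terraform {
  required_providers {
    groundcover = {
      # Source address is illustrative; use the one from the provider reference.
      source  = "groundcover-com/groundcover"
      version = ">= 1.1.1"
    }
  }
}

variable "groundcover_api_key" {
  type      = string
  sensitive = true
}

provider "groundcover" {
  # Argument name is an assumption; authenticate as described in the provider reference.
  api_key = var.groundcover_api_key
}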
1) Creating a Dashboard via Terraform
1.1) Create a dashboard directly from the UI
To create a dashboard using Terraform, you first need to create it manually in the UI so that you can export it in Terraform format.
See Creating dashboards to learn more.
1.2) Export the dashboard in Terraform format
You can export a Dashboard as a Terraform resource:
Open the Dashboard.
Click Actions → Export.
Download or copy the Terraform tab’s content and paste it into your .tf file (see the example below).

1.3) Add the dashboard resource to your Terraform configuration
resource "groundcover_dashboard" "llm_observability" {
name = "LLM Observability"
description = "Dashboard to monitor OpenAI and Anthropic usage"
preset = "{\"widgets\":[{\"id\":\"B\",\"type\":\"widget\",\"name\":\"Total LLM Calls\",\"queries\":[{\"id\":\"A\",\"expr\":\"span_type:openai span_type:anthropic | stats by(span_type) count() count_all_result | sort by (count_all_result desc) | limit 5\",\"dataType\":\"traces\",\"editorMode\":\"builder\"}],\"visualizationConfig\":{\"type\":\"stat\"}},{\"id\":\"D\",\"type\":\"widget\",\"name\":\"LLM Calls Rate\",\"queries\":[{\"id\":\"A\",\"expr\":\"sum(rate(groundcover_resource_total_counter{type=~\\\"openai|anthropic\\\",status_code=\\\"ok\\\"})) by (gen_ai_request_model)\",\"dataType\":\"metrics\",\"editorMode\":\"builder\"}],\"visualizationConfig\":{\"type\":\"time-series\",\"selectedChartType\":\"stackedBar\"}},{\"id\":\"E\",\"type\":\"widget\",\"name\":\"Average LLM Response Time\",\"queries\":[{\"id\":\"A\",\"expr\":\"avg(groundcover_resource_latency_seconds{type=~\\\"openai|anthropic\\\"}) by (type)\",\"dataType\":\"metrics\",\"step\":\"disabled\",\"editorMode\":\"builder\"}],\"visualizationConfig\":{\"type\":\"stat\",\"step\":\"disabled\",\"selectedUnit\":\"Seconds\"}},{\"id\":\"A\",\"type\":\"widget\",\"name\":\"Total LLM Tokens Used\",\"queries\":[{\"id\":\"A\",\"expr\":\"span_type:openai span_type:anthropic | stats by(span_type) sum(gen_ai.response.usage.total_tokens) sum_result | sort by (sum_result desc) | limit 5\",\"dataType\":\"traces\",\"editorMode\":\"builder\"}],\"visualizationConfig\":{\"type\":\"stat\",\"step\":\"disabled\"}},{\"id\":\"C\",\"type\":\"widget\",\"name\":\"AVG Input Tokens Per LLM Call \",\"queries\":[{\"id\":\"A\",\"expr\":\"span_type:openai OR span_type:anthropic | stats by(span_type) avg(gen_ai.response.usage.input_tokens) avg_result | sort by (avg_result desc) | limit 5\",\"dataType\":\"traces\",\"editorMode\":\"builder\"}],\"visualizationConfig\":{\"type\":\"stat\"}},{\"id\":\"F\",\"type\":\"widget\",\"name\":\"AVG Output Tokens Per LLM Call \",\"queries\":[{\"id\":\"A\",\"expr\":\"span_type:openai OR span_type:anthropic | stats by(span_type) avg(gen_ai.response.usage.output_tokens) avg_result | sort by (avg_result desc) | limit 5\",\"dataType\":\"traces\",\"editorMode\":\"builder\"}],\"visualizationConfig\":{\"type\":\"stat\",\"step\":\"disabled\"}},{\"id\":\"G\",\"type\":\"widget\",\"name\":\"Top Used Models\",\"queries\":[{\"id\":\"A\",\"expr\":\"span_type:openai OR span_type:anthropic | stats by(gen_ai.request.model) count() count_all_result | sort by (count_all_result desc) | limit 100\",\"dataType\":\"traces\",\"editorMode\":\"builder\"}],\"visualizationConfig\":{\"type\":\"bar\",\"step\":\"disabled\"}},{\"id\":\"H\",\"type\":\"widget\",\"name\":\"Total LLM Errors \",\"queries\":[{\"id\":\"A\",\"expr\":\"(span_type:openai OR span_type:anthropic) status:error | stats by(span_type) count() count_all_result | sort by (count_all_result desc) | limit 1\",\"dataType\":\"traces\",\"editorMode\":\"builder\"}],\"visualizationConfig\":{\"type\":\"stat\"}},{\"id\":\"I\",\"type\":\"widget\",\"name\":\"AVG TTFT Over Time by Model\",\"queries\":[{\"id\":\"A\",\"expr\":\"avg(groundcover_workload_latency_seconds{gen_ai_system=~\\\"openai|anthropic\\\",quantile=\\\"0.50\\\"}) by (gen_ai_request_model)\",\"dataType\":\"metrics\",\"editorMode\":\"builder\"}],\"visualizationConfig\":{\"type\":\"time-series\",\"selectedChartType\":\"line\",\"selectedUnit\":\"Seconds\"}},{\"id\":\"J\",\"type\":\"widget\",\"name\":\"Avg Output Tokens Per Second by Model\",\"queries\":[{\"id\":\"A\",\"expr\":\"avg(groundcover_gen_ai_response_usage_output_tokens{}) by 
(gen_ai_request_model)\",\"dataType\":\"metrics\",\"editorMode\":\"builder\"},{\"id\":\"B\",\"expr\":\"avg(groundcover_workload_latency_seconds{quantile=\\\"0.50\\\"}) by (gen_ai_request_model)\",\"dataType\":\"metrics\",\"editorMode\":\"builder\"},{\"id\":\"formula-A\",\"expr\":\"A / B\",\"dataType\":\"metrics-formula\",\"editorMode\":\"builder\"}],\"visualizationConfig\":{\"type\":\"time-series\",\"selectedUnit\":\"Number\"}}],\"layout\":[{\"id\":\"B\",\"x\":0,\"y\":0,\"w\":4,\"h\":6,\"minH\":4},{\"id\":\"D\",\"x\":0,\"y\":30,\"w\":24,\"h\":6,\"minH\":4},{\"id\":\"E\",\"x\":8,\"y\":0,\"w\":8,\"h\":6,\"minH\":4},{\"id\":\"A\",\"x\":16,\"y\":0,\"w\":8,\"h\":6,\"minH\":4},{\"id\":\"C\",\"x\":0,\"y\":12,\"w\":8,\"h\":6,\"minH\":4},{\"id\":\"F\",\"x\":8,\"y\":24,\"w\":8,\"h\":6,\"minH\":4},{\"id\":\"G\",\"x\":16,\"y\":24,\"w\":8,\"h\":6,\"minH\":4},{\"id\":\"H\",\"x\":4,\"y\":0,\"w\":4,\"h\":6,\"minH\":4},{\"id\":\"I\",\"x\":0,\"y\":18,\"w\":24,\"h\":6,\"minH\":4},{\"id\":\"J\",\"x\":0,\"y\":3,\"w\":24,\"h\":6,\"minH\":4}],\"duration\":\"Last 15 minutes\",\"variables\":{},\"spec\":{\"layoutType\":\"ordered\"},\"schemaVersion\":4}"
}
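The preset argument is a single JSON string describing the widgets and layout. If the inline escaping becomes hard to review, one option (a sketch, assuming the provider accepts the same JSON either way) is to keep the unescaped JSON in a separate file and load it with Terraform's built-in file() function:
resource "groundcover_dashboard" "llm_observability" {
  name        = "LLM Observability"
  description = "Dashboard to monitor OpenAI and Anthropic usage"
  # dashboards/llm_observability.json holds the dashboard's preset JSON (unescaped).
  preset      = file("${path.module}/dashboards/llm_observability.json")
}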
After saving this file as main.tf alongside your provider configuration, run terraform init (if you haven't already), then:
terraform plan
terraform apply
2) Managing existing provisioned Dashboards
2.1) "Provisioned" badge for IaC‑managed Dashboards
Dashboards added via Terraform are marked as Provisioned in the UI so you can quickly distinguish IaC‑managed Dashboards from manually created ones, both from the Dashboard List and inside the Dashboard itself.

2.2) Edit behavior for Provisioned Dashboards
Provisioned Dashboards are read‑only by default to protect the source of truth in your Terraform code.
To make a quick change, click Unlock dashboard. This enables editing directly in the UI, and all changes are saved automatically, as always.

Important: Any changes can be overwritten the next time your provisioner runs terraform apply.
Safer alternative: Duplicate the Dashboard and edit the copy, then migrate those changes back into code.
2.3) Editing dashboards via Terraform
Changing the resource and reapplying Terraform will update the Dashboard in groundcover (see the sketch below).
Deleting the resource from your code (and applying) will delete it from groundcover.
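For example, a minimal sketch of an update, reusing the llm_observability resource from step 1.3: change an argument, run terraform apply, and Terraform updates the existing Dashboard; the plan output shows exactly what will change.
resource "groundcover_dashboard" "llm_observability" {
  name        = "LLM Observability"
  # Edited description; reapplying updates the existing dashboard rather than creating a new one.
  description = "Dashboard to monitor OpenAI and Anthropic usage across environments"
  preset      = "{...}" # unchanged preset JSON from the export
}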
See more examples on our GitHub repo.
3) Importing existing Dashboards into Terraform
Already have a Dashboard in groundcover? Bring it under Terraform management without recreating it:
# Syntax
terraform import groundcover_dashboard.<local_name> <dashboard_id>
# Example
terraform import groundcover_dashboard.service_overview dsh_1234567890
After importing, run terraform plan to compare your configuration with the imported state and align the two.
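Note that terraform import requires a matching resource "groundcover_dashboard" block to already exist in your configuration. On Terraform 1.5 or later, an alternative sketch is a config-driven import block (the dashboard ID below is the illustrative one from the example above):
import {
  to = groundcover_dashboard.service_overview
  id = "dsh_1234567890"
}
Running terraform plan -generate-config-out=generated.tf with this block writes a starting resource definition you can review and move into your main configuration.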
Reference
Creating dashboards – how to build widgets and layouts in the UI
groundcover Terraform provider GitHub repo – resource schema, arguments, and examples