# Query Metrics

### Endpoints

#### Instant Query

**GET** `/api/prometheus/api/v1/query`

Execute an instant PromQL query to get metric values at a single point in time.

#### Range Query

**POST** `/api/metrics/query-range`

Execute a PromQL query over a time range to get time-series data.

### Authentication

Both endpoints require API key authentication via the `Authorization: Bearer` header.

### Instant Query Endpoint

#### Request

**GET** `/api/prometheus/api/v1/query`

**Headers**

```bash
Authorization: Bearer <YOUR_API_KEY>
Accept: application/json
```

**Query Parameters**

| Parameter | Type   | Required | Description                                                                            |
| --------- | ------ | -------- | -------------------------------------------------------------------------------------- |
| `query`   | string | **Yes**  | PromQL query string (URL encoded)                                                      |
| `time`    | string | No       | **Single point in time** to evaluate the query (RFC3339 format). Default: current time |

**Understanding the Time Parameter**

The `time` parameter specifies **exactly one timestamp** at which to evaluate your PromQL expression. This is NOT a time range:

* **With `time`**: "What was the disk usage at 2025-10-21T09:21:44.398Z?"
* **Without `time`**: "What is the disk usage right now?"

**Important**: This is different from range queries which return time-series data over a period.

**Instant vs Range Queries - Key Differences**

| Aspect             | Instant Query                    | Range Query                               |
| ------------------ | -------------------------------- | ----------------------------------------- |
| **Purpose**        | Get value at one specific moment | Get time-series data over a period        |
| **Time Parameter** | Single timestamp (`time`)        | Start and end timestamps (`start`, `end`) |
| **Result**         | One data point                   | Multiple data points over time            |
| **Use Case**       | "What is the current CPU usage?" | "Show me CPU usage over the last hour"    |
| **Response**       | Single value with timestamp      | Array of timestamp-value pairs            |

**Example Comparison:**

* **Instant**: `time=2025-10-21T09:00:00Z` → Returns the disk usage at exactly 09:00 UTC
* **Range**: `start=2025-10-21T08:00:00Z&end=2025-10-21T09:00:00Z` → Returns the disk usage trend across that one-hour window

**Practical Example:**

```bash
# Get current value (no time parameter)
# Returns: {"value":[1761040224,"18.45"]} - timestamp is "right now"
curl '...query=avg(groundcover_node_rt_disk_space_used_percent{})' 

# Get historical value (with time parameter) 
# Returns: {"value":[1761038504,"18.44"]} - timestamp is exactly what you specified
curl '...query=avg(groundcover_node_rt_disk_space_used_percent{})&time=2025-10-21T09:21:44.398Z'
```

#### Response

The endpoint returns a Prometheus-compatible response format:

```json
{
  "status": "success",
  "data": {
    "resultType": "vector",
    "result": [
      {
        "metric": {},
        "value": [1761038504.398, "18.442597642017"]
      }
    ]
  },
  "stats": {
    "seriesFetched": "12",
    "executionTimeMsec": 0
  }
}
```

**Response Fields**

| Field                     | Type   | Description                                                          |
| ------------------------- | ------ | -------------------------------------------------------------------- |
| `status`                  | string | Query execution status (`"success"` or `"error"`)                    |
| `data.resultType`         | string | Type of result data (`"vector"`, `"matrix"`, `"scalar"`, `"string"`) |
| `data.result`             | array  | Array of metric results                                              |
| `data.result[].metric`    | object | Metric labels as key-value pairs                                     |
| `data.result[].value`     | array  | `[timestamp, value]` tuple with Unix timestamp and string value      |
| `stats.seriesFetched`     | string | Number of time series processed                                      |
| `stats.executionTimeMsec` | number | Query execution time in milliseconds                                 |
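Note that metric values arrive as strings inside `[timestamp, value]` tuples. A small Python sketch of decoding this response shape (the sample body is the documented example above; the helper name is illustrative):

```python
import json

def parse_instant(resp_text: str) -> list[tuple[dict, float, float]]:
    """Extract (labels, unix_timestamp, numeric_value) tuples from an
    instant-query response. Values arrive as strings and need float()."""
    resp = json.loads(resp_text)
    if resp["status"] != "success":
        raise RuntimeError(f"query failed: {resp}")
    return [
        (series["metric"], series["value"][0], float(series["value"][1]))
        for series in resp["data"]["result"]
    ]

# Sample response body from the documentation above
sample = (
    '{"status":"success","data":{"resultType":"vector",'
    '"result":[{"metric":{},"value":[1761038504.398,"18.442597642017"]}]}}'
)
print(parse_instant(sample))
```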

### Range Query Endpoint

#### Request

**POST** `/api/metrics/query-range`

**Headers**

```bash
Authorization: Bearer <YOUR_API_KEY>
Accept: application/json
Content-Type: application/json
```

**Request Body**

```json
{
  "promql": "string",
  "start": "string",
  "end": "string", 
  "step": "string"
}
```

**Request Parameters**

| Parameter | Type   | Required | Description                                                   |
| --------- | ------ | -------- | ------------------------------------------------------------- |
| `promql`  | string | **Yes**  | PromQL query expression                                       |
| `start`   | string | **Yes**  | Range start time in RFC3339 format                            |
| `end`     | string | **Yes**  | Range end time in RFC3339 format                              |
| `step`    | string | **Yes**  | Query resolution step (e.g., `"30s"`, `"1m"`, `"5m"`, `"1h"`) |
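A common pattern is querying a trailing window (e.g., the last 24 hours). A hedged Python sketch of building the request body with correctly formatted RFC3339 timestamps; the field names match the table above, and the helper name is illustrative:

```python
import json
from datetime import datetime, timedelta, timezone

def range_query_body(promql: str, hours: float, step: str) -> str:
    """Build a query-range request body covering the last `hours` hours,
    with start/end formatted as RFC3339 with millisecond precision."""
    end = datetime.now(timezone.utc)
    start = end - timedelta(hours=hours)

    def rfc3339(dt: datetime) -> str:
        # e.g. 2025-10-21T09:19:22.475Z
        return dt.strftime("%Y-%m-%dT%H:%M:%S.") + f"{dt.microsecond // 1000:03d}Z"

    return json.dumps({
        "promql": promql,
        "start": rfc3339(start),
        "end": rfc3339(end),
        "step": step,
    })

# Last 24 hours at 30-minute resolution
print(range_query_body("avg(groundcover_node_rt_disk_space_used_percent{})", 24, "1800s"))
```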

#### Response

The range query returns a custom format optimized for time-series data:

```json
{
  "velocities": [
    {
      "velocity": [
        [1760950800, "21.534558665381155"],
        [1760952600, "21.532404350483848"],
        [1760954400, "21.57135294176692"]
      ],
      "metric": {}
    }
  ],
  "promql": "avg(groundcover_node_rt_disk_space_used_percent{})"
}
```

**Response Fields**

| Field                   | Type   | Description                               |
| ----------------------- | ------ | ----------------------------------------- |
| `velocities`            | array  | Array of time-series data objects         |
| `velocities[].velocity` | array  | Array of `[timestamp, value]` data points |
| `velocities[].metric`   | object | Metric labels as key-value pairs          |
| `promql`                | string | Echo of the executed PromQL query         |

Each data point in the `velocity` array contains:

* **Timestamp**: Unix timestamp as integer
* **Value**: Metric value as string
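Since timestamps are integers and values are strings, plotting or analysis code usually converts each pair first. A minimal Python sketch of decoding the `velocities` payload (the sample data is taken from the response example above; the helper name is illustrative):

```python
from datetime import datetime, timezone

# Sample `velocities` payload from the response example above
velocities = [
    {
        "velocity": [
            [1760950800, "21.534558665381155"],
            [1760952600, "21.532404350483848"],
        ],
        "metric": {},
    }
]

def decode_velocity(series: dict) -> list[tuple[datetime, float]]:
    """Convert [unix_ts, string_value] pairs into (datetime, float) points."""
    return [
        (datetime.fromtimestamp(ts, tz=timezone.utc), float(val))
        for ts, val in series["velocity"]
    ]

for series in velocities:
    for when, value in decode_velocity(series):
        print(when.isoformat(), value)
```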

### Examples

#### Instant Query Examples

**Current Values (No Time Parameter)**

Get current average disk space usage (evaluated at request time):

```bash
curl 'https://app.groundcover.com/api/prometheus/api/v1/query?query=avg%28groundcover_node_rt_disk_space_used_percent%7B%7D%29' \
  -H 'accept: application/json' \
  -H 'authorization: Bearer <YOUR_API_KEY>'
```

**Historical Point-in-Time Values**

Get disk usage at a specific moment in the past:

```bash
curl 'https://app.groundcover.com/api/prometheus/api/v1/query?query=avg%28groundcover_node_rt_disk_space_used_percent%7B%7D%29&time=2025-10-21T09%3A21%3A44.398Z' \
  -H 'accept: application/json' \
  -H 'authorization: Bearer <YOUR_API_KEY>'
```

**Note**: This returns the disk usage value at exactly `2025-10-21T09:21:44.398Z`, not a range from that time until now.

#### Range Query Examples

**24-Hour Disk Usage Trend**

Get disk usage over the last 24 hours with 30-minute resolution:

```bash
curl 'https://app.groundcover.com/api/metrics/query-range' \
  -H 'accept: application/json' \
  -H 'authorization: Bearer <YOUR_API_KEY>' \
  -H 'content-type: application/json' \
  --data-raw '{
    "promql": "avg(groundcover_node_rt_disk_space_used_percent{})",
    "start": "2025-10-20T09:19:22.475Z",
    "end": "2025-10-21T09:19:22.475Z",
    "step": "1800s"
  }'
```

**High-Resolution CPU Monitoring**

Monitor CPU usage over 1 hour with 1-minute resolution:

```bash
curl 'https://app.groundcover.com/api/metrics/query-range' \
  -H 'accept: application/json' \
  -H 'authorization: Bearer <YOUR_API_KEY>' \
  -H 'content-type: application/json' \
  --data-raw '{
    "promql": "avg(groundcover_cpu_usage_percent) by (cluster)",
    "start": "2025-10-21T08:00:00.000Z",
    "end": "2025-10-21T09:00:00.000Z", 
    "step": "1m"
  }'
```

### Best Practices

#### Query Optimization

1. **Use appropriate time ranges**: Avoid querying excessively large time ranges
2. **Choose optimal step sizes**: Balance resolution with performance
3. **Filter early**: Use label filters to reduce data volume
4. **Aggregate wisely**: Use grouping to reduce cardinality

#### Time Handling

1. **Use RFC3339 format**: Always format timestamps as RFC3339 (e.g., `2025-10-21T09:00:00.000Z`)
2. **Account for timezones**: Timestamps are in UTC
3. **Align step boundaries**: Choose steps that align with data collection intervals
4. **Handle clock skew**: Allow for small time differences in distributed systems
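Aligning query boundaries to the step (point 3 above) keeps returned data points on predictable timestamps across repeated queries. A minimal sketch of the rounding, assuming you work with Unix timestamps:

```python
def align_to_step(unix_ts: int, step_seconds: int) -> int:
    """Round a Unix timestamp down to the nearest step boundary."""
    return unix_ts - (unix_ts % step_seconds)

# A timestamp aligned down to a 30-minute (1800 s) boundary:
print(align_to_step(1760950999, 1800))  # → 1760950800
```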

#### Rate Limits and Resource Usage

* **Concurrent queries**: Limit concurrent requests to avoid overwhelming the API
* **Query complexity**: Complex queries may take longer and consume more resources
* **Data retention**: Historical data availability depends on your retention policy
