
Ingest Logs Stored on S3


This feature is only available on the enterprise plan as part of inCloud Managed.

Many AWS services allow saving logs or other data in S3 buckets for long-term storage. It is often useful to read that data into groundcover, which is what this page covers.

groundcover provides a lambda function called groundcover-aws-ingester that uses the AWS trigger mechanism to run whenever a new file is put into a bucket. This is a common way to utilize lambdas for these types of tasks; you can read more about S3 event triggers in the AWS Lambda documentation.

List of supported AWS services

The following services are supported for ingestion. Note the Displayed Source column; its values appear in the platform under the source filter and can be used for filtering.

It's possible to ingest arbitrary log data from S3 buckets, assuming the logs are separated by newlines. Data ingested this way will appear under the aws-s3 source.
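For example, a hypothetical object with the following newline-separated content would be ingested as three separate log entries under the aws-s3 source:

2025-01-01T12:00:00Z service=checkout level=info msg="order created"
2025-01-01T12:00:01Z service=checkout level=warn msg="retrying payment"
2025-01-01T12:00:02Z service=checkout level=error msg="payment failed"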

Service Name      Displayed Source
AWS ELB           aws-elb
AWS CloudTrail    aws-cloudtrail
AWS S3            aws-s3

Installation

groundcover provides a CloudFormation stack to deploy the lambda function. This is the easiest and recommended deployment method. It also takes care of adding granular permissions on the required resources, such as access to the buckets and the secret (if configured).

The lambda needs to be deployed inside the target account and region where the S3 buckets reside. If you have multiple accounts or regions, you will need to set up the lambda in each one.

We support multiple ways of deploying the stack: using the AWS Console or using Terraform, both described below.

Required configuration

When setting up the stack you will need to provide the following values:

Endpoint details

Choose one of the options below to configure the groundcover endpoint details. Using environment parameters is simpler and requires no other configuration, while using a secret requires creating the secret beforehand.

  • GCAPIKey - the API key used to ingest data into groundcover. Can be retrieved using: groundcover auth print-api-key

  • GCSite - your inCloud Managed site, provided to you during installation. Example: example.platform.grcv.io

  • GCSecretARN - if provided, this secret will be read in order to obtain the GCAPIKey and GCSite values. The secret is expected to contain JSON in the following format:

{
    "site": "example.platform.grcv.io", 
    "apikey": "MY-API-KEY"
}

Target buckets

  • BucketARNs - a comma-separated list of bucket ARNs that you wish to consume logs from. Example: arn:aws:s3:::my_awesome_bucket,arn:aws:s3:::my_other_awesome_bucket. Note that specifying the BucketARNs does not complete the process; you will also need to add triggers on the buckets, as described under Adding triggers below.

Additional attributes

  • GCEnvName (optional) - if provided, collected logs will be tagged with this value as env, making them filterable in the platform accordingly.

  • LambdaTimeout (optional) - The amount of time (in seconds) that Lambda allows a function to run before stopping it. The default is 30 seconds. The maximum allowed value is 900 seconds.

  • LambdaMemorySize (optional) - The amount of memory available to the function at runtime. Increasing the function memory also increases its CPU allocation. The default value is 128 MB. The value can be any multiple of 1 MB up to 10,240 MB (e.g. 128, 256, 512).

Installing using the AWS Console

Make sure to choose the correct region after opening the CloudFormation template.

You will need to install the CloudFormation stack on each account and region where you want to ingest logs from.

There are two ways to install using CloudFormation:

Using the quick-create link:

  1. Click on this link to jump directly into the installation

  2. Fill in the configuration parameters detailed above

  3. Continue clicking Next to create the stack

Creating the stack manually:

  1. Click on Create stack -> With New Resources

  2. In Specify Template, choose Amazon S3 URL and paste this link: https://groundcover-public-cloudformation-templates.s3.us-east-1.amazonaws.com/CF_groundcover_aws_ingester.yaml

  3. Fill in the configuration parameters detailed above

  4. Continue clicking Next to create the stack

Installing using Terraform

(Optional) Creating a secret

This step is only needed if you prefer to use AWS Secrets Manager to store the groundcover endpoint details. If you wish to pass them directly through the GCSite and GCAPIKey parameters, skip to the next step.

variable "groundcover_api_key" {
  type        = string
  description = "groundcover API key"
}

variable "groundcover_site" {
  type        = string
  description = "groundcover site"
}

locals {
  groundcover_endpoint_configuration = {
    apikey = var.groundcover_api_key
    site   = var.groundcover_site
  }
}

resource "aws_secretsmanager_secret" "groundcover_endpoint" {
  name        = "groundcover_endpoint"
  description = "groundcover endpoint configuration"
}

resource "aws_secretsmanager_secret_version" "groundcover_endpoint" {
  secret_id     = aws_secretsmanager_secret.groundcover_endpoint.id
  secret_string = jsonencode(local.groundcover_endpoint_configuration)
}

output "groundcover_endpoint_secret_arn" {
  value = aws_secretsmanager_secret.groundcover_endpoint.arn
}

Creating the CloudFormation stack

groundcover uses the aws_cloudformation_stack Terraform resource to deploy the CloudFormation stack directly from Terraform. See the Required configuration section above for the relevant parameters.

locals {
  groundcover_aws_ingester_buckets = ["my_awesome_bucket", "my_other_awesome_bucket"]
  stack_name                       = "groundcover-aws-ingester"
}

resource "aws_cloudformation_stack" "groundcover_aws_ingester" {
  name         = local.stack_name
  capabilities = ["CAPABILITY_IAM", "CAPABILITY_NAMED_IAM", "CAPABILITY_AUTO_EXPAND"]
  parameters   = {
    GCSecretARN      = "arn:aws:secretsmanager:::DEFAULT", # optional, required if not passing GCSite and GCAPIKey
    GCSite           = "example.platform.grcv.io",         # optional, required if not passing GCSecretARN
    GCAPIKey         = "api-key",                          # optional, required if not passing GCSecretARN
    BucketARNs       = join(",", formatlist("arn:aws:s3:::%s", local.groundcover_aws_ingester_buckets)),
    GCEnvName        = "",                                 # optional
    LambdaTimeout    = 30,                                 # optional
    LambdaMemorySize = 128,                                # optional
  }
  template_url = "https://groundcover-public-cloudformation-templates.s3.us-east-1.amazonaws.com/CF_groundcover_aws_ingester.yaml"
}
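If you created the endpoint secret from the previous step in the same Terraform configuration, you can reference it directly instead of passing GCSite and GCAPIKey. A minimal sketch, assuming both resources live in the same module, replacing the parameters block above:

  parameters = {
    # Read the site and API key from the secret created in the previous step
    GCSecretARN = aws_secretsmanager_secret.groundcover_endpoint.arn,
    BucketARNs  = join(",", formatlist("arn:aws:s3:::%s", local.groundcover_aws_ingester_buckets)),
  }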

Updating the CloudFormation stack

This step is not needed on initial deployment; it's only relevant when a new version of the lambda function is released and you wish to upgrade to it. If this is your first time deploying the function, move on to Adding triggers.

  1. Access the existing stack created in the previous steps.

  2. Click on Update in the top right corner

  3. Select Replace existing template and provide this link as the Amazon S3 URL: https://groundcover-public-cloudformation-templates.s3.us-east-1.amazonaws.com/CF_groundcover_aws_ingester.yaml

  4. Click Next once again to retain the existing configuration values

  5. Continue clicking Next to update the stack

Adding triggers

After deploying the lambda function, you will need to add triggers.

groundcover supports two types of triggers, SNS triggers and S3 triggers, both described below:

SNS trigger

SNS triggers are added in two parts:

  • Adding notifications from the buckets to an SNS topic

    • You will need to do this for each bucket specified in the configuration

  • Adding a trigger to the Lambda from the SNS topic

    • You will need to do this for each SNS topic

You can only configure an S3 bucket to send events to an SNS topic in the same region.

However, the SNS topic can be in a different region from the Lambda.

groundcover supports multiple methods of setting up the SNS trigger: using the AWS Console or using Terraform, as described below.

Setting up SNS triggers using the AWS Console

Adding the SNS trigger to the lambda

  1. Access the groundcover-aws-ingester lambda function in the UI, and browse to the Add Trigger section.

  2. Select SNS as the trigger type.

  3. Enter the SNS topic to trigger the lambda on.

  4. Click on Add.

Adding notifications from the buckets to the SNS topic

  1. On each bucket, create an event notification to the SNS topic.

  2. Select All object create events as the Event types.

  3. Select the SNS topic as the destination.

  4. Click on Save changes.

Setting up SNS triggers using Terraform

groundcover uses the aws_s3_bucket_notification Terraform resource, as seen below:

locals {
  groundcover_aws_ingester_buckets = ["my_awesome_bucket", "my_other_awesome_bucket"]
  stack_name                       = "groundcover-aws-ingester"
}

resource "aws_sns_topic" "s3_events" {
  name = "s3_events_topic"
  policy = <<-POLICY
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "s3.amazonaws.com"
      },
      "Action": [
        "SNS:Publish"
      ],
      "Resource": "*"
    }
  ]
}
  POLICY
}

data "aws_lambda_function" "groundcover_aws_ingester" {
  function_name = "GroundcoverAWSIngester-${local.stack_name}"

  depends_on = [aws_cloudformation_stack.groundcover_aws_ingester]
}

resource "aws_lambda_permission" "groundcover_aws_ingester_allow_sns" {
  statement_id  = "AllowExecutionFromSNS"
  action        = "lambda:InvokeFunction"
  function_name = data.aws_lambda_function.groundcover_aws_ingester.arn
  principal     = "sns.amazonaws.com"
  source_arn    = aws_sns_topic.s3_events.arn
}

resource "aws_s3_bucket_notification" "groundcover_aws_ingester_bucket_notification" {
  for_each = toset(local.groundcover_aws_ingester_buckets)
  
  bucket   = each.key

  topic {
    topic_arn     = aws_sns_topic.s3_events.arn
    events        = ["s3:ObjectCreated:*"]
  }

  depends_on = [aws_lambda_permission.groundcover_aws_ingester_allow_sns]
}

resource "aws_sns_topic_subscription" "groundcover_aws_ingester" {
  topic_arn = aws_sns_topic.s3_events.arn
  protocol  = "lambda"
  endpoint  = data.aws_lambda_function.groundcover_aws_ingester.arn
}
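Note that the topic policy above allows any S3 bucket to publish to the topic. If you want to lock it down, one common approach (an illustrative sketch, not part of the provided stack; the bucket ARN is an example) is to add a source-ARN condition, replacing the aws_sns_topic.s3_events resource above:

resource "aws_sns_topic" "s3_events" {
  name = "s3_events_topic"
  # Same policy as above, but only the listed bucket may publish
  policy = <<-POLICY
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "s3.amazonaws.com" },
      "Action": "SNS:Publish",
      "Resource": "*",
      "Condition": {
        "ArnLike": { "aws:SourceArn": "arn:aws:s3:::my_awesome_bucket" }
      }
    }
  ]
}
  POLICY
}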

S3 trigger

You can only set triggers on buckets that reside in the same region as the lambda function. If you have buckets in multiple regions, you will need to install the lambda in each one.

After deploying the lambda function, you will need to add triggers on the S3 buckets you want to read logs from; a trigger is required for each bucket specified in the configuration. groundcover supports multiple methods of setting up the triggers: using the AWS Console or using Terraform, as described below.

Setting up S3 triggers using the AWS Console

  1. Access the groundcover-aws-ingester lambda function in the UI, and browse to the Add Trigger section.

  2. Select S3 as the trigger type and choose the bucket to ingest from. Make sure to keep the default value of Event types - All object create events.

  3. Click on Add to create the trigger.

Setting up S3 triggers using Terraform

groundcover uses the aws_s3_bucket_notification Terraform resource, as seen below:

locals {
  groundcover_aws_ingester_buckets = ["my_awesome_bucket", "my_other_awesome_bucket"]
  stack_name                       = "groundcover-aws-ingester"
}

data "aws_lambda_function" "groundcover_aws_ingester" {
  function_name = "GroundcoverAWSIngester-${local.stack_name}"

  depends_on = [aws_cloudformation_stack.groundcover_aws_ingester]
}

resource "aws_lambda_permission" "groundcover_aws_ingester_allow_bucket" {
  for_each = toset(local.groundcover_aws_ingester_buckets)

  statement_id  = "AllowExecutionFromS3Bucket-${each.key}"
  action        = "lambda:InvokeFunction"
  function_name = data.aws_lambda_function.groundcover_aws_ingester.arn
  principal     = "s3.amazonaws.com"
  source_arn    = "arn:aws:s3:::${each.key}"
}

resource "aws_s3_bucket_notification" "groundcover_aws_ingester_bucket_notification" {
  for_each = toset(local.groundcover_aws_ingester_buckets)
  
  bucket   = each.key

  lambda_function {
    lambda_function_arn = data.aws_lambda_function.groundcover_aws_ingester.arn
    events              = ["s3:ObjectCreated:*"]
  }

  depends_on = [aws_lambda_permission.groundcover_aws_ingester_allow_bucket]
}
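If a bucket contains more objects than just the logs you want to ingest, the notification can be narrowed with the optional filter_prefix and filter_suffix arguments. Below is a variant of the notification resource above (S3 allows only one notification configuration per bucket, so this replaces it rather than adding a second one; the prefix and suffix values are hypothetical):

resource "aws_s3_bucket_notification" "groundcover_aws_ingester_bucket_notification" {
  bucket = "my_awesome_bucket"

  lambda_function {
    lambda_function_arn = data.aws_lambda_function.groundcover_aws_ingester.arn
    events              = ["s3:ObjectCreated:*"]
    filter_prefix       = "AWSLogs/" # only objects under this key prefix trigger the lambda
    filter_suffix       = ".gz"      # only objects with this suffix trigger the lambda
  }

  depends_on = [aws_lambda_permission.groundcover_aws_ingester_allow_bucket]
}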

Viewing the logs

Access the groundcover platform to view your logs in the logs page. You can filter based on the source of the data (see the table above) or based on the env name, if set.