Ingest Logs Stored on S3
Many AWS services allow saving logs or other data in S3 buckets for long-term storage. It is often useful to read that data into groundcover, and this page describes how.
groundcover uses a lambda function called groundcover-aws-ingester
that relies on the AWS trigger mechanism to run the lambda whenever a new file is put inside a bucket. This is a common way to use lambdas for these types of tasks. You can read more about it here.
List of supported AWS services
The following services are supported for ingestion. Note the Displayed Source
column, whose values can be used in the platform under the source
filter.
It's also possible to ingest arbitrary log data from S3 buckets, as long as the log entries are separated by newlines. Data ingested this way appears under the aws-s3
source.
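For arbitrary log data, the expectation is simply one log entry per line. A minimal sketch of what such an S3 object might contain (the file name and log lines here are hypothetical):

```shell
# A hypothetical newline-delimited log object: each line becomes
# a separate log record once ingested.
printf '%s\n' \
  '2024-05-01T12:00:00Z service=checkout msg="order created"' \
  '2024-05-01T12:00:01Z service=checkout msg="payment captured"' \
  > sample.log

wc -l < sample.log  # two lines, so two log entries
```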
Service           Displayed Source
AWS ELB           aws-elb
AWS CloudTrail    aws-cloudtrail
AWS S3            aws-s3
Installation
groundcover provides a CloudFormation
stack to deploy the lambda function. This is the easiest and recommended deployment method, and it also takes care of granting granular permissions on the required resources, such as access to the buckets and the secret (if configured).
The lambda needs to be deployed in the target account and region where the S3 buckets reside. If you have multiple accounts or regions, you will need to set up the lambda in each of them.
We support multiple ways of deploying the stack:
Required configuration
When setting up the stack you will need to provide the following values:
Endpoint details
GCAPIKey - the Ingestion key of type 3rd Party used to ingest data into groundcover. Can be generated from Settings -> Access -> Integration Keys.
GCSite - your inCloud Managed site, provided to you during installation. Example: example.platform.grcv.io. Note that it is the hostname only, without https or any port or endpoint.
Target buckets
BucketARNs - a comma-separated list of ARNs of the buckets you wish to consume logs from. Example:
arn:aws:s3:::my_awesome_bucket,arn:aws:s3:::my_other_awesome_bucket
Note that specifying the BucketARNs does not complete the process; you will also need to add triggers on the buckets, as described below.
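If you have many buckets, building the comma-separated ARN list by hand is error-prone. A small shell sketch that constructs the BucketARNs value from a list of bucket names (the names here are hypothetical):

```shell
# Build the comma-separated BucketARNs parameter from bucket names.
buckets="my_awesome_bucket my_other_awesome_bucket"

arns=""
for b in $buckets; do
  # Prepend a comma only when $arns is already non-empty.
  arns="${arns:+$arns,}arn:aws:s3:::$b"
done

echo "$arns"
# -> arn:aws:s3:::my_awesome_bucket,arn:aws:s3:::my_other_awesome_bucket
```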
Additional attributes
GCEnvName (optional) - if provided, collected logs are tagged with this value as env, making them filterable in the platform accordingly.
LambdaTimeout (optional) - the amount of time (in seconds) that Lambda allows the function to run before stopping it. The default is 30 seconds; the maximum allowed value is 900 seconds.
LambdaMemorySize (optional) - the amount of memory available to the function at runtime. Increasing the function memory also increases its CPU allocation. The default is 128 MB. The value can be any multiple of 1 MB up to 10240 MB (e.g., 128, 256, 512...).
Installing using the AWS Console
Make sure to choose the correct region after opening the CloudFormation template.
You will need to install the CloudFormation
stack on each account and region where you want to ingest logs from.
To install using the AWS Console:
1. Click on this link to jump directly to the installation.
2. Fill in the configuration parameters detailed above.
3. Continue clicking Next to create the stack.
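Alternatively, the same stack can be created from the AWS CLI with aws cloudformation create-stack --stack-name groundcover-aws-ingester --template-url https://groundcover-public-cloudformation-templates.s3.us-east-1.amazonaws.com/CF_groundcover_aws_ingester.yaml --capabilities CAPABILITY_IAM CAPABILITY_NAMED_IAM CAPABILITY_AUTO_EXPAND --parameters file://params.json. A sketch of what params.json might look like (all values are placeholders):

```json
[
  { "ParameterKey": "GCAPIKey",   "ParameterValue": "your-ingestion-key" },
  { "ParameterKey": "GCSite",     "ParameterValue": "example.platform.grcv.io" },
  { "ParameterKey": "BucketARNs", "ParameterValue": "arn:aws:s3:::my_awesome_bucket" }
]
```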
Installing using Terraform
(Optional) Creating a secret
Instead of passing GCSite and GCAPIKey directly as stack parameters, you can store them in an AWS Secrets Manager secret and pass its ARN as GCSecretARN:
variable "groundcover_api_key" {
  type        = string
  description = "groundcover Ingestion key"
}

variable "groundcover_site" {
  type        = string
  description = "groundcover site"
}

locals {
  groundcover_endpoint_configuration = {
    apikey = var.groundcover_api_key
    site   = var.groundcover_site
  }
}

resource "aws_secretsmanager_secret" "groundcover_endpoint" {
  name        = "groundcover_endpoint"
  description = "groundcover endpoint configuration"
}

resource "aws_secretsmanager_secret_version" "groundcover_endpoint" {
  secret_id     = aws_secretsmanager_secret.groundcover_endpoint.id
  secret_string = jsonencode(local.groundcover_endpoint_configuration)
}

output "groundcover_endpoint_secret_arn" {
  value = aws_secretsmanager_secret.groundcover_endpoint.arn
}
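The lambda expects the secret value to be a JSON object with apikey and site keys, which is exactly what the jsonencode call above produces. With placeholder values, the stored secret_string looks like:

```json
{
  "apikey": "your-ingestion-key",
  "site": "example.platform.grcv.io"
}
```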
Creating the CloudFormation stack
groundcover uses the Terraform resource aws_cloudformation_stack to deploy the CloudFormation stack directly from Terraform.
locals {
  groundcover_aws_ingester_buckets = ["my_awesome_bucket", "my_other_awesome_bucket"]
  stack_name                       = "groundcover-aws-ingester"
}

resource "aws_cloudformation_stack" "groundcover_aws_ingester" {
  name         = local.stack_name
  capabilities = ["CAPABILITY_IAM", "CAPABILITY_NAMED_IAM", "CAPABILITY_AUTO_EXPAND"]

  parameters = {
    GCSecretARN = "arn:aws:secretsmanager:::DEFAULT" # optional, required if not passing GCSite and GCAPIKey
    GCSite      = "example.platform.grcv.io"         # optional, required if not passing GCSecretARN
    GCAPIKey    = "Ingestion-key"                    # optional, required if not passing GCSecretARN
    BucketARNs  = join(",", formatlist("arn:aws:s3:::%s", local.groundcover_aws_ingester_buckets))

    GCEnvName        = ""  # optional
    LambdaTimeout    = 30  # optional
    LambdaMemorySize = 128 # optional
  }

  template_url = "https://groundcover-public-cloudformation-templates.s3.us-east-1.amazonaws.com/CF_groundcover_aws_ingester.yaml"
}
Updating the CloudFormation stack
1. Access the existing stack created in the previous steps.
2. Click on Update Stack -> Make a direct update in the top right corner.
3. Select Replace existing template and provide this link as the Amazon S3 URL: https://groundcover-public-cloudformation-templates.s3.us-east-1.amazonaws.com/CF_groundcover_aws_ingester.yaml
4. Click Next once again to retain the existing configuration values.
5. Continue clicking Next to update the stack.

Adding triggers
After deploying the lambda function, you will need to add triggers.
groundcover supports multiple types of triggers:
SNS trigger
SNS triggers are added in two parts:
1. Adding notifications from the buckets to an SNS topic - required for each bucket specified in the configuration.
2. Adding a trigger to the Lambda from the SNS topic - required for each SNS topic.
An S3 bucket can only send events to an SNS topic in the same region.
However, the SNS topic can be in a different region from the Lambda.
groundcover supports multiple methods of setting up the SNS trigger:
Setting up SNS triggers using the AWS Console
Creating the SNS topic
If the SNS topic doesn't exist yet, create it in Amazon SNS -> Topics. Choose Standard as the topic type and give it a name.
It is important to amend the Access Policy with a statement allowing S3 to publish to the topic.
After creating the topic, go to Edit, and under Access Policy add the following to the list of statements in your policy, separated from the other statements by a comma:
{
  "Effect": "Allow",
  "Principal": {
    "Service": "s3.amazonaws.com"
  },
  "Action": [
    "SNS:Publish"
  ],
  "Resource": "*"
}
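Note that "Resource": "*" combined with the S3 service principal lets any bucket in the account publish to the topic. If you want to scope the statement down, you can use the standard aws:SourceArn condition key; a sketch (the topic and bucket ARNs are placeholders):

```json
{
  "Effect": "Allow",
  "Principal": { "Service": "s3.amazonaws.com" },
  "Action": "SNS:Publish",
  "Resource": "arn:aws:sns:us-east-1:123456789012:s3_events_topic",
  "Condition": {
    "ArnLike": { "aws:SourceArn": "arn:aws:s3:::my_awesome_bucket" }
  }
}
```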
For example, a simple policy might look like this:
{
  "Version": "2008-10-17",
  "Id": "__default_policy_ID",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "s3.amazonaws.com"
      },
      "Action": [
        "SNS:Publish"
      ],
      "Resource": "*"
    },
    {
      "Sid": "__default_statement_ID",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": [
        "SNS:Publish",
        "SNS:RemovePermission",
        "SNS:SetTopicAttributes",
        "SNS:DeleteTopic",
        "SNS:ListSubscriptionsByTopic",
        "SNS:GetTopicAttributes",
        "SNS:AddPermission",
        "SNS:Subscribe"
      ],
      "Resource": "<TOPIC ARN>",
      "Condition": {
        "StringEquals": {
          "AWS:SourceOwner": "<ACCOUNT ID>"
        }
      }
    }
  ]
}
Adding the SNS trigger to the lambda
1. Access the groundcover-aws-ingester lambda function in the UI, and browse to the Add Trigger section.
2. Select SNS as the trigger type.
3. Enter the SNS topic to trigger the lambda on.
4. Click on Add.

On each bucket, create an event notification to the SNS topic:
1. Select All object create events as the Event types.
2. Select the SNS topic created above as the destination.
3. Click on Save changes.
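The same event notification can also be created from the CLI with aws s3api put-bucket-notification-configuration --bucket my_awesome_bucket --notification-configuration file://notification.json, where notification.json might look like this (bucket name and topic ARN are placeholders):

```json
{
  "TopicConfigurations": [
    {
      "TopicArn": "arn:aws:sns:us-east-1:123456789012:s3_events_topic",
      "Events": ["s3:ObjectCreated:*"]
    }
  ]
}
```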
Setting up SNS triggers using Terraform
groundcover uses the Terraform resource aws_s3_bucket_notification as seen below:
locals {
  groundcover_aws_ingester_buckets = ["my_awesome_bucket", "my_other_awesome_bucket"]
  stack_name                       = "groundcover-aws-ingester"
}

resource "aws_sns_topic" "s3_events" {
  name   = "s3_events_topic"
  policy = <<-POLICY
  {
    "Version": "2008-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Principal": {
          "Service": "s3.amazonaws.com"
        },
        "Action": [
          "SNS:Publish"
        ],
        "Resource": "*"
      }
    ]
  }
  POLICY
}

data "aws_lambda_function" "groundcover_aws_ingester" {
  function_name = "GroundcoverAWSIngester-${local.stack_name}"
  depends_on    = [aws_cloudformation_stack.groundcover_aws_ingester]
}

resource "aws_lambda_permission" "groundcover_aws_ingester_allow_sns" {
  statement_id  = "AllowExecutionFromSNS"
  action        = "lambda:InvokeFunction"
  function_name = data.aws_lambda_function.groundcover_aws_ingester.arn
  principal     = "sns.amazonaws.com"
  source_arn    = aws_sns_topic.s3_events.arn
}

resource "aws_s3_bucket_notification" "groundcover_aws_ingester_bucket_notification" {
  for_each = toset(local.groundcover_aws_ingester_buckets)
  bucket   = each.key

  topic {
    topic_arn = aws_sns_topic.s3_events.arn
    events    = ["s3:ObjectCreated:*"]
  }

  depends_on = [aws_lambda_permission.groundcover_aws_ingester_allow_sns]
}

resource "aws_sns_topic_subscription" "groundcover_aws_ingester" {
  topic_arn = aws_sns_topic.s3_events.arn
  protocol  = "lambda"
  endpoint  = data.aws_lambda_function.groundcover_aws_ingester.arn
}
S3 trigger
You can only set triggers on buckets that reside in the same region as the lambda function. If you have buckets in multiple regions, you will need to install the lambda in each one.
After deploying the lambda function, you will need to add triggers on the S3 buckets you want to read logs from. groundcover supports multiple methods of setting up the triggers:
Setting up S3 triggers using the AWS Console
1. Access the groundcover-aws-ingester lambda function in the UI, and browse to the Add Trigger section.
2. Select S3 as the trigger type and choose the bucket.
3. Select All object create events as the Event types.
4. Click on Add to create the trigger.
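The equivalent notification can also be set from the CLI with aws s3api put-bucket-notification-configuration --bucket my_awesome_bucket --notification-configuration file://notification.json, where notification.json might look like this (bucket name and function ARN are placeholders; the function name follows the GroundcoverAWSIngester-<stack name> pattern used above):

```json
{
  "LambdaFunctionConfigurations": [
    {
      "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:GroundcoverAWSIngester-groundcover-aws-ingester",
      "Events": ["s3:ObjectCreated:*"]
    }
  ]
}
```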
Setting up S3 triggers using Terraform
groundcover uses the Terraform resource aws_s3_bucket_notification as seen below:
locals {
  groundcover_aws_ingester_buckets = ["my_awesome_bucket", "my_other_awesome_bucket"]
  stack_name                       = "groundcover-aws-ingester"
}

data "aws_lambda_function" "groundcover_aws_ingester" {
  function_name = "GroundcoverAWSIngester-${local.stack_name}"
  depends_on    = [aws_cloudformation_stack.groundcover_aws_ingester]
}

resource "aws_lambda_permission" "groundcover_aws_ingester_allow_bucket" {
  for_each      = toset(local.groundcover_aws_ingester_buckets)
  statement_id  = "AllowExecutionFromS3Bucket-${each.key}"
  action        = "lambda:InvokeFunction"
  function_name = data.aws_lambda_function.groundcover_aws_ingester.arn
  principal     = "s3.amazonaws.com"
  source_arn    = "arn:aws:s3:::${each.key}"
}

resource "aws_s3_bucket_notification" "groundcover_aws_ingester_bucket_notification" {
  for_each = toset(local.groundcover_aws_ingester_buckets)
  bucket   = each.key

  lambda_function {
    lambda_function_arn = data.aws_lambda_function.groundcover_aws_ingester.arn
    events              = ["s3:ObjectCreated:*"]
  }

  depends_on = [aws_lambda_permission.groundcover_aws_ingester_allow_bucket]
}
Viewing the logs
Access the groundcover platform logs page to view your logs. You can filter by the source of the data (see the table above) or by the env name, if set.