Ingest Data from S3 Buckets
This feature is only available on the enterprise plan as part of inCloud Managed.
Many AWS services can save logs or other data to S3 buckets for long-term storage. It is often useful to read that data into groundcover, which is what this page covers.
groundcover uses a lambda function called groundcover-aws-ingester, which relies on the AWS trigger mechanism to run whenever a new file is put into a bucket. This is a common way to use lambdas for these types of tasks. You can read more about it here.
List of supported AWS services
The following services are supported for ingestion. Note the Displayed Source column, whose values can be filtered in the platform under the source filter.
It's also possible to ingest arbitrary log data from S3 buckets, as long as the log entries are separated by newlines. Data ingested this way will appear under the aws-s3 source.
| AWS Service | Displayed Source |
| --- | --- |
| AWS ELB | aws-elb |
| AWS CloudTrail | aws-cloudtrail |
| AWS S3 | aws-s3 |
Installation
groundcover provides a CloudFormation stack to deploy the lambda function. This is the easiest and recommended deployment method, and it also takes care of adding granular permissions on the required resources, such as access to the buckets and to the secret (if configured).
The lambda needs to be deployed inside the target account and region where the S3 buckets reside. If you have multiple accounts or regions, you will need to set up the lambda in each one of them.
We support multiple ways of deploying the stack:
Required configuration
When setting up the stack you will need to provide the following values:
Endpoint details
Choose one of the options below to configure the groundcover endpoint details. Using environment variables is simpler and requires no other configuration, while using a secret requires creating the secret beforehand.
GCAPIKey - the API key used to ingest data into groundcover. Can be retrieved using:
groundcover auth print-api-key
GCSite - your inCloud Managed site, provided to you during installation. Example:
example.platform.grcv.io
Target buckets
Specifying the BucketARNs does not complete the process; you will also need to add triggers on them, as described below.
BucketARNs - a comma-separated list of ARNs of the buckets you wish to consume logs from. Example:
arn:aws:s3:::my_awesome_bucket,arn:aws:s3:::my_other_awesome_bucket
Additional attributes
GCEnvName (optional) - if provided, collected logs will be tagged with this value as env, making them filterable in the platform accordingly.
Installing using the AWS Console
Make sure to choose the correct region after opening the CloudFormation template.
You will need to install the CloudFormation stack in each account and region you want to ingest logs from.
There are two ways to install using CloudFormation:
1. Click on this link to jump directly into the installation.
2. Fill in the configuration parameters detailed above.
3. Continue clicking Next to create the stack.
Installing using Terraform
(Optional) Creating a secret
This step is only needed if you prefer to use AWS Secrets Manager to store the groundcover endpoint details. If you wish to pass them as environment variables, skip to the next step.
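As a reference, the following is a minimal Terraform sketch of such a secret. The secret name and the JSON key names are illustrative assumptions; use whatever structure your groundcover installation expects, and wire the secret to the stack according to the template's parameters.

```hcl
# Minimal sketch: store the groundcover endpoint details in AWS Secrets Manager.
# The secret name and JSON key names are illustrative assumptions.
variable "gc_api_key" {
  type      = string
  sensitive = true # output of `groundcover auth print-api-key`
}

resource "aws_secretsmanager_secret" "groundcover_ingester" {
  name = "groundcover-aws-ingester" # hypothetical secret name
}

resource "aws_secretsmanager_secret_version" "groundcover_ingester" {
  secret_id = aws_secretsmanager_secret.groundcover_ingester.id
  secret_string = jsonencode({
    apikey = var.gc_api_key             # GCAPIKey value
    site   = "example.platform.grcv.io" # GCSite value
  })
}
```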
Creating the CloudFormation stack
See the section above for the relevant configuration parameters.
groundcover uses the Terraform resource aws_cloudformation_stack to deploy the CloudFormation stack directly inside Terraform.
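A minimal sketch of such a stack resource is shown below, reusing the gc_api_key variable from the secret sketch above (declare it if you skipped that step). The stack name, GCEnvName value, and IAM capability are assumptions; the template URL and parameter names match the ones referenced in this guide.

```hcl
# Minimal sketch: deploy the groundcover ingester CloudFormation stack via Terraform.
resource "aws_cloudformation_stack" "groundcover_aws_ingester" {
  name         = "groundcover-aws-ingester"   # hypothetical stack name
  capabilities = ["CAPABILITY_NAMED_IAM"]     # assumed, since the template creates IAM resources
  template_url = "https://groundcover-public-cloudformation-templates.s3.us-east-1.amazonaws.com/CF_groundcover_aws_ingester.yaml"

  parameters = {
    GCAPIKey   = var.gc_api_key               # from `groundcover auth print-api-key`
    GCSite     = "example.platform.grcv.io"   # your inCloud Managed site
    BucketARNs = "arn:aws:s3:::my_awesome_bucket,arn:aws:s3:::my_other_awesome_bucket"
    GCEnvName  = "production"                 # optional
  }
}
```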
Updating the CloudFormation stack
This step is not needed on initial deployment; it's only relevant when a new version of the lambda function is released and you wish to upgrade to it. If this is your first time deploying the function, move on to Adding triggers.
1. Access the existing stack created in the previous steps.
2. Click on Update in the top right corner.
3. Select Replace existing template and provide this link as the Amazon S3 URL: https://groundcover-public-cloudformation-templates.s3.us-east-1.amazonaws.com/CF_groundcover_aws_ingester.yaml
4. Click Next once again to retain the existing configuration values.
5. Continue clicking Next to update the stack.
Adding triggers
You can only set triggers on buckets that reside in the same region as the lambda function. If you have buckets in multiple regions, you will need to install the lambda in each one.
You will need to add a trigger on each bucket specified in the configuration.
After deploying the lambda function, add triggers on the S3 buckets you want to read logs from. groundcover supports multiple methods of setting up the triggers:
Setting up triggers using the AWS Console
1. Access the groundcover-aws-ingester lambda function in the UI, and browse to the Add Trigger section.
2. Make sure to keep the default value of Event types - All object create events.
3. Click on Add to create the trigger.
Setting up triggers using Terraform
groundcover uses the Terraform resource aws_s3_bucket_notification as seen below:
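A minimal sketch of such a configuration follows. The data-source lookup of the groundcover-aws-ingester function and the bucket name are illustrative assumptions; adapt them to the resources created by your stack.

```hcl
# Minimal sketch: allow the bucket to invoke the ingester lambda and
# notify it on all object-create events.
data "aws_lambda_function" "groundcover_aws_ingester" {
  function_name = "groundcover-aws-ingester" # assumed function name
}

resource "aws_lambda_permission" "allow_bucket" {
  statement_id  = "AllowExecutionFromS3Bucket"
  action        = "lambda:InvokeFunction"
  function_name = data.aws_lambda_function.groundcover_aws_ingester.function_name
  principal     = "s3.amazonaws.com"
  source_arn    = "arn:aws:s3:::my_awesome_bucket"
}

resource "aws_s3_bucket_notification" "groundcover_ingest" {
  bucket = "my_awesome_bucket"

  lambda_function {
    lambda_function_arn = data.aws_lambda_function.groundcover_aws_ingester.arn
    events              = ["s3:ObjectCreated:*"] # "All object create events"
  }

  depends_on = [aws_lambda_permission.allow_bucket]
}
```

If the CloudFormation stack already grants S3 permission to invoke the function, the aws_lambda_permission resource above may be redundant.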
Viewing the logs
Access the logs page in the groundcover platform to view your logs. You can filter by the source of the data (see the table above) or by the env name, if set.