Log aggregation in AWS – part 3 – enriching the data sent into ElasticSearch

This is the third and final part of the series detailing how to aggregate all your log data within AWS. See Part 1 and Part 2 for getting started and for keeping the size of your index under control.

By this point you should have a self-sufficient ElasticSearch domain running that pools logs from all the CloudWatch log groups that have been configured with the correct subscriber.

The final step is to look at how we can enrich the data being sent into the index.

By default AWS sets you up with a lambda function that extracts information from the CloudWatch event. It will contain things like the instanceId, the event timestamp, the source log group and a few more. This is handled in the lambda's transform method.
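Paraphrased from the blueprint (so exact field names may vary slightly between blueprint versions), the relevant lines look something like this:

    // One document is built per log event and decorated with the event metadata
    var source = buildSource(logEvent.message, logEvent.extractedFields);
    source['@id'] = logEvent.id;
    source['@timestamp'] = new Date(1 * logEvent.timestamp).toISOString();
    source['@message'] = logEvent.message;
    source['@owner'] = payload.owner;
    source['@log_group'] = payload.logGroup;
    source['@log_stream'] = payload.logStream;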

Note, a tip around handling numeric fields – in order for ElasticSearch to treat fields as numbers rather than strings you can multiply the value by 1, e.g.: source['@fieldName'] = 1*value;

What to enrich the data with?

This kind of depends on your use case. As we were aggregating logs from a wide range of boxes, applications and services, we wanted to enrich the data in the index with the tags applied to each instance. This sounds simple in theory but needed some planning around how to access the tags for each log entry – I'm not sure AWS would look kindly on us making 500,000 API requests in 15 minutes!

Lambda and caching

Lambda functions are a very interesting offering and I'm sure you will see many more use cases for them over the next few years. One challenge they bring is that they are stateless – in our scenario we need a way of querying an instance for its tags based on its InstanceId. Enter DynamoDB, another AWS service, which provides scalable key-value pair storage.

Amazon defines DynamoDB as: "Amazon DynamoDB is a fully managed non-relational database service that provides fast and predictable performance with seamless scalability."

Our solution

There were 2 key steps to the process:

  1. Updating dynamo to gather tag information from our running instances
  2. Updating the lambda script to pull the tags from dynamo as log entries are processed

1. Pushing instance data into Dynamo

Set up a lambda function that periodically scans all running instances in the account and pushes the respective details into Dynamo.

  1. Set up a new DynamoDB table
    1. Named: kibana-instanceLookup
    2. Region: eu-west-1 (note, adjust this as you need)
    3. Primary partition key: instanceId (of type string)
      1. Note – we will tweak the read capacity units once things are up and running – for production we average about 50
  2. Set up a new lambda function
    1. Named: LogsToElasticSearch_InstanceQueries_regionName
    2. Add environment variable: region=eu-west-1
      1. Note, if you want this to pool logs from several regions into one dynamo, set up a lambda function per region and set a different region variable for each. You can use the same trigger and role for each region
    3. Use the script shown below
    4. Set the execution timeout to be: 1 minute (note, tweak this if the function takes longer to run)
    5. Create a new role and give it the following permissions:
      1. AmazonEC2ReadOnlyAccess (assign the out-of-the-box managed policy)
      2. Plus add the Dynamo write policy shown after this list
        1. Note, the ### in the policy should be your account id
    6. Set up a trigger within CloudWatch -> Rules
      1. To run hourly, set the cron to be: 0 * * * ? *
      2. Select the target to be your new lambda(s)
        1. Note, you can always test your lambda by simply running it on demand with any test event
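The write policy only needs PutItem on the lookup table. A minimal version, assuming the table name and region used above (swap the ### for your account id):

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["dynamodb:PutItem"],
                "Resource": "arn:aws:dynamodb:eu-west-1:###:table/kibana-instanceLookup"
            }
        ]
    }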

And the respective script:

Note, if your dynamo runs in a different region to eu-west-1, update the first line of the pushInstanceToDynamo method and set the desired target region.
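A minimal sketch of that script, assuming the aws-sdk v2 bundled with Lambda's Node.js runtime – the table name and region environment variable match the setup above, and pagination and error handling are pared back for brevity:

    // Scans the account for running instances and mirrors their tags into Dynamo
    var AWS = require('aws-sdk');

    var region = process.env.region;

    exports.handler = function(event, context, callback) {
        var ec2 = new AWS.EC2({ region: region });
        var params = {
            Filters: [{ Name: 'instance-state-name', Values: ['running'] }]
        };
        // Note: pagination (NextToken) is omitted for brevity – add it if you
        // run more instances than one describeInstances page returns
        ec2.describeInstances(params, function(err, data) {
            if (err) { return callback(err); }
            data.Reservations.forEach(function(reservation) {
                reservation.Instances.forEach(function(instance) {
                    pushInstanceToDynamo(instance);
                });
            });
            callback(null, 'done');
        });
    };

    function pushInstanceToDynamo(instance) {
        var dynamo = new AWS.DynamoDB.DocumentClient({ region: 'eu-west-1' });
        var item = { instanceId: instance.InstanceId };
        // Flatten the EC2 tags onto the item so each tag becomes a plain attribute
        instance.Tags.forEach(function(tag) {
            item[tag.Key] = tag.Value;
        });
        dynamo.put({ TableName: 'kibana-instanceLookup', Item: item }, function(err) {
            if (err) { console.log('Failed to write ' + instance.InstanceId, err); }
        });
    }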

Running on demand should then fill your dynamo with data e.g.:
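Each item is keyed by instanceId and carries one attribute per tag, along these lines (the tag names and values here are purely illustrative):

    {
        "instanceId": "i-0123456789abcdef0",
        "Name": "web-server-01",
        "Environment": "production",
        "Team": "platform"
    }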

2. Querying dynamo when you process log entries

The final piece of the puzzle is to update the streaming function to query dynamo as required. This needs a few things:

  1. Update the role used for the lambda that streams data from CloudWatch into ElasticSearch, adding the Dynamo read policy shown after this list (where ### is your account id)
  2. Update the lambda script set up in stage 1 and tweak it as shown below
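A minimal read policy, again assuming the table name and region used above:

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["dynamodb:GetItem"],
                "Resource": "arn:aws:dynamodb:eu-west-1:###:table/kibana-instanceLookup"
            }
        ]
    }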

Add the AWS variable to the requires at the top of the file:
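    var AWS = require('aws-sdk');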

Update the exports.handler & transform methods and add loadFromDynamo to be:
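What follows is a sketch rather than the exact script – it keeps the shape of the blueprint from part 1 (zlib, buildSource and the post call are unchanged from the blueprint and elided here), assumes the log stream name carries the instance id (the CloudWatch agent's default), and caches Dynamo lookups in a module-level variable that survives while the Lambda container stays warm:

    var tagCache = {}; // module-level: survives between invocations on a warm container

    exports.handler = function(input, context) {
        // Decode and unzip the CloudWatch Logs payload, as in the blueprint
        var zippedInput = new Buffer(input.awslogs.data, 'base64');
        zlib.gunzip(zippedInput, function(error, buffer) {
            if (error) { context.fail(error); return; }
            var awslogsData = JSON.parse(buffer.toString('utf8'));

            // Assumption: the log stream is named after the source instance
            loadFromDynamo(awslogsData.logStream, function(tags) {
                var elasticsearchBulkData = transform(awslogsData, tags);
                if (!elasticsearchBulkData) {
                    context.succeed('Control message handled successfully');
                    return;
                }
                // ...post elasticsearchBulkData to the domain exactly as before
            });
        });
    };

    function transform(payload, tags) {
        if (payload.messageType === 'CONTROL_MESSAGE') { return null; }
        var bulkRequestBody = '';
        payload.logEvents.forEach(function(logEvent) {
            var source = buildSource(logEvent.message, logEvent.extractedFields);
            source['@timestamp'] = new Date(1 * logEvent.timestamp).toISOString();
            source['@log_group'] = payload.logGroup;
            source['@log_stream'] = payload.logStream;

            // The enrichment: copy each instance tag onto the document
            Object.keys(tags).forEach(function(key) {
                source['@' + key] = tags[key];
            });

            // ...append the action line and source to bulkRequestBody as before
        });
        return bulkRequestBody;
    }

    function loadFromDynamo(instanceId, callback) {
        if (tagCache[instanceId]) { return callback(tagCache[instanceId]); }
        var dynamo = new AWS.DynamoDB.DocumentClient({ region: 'eu-west-1' });
        var params = { TableName: 'kibana-instanceLookup', Key: { instanceId: instanceId } };
        dynamo.get(params, function(err, data) {
            var tags = (err || !data.Item) ? {} : data.Item;
            tagCache[instanceId] = tags; // cache misses too, to avoid repeat queries
            callback(tags);
        });
    }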

The final step is to refresh the index definition within Kibana: Management -> Index patterns -> Refresh field list.

Final thoughts
There are a few things to keep an eye on as you roll this out – bear in mind these may need tweaking over time:

  • The lambda function that scans EC2 may time out – if so, increase the timeout
  • The ElasticSearch index may run out of space – if so, adjust the environment variables covered in part 2
  • The dynamo read capacity may hit its ceiling – if so, increase the read capacity (usage can be seen in the Metrics section of the table in Dynamo)

Happy logging!
