Serving images through AWS API Gateway from a Serverless Lambda proxy function

In another post I mentioned the neat features now available in AWS: Serverless templates.

As part of an experiment I thought it would be interesting to see how you could put a light security layer over S3 so that media requests stream from S3 if a user logs in.

The WebApi template already ships with an S3 proxy controller. Tweaking this slightly allowed me to stream or download the image:
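(Roughly what the tweaked action ended up looking like – a sketch rather than the exact template code; the bucket name, route and controller name are placeholders, and the ‘light security layer’ is omitted.)

```csharp
using System.Threading.Tasks;
using Amazon.S3;
using Microsoft.AspNetCore.Mvc;

[Route("api/[controller]")]
public class MediaController : Controller
{
    private readonly IAmazonS3 _s3Client;
    private const string BucketName = "your-media-bucket"; // placeholder - swap for your bucket

    public MediaController(IAmazonS3 s3Client)
    {
        _s3Client = s3Client;
    }

    // e.g. GET api/media/images/logo.png
    [HttpGet("{*key}")]
    public async Task<IActionResult> Get(string key)
    {
        var response = await _s3Client.GetObjectAsync(BucketName, key);

        // Returning the stream with the stored content type lets the browser render it
        return File(response.ResponseStream, response.Headers.ContentType);
    }
}
```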

The issue I ran into was accessing images via the endpoint you then get in API Gateway – the images were coming back encoded (base64), so they wouldn’t render in the browser.

The solution:

  • In the settings for your API Gateway, add the Binary Media Type: */*
  • On the {proxy+} method’s ‘Method Response’, add a 200 response and add a Content-Type header to it. Finally, publish your API

The second step here may not be necessary but I found the */* didn’t kick in until I made the change.

Getting personal with Alexa

It’s only a couple of weeks now until Sugcon Europe – a definite highlight for any budding Sitecore developer. There are two days of amazing sessions lined up from a mixture of Sitecore employees and community members.

This year I’ve put together a talk all about integrating different channels with Sitecore – in this case Alexa. What better way to demonstrate the concept than to build a skill?

If you want to find out about the sessions, speakers and so on, you can download the skill for free at: https://www.amazon.co.uk/dp/B07C35NBYF/ref=sr_1_1

My particular favourite intent: play the speaker lottery.

It highlights some interesting challenges around creating chat interfaces, each of which will be covered in my talk at Sugcon. To hear the full talk swing by the Main Stage around 13:45 on Tuesday 🙂

A couple of teasers:

  • Context is king – and why does ‘yes’ matter so much?
  • Why personalizing the content can have such positive (or negative) results

All the source code is available at https://bitbucket.org/boro2g/sugconalexa including a crude scraper to gather the info it needs from the Sugcon site.

PUB SUB in AWS Lambda via SNS using C#

AWS Lambdas are a great replacement for things like Windows Services that need to run common tasks periodically. A few examples would be triggering scheduled backups or polling URLs.

You can set many things as the trigger for a lambda; for scheduled operations this can be a CRON expression on a CloudWatch event. Alternatively, lambdas can be triggered via a subscription to an SNS topic.

Depending on the type of operation you want to perform on a schedule, you might find it takes longer than the timeout restriction imposed by AWS. If that’s the case, a simple pub/sub (publisher, subscriber) configuration should help.

Sample scenario

We want to move database backups between two buckets in S3. There are several databases to copy, each of which is a large file.

In one lambda you can easily find all the files to copy, but if you also try to copy them all, at some point your function will time out.

Pub sub to the rescue

Why not set up two lambda functions – one as the publisher and one as the subscriber – and then glue the two together with an SNS (Simple Notification Service) topic?

The publisher

Typically this would be triggered from a schedule and would raise an event for each operation. Let’s assume we use a simple POCO for conveying the information we need:
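(The snippet below is a sketch of that idea rather than the original gist – the class names, topic ARN and use of Newtonsoft.Json are assumptions on my part.)

```csharp
using System.Collections.Generic;
using System.IO;
using System.Threading.Tasks;
using Amazon.Lambda.Core;
using Amazon.SimpleNotificationService;
using Amazon.SimpleNotificationService.Model;
using Newtonsoft.Json;

// Conveys a batch of files for one subscriber invocation to copy
public class CopyFilesMessage
{
    public List<string> Urls { get; set; } = new List<string>();
}

public class Publisher
{
    private readonly IAmazonSimpleNotificationService _sns = new AmazonSimpleNotificationServiceClient();
    private const string TopicArn = "arn:aws:sns:eu-west-1:123456789012:copy-backups"; // placeholder ARN

    // Wired up as the Lambda handler and triggered from a CloudWatch schedule rule;
    // the incoming payload isn't interesting here so it's taken as a raw stream.
    public async Task Handler(Stream input, ILambdaContext context)
    {
        // Find the files to copy (listing S3 is omitted), then publish them in batches
        var batch = new CopyFilesMessage { Urls = { "backups/db1.bak", "backups/db2.bak" } };

        await _sns.PublishAsync(new PublishRequest
        {
            TopicArn = TopicArn,
            Message = JsonConvert.SerializeObject(batch)
        });
    }
}
```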

The batching can be ignored if needs be – in this scenario it allows multiple URLs to be handled by one subscriber.

The subscriber

Next we need to listen for the messages – you want to configure the subscriber function to have an SNS trigger that uses the same topic you posted to before.
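Again, a sketch of the shape of the subscriber rather than the original code – the message type matches the POCO sketched above:

```csharp
using System.Threading.Tasks;
using Amazon.Lambda.Core;
using Amazon.Lambda.SNSEvents;
using Newtonsoft.Json;

// Tells Lambda how to deserialize the incoming SNS event into an SNSEvent
[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.Json.JsonSerializer))]

public class Subscriber
{
    // Invoked by the SNS trigger - each record carries one published message
    public async Task Handler(SNSEvent snsEvent, ILambdaContext context)
    {
        foreach (var record in snsEvent.Records)
        {
            var message = JsonConvert.DeserializeObject<CopyFilesMessage>(record.Sns.Message);

            foreach (var url in message.Urls)
            {
                context.Logger.LogLine($"Copying {url}");
                // the actual S3 copy for this file would happen here
            }
        }

        await Task.CompletedTask;
    }
}
```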

Debugging things
You can either run each function on demand and see any output directly in the Lambda test window, or dig into your CloudWatch logs for each function.

AWS Lambda now supports Serverless applications including WebApi

One of the most exciting areas that I’ve seen emerging in the Cloud space recently is Serverless computing. Both AWS and Azure have their own flavour: AWS Lambda and Azure Functions.

An intro into Serverless

It really does what it says on the tin: you can run code without hosting any dedicated infrastructure yourself. A good example is building Alexa Skills.

You create an AWS Lambda function, in one of a range of supported languages, and then deploy it into the cloud. Whenever someone uses your skill the lambda gets invoked and returns the content you need.

Behind the scenes AWS host your function in a container; while it receives traffic the container remains hot, and if it doesn’t receive traffic it gets ‘frozen’. There is a very good description of this at https://medium.com/@tjholowaychuk/aws-lambda-lifecycle-and-in-memory-caching-c9cd0844e072

Your language of choice

AWS Lambda supports a raft of languages: Python, Node, Java, .NET Core and others. Recently this has been upgraded so that it supports .NET Core 2.0.

Doing the legwork

With a basic lambda function you can concoct different handlers (methods) which respond to requests. This allows one lambda to service several endpoints. However, you need to do quite a lot of wiring and it doesn’t feel quite like normal WebApi programming.

Enter the serverless applications

This came right out of the blue, but it’s very cool – Amazon released some starter kits that allow you to run both Razor Pages and WebApi applications in Lambdas!!! https://aws.amazon.com/blogs/developer/serverless-asp-net-core-2-0-applications/

Woah, you can write normal WebApi and deploy into a lambda. That is big.
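For context, the wiring the starter template gives you is tiny – roughly the sketch below, based on the Amazon.Lambda.AspNetCoreServer package, where Startup is your normal ASP.NET Core startup class:

```csharp
using Microsoft.AspNetCore.Hosting;

// The entry point API Gateway invokes - it translates proxy requests into
// ASP.NET Core requests and hands them to your normal WebApi pipeline.
public class LambdaEntryPoint : Amazon.Lambda.AspNetCoreServer.APIGatewayProxyFunction
{
    protected override void Init(IWebHostBuilder builder)
    {
        builder.UseStartup<Startup>();
    }
}
```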

Quick, migrate all the things

So I tried this. And for the most part everything worked pretty seamlessly. All the code I’d already written easily mapped into WebApi controllers I could then run locally. Tick.

Deploying was simple, either via Visual Studio or the dotnet lambda tools. Tick.

Using the serverless.template that ships with the starter pack, it even set up my API Gateway. Tick.

The dependency injection that’s built into .NET Core all worked. Tick.

WebApi attribute routing all works. Tick.

So far so good, right? 🙂

What haven’t I quite cracked yet?

In my original deployment (pre-WebApi) I was using API-level caching over a couple of specific endpoints. This was path-based as it was for specific methods. The new API Gateway deployment directs all traffic to a {proxy+} URL in order to hand every request to the routing in your WebApi. If you turn caching on here, it’s a bit of a race: whichever URL is hit first fills the cache for all requests. Untick!

Startup errors don’t always bubble up very well when debugging locally. I have a feeling this isn’t anything Amazon-related, but it’s something worth being aware of. E.g. if you mess up your DI, it takes some ctor-null debugging to find the cause. Untick.

Summary

I was hugely impressed with the WebApi integration. Once the chinks in the path-based caching at the API Gateway are ironed out, I’d consider this a very good option for handling API requests.

Watch this space 🙂

Copying large files between S3 buckets

There are many different scenarios where you might need to migrate data between S3 buckets and folders. Depending on the use case, you have several options for the language to use:

  • Lambdas – this could be Python, Java, JavaScript or C#
  • Bespoke code – again, this could be any language you select. To keep things different from the above, let’s add PowerShell to the mix

Behind the scenes a lot of these SDKs call into common endpoints that Amazon host. As a user you don’t need to delve too deeply into the specific endpoints unless you really have to.

Back to the issue at hand – copying large files
Amazon impose a limit of roughly 5 GB on regular copy operations. If you use the standard copy operation you will probably hit exceptions as file sizes grow.

The solution is to use the multipart copy. It sounds complex but all the code is provided for you:

Python
This is probably the easiest, as the boto3 library handles it for you. Rather than using copy_object, use the copy function, which already handles multipart uploads: http://boto3.readthedocs.io/en/latest/reference/services/s3.html#S3.Client.copy

C#
The C# implementation is slightly more complex; however, Amazon provide a good worked example: https://docs.aws.amazon.com/AmazonS3/latest/dev/CopyingObjctsUsingLLNetMPUapi.html
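For reference, that example boils down to something like the following condensed sketch (not the full AWS sample – bucket and key names are whatever you pass in, and the part size is an arbitrary choice above the 5 MB minimum):

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Amazon.S3;
using Amazon.S3.Model;

public static class LargeObjectCopier
{
    public static async Task CopyAsync(IAmazonS3 s3, string sourceBucket, string sourceKey,
        string targetBucket, string targetKey)
    {
        // Start the multipart upload on the target object
        var init = await s3.InitiateMultipartUploadAsync(new InitiateMultipartUploadRequest
        {
            BucketName = targetBucket,
            Key = targetKey
        });

        // Work out how many parts we need based on the source object's size
        var metadata = await s3.GetObjectMetadataAsync(sourceBucket, sourceKey);
        long objectSize = metadata.ContentLength;
        const long partSize = 50 * 1024 * 1024; // 50 MB parts (must be at least 5 MB)

        var copyResponses = new List<CopyPartResponse>();
        long bytePosition = 0;

        for (int partNumber = 1; bytePosition < objectSize; partNumber++)
        {
            copyResponses.Add(await s3.CopyPartAsync(new CopyPartRequest
            {
                SourceBucket = sourceBucket,
                SourceKey = sourceKey,
                DestinationBucket = targetBucket,
                DestinationKey = targetKey,
                UploadId = init.UploadId,
                FirstByte = bytePosition,
                LastByte = Math.Min(bytePosition + partSize - 1, objectSize - 1),
                PartNumber = partNumber
            }));

            bytePosition += partSize;
        }

        // Stitch the parts together into the final object
        var complete = new CompleteMultipartUploadRequest
        {
            BucketName = targetBucket,
            Key = targetKey,
            UploadId = init.UploadId
        };
        complete.AddPartETags(copyResponses);

        await s3.CompleteMultipartUploadAsync(complete);
    }
}
```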

Powershell
A close mimic of the C# implementation – someone has ported the C# example into PowerShell: https://stackoverflow.com/a/32635525/1065332

Happy copying!

Serving personalized content as JSON from Sitecore

As with many tools and approaches to solving technical issues, you can often find several ways to achieve the same output. The challenge I was up against was how to serve personalized content as JSON from Sitecore.

The solution below is one way you can achieve this, undoubtedly there are many more!

The over-arching setup

Think of your JSON feed like a Sitecore page. You will need to break a rule of REST: Sitecore personalization requires session state, and therefore isn’t stateless. This needs to be reflected in your consuming app – if you don’t provide an identifier with every request, it will need to understand and persist cookies between requests.

First up you need to select a device, a layout and some renderings. None of this differs from normal Sitecore development. For debugging purposes I’ve found using a new device works better, as you can view the content as a web page as well as JSON.

The layout

Assuming you’ve decided on a device, you will need to set up a layout:
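(A minimal sketch, assuming Sitecore MVC – the layout can be a stripped-back .cshtml that only emits a single placeholder, with no HTML wrapper; the placeholder name here is just illustrative.)

```cshtml
@using Sitecore.Mvc
@* no <html> or <body> - the renderings in this placeholder write the JSON directly *@
@Html.Sitecore().Placeholder("json-content")
```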

The renderings
Again, much like you would for a page, you can create e.g. Controller Renderings which output the content as you need. One thing to note: these will want to render as JSON, e.g.:
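(A rough sketch rather than real project code – the controller and field names are invented; the key point is returning a JsonResult driven by the rendering’s datasource item, so personalization rules can swap that item out.)

```csharp
using System.Web.Mvc;
using Sitecore.Mvc.Controllers;
using Sitecore.Mvc.Presentation;

public class FeedItemController : SitecoreController
{
    public ActionResult Index()
    {
        // The datasource item assigned to this rendering (the bit personalization can vary)
        var item = RenderingContext.Current.Rendering.Item;

        return Json(new
        {
            title = item["Title"],
            summary = item["Summary"]
        }, JsonRequestBehavior.AllowGet);
    }
}
```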

These components can have datasources set up as normal, and hence personalization is available in your JSON feed.

Via a browser you would then load the URL as normal, remembering to specify the device you’ve selected, and you should see content based on rules, user information, behaviours and more.

Taking it to the next level
The simpler approach assumes you have one component per page. Complexity comes in when you need to generate valid JSON based on multiple controls – this can be achieved, but requires you either to configure things via rendering parameters or, at the point the page is rendered, to interrogate the counterpart presentation components and work out whether you have sibling controls.

If you find you have sibling components to render you’d need to add commas after your controls to ensure valid JSON.

Who makes the tea? Why not ask Alexa?

Following on from the previous post, AlexaCore, this post will explain some of the challenges you might encounter when launching Alexa Skills into the store. It will also cover some cool things you can do if you want to enrich the feedback users receive as they use your skill.

Time for a brew?

If you need to settle the debate of who makes the tea, why not check out Tea Round, recently released on the Skill Store.


First up, how can you find skills?

They are all available on the Amazon site, or accessible via the alexa.amazon.com portal.

Running test versions of your skill

This is pretty straightforward. You need to log in to developer.amazon.com and run through the wizard to create a skill, which includes pairing it up with an AWS Lambda (or controller endpoint). You should then see the newly created skill in your Alexa app, marked with a ‘dev’ flag.

Testing your skill

You have a few options: either simply talk to your Alexa with voice commands, or use the text test tool within the developer.amazon.com console.

Getting certified – gotchas

The certification process looks to validate and check a few things:

  • Are there bugs in the skill?
  • Do the descriptions and prompts align with your skill’s intents?
  • Do you leave the users hanging?

Things that caught me out during this process were:

  • Testing the skill where you skip past the launch intent
    • E.g. rather than asking ‘Alexa, open the tea round’ and then allowing the LaunchIntent to run, you can ask ‘Alexa, open the tea round and spin the wheel’. My logic around initializing the session originally ran in the LaunchIntent; some simple refactoring resolved this.
  • Leaving the users hanging – in my opinion this isn’t great UX but rules are rules
    • If you respond to a user without a prompt, e.g. without asking something of the user, the rules define that you should end the session. My AddIntent would respond to ‘Add Nick’ with ‘Ok, Nick is now in the team’. To get past certification it needed updating to ‘Ok, Nick is now in the team. Why not spin the wheel?’
  • Make sure the suggested prompts you include match up to the text set in your intents. The best bet here is to look at other skills and see how they phrase the prompts.

Saving data beyond a session

Much like a session in a web request, a session gets persisted for the lifetime of a skill invocation. This can be used to store anything you want – a simple example would be an array of the names of the people in the Tea Round. That’s fine, but next time someone loads the skill the session will be empty and the user will need to re-add each person – with all the extra questioning needed for certification this could get painful.

AWS provide a document-model database, DynamoDB, that’s very well suited to this kind of thing. The Tea Round stores the permanent team in Dynamo, updated from the Lambda function that sits behind the skill.
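As an illustration of the pattern (the table and attribute names below are invented, and the real Tea Round code will differ), persisting a team with the DynamoDB document model only takes a few lines:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.DocumentModel;

public class TeamStore
{
    private readonly Table _table;

    public TeamStore(IAmazonDynamoDB client)
    {
        // hypothetical table, keyed on the Alexa userId
        _table = Table.LoadTable(client, "tea-round-teams");
    }

    public Task SaveAsync(string userId, IEnumerable<string> names)
    {
        var document = new Document
        {
            ["userId"] = userId,
            ["team"] = new List<string>(names)
        };

        return _table.PutItemAsync(document);
    }

    public async Task<List<string>> LoadAsync(string userId)
    {
        var document = await _table.GetItemAsync(userId);

        if (document == null || !document.ContainsKey("team"))
        {
            return new List<string>();
        }

        return document["team"].AsListOfString();
    }
}
```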

Understanding users’ names

This can be tricky, as subtle variations between names can lead to them being spelt, and pronounced, very differently – especially when regional dialects come into play. The best success I’ve had is providing as broad a set of example names as possible when setting up your {slots} in the intent.

Enriching the responses

A typical request and response cycle sends the information Amazon decodes from speech into your Lambda function. From there you can then return text that gets read out by your Alexa. By using Sitecore as a headless backend, the response text can be driven from the CMS – some recent updates to AlexaCore provide helpers for making these requests.

Where things then get interesting is that you can personalize messaging based on behaviour and user interactions. Big Brother is watching?!?!

Closing thoughts

Gasp, after all that I’m thirsty – time for a brew! (how English eh :))

If you fancy allowing Alexa into your tea-making process, have a look at Tea Round.

AlexaCore – a C# diversion into writing Alexa skills

Following the recent Amazon Prime Day, I thought it was time to jump on the home-assistant bandwagon – £80 seemed a pretty good deal for an Amazon Echo.

If you’ve not tried writing Alexa skills there are some really good blog posts to help get started at: http://timheuer.com/blog/archive/2016/12/12/amazon-alexa-skill-using-c-sharp-dotnet-core.aspx

Skills can be underpinned by an AWS Lambda function. In experimenting with writing these I’ve started putting together some helpers which remove a lot of the boilerplate code needed for C# Alexa Lambda functions, including some fluent tools for running unit tests.

The code and some examples are available at https://github.com/boro2g/AlexaCore. Hopefully you will find they help get your skills off the ground!

Making requests with Fiddler on a timer

Fiddler is a great tool for debugging web requests. Things like the Composer section allow you to concoct requests and then test out the responses you get. If you’re using the Composer and look at the Raw view you will see something like:
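(Not the exact request from the original screenshot, but the Raw view of a simple GET looks roughly like this:)

```
GET http://www.example.com/api/values HTTP/1.1
User-Agent: Fiddler
Host: www.example.com
```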

If you want to run this request every N seconds you can set up a quick script to achieve this:

The full script is shown below; you simply need to add it into the Handlers class code, save the script and then use the new menu options that get added to the Tools menu (Request by Timer, Stop Timer and Request Once):

One thing to note: in your script, ensure you keep the trailing \r\n\r\n in the request URL, otherwise you’ll receive an error.

Enjoy

Log aggregation in AWS – part 3 – enriching the data sent into ElasticSearch

This is the third and last part of the series that details how to aggregate all your log data within AWS. See Part 1 and Part 2 for getting started and for keeping control of the size of your index.

By this point you should have a self-sufficient ElasticSearch domain running that pools logs from all the CloudWatch log groups that have been configured with the correct subscriber.

The final step is to look at how we can enrich the data being sent into the index.

By default AWS will set you up with a lambda function that extracts information from the CloudWatch event. It will contain things like the instanceId, the event timestamp, the source log group and a few more. This is handled in the lambda via:

Note, a tip around handling numeric fields – in order for ElasticSearch to believe fields are numbers rather than strings you can multiply the value by 1, e.g.: source['@fieldName'] = 1 * value;

What to enrich the data with?

This kind of depends on your use case. As we were aggregating logs from a wide range of boxes, applications and services, we wanted to enrich the data in the index with the tags applied to each instance. This sounds simple in principle but needed some planning around how to access the tags for each log entry – I’m not sure AWS would look kindly on making 500,000 API requests in 15 mins!

Lambda and caching

Lambda functions are a very interesting offering and I’m sure you will start to see a lot more use cases for them over the next few years. One challenge they bring is that they are stateless – in our scenario we need to provide a way of querying an instance for its tags based on its InstanceId. Enter DynamoDB, another AWS service that provides scalable key-value pair storage.

Amazon define Dynamo as: “Amazon DynamoDB is a fully managed non-relational database service that provides fast and predictable performance with seamless scalability.”

Our solution

There were 2 key steps to the process:

  1. Updating dynamo to gather tag information from our running instances
  2. Updating the lambda script to pull the tags from dynamo as log entries are processed

1. Pushing instance data into Dynamo

Set up a lambda function that periodically scans all running instances in our account and pushes the respective details into Dynamo.

  1. Set up a new DynamoDB table
    1. Named: kibana-instanceLookup
    2. Region: eu-west-1 (note, adjust this as you need)
    3. Primary partition key: instanceId (of type string)
      1. Note – we will tweak the read capacity units once things are up and running – for production we average about 50 
  2. Set up a new lambda function
    1. Named: LogsToElasticSearch_InstanceQueries_regionName 
    2. Add environment variable: region=eu-west-1
      1. Note, if you want this to pool logs from several regions into one dynamo setup a lambda function per region and set different environment variables for each. You can use the same trigger and role for each region
    3. Use the script shown below
    4. Set the execution timeout to be: 1 minute (note, tweak this if the function takes longer to run)
    5. Create a new role and give the following permissions:
      1. AmazonEC2ReadOnlyAccess (assign the OTB policy)
      2. Plus add the following policy (a rough example is sketched just after this list):
        1. Note, the ### in the role wants to be your account id
    6. Set up a trigger within CloudWatch -> Rules
      1. To run hourly, set the cron to be: 0 * * * ? *
      2. Select the target to be your new lambdas
        1. Note, you can always test your lambda by simply running on demand with any test event
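For reference, the extra policy in step 5 might look something like this rough sketch – an assumption on my part that only write access to the lookup table is needed (keep the ### as your account id):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:PutItem",
        "dynamodb:UpdateItem"
      ],
      "Resource": "arn:aws:dynamodb:eu-west-1:###:table/kibana-instanceLookup"
    }
  ]
}
```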

And the respective script:

Note, if your dynamo runs in a different region to eu-west-1, update the first line of the pushInstanceToDynamo method and set the desired target region.

Running it on demand should then fill your Dynamo table with data, e.g.:

2. Querying dynamo when you process log entries

The final piece of the puzzle is to update the streaming function to query Dynamo as required. This needs a few things:

  1. Update the role used for the lambda that streams data from CloudWatch into ElasticSearch, giving it read access to the Dynamo table (in the policy, ### is your account id – a sketch of such a policy is shown just after this list)
  2. Update the lambda script set up in stage 1 and tweak it as shown below
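A sketch of the kind of policy statement to add to that role – again an assumption; it just needs to be able to read items from the lookup table:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:GetItem"
      ],
      "Resource": "arn:aws:dynamodb:eu-west-1:###:table/kibana-instanceLookup"
    }
  ]
}
```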

Add the AWS variable to the requires at the top of the file:

Update the exports.handler & transform methods and add loadFromDynamo to be:

The final step is to refresh the index definition within Kibana: Management -> Index patterns -> Refresh field list.

Final thoughts
There are a few things to keep an eye on as you roll this out – bear in mind these may need tweaking over time:

  • The lambda function that scans EC2 times out – if so, up the timeout
  • The ElasticSearch index runs out of space – if so, adjust the environment variables used in step 2
  • The Dynamo read capacity threshold hits its ceiling – if so, increase the read capacity (this can be seen in the Metrics section of the table in Dynamo)

Happy logging!