<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>AWS &#8211; blog.boro2g .co.uk</title>
	<atom:link href="https://blog.boro2g.co.uk/category/aws/feed/" rel="self" type="application/rss+xml" />
	<link>https://blog.boro2g.co.uk</link>
	<description>Some ideas about coding, dev and all things online.</description>
	<lastBuildDate>Wed, 27 Nov 2019 14:50:42 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.5.8</generator>
	<item>
		<title>Customizing logging in a C# dotnetcore AWS Lambda function</title>
		<link>https://blog.boro2g.co.uk/customizing-logging-in-a-c-dotnetcore-aws-lambda-function/</link>
					<comments>https://blog.boro2g.co.uk/customizing-logging-in-a-c-dotnetcore-aws-lambda-function/#comments</comments>
		
		<dc:creator><![CDATA[boro]]></dc:creator>
		<pubDate>Wed, 27 Nov 2019 14:47:16 +0000</pubDate>
				<category><![CDATA[AWS]]></category>
		<guid isPermaLink="false">https://blog.boro2g.co.uk/?p=1102</guid>

					<description><![CDATA[<p>One challenge we hit recently was how to build our dotnetcore lambda functions in a consistent way &#8211; in particular how would we approach logging. A pattern we&#8217;ve adopted is to write the core functionality for our functions so that it&#8217;s as easy to run from a console app as it is from a lambda. [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://blog.boro2g.co.uk/customizing-logging-in-a-c-dotnetcore-aws-lambda-function/">Customizing logging in a C# dotnetcore AWS Lambda function</a> appeared first on <a rel="nofollow" href="https://blog.boro2g.co.uk">blog.boro2g .co.uk</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>One challenge we hit recently was how to build our dotnetcore lambda functions in a consistent way &#8211; in particular, how we would approach logging. </p>



<p>A pattern we&#8217;ve adopted is to write the core functionality for our functions so that it&#8217;s as easy to run from a console app as it is from a lambda. The lambda can then be considered only as the entry point to the functionality.</p>



<p><strong>Serverless dependency injection</strong></p>



<p>I am sure there are different schools of thought here &#8211; should you use a DI container within a serverless function or not? For this post the design assumes you do make use of the Microsoft.Extensions.DependencyInjection libraries.</p>
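<p>Whichever school you subscribe to, one thing worth remembering is that a warm Lambda container is reused across invocations, so whatever container you build is best built once (e.g. in the function&#8217;s constructor or a static field) rather than per request. Below is a minimal, self-contained sketch of that idea &#8211; a plain <code>Lazy&lt;T&gt;</code> stands in for the real <code>ServiceCollection</code> setup, and the <code>Bootstrap</code>/<code>BuildCount</code> names are illustrative only:</p>

```csharp
using System;
using System.Threading;

// Hypothetical bootstrap: Lazy<T> stands in for building the real
// ServiceProvider (new ServiceCollection()...BuildServiceProvider()).
// A warm Lambda container reuses the process across invocations, so the
// factory below runs once per container, not once per invocation.
public static class Bootstrap
{
    public static int BuildCount; // exposed purely to demonstrate the point

    public static readonly Lazy<object> Container = new Lazy<object>(() =>
    {
        Interlocked.Increment(ref BuildCount);
        return new object(); // real code would return the built IServiceProvider
    });
}
```

<p>Resolving <code>Bootstrap.Container.Value</code> from two consecutive &#8216;invocations&#8217; yields the same instance, and the build only runs once.</p>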



<p><strong>Setting up your projects</strong></p>



<p>Based on the design mentioned above, i.e. that you can run the functionality as easily from a console app as from a lambda, I often set up the following projects:</p>



<ul><li>Project.ActualFunctionality (e.g. SnsDemo.Publisher)</li><li>Project.ActualFunctionality.ConsoleApp (e.g. SnsDemo.Publisher.ConsoleApp)</li><li>Project.ActualFunctionality.Lambda (e.g. SnsDemo.Publisher.Lambda)</li></ul>



<p>The actual functionality lives in the top project and is shared with both other projects. Dependency injection and AWS profiles are used to run the functionality locally.</p>



<p><strong>The actual functionality</strong></p>



<p>Let&#8217;s assume the functionality for your function does something simple, like pushing messages into an SQS queue:</p>



<pre class="crayon-plain-tag">public class SqsSender
{
    private readonly IAmazonSQS _amazonSQS;
    private readonly ILogger&lt;SqsSender&gt; _logger;

    public SqsSender(IAmazonSQS amazonSQS, ILogger&lt;SqsSender&gt; logger)
    {
        _amazonSQS = amazonSQS;
        _logger = logger;
    }

    public void SendMessage()
    {
        var message = new SendMessageRequest
        {
            QueueUrl = &quot;...&quot;,
            MessageBody = $&quot;Message {Guid.NewGuid()}&quot;
        };

        // Note: blocking on the async call keeps the demo simple; prefer an
        // async SendMessage method and await the call in production code.
        _amazonSQS.SendMessageAsync(message).Wait();

        _logger.LogInformation(&quot;_logger Messages sent&quot;);

        Console.WriteLine(&quot;Console Message(s) sent&quot;);
    }
}</pre>



<p><strong>The console app version</strong></p>



<p>It&#8217;s pretty simple to get DI working in a dotnetcore console app:</p>



<pre class="crayon-plain-tag">static void Main(string[] args)
{
    IConfiguration config = new ConfigurationBuilder()
        .AddJsonFile(&quot;appsettings.json&quot;, true, true)
        .Build();

    var serviceProvider = new ServiceCollection()
        .AddSingleton(config)
        .AddSingleton&lt;SqsSender&gt;()
        .AddLogging(a =&gt;
        {
            a.AddConsole();
        })
        .AddAWSService&lt;IAmazonSQS&gt;()
        .BuildServiceProvider();

    serviceProvider.GetService&lt;SqsSender&gt;().SendMessage();
}</pre>



<p><strong>The lambda version</strong></p>



<p>This looks very similar to the console version:</p>



<pre class="crayon-plain-tag">public string FunctionHandler(object input, ILambdaContext context)
{
    IConfiguration config = new ConfigurationBuilder()
            .AddJsonFile(&quot;lambdasettings.json&quot;, true, true)
            .Build();

    var serviceProvider = new ServiceCollection()
        .AddSingleton(config)
        .AddSingleton&lt;SqsSender&gt;()             
        .AddLogging(a =&gt; a.AddProvider(new CustomLambdaLogProvider(context.Logger)))
        .AddAWSService&lt;IAmazonSQS&gt;()
        .BuildServiceProvider();
   
    serviceProvider.GetService&lt;SqsSender&gt;().SendMessage();

    return &quot;...&quot;;
}</pre>



<p>The really interesting bit to take note of is: <strong>.AddLogging(a =&gt; a.AddProvider(new CustomLambdaLogProvider(context.Logger)))</strong></p>



<p>In the actual functionality we can log in many ways:</p>



<pre class="crayon-plain-tag">_logger.LogInformation(&quot;_logger Messages sent&quot;);

Console.WriteLine(&quot;Console Message(s) sent&quot;);</pre>



<p>To make things lambda agnostic I&#8217;d argue injecting <strong>ILogger&lt;Type&gt;</strong> and then <strong>_logger.LogInformation(&#8220;_logger Messages sent&#8221;);</strong> is the preferred option.</p>
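<p>To illustrate why, here is a self-contained sketch of the idea &#8211; note these are simplified stand-in types (<code>IAppLogger</code>, <code>Worker</code>, etc. are illustrative names), not the real <code>ILogger&lt;T&gt;</code> and <code>ILambdaLogger</code> interfaces. The functionality depends only on an abstraction, and each host supplies its own sink:</p>

```csharp
using System;
using System.Collections.Generic;

// Simplified stand-in for ILogger<T>: the real code would inject
// Microsoft.Extensions.Logging.ILogger<SqsSender> instead.
public interface IAppLogger
{
    void Info(string message);
}

// Console host sink (what the console app's AddConsole() would wire up).
public class ConsoleAppLogger : IAppLogger
{
    public void Info(string message) => Console.WriteLine(message);
}

// Lambda host sink (what CustomLambdaLogger forwards to ILambdaLogger.LogLine);
// here it just captures lines so the behaviour is observable.
public class ListLambdaLogger : IAppLogger
{
    public List<string> Lines { get; } = new List<string>();
    public void Info(string message) => Lines.Add(message);
}

// The functionality itself never knows which host it is running in.
public class Worker
{
    private readonly IAppLogger _logger;
    public Worker(IAppLogger logger) => _logger = logger;

    public void Run() => _logger.Info("Messages sent");
}
```

<p>Swapping hosts is then purely a wiring concern &#8211; the same <code>Worker</code> runs unchanged under either sink.</p>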



<p><strong>Customizing the logger</strong></p>



<p>It&#8217;s simple to customize the dotnetcore logging framework &#8211; for this demo I set up two things: the CustomLambdaLogProvider and the CustomLambdaLogger.</p>



<pre class="crayon-plain-tag">internal class CustomLambdaLogProvider : ILoggerProvider
{
    private readonly ILambdaLogger _logger;

    private readonly ConcurrentDictionary&lt;string, CustomLambdaLogger&gt; _loggers = new ConcurrentDictionary&lt;string, CustomLambdaLogger&gt;();

    public CustomLambdaLogProvider(ILambdaLogger logger)
    {
        _logger = logger;
    }

    public ILogger CreateLogger(string categoryName)
    {
        return _loggers.GetOrAdd(categoryName, a =&gt; new CustomLambdaLogger(a, _logger));
    }

    public void Dispose()
    {
        _loggers.Clear();
    }
}</pre>



<p>And finally a basic version of the actual logger:</p>



<pre class="crayon-plain-tag">internal class CustomLambdaLogger : ILogger
{
    private readonly string _categoryName;
    private readonly ILambdaLogger _lambdaLogger;

    public CustomLambdaLogger(string categoryName, ILambdaLogger lambdaLogger)
    {
        _categoryName = categoryName;
        _lambdaLogger = lambdaLogger;
    }

    public IDisposable BeginScope&lt;TState&gt;(TState state)
    {
        // Scopes are not supported in this basic implementation
        return null;
    }

    public bool IsEnabled(LogLevel logLevel)
    {
        //todo - add logic around filtering log messages if desired
        return true;
    }

    public void Log&lt;TState&gt;(LogLevel logLevel, EventId eventId, TState state, Exception exception, Func&lt;TState, Exception, string&gt; formatter)
    {
        if (!IsEnabled(logLevel))
        {
            return;
        }

        _lambdaLogger.LogLine($&quot;{logLevel} - {_categoryName} - {formatter(state, exception)}&quot;);
    }
}</pre>



<p><strong>Summary</strong></p>



<p>The aim here is to keep your application code agnostic to where it runs. Using dependency injection we can share core logic between any &#8216;runner&#8217; &#8211; Lambda functions, Azure Functions, console apps &#8211; you name it.</p>



<p>With some small tweaks to the lambda logging calls you can ensure the out-of-the-box lambda logger is still used under the hood, while your implementation code can inject things like <strong>ILogger&lt;T&gt;</strong> wherever needed <img src="https://s.w.org/images/core/emoji/15.0.3/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>
<p>The post <a rel="nofollow" href="https://blog.boro2g.co.uk/customizing-logging-in-a-c-dotnetcore-aws-lambda-function/">Customizing logging in a C# dotnetcore AWS Lambda function</a> appeared first on <a rel="nofollow" href="https://blog.boro2g.co.uk">blog.boro2g .co.uk</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://blog.boro2g.co.uk/customizing-logging-in-a-c-dotnetcore-aws-lambda-function/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
			</item>
		<item>
		<title>Automating a multi region deployment with Azure Devops</title>
		<link>https://blog.boro2g.co.uk/automating-a-multi-region-deployment-with-azure-devops/</link>
					<comments>https://blog.boro2g.co.uk/automating-a-multi-region-deployment-with-azure-devops/#comments</comments>
		
		<dc:creator><![CDATA[boro]]></dc:creator>
		<pubDate>Fri, 18 Oct 2019 11:12:24 +0000</pubDate>
				<category><![CDATA[AWS]]></category>
		<category><![CDATA[Azure]]></category>
		<category><![CDATA[Sitecore]]></category>
		<guid isPermaLink="false">https://blog.boro2g.co.uk/?p=1085</guid>

					<description><![CDATA[<p>For a recent project we&#8217;ve invested a lot of time into Azure Devops, and in the most part found it a very useful toolset for deploying our code to both Azure and AWS. When we started on this process, YAML pipelines weren&#8217;t available for our source code provider &#8211; this meant everything had to be [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://blog.boro2g.co.uk/automating-a-multi-region-deployment-with-azure-devops/">Automating a multi region deployment with Azure Devops</a> appeared first on <a rel="nofollow" href="https://blog.boro2g.co.uk">blog.boro2g .co.uk</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>For a recent project we&#8217;ve invested a lot of time into Azure DevOps, and for the most part found it a very useful toolset for deploying our code to both Azure and AWS.</p>



<p>When we started on this process, YAML pipelines weren&#8217;t available for our source code provider &#8211; this meant everything had to be set up manually <img src="https://s.w.org/images/core/emoji/15.0.3/72x72/1f641.png" alt="🙁" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>



<p>However, recently this has changed <img src="https://s.w.org/images/core/emoji/15.0.3/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /> This post will run through a few ways you can optimize your release process and automate the whole thing. </p>



<p>First a bit of background and then some actual code examples.</p>



<p><strong>Why YAML?</strong></p>



<p>Setting up your pipelines via the UI is a really good way to quickly prototype things; however, what if you need to evolve the pipelines alongside code features? YAML allows you to keep the pipeline definition in the same codebase as the actual features &#8211; branch XXX can be configured differently from branch YYY.</p>



<p>Another benefit: the changes are visible in your pull requests, so validating them is a lot easier.</p>



<p><strong>Async Jobs</strong></p>



<p>A big optimization we gained was releasing to different regions in parallel. YAML makes this very easy using jobs &#8211; each job runs on its own agent and can therefore push to a different region at the same time.</p>



<p><a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/process/phases?view=azure-devops&amp;tabs=yaml">https://docs.microsoft.com/en-us/azure/devops/pipelines/process/phases?view=azure-devops&amp;tabs=yaml</a></p>



<p><strong>Yaml file templates</strong></p>



<p>If you have common functionality you want to reuse, e.g. &#8216;Deploy to eu-west-1&#8217;, templates are a good way to split out your functionality. They allow you to group logical functionality you want to run multiple times.</p>



<p><a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/process/templates?view=azure-devops">https://docs.microsoft.com/en-us/azure/devops/pipelines/process/templates?view=azure-devops</a></p>



<p><strong>Azure Devops rest API</strong></p>



<p>All of your builds/releases can be triggered via the UI portal; however, if you want to automate the process I&#8217;d suggest looking into the REST API. Via this you can trigger, monitor and administer builds, releases and a whole load more. </p>



<p>We use PowerShell to orchestrate the process.</p>



<p><a href="https://docs.microsoft.com/en-us/rest/api/azure/devops/build/builds/queue?view=azure-devops-rest-5.1">https://docs.microsoft.com/en-us/rest/api/azure/devops/build/builds/queue?view=azure-devops-rest-5.1</a></p>



<p><strong>Variables, and variable groups</strong></p>



<p>I have to confess, this syntax feels slightly cumbersome, but it&#8217;s very possible to reference variables passed into a specific pipeline alongside global variables from groups you set up in the <strong>Library</strong> section of the portal.</p>



<p><strong>Now, some examples</strong></p>



<p>The root YAML file:</p>



<pre class="crayon-plain-tag">pr: none
trigger: none

variables:
- group: 'DataDog' # reference Variable groups if needed
- name : 'system.debug'
  value: true
- name : 'DynamicParameter' # these can be calculated off other variable values
  value: &quot;name-$(EnvironmentName)-$(ColourName)&quot;
- name: 'WebsiteFolder'
  value: 'Website/FolderName'

#- name: &quot;EnvironmentName&quot; # see the rest api example below for how to pass in variables
#  value: &quot;Set externally&quot;
#- name: &quot;ColourName&quot;
#  value: &quot;Set externally&quot;
#- name: &quot;AwsCredentials&quot;
#  value: &quot;Set externally&quot;

jobs:
- job: Build
  pool:
    vmImage: 'windows-2019' # vmImages: https://docs.microsoft.com/en-us/azure/devops/pipelines/agents/hosted?view=azure-devops#use-a-microsoft-hosted-agent
  steps:  
 
  - task: NuGetToolInstaller@0
    displayName: 'Use NuGet 4.4.1'
    inputs:
      versionSpec: 4.4.1    

  - task: NuGetCommand@2 # if using secure artifacts, you can download them into a dotnetcore project this way
    displayName: 'NuGet restore'
    inputs:
      restoreSolution: 'Website/###.sln'
      feedsToUse: config
      nugetConfigPath: Website/nuget.config

  - task: Npm@1
    displayName: 'NPM install'
    inputs:
      workingDir: '$(WebsiteFolder)'     
      verbose: false

  - task: Npm@1
    displayName: 'NPM build scss'
    inputs:
      workingDir: '$(WebsiteFolder)'     
      command: custom
      verbose: false
      customCommand: 'run scss-build'

  - task: DotNetCoreCLI@2
    displayName: 'dotnet publish'
    inputs:
      command: publish
      publishWebProjects: false
      projects: '$(WebsiteFolder)/Website.csproj'
      arguments: '--configuration Release --output $(Build.ArtifactStagingDirectory)\Website'
      zipAfterPublish: false  

  - task: PublishPipelineArtifact@0 # in order to share the common build with multiple releases you need to publish the artifact
    inputs:
      artifactName: &quot;Website&quot;
      targetPath: '$(Build.ArtifactStagingDirectory)'

- job: ReleaseEU
  pool:
    vmImage: 'windows-2019'
  dependsOn: Build # these will only start when the 'Build' job above completes
  steps:
  - template: TaskGroups/DeployToRegion.yaml # this template file is shown below
    parameters:
      AwsCredentials: '$(AwsCredentials)'
      RegionName: 'eu-west-1'      
      EnvironmentName: '$(EnvironmentName)'
      ColourName: '$(ColourName)'
      DatadogApiKey: '$(DatadogApiKey)' # referenced from a variable group      

- job: ReleaseRegionN # Will run in parallel with ReleaseEU if you have enough build agents
  pool:
    vmImage: 'windows-2019'
  dependsOn: Build
  steps:
  - template: TaskGroups/DeployToRegion.yaml # this template file is shown below
    parameters:
      AwsCredentials: '$(AwsCredentials)'
      RegionName: 'another-region'      
      EnvironmentName: '$(EnvironmentName)'
      ColourName: '$(ColourName)'
      DatadogApiKey: '$(DatadogApiKey)' # referenced from a variable group</pre>



<p>The &#8216;DeployToRegion&#8217; template:</p>



<pre class="crayon-plain-tag">parameters:
  AwsCredentials: ''
  RegionName: ''  
  EnvironmentName: ''
  ColourName: ''
  DatadogApiKey: ''  

steps:
- task: DownloadPipelineArtifact@1 # you can download artifacts from other builds if needed
  inputs:
      buildType: 'specific'
      project: 'Project Name'
      pipeline: '##'
      buildVersionToDownload: 'latest'
      artifactName: 'Devops'
      targetPath: '$(System.ArtifactsDirectory)/Devops'

- task: DownloadPipelineArtifact@1 # or download from the current one
  inputs:
      buildType: 'current'
      artifactName: 'Website'
      targetPath: '$(System.ArtifactsDirectory)'

- template: DeployToElasticBeanstalk.yaml # and can chain templates if needed
  parameters:
      AwsCredentials: '${{ parameters.AwsCredentials }}'
      RegionName: '${{ parameters.RegionName }}'      
      EnvironmentName: '${{ parameters.EnvironmentName }}'
      ColourName: '${{ parameters.ColourName }}'
      DatadogApiKey: '${{ parameters.DatadogApiKey }}'</pre>



<p>And finally some powershell to fire it all off:</p>



<pre class="crayon-plain-tag">### Example usage: .\TriggerBuild.ps1 -branch &quot;release/release-006&quot; -isReleaseCandidate $false -additionalReleaseParameters @{ &quot;EnvironmentName&quot; = &quot;qa&quot;; &quot;ColourName&quot; = &quot;blue&quot;; }

param (
    [Parameter(Mandatory = $true)][string]$branch,   
    [boolean]$isReleaseCandidate = $false,
    [HashTable]$additionalReleaseParameters = @{ }
)

$ErrorActionPreference = &quot;Stop&quot;

$authToken = Get-DevOpsAuthToken # see https://docs.microsoft.com/en-us/azure/devops/organizations/accounts/use-personal-access-tokens-to-authenticate?view=azure-devops for how to get a token
$accountName = &quot;AzureDevopsAccountName&quot; 
$projectName = &quot;AzureDevopsProjectName&quot;

$buildDefinitionIds = @(27) # the build pipeline id

Write-Host &quot;Building with settings:&quot;
Write-Host &quot;Branch: '$branch'&quot;
Write-Host &quot;Tag as 'release-candidate' and retain build: $isReleaseCandidate&quot;
Write-Host &quot;Build definition IDs: $buildDefinitionIds&quot;
Write-Host &quot;Additional parameters: $($additionalReleaseParameters | ConvertTo-Json) &quot;
Write-Host &quot;&quot;

$releaseIds = @()

$result = @{
    Success = $false;    
}

foreach ($definitionId in $buildDefinitionIds)
{
    $deploymentParams = @{
        &quot;definition&quot; = @{
            &quot;id&quot; = $definitionId;
        }
        &quot;sourceBranch&quot; = $branch;
    }

    if ($additionalReleaseParameters.Count -gt 0)
    {
        $deploymentParams.parameters = $additionalReleaseParameters | ConvertTo-Json
    }

    $content = (Invoke-WebRequest -uri &quot;https://dev.azure.com/$accountName/$projectName/_apis/build/builds?api-version=4.1&quot; `
        -ContentType &quot;application/json&quot; -Headers (Get-DevOpsHeaders -AuthToken $authToken) -Method POST -Body ($deploymentParams | ConvertTo-Json)).Content | ConvertFrom-Json

    $releaseIds += $content.id  

    Write-Host &quot;Build $($content.id) queued: https://dev.azure.com/$accountName/$projectName/_build/results?buildId=$($content.id)&quot; -ForegroundColor Yellow
}

$aBuildFailed = $false

foreach ($releaseId in $releaseIds)
{
    $status = &quot;&quot;

    while ($status -ne &quot;completed&quot;)
    {
        try
        {
            $content = (Invoke-WebRequest -uri &quot;https://dev.azure.com/$accountName/$projectName/_apis/build/builds/$releaseId&quot; -Headers (Get-DevOpsHeaders -AuthToken $authToken)).Content | ConvertFrom-Json
        }
        catch
        {
            Write-Host &quot;  Error calling DevopsAPI. If this happens several times check the url: https://dev.azure.com/$accountName/$projectName/_apis/build/builds/$releaseId&quot; -ForegroundColor red
        }

        $status = $content.status

        Write-Host &quot; Build id $releaseId has status: $status&quot;

        if ($content.result -eq &quot;failed&quot; -or $content.result -eq &quot;canceled&quot;)
        {
            $aBuildFailed = $true

            Write-Host &quot;Build $releaseId failed - check https://dev.azure.com/$accountName/$projectName/_build/results?buildId=$releaseId for details&quot; -ForegroundColor Red
        }
        elseif ($content.result -eq &quot;succeeded&quot;)
        {
            Write-Host &quot;Build $releaseId completed successfully&quot; -ForegroundColor Green
        }

        Start-Sleep -s 5
    }

    if ($isReleaseCandidate -eq $true)
    {
        Write-Host &quot; Adding RC tags: release-candidate&quot;
        $tags = (Invoke-WebRequest -uri &quot;https://dev.azure.com/$accountName/$projectName/_apis/build/builds/$releaseId/tags/release-candidate?api-version=4.1&quot; -Headers (Get-DevOpsHeaders -AuthToken $authToken) -Method PUT).Content | ConvertFrom-Json

        Write-Host &quot; Adding retain build to $releaseId&quot;
        $updates = (Invoke-WebRequest -uri &quot;https://dev.azure.com/$accountName/$projectName/_apis/build/builds/$($releaseId)?api-version=4.1&quot; -ContentType &quot;application/json&quot; -Headers (Get-DevOpsHeaders -AuthToken $authToken) -Method PATCH -Body (@{&quot;retainedByRelease&quot; = $true } | ConvertTo-Json)).Content | ConvertFrom-Json
    }
}

$result.Success = !$aBuildFailed

return $result</pre>



<p>Happy deploying <img src="https://s.w.org/images/core/emoji/15.0.3/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>
<p>The post <a rel="nofollow" href="https://blog.boro2g.co.uk/automating-a-multi-region-deployment-with-azure-devops/">Automating a multi region deployment with Azure Devops</a> appeared first on <a rel="nofollow" href="https://blog.boro2g.co.uk">blog.boro2g .co.uk</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://blog.boro2g.co.uk/automating-a-multi-region-deployment-with-azure-devops/feed/</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
			</item>
		<item>
		<title>AWS Serverless template &#8211; inline policies</title>
		<link>https://blog.boro2g.co.uk/aws-serverless-template-inline-policies/</link>
					<comments>https://blog.boro2g.co.uk/aws-serverless-template-inline-policies/#respond</comments>
		
		<dc:creator><![CDATA[boro]]></dc:creator>
		<pubDate>Fri, 03 Aug 2018 11:05:15 +0000</pubDate>
				<category><![CDATA[AWS]]></category>
		<guid isPermaLink="false">https://blog.boro2g.co.uk/?p=926</guid>

					<description><![CDATA[<p>If you&#8217;ve worked with AWS Serverless templates, you&#8217;ll appreciate how quickly you can deploy a raft of infrastructure with very little template code. The only flaw I&#8217;ve found so far is the documentation is a bit tricky to find. Say you want to attach some custom policies to your function, you can simply embed them [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://blog.boro2g.co.uk/aws-serverless-template-inline-policies/">AWS Serverless template &#8211; inline policies</a> appeared first on <a rel="nofollow" href="https://blog.boro2g.co.uk">blog.boro2g .co.uk</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>If you&#8217;ve worked with AWS Serverless templates, you&#8217;ll appreciate how quickly you can deploy a raft of infrastructure with very little template code. The only flaw I&#8217;ve found so far is that the documentation is a bit tricky to find.</p>
<p>Say you want to attach some custom policies to your function &#8211; you can simply embed them into your template, e.g.:</p><pre class="crayon-plain-tag">{
  &quot;AWSTemplateFormatVersion&quot;: &quot;2010-09-09&quot;,
  &quot;Transform&quot;: &quot;AWS::Serverless-2016-10-31&quot;,
  &quot;Description&quot;: &quot;An AWS Serverless Application.&quot;,
  &quot;Resources&quot;: {
    &quot;BackupTriggerGeneratorFunction&quot;: {
      &quot;Type&quot;: &quot;AWS::Serverless::Function&quot;,
      &quot;Properties&quot;: {
        &quot;Handler&quot;: &quot;BackupTriggerGenerator::BackupTriggerGenerator.Functions::FunctionHandler&quot;,
        &quot;Runtime&quot;: &quot;dotnetcore2.0&quot;,
        &quot;CodeUri&quot;: &quot;&quot;,
        &quot;MemorySize&quot;: 256,
        &quot;Timeout&quot;: 30,
        &quot;Environment&quot;: {
          &quot;Variables&quot;: {
            &quot;BucketName&quot;: &quot;...&quot;,
            &quot;FolderNames&quot;: &quot;...&quot;,
            &quot;FileName&quot;: &quot;...&quot;
          }
        },
        &quot;Role&quot;: null,
        &quot;Policies&quot;: [
          &quot;AWSLambdaBasicExecutionRole&quot;,
          &quot;AmazonS3ReadOnlyAccess&quot;,
          {
            &quot;Version&quot;: &quot;2012-10-17&quot;,
            &quot;Statement&quot;: [
              {
                &quot;Effect&quot;: &quot;Allow&quot;,
                &quot;Action&quot;: [
                  &quot;s3:Put*&quot;
                ],
                &quot;Resource&quot;: [
                  &quot;arn:aws:s3:::bucketname-*-*-*-1/Databases/*&quot;
                ]
              }
            ]
          }
        ],
        &quot;Events&quot;: {
          &quot;Schedule&quot;: {
            &quot;Type&quot;: &quot;Schedule&quot;,
            &quot;Properties&quot;: {
              &quot;Schedule&quot;: &quot;cron(30 1,3,5,7,9,11,13,15,17,19,21,23 * * ? *)&quot;
            }
          }
        }
      }
    }
  }
}</pre><p>This also shows a few other neat features:</p>
<ul>
<li>Wildcards in the custom policy name, allowing it to work across multiple buckets</li>
<li>Cron triggered events</li>
<li>How to set environment variables from your template</li>
</ul>
<p>The post <a rel="nofollow" href="https://blog.boro2g.co.uk/aws-serverless-template-inline-policies/">AWS Serverless template &#8211; inline policies</a> appeared first on <a rel="nofollow" href="https://blog.boro2g.co.uk">blog.boro2g .co.uk</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://blog.boro2g.co.uk/aws-serverless-template-inline-policies/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Serving images through AWS Api Gateway from Serverless Lambda_proxy function</title>
		<link>https://blog.boro2g.co.uk/serving-images-aws-api-gateway-serverless-lambda_proxy-function/</link>
					<comments>https://blog.boro2g.co.uk/serving-images-aws-api-gateway-serverless-lambda_proxy-function/#respond</comments>
		
		<dc:creator><![CDATA[boro]]></dc:creator>
		<pubDate>Wed, 18 Apr 2018 13:15:55 +0000</pubDate>
				<category><![CDATA[AWS]]></category>
		<guid isPermaLink="false">https://blog.boro2g.co.uk/?p=918</guid>

					<description><![CDATA[<p>In another post I mentioned the neat features now available in AWS: Serverless templates. As part of an experiment I thought it would be interesting to see how you could put a light security layer over S3 so that media requests stream from S3 if a user logs in. The WebApi template  already ships with [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://blog.boro2g.co.uk/serving-images-aws-api-gateway-serverless-lambda_proxy-function/">Serving images through AWS Api Gateway from Serverless Lambda_proxy function</a> appeared first on <a rel="nofollow" href="https://blog.boro2g.co.uk">blog.boro2g .co.uk</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>In another post I mentioned the neat features now available in AWS: <a href="https://blog.boro2g.co.uk/aws-lambda-now-supports-serverless-applications-including-webapi/">Serverless templates</a>.</p>
<p>As part of an experiment I thought it would be interesting to see how you could put a light security layer over S3 so that media requests stream from S3 if a user logs in.</p>
<p>The WebApi template already ships with an S3 proxy controller. Tweaking this slightly allowed me to stream or download the image:</p><pre class="crayon-plain-tag">[HttpGet(&quot;{*key}&quot;)]
public async Task Get(string key)
{
    try
    {
        var getResponse = await S3Client.GetObjectAsync(new GetObjectRequest
        {
            BucketName = BucketName,
            Key = key
        });

        Response.ContentType = getResponse.Headers.ContentType;

        getResponse.ResponseStream.CopyTo(Response.Body);
    }
    catch (AmazonS3Exception e)
    {
        Response.StatusCode = (int)e.StatusCode;
        var writer = new StreamWriter(Response.Body);
        writer.Write(e.Message);
    }
}

// Note: this action returns a result, so it needs Task&lt;IActionResult&gt;
[HttpGet(&quot;dl/{*key}&quot;)]
public async Task&lt;IActionResult&gt; Download(string key)
{
    try
    {
        var getResponse = await S3Client.GetObjectAsync(new GetObjectRequest
        {
            BucketName = BucketName,
            Key = key
        });

        return File(getResponse.ResponseStream, &quot;application/octet-stream&quot;);
    }
    catch (AmazonS3Exception e)
    {
        Response.StatusCode = (int)e.StatusCode;
        var writer = new StreamWriter(Response.Body);
        writer.Write(e.Message);
    }

    return new EmptyResult();
}</pre>
<p>The issue I ran into was accessing images via the endpoint you then get in API Gateway &#8211; images were coming back encoded, so wouldn&#8217;t render in the browser. </p>
<p><strong>The solution:</strong></p>
<ul>
<li>In the settings for your api gateway, add the Binary Media Type: <strong>*/*</strong></li>
<li>On the {proxy+} method&#8217;s &#8216;Method Response&#8217;, add a <strong>200</strong> response and a <strong>Content-Type</strong> header. Finally, publish your API</li>
</ul>
<p>The second step here may not be necessary but I found the */* didn&#8217;t kick in until I made the change.</p>
<p>The post <a rel="nofollow" href="https://blog.boro2g.co.uk/serving-images-aws-api-gateway-serverless-lambda_proxy-function/">Serving images through AWS Api Gateway from Serverless Lambda_proxy function</a> appeared first on <a rel="nofollow" href="https://blog.boro2g.co.uk">blog.boro2g .co.uk</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://blog.boro2g.co.uk/serving-images-aws-api-gateway-serverless-lambda_proxy-function/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>PUB SUB in AWS Lambda via SNS using C#</title>
		<link>https://blog.boro2g.co.uk/pub-sub-aws-lambda-via-sns-using-c/</link>
					<comments>https://blog.boro2g.co.uk/pub-sub-aws-lambda-via-sns-using-c/#comments</comments>
		
		<dc:creator><![CDATA[boro]]></dc:creator>
		<pubDate>Wed, 21 Mar 2018 14:47:33 +0000</pubDate>
				<category><![CDATA[AWS]]></category>
		<guid isPermaLink="false">https://blog.boro2g.co.uk/?p=910</guid>

					<description><![CDATA[<p>AWS Lambdas are a great replacement for things like Windows Services which need to run common tasks periodically. A few examples would be triggering scheduled backups or polling urls. You can set many things as the trigger for a lambda; for scheduled operations this can be a CRON expression triggered from a CloudWatch event. Alternatively [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://blog.boro2g.co.uk/pub-sub-aws-lambda-via-sns-using-c/">PUB SUB in AWS Lambda via SNS using C#</a> appeared first on <a rel="nofollow" href="https://blog.boro2g.co.uk">blog.boro2g .co.uk</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>AWS Lambdas are a great replacement for things like Windows Services which need to run common tasks periodically. A few examples would be triggering scheduled backups or polling urls.</p>
<p>You can set many things as the trigger for a lambda; for scheduled operations this can be a CRON expression triggered from a CloudWatch event. Alternatively lambdas can be triggered via a subscription to an SNS topic.</p>
<p>Depending on the type of operation you want to perform on a schedule, you might find it takes longer than the timeout restriction imposed by AWS. If that&#8217;s the case then a simple PUB SUB (publish/subscribe) configuration should help.</p>
<p><strong>Sample scenario</strong></p>
<p>We want to move database backups between two buckets in S3. There are several databases to copy, each of which is a large file.</p>
<p>In one lambda you can easily find all the files to copy, but if you also try to copy them all, at some point your function will time out.</p>
<p><strong>Pub sub to the rescue</strong></p>
<p>Why not set up two lambda functions? One as the publisher and one as the subscriber, glued together with an SNS (Simple Notification Service) topic.</p>
<p><strong>The publisher</strong></p>
<p>Typically this would be triggered from a schedule and would raise events for each operation. Let&#8217;s assume we use a simple POCO for conveying the information we need:</p><pre class="crayon-plain-tag">class UrlRequestMessage
{
    public string[] Urls {get;set;}
}</pre><p></p>
<p></p><pre class="crayon-plain-tag">public class Function
{
	public string FunctionHandler(object input, ILambdaContext context)
	{
                //This could gather urls to poll from a file, db or anywhere
		var urlsToScan = LoadUrls();

		var snsClient = new AmazonSimpleNotificationServiceClient();

		int batchSize = 4;

		context.Logger.LogLine($&quot;Batch size: {batchSize}&quot;);

		//Batch() is an extension method - e.g. from the MoreLINQ package
		foreach (var urlToScan in urlsToScan.Batch(batchSize))
		{
			snsClient.PublishAsync(&quot;topic arn e.g. arn:aws:sns:eu-west-1:98976####:UrlPoller_Topic&quot;,
				JsonConvert.SerializeObject(new UrlRequestMessage {Urls = urlToScan.ToArray()})).Wait();

			context.Logger.LogLine($&quot;Raised event for urls: {String.Join(&quot; | &quot;, urlToScan)}&quot;);
		}

		return &quot;ok&quot;;
	}
}</pre><p></p>
<p>The batching can be ignored if need be &#8211; in this scenario it allows multiple urls to be handled by one subscriber.</p>
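<p><em>Note, Batch() isn&#8217;t part of LINQ itself &#8211; it typically comes from an extension library such as MoreLINQ. If you&#8217;d rather not take the dependency, a minimal sketch of the extension could be:</em></p>
<p></p><pre class="crayon-plain-tag">public static class EnumerableExtensions
{
	//Splits a sequence into chunks of up to batchSize items
	public static IEnumerable&lt;IEnumerable&lt;T&gt;&gt; Batch&lt;T&gt;(this IEnumerable&lt;T&gt; source, int batchSize)
	{
		var batch = new List&lt;T&gt;(batchSize);

		foreach (var item in source)
		{
			batch.Add(item);

			if (batch.Count == batchSize)
			{
				yield return batch;
				batch = new List&lt;T&gt;(batchSize);
			}
		}

		if (batch.Count &gt; 0)
		{
			yield return batch;
		}
	}
}</pre><p></p>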
<p><strong>The subscriber</strong></p>
<p>Next we need to listen for the messages &#8211; configure the subscriber function with an SNS trigger that uses the same topic you published to before.</p>
<p></p><pre class="crayon-plain-tag">public class Function
{
	public string FunctionHandler(SNSEvent message, ILambdaContext context)
	{
		foreach (var record in message.Records)
		{
			var decodedMessage = JsonConvert.DeserializeObject&lt;UrlRequestMessage&gt;(record.Sns.Message);

			foreach (var url in decodedMessage.Urls)
			{
                                //here you just need to implement your logic for polling a url
                                // e.g. var result = new WebClient().DownloadStringTaskAsync(url).Result;
				var requestSummary = RequestUrl(url, context.Logger);				
			}
		}

		return &quot;OK!&quot;;
	}
}</pre><p></p>
<p><strong>Debugging things</strong><br />
You can either run each function on demand and see any output directly in the Lambda test window, or dig into your CloudWatch logs for each function.</p>
<p>The post <a rel="nofollow" href="https://blog.boro2g.co.uk/pub-sub-aws-lambda-via-sns-using-c/">PUB SUB in AWS Lambda via SNS using C#</a> appeared first on <a rel="nofollow" href="https://blog.boro2g.co.uk">blog.boro2g .co.uk</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://blog.boro2g.co.uk/pub-sub-aws-lambda-via-sns-using-c/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
			</item>
		<item>
		<title>AWS Lambda now supports Serverless applications including WebApi</title>
		<link>https://blog.boro2g.co.uk/aws-lambda-now-supports-serverless-applications-including-webapi/</link>
					<comments>https://blog.boro2g.co.uk/aws-lambda-now-supports-serverless-applications-including-webapi/#respond</comments>
		
		<dc:creator><![CDATA[boro]]></dc:creator>
		<pubDate>Fri, 16 Feb 2018 11:23:49 +0000</pubDate>
				<category><![CDATA[AWS]]></category>
		<guid isPermaLink="false">https://blog.boro2g.co.uk/?p=902</guid>

					<description><![CDATA[<p>One of the most exciting areas that I&#8217;ve seen emerging in the Cloud space recently is Serverless computing. Both AWS and Azure have their own flavour: AWS Lambda and Azure Functions. An intro into Serverless It really does what it says on the tin. You can run code but without dedicated infrastructure that you host. [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://blog.boro2g.co.uk/aws-lambda-now-supports-serverless-applications-including-webapi/">AWS Lambda now supports Serverless applications including WebApi</a> appeared first on <a rel="nofollow" href="https://blog.boro2g.co.uk">blog.boro2g .co.uk</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>One of the most exciting areas that I&#8217;ve seen emerging in the Cloud space recently is Serverless computing. Both AWS and Azure have their own flavour: <a href="https://aws.amazon.com/lambda/" target="_blank" rel="noopener">AWS Lambda</a> and <a href="https://azure.microsoft.com/en-gb/services/functions/" target="_blank" rel="noopener">Azure Functions</a>.</p>
<p><strong>An intro into Serverless</strong></p>
<p>It really does what it says on the tin: you can run code without hosting any dedicated infrastructure yourself. A good example is building Alexa Skills.</p>
<p><em>You create an AWS lambda function in one of the supported languages, and then deploy it into the cloud. Whenever someone uses your skill the lambda gets invoked and returns the content you need.</em></p>
<p>Behind the scenes AWS hosts your function in a container; if it receives traffic the container remains hot, if it doesn&#8217;t it&#8217;s &#8216;frozen&#8217;. There is a very good description of this at <a href="https://medium.com/@tjholowaychuk/aws-lambda-lifecycle-and-in-memory-caching-c9cd0844e072" target="_blank" rel="noopener">https://medium.com/@tjholowaychuk/aws-lambda-lifecycle-and-in-memory-caching-c9cd0844e072</a></p>
<p><strong>Your language of choice</strong></p>
<p>AWS Lambda supports a raft of languages: Python, Node, Java, .net core and others. Recently this has been upgraded so that it supports .net core 2.</p>
<p><strong>Doing the legwork</strong></p>
<p>With a basic lambda function you can concoct different handlers (methods) which respond to requests. This allows one lambda to service several endpoints. However, you need to do quite a lot of wiring and it doesn&#8217;t feel quite like normal WebApi programming.</p>
<p><strong>Enter the serverless applications</strong></p>
<p>This came right out of the blue, but was very cool &#8211; Amazon released some starter kits that allow you to run both Razor Pages and WebApi applications in Lambdas! <a href="https://aws.amazon.com/blogs/developer/serverless-asp-net-core-2-0-applications/" target="_blank" rel="noopener">https://aws.amazon.com/blogs/developer/serverless-asp-net-core-2-0-applications/</a></p>
<p>Woah, you can write normal WebApi and deploy into a lambda. That is big.</p>
<p><strong>Quick, migrate all the things</strong></p>
<p>So I tried this. And for the most part everything worked pretty seamlessly. All the code I&#8217;d already written mapped easily into WebApi controllers I could then run locally. Tick.</p>
<p>Deploying was simple, either via Visual Studio or the <a href="https://github.com/aws/aws-lambda-dotnet/blob/master/Libraries/src/Amazon.Lambda.Tools/README.md" target="_blank" rel="noopener">dotnet lambda</a> tools. Tick.</p>
<p>Using the serverless.template that ships with the starter pack it even setup my Api Gateway. Tick.</p>
<p>Dependency injection that&#8217;s inherently available in .net core all worked. Tick.</p>
<p>WebApi attribute routing all works. Tick.</p>
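<p><em>For reference, the glue that makes all of this work is a single entry point class &#8211; roughly like the sketch below (this assumes the Amazon.Lambda.AspNetCoreServer package and your existing Startup class):</em></p>
<p></p><pre class="crayon-plain-tag">//API Gateway proxy requests are translated into normal ASP.NET Core requests by the base class
public class LambdaEntryPoint : Amazon.Lambda.AspNetCoreServer.APIGatewayProxyFunction
{
	protected override void Init(IWebHostBuilder builder)
	{
		//Startup is your usual ASP.NET Core startup class - DI, attribute routing etc all behave as normal
		builder.UseStartup&lt;Startup&gt;();
	}
}</pre><p></p>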
<p>So far so good right <img src="https://s.w.org/images/core/emoji/15.0.3/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>
<p><strong>What I haven&#8217;t quite cracked yet?</strong></p>
<p>In my original deployment (pre WebApi) I was using API level caching over a couple of specific endpoints. This was path based as it was for specific methods. The new API Gateway deployment directs all traffic to a {proxy+} url in order to route any request through to the routing in your WebApi. If you turn caching on here it&#8217;s a bit of a race: whichever url is hit first will fill the cache for all requests. Untick!</p>
<p>Startup errors don&#8217;t always bubble up very well when debugging locally. I have a feeling this isn&#8217;t anything Amazon related, but it&#8217;s worth being aware of. E.g. if you mess up your DI, it takes some ctor null debugging to find the cause. Untick.</p>
<p><strong>Summary</strong></p>
<p>I was hugely impressed with the WebApi integration. Once the chinks in the path based caching at the API Gateway are ironed out I&#8217;d consider this a very good option for handling API requests.</p>
<p>Watch this space <img src="https://s.w.org/images/core/emoji/15.0.3/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>
<p>The post <a rel="nofollow" href="https://blog.boro2g.co.uk/aws-lambda-now-supports-serverless-applications-including-webapi/">AWS Lambda now supports Serverless applications including WebApi</a> appeared first on <a rel="nofollow" href="https://blog.boro2g.co.uk">blog.boro2g .co.uk</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://blog.boro2g.co.uk/aws-lambda-now-supports-serverless-applications-including-webapi/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Copying large files between S3 buckets</title>
		<link>https://blog.boro2g.co.uk/copying-large-files-s3-buckets/</link>
					<comments>https://blog.boro2g.co.uk/copying-large-files-s3-buckets/#respond</comments>
		
		<dc:creator><![CDATA[boro]]></dc:creator>
		<pubDate>Tue, 30 Jan 2018 09:51:26 +0000</pubDate>
				<category><![CDATA[AWS]]></category>
		<guid isPermaLink="false">https://blog.boro2g.co.uk/?p=898</guid>

					<description><![CDATA[<p>There are many different scenarios you might face where you need to migrate data between S3 buckets, and folders. Depending on the use case you have several options for the language to select. Lambda&#8217;s &#8211; this could be Python, Java, JavaScript or C# Bespoke code &#8211; again, this could be any language you select. To [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://blog.boro2g.co.uk/copying-large-files-s3-buckets/">Copying large files between S3 buckets</a> appeared first on <a rel="nofollow" href="https://blog.boro2g.co.uk">blog.boro2g .co.uk</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>There are many different scenarios where you might need to migrate data between S3 buckets and folders. Depending on the use case you have several options for the language to use.</p>
<ul>
<li>
Lambdas &#8211; these could be written in Python, Java, JavaScript or C#
</li>
<li>
Bespoke code &#8211; again, this could be any language you choose. To keep things different from above, let&#8217;s add PowerShell to the mix
</li>
</ul>
<p>Behind the scenes a lot of these SDKs call into common endpoints that Amazon hosts. As a user you don&#8217;t need to delve too deeply into the specific endpoints unless you really have to.</p>
<p><strong>Back to the issue at hand &#8211; copying large files</strong><br />
Amazon imposes a limit of roughly 5GB on regular copy operations. If you use the standard copy operations you will probably hit exceptions once file sizes grow beyond that.</p>
<p>The solution is to use the multipart copy. It sounds complex but all the code is provided for you:</p>
<p><strong>Python</strong><br />
This is probably the easiest, as the <strong>boto3</strong> library already handles it. Rather than using <strong>copy_object</strong>, the <strong>copy</strong> function handles multipart uploads for you: <a href="http://boto3.readthedocs.io/en/latest/reference/services/s3.html#S3.Client.copy" rel="noopener" target="_blank">http://boto3.readthedocs.io/en/latest/reference/services/s3.html#S3.Client.copy</a></p>
<p><strong>C#</strong><br />
The C# implementation is slightly more complex, however Amazon provides a good worked example: <a href="https://docs.aws.amazon.com/AmazonS3/latest/dev/CopyingObjctsUsingLLNetMPUapi.html" rel="noopener" target="_blank">https://docs.aws.amazon.com/AmazonS3/latest/dev/CopyingObjctsUsingLLNetMPUapi.html</a></p>
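<p><em>As a rough sketch of that flow (initiate, copy the byte ranges in parts, then complete &#8211; the bucket and key variables here are placeholders, and you&#8217;ll want the error handling from Amazon&#8217;s worked example):</em></p>
<p></p><pre class="crayon-plain-tag">var s3Client = new AmazonS3Client();

//Find the size of the source object so it can be split into byte ranges
long objectSize = s3Client.GetObjectMetadata(new GetObjectMetadataRequest
{
	BucketName = sourceBucket,
	Key = sourceKey
}).ContentLength;

var initResponse = s3Client.InitiateMultipartUpload(new InitiateMultipartUploadRequest
{
	BucketName = targetBucket,
	Key = targetKey
});

long partSize = 50 * 1024 * 1024; //50MB parts (the minimum part size is 5MB)
var copyResponses = new List&lt;CopyPartResponse&gt;();

for (long bytePosition = 0, partNumber = 1; bytePosition &lt; objectSize; bytePosition += partSize, partNumber++)
{
	copyResponses.Add(s3Client.CopyPart(new CopyPartRequest
	{
		SourceBucket = sourceBucket,
		SourceKey = sourceKey,
		DestinationBucket = targetBucket,
		DestinationKey = targetKey,
		UploadId = initResponse.UploadId,
		FirstByte = bytePosition,
		LastByte = Math.Min(bytePosition + partSize - 1, objectSize - 1),
		PartNumber = (int)partNumber
	}));
}

var completeRequest = new CompleteMultipartUploadRequest
{
	BucketName = targetBucket,
	Key = targetKey,
	UploadId = initResponse.UploadId
};

completeRequest.AddPartETags(copyResponses);

s3Client.CompleteMultipartUpload(completeRequest);</pre><p></p>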
<p><strong>Powershell</strong><br />
A close mimic of the C# implementation &#8211; someone has ported the C# example to PowerShell: <a href="https://stackoverflow.com/a/32635525/1065332" rel="noopener" target="_blank">https://stackoverflow.com/a/32635525/1065332</a></p>
<p>Happy copying!</p>
<p>The post <a rel="nofollow" href="https://blog.boro2g.co.uk/copying-large-files-s3-buckets/">Copying large files between S3 buckets</a> appeared first on <a rel="nofollow" href="https://blog.boro2g.co.uk">blog.boro2g .co.uk</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://blog.boro2g.co.uk/copying-large-files-s3-buckets/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>AlexaCore &#8211; a c# diversion into writing Alexa skills</title>
		<link>https://blog.boro2g.co.uk/alexacore-c-diversion-writing-alexa-skills/</link>
					<comments>https://blog.boro2g.co.uk/alexacore-c-diversion-writing-alexa-skills/#respond</comments>
		
		<dc:creator><![CDATA[boro]]></dc:creator>
		<pubDate>Mon, 11 Sep 2017 12:26:33 +0000</pubDate>
				<category><![CDATA[AWS]]></category>
		<guid isPermaLink="false">https://blog.boro2g.co.uk/?p=878</guid>

					<description><![CDATA[<p>Following the recent Amazon Prime day, I thought it was time to jump on the home assistant bandwagon &#8211; £80 seemed a pretty good deal for an Alexa Echo. If you&#8217;ve not tried writing Alexa skills there are some really good blog posts to help get started at: http://timheuer.com/blog/archive/2016/12/12/amazon-alexa-skill-using-c-sharp-dotnet-core.aspx Skills can be underpinned by an [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://blog.boro2g.co.uk/alexacore-c-diversion-writing-alexa-skills/">AlexaCore &#8211; a c# diversion into writing Alexa skills</a> appeared first on <a rel="nofollow" href="https://blog.boro2g.co.uk">blog.boro2g .co.uk</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Following the recent Amazon Prime day, I thought it was time to jump on the home assistant bandwagon &#8211; £80 seemed a pretty good deal for an Amazon Echo.</p>
<p>If you&#8217;ve not tried writing Alexa skills there are some really good blog posts to help get started at: <a href="http://timheuer.com/blog/archive/2016/12/12/amazon-alexa-skill-using-c-sharp-dotnet-core.aspx" target="_blank">http://timheuer.com/blog/archive/2016/12/12/amazon-alexa-skill-using-c-sharp-dotnet-core.aspx</a></p>
<p>Skills can be underpinned by an AWS lambda function. While experimenting with writing these I&#8217;ve started putting together some helpers which remove a lot of the boilerplate code needed for C# Alexa lambda functions, including some fluent tools for running unit tests.</p>
<p>The code and some examples are available at <a href="https://github.com/boro2g/AlexaCore" target="_blank">https://github.com/boro2g/AlexaCore</a>. Hopefully they will help get your skills off the ground!</p>
<p>The post <a rel="nofollow" href="https://blog.boro2g.co.uk/alexacore-c-diversion-writing-alexa-skills/">AlexaCore &#8211; a c# diversion into writing Alexa skills</a> appeared first on <a rel="nofollow" href="https://blog.boro2g.co.uk">blog.boro2g .co.uk</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://blog.boro2g.co.uk/alexacore-c-diversion-writing-alexa-skills/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Log aggregation in AWS – part 3 – enriching the data sent into ElasticSearch</title>
		<link>https://blog.boro2g.co.uk/log-aggregation-aws-part-3-enriching-data-sent-elasticsearch/</link>
					<comments>https://blog.boro2g.co.uk/log-aggregation-aws-part-3-enriching-data-sent-elasticsearch/#respond</comments>
		
		<dc:creator><![CDATA[boro]]></dc:creator>
		<pubDate>Fri, 09 Jun 2017 14:38:55 +0000</pubDate>
				<category><![CDATA[AWS]]></category>
		<guid isPermaLink="false">https://blog.boro2g.co.uk/?p=851</guid>

					<description><![CDATA[<p>This is the third, and last part of the series that details how to aggregate all your log data within AWS. See Part 1 and Part 2 for getting started and keeping control of the size of your index. By this point you should have a self sufficient ElasticSearch domain running that pools logs from all [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://blog.boro2g.co.uk/log-aggregation-aws-part-3-enriching-data-sent-elasticsearch/">Log aggregation in AWS – part 3 – enriching the data sent into ElasticSearch</a> appeared first on <a rel="nofollow" href="https://blog.boro2g.co.uk">blog.boro2g .co.uk</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>This is the third, and last part of the series that details how to aggregate all your log data within AWS. See <a href="/log-aggregation-aws-part-1/">Part 1</a> and <a href="/log-aggregation-aws-part-2-keeping-index-control/">Part 2</a> for <a href="/log-aggregation-aws-part-1/">getting started</a> and <a href="/log-aggregation-aws-part-2-keeping-index-control/">keeping control of the size of your index</a>.</p>
<p>By this point you should have a self sufficient ElasticSearch domain running that pools logs from all the CloudWatch log groups that have been configured with the correct subscriber.</p>
<p><strong>The final step: how can we enrich the data being sent into the index?</strong></p>
<p>By default AWS will set up a lambda function for you that extracts information from the CloudWatch event. It will contain things like: instanceId, event timestamp, the source log group and a few more. This is handled in the lambda via:</p><pre class="crayon-plain-tag">var source = buildSource(logEvent.message, logEvent.extractedFields);
source['@id'] = logEvent.id;
source['@timestamp'] = new Date(1 * logEvent.timestamp).toISOString();
source['@message'] = logEvent.message;
source['@owner'] = payload.owner;
source['@log_group'] = payload.logGroup;
source['@log_stream'] = payload.logStream;</pre><p><em>Note, a tip around handling numeric fields &#8211; in order for ElasticSearch to believe fields are numbers rather than strings you can multiply the value by 1 e.g.: <strong>source[&#8216;@fieldName&#8217;] = 1*value;</strong></em></p>
<p><strong>What to enrich the data with?</strong></p>
<p>This kind of depends on your use case. As we were aggregating logs from a wide range of boxes, applications and services, we wanted to enrich the data in the index with the tags applied to each instance. This sounds simple but needed some planning around how to access the tags for each log entry &#8211; I&#8217;m not sure AWS would look kindly on 500,000 API requests in 15 mins!</p>
<p><strong>Lambda and caching</strong></p>
<p>Lambda functions are a very interesting offering and I&#8217;m sure you will start to see a lot more use cases for them over the next few years. One challenge they bring is that they are stateless &#8211; in our scenario we need a way of querying an instance for its tags based on its InstanceId. Enter DynamoDB, another AWS service that provides scalable key-value pair storage.</p>
<p>Amazon defines Dynamo as: <em>Amazon DynamoDB is a fully managed non-relational database service that provides fast and predictable performance with seamless scalability.</em></p>
<p><strong>Our solution</strong></p>
<p>There were 2 key steps to the process:</p>
<ol>
<li>Updating dynamo to gather tag information from our running instances</li>
<li>Updating the lambda script to pull the tags from dynamo as log entries are processed</li>
</ol>
<p><strong>1. Pushing instance data into Dynamo</strong></p>
<p>Set up a lambda function that periodically scans all running instances in our account and pushes the respective details into Dynamo.</p>
<ol>
<li>Set up a new DynamoDB table
<ol>
<li>Named: <strong>kibana-instanceLookup</strong></li>
<li>Region: <strong>eu-west-1 </strong>(note, adjust this as you need)</li>
<li>Primary partition key: <strong>instanceId</strong> (of type string)
<ol>
<li><em>Note &#8211; we will tweak the read capacity units once things are up and running &#8211; for production we average about 50 </em></li>
</ol>
</li>
</ol>
</li>
<li>Set up a new lambda function
<ol>
<li>Named: <strong>LogsToElasticSearch_InstanceQueries_regionName </strong></li>
<li>Add environment variable: <strong>region=eu-west-1</strong>
<ol>
<li><em>Note, if you want this to pool logs from several regions into one dynamo setup a lambda function per region and set different environment variables for each. You can use the same trigger and role for each region</em></li>
</ol>
</li>
<li>Use the script shown below</li>
<li>Set the execution timeout to be: <strong>1 minute</strong> (note, tweak this if the function takes longer to run)</li>
<li>Create a new role and give the following permissions:
<ol>
<li><strong>AmazonEC2ReadOnlyAccess</strong> (assign the OTB policy)</li>
<li>Plus add the following policy:</li>
<li>
<pre class="crayon-plain-tag">{
	&quot;Effect&quot;: &quot;Allow&quot;,
	&quot;Action&quot;: &quot;dynamodb:*&quot;,
	&quot;Resource&quot;: &quot;arn:aws:dynamodb:eu-west-1:###:table/kibana-instanceLookup&quot;
}</pre></p>
<ol>
<li><em>Note, the ### in the role wants to be your account id</em></li>
</ol>
</li>
</ol>
</li>
<li>Set up a trigger within <strong>CloudWatch</strong> -> <strong>Rules</strong>
<ol>
<li>To run hourly, set the cron to be: <strong>0 * * * ? *</strong></li>
<li>Select the target to be your new lambdas
<ol>
<li><em>Note, you can always test your lambda by simply running on demand with any test event</em></li>
</ol>
</li>
</ol>
</li>
</ol>
</li>
</ol>
<p>And the respective script:</p><pre class="crayon-plain-tag">var AWS = require(&quot;aws-sdk&quot;);

var tableName = 'kibana-instanceLookup';

exports.handler = function (input, context)
{
	AWS.config.update({ region: process.env.region });

	this._ec2 = new AWS.EC2();

	var queryParams = {
		MaxResults: 200
	};

	var instances = [];

	this._ec2.describeInstances(queryParams, function (err, data)
	{
		if (err) console.log(err, err.stack);
		else
		{
			data.Reservations.forEach((r) =&gt;
			{
				r.Instances.forEach((i) =&gt;
				{
					instances.push({ &quot;instanceId&quot;: i.InstanceId, &quot;tags&quot;: i.Tags });
				});
			});

			console.log(JSON.stringify(instances));

			instances.forEach((instance) =&gt;
			{
				pushInstanceToDynamo(instance);
			});
		}
	});

	function pushInstanceToDynamo(instance)
	{
		AWS.config.update({ region: 'eu-west-1' });

		var params = {
			Key: {
				&quot;instanceId&quot;: {
					S: instance.instanceId
				}
			},
			TableName: tableName
		};

		new AWS.DynamoDB().getItem(params, function (err, data)
		{
			if (err)
			{
				console.log(err, err.stack);
			}
			else
			{
				if (data.Item)
				{
					console.log(&quot;Item found in dynamo - not updating &quot; + instance.instanceId);
				}
				else
				{
					var tags = {};
					tags.L = [];
					instance.tags.forEach((tag) =&gt;
					{
						if (tag.Key.indexOf(&quot;aws:&quot;) === -1)
						{
							tags.L.push(buildArrayEntry(tag));
						}
					});

					var insertParams = {
						Item: {
							&quot;instanceId&quot;: {
								S: instance.instanceId
							},
							&quot;tags&quot;: tags
						},
						ReturnConsumedCapacity: &quot;TOTAL&quot;,
						TableName: tableName
					};

					new AWS.DynamoDB().putItem(insertParams, function (err, data)
					{
						if (err) console.log(err, err.stack);
						else console.log(data);
					});
				}
			}
		});
	}

	function buildArrayEntry(tag)
	{
		var m = {};

		m[tag.Key] = { &quot;S&quot;: tag.Value };

		return { &quot;M&quot;: m };
	}
}</pre><p><em>Note, if your dynamo runs in a different region to eu-west-1, update the first line of the <strong>pushInstanceToDynamo</strong> method and set the desired target region.</em></p>
<p>Running on demand should then fill your dynamo with data e.g.:</p>
<p><a href="https://blog.boro2g.co.uk/wp-content/uploads/2017/06/dynamo.jpg"><img fetchpriority="high" decoding="async" class="alignnone size-full wp-image-857" src="https://blog.boro2g.co.uk/wp-content/uploads/2017/06/dynamo.jpg" alt="" width="775" height="344" srcset="https://blog.boro2g.co.uk/wp-content/uploads/2017/06/dynamo.jpg 775w, https://blog.boro2g.co.uk/wp-content/uploads/2017/06/dynamo-300x133.jpg 300w, https://blog.boro2g.co.uk/wp-content/uploads/2017/06/dynamo-768x341.jpg 768w" sizes="(max-width: 775px) 100vw, 775px" /></a></p>
<p><strong>2. Querying dynamo when you process log entries</strong></p>
<p>The final piece of the puzzle is to update the streaming function to query dynamo as required. This needs a few things:</p>
<ol>
<li>Update the role used for the lambda that streams data from CloudWatch into ElasticSearch</li>
<li>
<pre class="crayon-plain-tag">{
	&quot;Effect&quot;: &quot;Allow&quot;,
	&quot;Action&quot;: &quot;dynamodb:GetItem&quot;,
	&quot;Resource&quot;: &quot;arn:aws:dynamodb:eu-west-1:###:table/kibana-instanceLookup&quot;
}</pre><br />
where ### is your account id</li>
<li>Update the lambda script set up in part 1 of this series and tweak as shown below</li>
</ol>
<p>Add the AWS variable to the requires at the top of the file:</p><pre class="crayon-plain-tag">var AWS = require(&quot;aws-sdk&quot;);</pre><p>Update the exports.handler &amp; transform methods and add loadFromDynamo to be:</p><pre class="crayon-plain-tag">exports.handler = function (input, context)
{
	this._dynamoDb = new AWS.DynamoDB();	

	// decode input from base64
	var zippedInput = new Buffer(input.awslogs.data, 'base64');

	// decompress the input
	zlib.gunzip(zippedInput, function (error, buffer)
	{
		if (error) { context.fail(error); return; }

		// parse the input from JSON
		var awslogsData = JSON.parse(buffer.toString('utf8'));

		// transform the input to Elasticsearch documents
		transform(awslogsData, (elasticsearchBulkData) =&gt;
		{
			// skip control messages
			if (!elasticsearchBulkData)
			{
				console.log('Received a control message');
				context.succeed('Control message handled successfully');
				return;
			}

			// post documents to the Amazon Elasticsearch Service
			post(elasticsearchBulkData, function (error, success, statusCode, failedItems)
			{
				console.log('Response: ' + JSON.stringify({
					&quot;statusCode&quot;: statusCode
				}));

				if (error)
				{
					console.log('Error: ' + JSON.stringify(error, null, 2));

					if (failedItems &amp;&amp; failedItems.length &gt; 0)
					{
						console.log(&quot;Failed Items: &quot; +
							JSON.stringify(failedItems, null, 2));
					}

					context.fail(JSON.stringify(error));
				} else
				{
					console.log('Success: ' + JSON.stringify(success));
					context.succeed('Success');
				}
			});
		});
	});
};

function transform(payload, callback)
{
	if (payload.messageType === 'CONTROL_MESSAGE')
	{
		return null;
	}

	var bulkRequestBody = '';

	var instanceId = payload.logStream;

	if (instanceId.indexOf(&quot;.&quot;) &gt; -1)
	{
		instanceId = instanceId.substring(0, instanceId.indexOf(&quot;.&quot;));
	}

	loadFromDynamo(instanceId,
		(dynamoTags) =&gt;
		{
			payload.logEvents.forEach(function (logEvent)
			{
				var timestamp = new Date(1 * logEvent.timestamp);

				// index name format: cwl-YYYY.MM.DD
				var indexName = [
					'cwl-' + timestamp.getUTCFullYear(),              // year
					('0' + (timestamp.getUTCMonth() + 1)).slice(-2),  // month
					('0' + timestamp.getUTCDate()).slice(-2)          // day
				].join('.');				

				var source = buildSource(logEvent.message, logEvent.extractedFields);
				source['@id'] = logEvent.id;
				source['@timestamp'] = new Date(1 * logEvent.timestamp).toISOString();
				source['@message'] = logEvent.message;
				source['@owner'] = payload.owner;
				source['@log_group'] = payload.logGroup;
				source['@log_stream'] = payload.logStream;				

				var action = { &quot;index&quot;: {} };
				action.index._index = indexName;
				action.index._type = payload.logGroup;
				action.index._id = logEvent.id;

				bulkRequestBody += [
					JSON.stringify(action),
					JSON.stringify(Object.assign({}, source, dynamoTags))
				].join('\n') + '\n';
			});
			callback(bulkRequestBody);
		});
}

function loadFromDynamo(instanceId, callback)
{
	var tagsSource = {};

	try
	{
		var params = {
			Key: {
				&quot;instanceId&quot;: {
					S: instanceId
				}
			},
			TableName: &quot;kibana-instanceLookup&quot;
		};
		this._dynamoDb.getItem(params, function (err, data)
		{
			if (err)
			{
				console.log(err, err.stack);
				callback(tagsSource);
			}
			else
			{
				if (data.Item) 
				{
					data.Item.tags.L.forEach((tag) =&gt;
						{

							var key = Object.keys(tag.M)[0];
							tagsSource['@' + key] = tag.M[key].S;
						});
				}

				callback(tagsSource);
			}
		});
	}
	catch (exception)
	{
		console.log(exception);
		callback(tagsSource);
	}
}</pre><p>The final step is to refresh the index definition within Kibana: <strong>Management -> Index patterns -> Refresh field list</strong>.</p>
<p><strong>Final thoughts</strong><br />
There are a few things to keep an eye on as you roll this out &#8211; bear in mind these may need tweaking over time:</p>
<ul>
<li>The lambda function that scans EC2 times out &#8211; if so, increase the timeout</li>
<li>The ElasticSearch index runs out of space &#8211; if so, adjust the environment variables used in part 2</li>
<li>The dynamo read capacity hits its ceiling &#8211; if so, increase the read capacity (this can be seen in the Metrics section of the table in Dynamo)</li>
</ul>
<p>Happy logging!</p>
<p>The post <a rel="nofollow" href="https://blog.boro2g.co.uk/log-aggregation-aws-part-3-enriching-data-sent-elasticsearch/">Log aggregation in AWS – part 3 – enriching the data sent into ElasticSearch</a> appeared first on <a rel="nofollow" href="https://blog.boro2g.co.uk">blog.boro2g .co.uk</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://blog.boro2g.co.uk/log-aggregation-aws-part-3-enriching-data-sent-elasticsearch/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Log aggregation in AWS &#8211; part 2 &#8211; keeping your index under control</title>
		<link>https://blog.boro2g.co.uk/log-aggregation-aws-part-2-keeping-index-control/</link>
					<comments>https://blog.boro2g.co.uk/log-aggregation-aws-part-2-keeping-index-control/#comments</comments>
		
		<dc:creator><![CDATA[boro]]></dc:creator>
		<pubDate>Tue, 06 Jun 2017 10:35:56 +0000</pubDate>
				<category><![CDATA[AWS]]></category>
		<guid isPermaLink="false">https://blog.boro2g.co.uk/?p=842</guid>

					<description><![CDATA[<p>This is the second part in the series as a follow on to /log-aggregation-aws-part-1/ Hopefully by this point you&#8217;ve now got kibana up and running, gathering all the logs from each of your desired CloudWatch groups. Over time the amount of data being stored in the index will constantly be growing so we need to keep [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://blog.boro2g.co.uk/log-aggregation-aws-part-2-keeping-index-control/">Log aggregation in AWS &#8211; part 2 &#8211; keeping your index under control</a> appeared first on <a rel="nofollow" href="https://blog.boro2g.co.uk">blog.boro2g .co.uk</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>This is the second part in the series, following on from <a href="/log-aggregation-aws-part-1/">/log-aggregation-aws-part-1/</a></p>
<p>Hopefully by this point you&#8217;ve got Kibana up and running, gathering the logs from each of your desired CloudWatch groups. Over time the amount of data stored in the index constantly grows, so we need to keep things under control.</p>
<p><a href="https://blog.boro2g.co.uk/wp-content/uploads/2017/06/elastic-search-disk-space.jpg"><img decoding="async" class="alignnone size-full wp-image-843" src="https://blog.boro2g.co.uk/wp-content/uploads/2017/06/elastic-search-disk-space.jpg" alt="" width="800" height="358" srcset="https://blog.boro2g.co.uk/wp-content/uploads/2017/06/elastic-search-disk-space.jpg 800w, https://blog.boro2g.co.uk/wp-content/uploads/2017/06/elastic-search-disk-space-300x134.jpg 300w, https://blog.boro2g.co.uk/wp-content/uploads/2017/06/elastic-search-disk-space-768x344.jpg 768w" sizes="(max-width: 800px) 100vw, 800px" /></a></p>
<p>Here is a good view of the issue. We introduced our cleanup lambda on the 30th; if we hadn&#8217;t, I reckon we&#8217;d have had about two more days of uptime before the disks ran out. The oscillating pattern from the 31st onward is exactly what we&#8217;d want to see &#8211; we delete indices older than 10 days, every day.</p>
<p>Initially this was done via a scheduled task on a box we host &#8211; it worked, but wasn&#8217;t ideal as it relies on that box running, potentially on user credentials, and more. A better fit was to use AWS Lambda to keep our index under control.</p>
<p><strong>Getting setup</strong></p>
<p>Luckily you don&#8217;t need to set up much for this. One AWS Lambda function, a trigger and some role permissions, and you should be up and running.</p>
<ol>
<li>Create a new lambda function based off the script shown below</li>
<li>Add 2 environment variables:
<ol>
<li><strong>daysToKeep</strong>=<strong>10</strong></li>
<li><strong>endpoint=</strong>the Elasticsearch endpoint, e.g. <strong>search-###-###.eu-west-1.es.amazonaws.com</strong></li>
<li><a href="https://blog.boro2g.co.uk/wp-content/uploads/2017/06/kibana-lambda-variables.jpg"><img decoding="async" class="alignnone size-full wp-image-844" src="https://blog.boro2g.co.uk/wp-content/uploads/2017/06/kibana-lambda-variables.jpg" alt="" width="774" height="130" srcset="https://blog.boro2g.co.uk/wp-content/uploads/2017/06/kibana-lambda-variables.jpg 774w, https://blog.boro2g.co.uk/wp-content/uploads/2017/06/kibana-lambda-variables-300x50.jpg 300w, https://blog.boro2g.co.uk/wp-content/uploads/2017/06/kibana-lambda-variables-768x129.jpg 768w" sizes="(max-width: 774px) 100vw, 774px" /></a></li>
</ol>
</li>
<li>Create a new role as part of the setup process
<ol>
<li>Note, these can then be found in the IAM section of AWS e.g.  https://console.aws.amazon.com/iam/home?region=eu-west-1#/roles</li>
<li>Update the role to allow Get and Delete access to your index with the policy:</li>
<li>
<pre class="crayon-plain-tag">{
    &quot;Version&quot;: &quot;2012-10-17&quot;,
    &quot;Statement&quot;: [
        {
            &quot;Effect&quot;: &quot;Allow&quot;,
            &quot;Action&quot;: [
                &quot;es:ESHttpGet&quot;,
                &quot;es:ESHttpDelete&quot;
            ],
            &quot;Resource&quot;: &quot;ARN of elastic search index&quot;
        }
    ]
}</pre>
</li>
</ol>
</li>
<li>Setup a trigger (in CloudWatch -&gt; Events -&gt; Rules)
<ol>
<li>Here you can set how often it runs, e.g. the cron expression<br />
<pre class="crayon-plain-tag">0 2 * * ? *</pre><br />
will run at 2am every night</li>
</ol>
</li>
<li>Test your function; you can always run it on demand and then check whether the indices have been removed</li>
</ol>
<p>And finally the lambda code:</p>
<p></p><pre class="crayon-plain-tag">var AWS = require('aws-sdk');

var endpoint; 
var creds = new AWS.EnvironmentCredentials('AWS');

Date.prototype.addDays = function(days) {
	var dat = new Date(this.valueOf());
	dat.setDate(dat.getDate() + days);
	return dat;
}

exports.handler = function(input, context)
{
	endpoint = new AWS.Endpoint(process.env.endpoint);

	let dateBaseline = new Date();

	dateBaseline = dateBaseline.addDays(-parseInt(process.env.daysToKeep));

	console.log(&quot;Date baseline: &quot; + dateBaseline.toISOString());

	getIndices(context, function(data)
	{
		data.split('\n').forEach((row) =&gt;
			{
				let parts = row.split(&quot; &quot;);
				
				if (parts.length &gt; 2)
				{
					let indiceName = parts[2];

					if (indiceName.indexOf(&quot;cwl&quot;) &gt; -1)
					{
						let indiceDate = new Date(indiceName.substr(4, 4), indiceName.substr(9, 2)-1, indiceName.substr(12, 2));
						
						if (indiceDate &lt; dateBaseline)
						{
							console.log(&quot;Planning to delete indice: &quot; + indiceName);

							removeIndice(&quot;/&quot;+indiceName, context);
						}
					}
				}
			});
	});
}

function removeIndice(indiceName, context) 
{
	makeRequest(&quot;DELETE&quot;, indiceName, context);
}

function getIndices(context, callback)
{
	makeRequest(&quot;GET&quot;, '/_cat/indices', context, callback);
}

function makeRequest(method, path, context, callback)
{
	console.log(`Making ${method} call to ${path}`);

	var req = new AWS.HttpRequest(endpoint);

	req.method = method;
	req.path = path;
	req.region = &quot;eu-west-1&quot;;
	req.headers['presigned-expires'] = false;
	req.headers['Host'] = endpoint.host;

	var signer = new AWS.Signers.V4(req, 'es');
	signer.addAuthorization(creds, new Date());

	var send = new AWS.NodeHttpClient();
	send.handleRequest(req,
		null,
		function(httpResp)
		{
			var respBody = '';
			httpResp.on('data',
				function(chunk)
				{
					respBody += chunk;
				});
			httpResp.on('end',
				function(chunk)
				{
					if (callback)
					{
						callback(respBody);
					}
					//console.log(respBody);
				});
		},
		function(err)
		{
			console.log('Error: ' + err);
			context.fail('Lambda failed with error ' + err);
		});
}</pre><p><em>Note, if you are running in a different region you will need to tweak <strong>req.region = &#8220;eu-west-1&#8221;;</strong></em></p>
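<p>As a small aside (not from the original post): the Lambda runtime sets the <strong>AWS_REGION</strong> environment variable, so the hard-coded region could be reduced to a fallback for local runs, sketched as:</p>

```javascript
// Sketch: AWS Lambda sets AWS_REGION at runtime, so only fall back to a
// default region when it is absent (e.g. when running the script locally).
function resolveRegion(defaultRegion) {
	return process.env.AWS_REGION || defaultRegion;
}

// Inside makeRequest, this would replace the hard-coded assignment:
// req.region = resolveRegion("eu-west-1");
```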
<p><strong>How does it work?</strong></p>
<p>Elasticsearch lets you list all indices via the URL <strong>/_cat/indices</strong>. The lambda function makes a web request to this URL, parses each row, and picks out any indices whose names match <strong>cwl-YYYY.MM.dd</strong>. If an index is found that is older than <strong>daysToKeep</strong> days, a delete request is issued to Elasticsearch.</p>
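<p>The date-parsing step can be sketched in isolation like this &#8211; the sample row is made up, but follows the default <strong>/_cat/indices</strong> text output layout:</p>

```javascript
// A minimal sketch of the row parsing described above. The sample row is
// hypothetical but follows the default /_cat/indices text output layout.
function parseIndiceDate(indiceName) {
	// Index names look like cwl-YYYY.MM.dd, e.g. cwl-2017.05.20
	return new Date(
		indiceName.substr(4, 4),      // YYYY
		indiceName.substr(9, 2) - 1,  // MM (Date months are zero-based)
		indiceName.substr(12, 2));    // dd
}

var sampleRow = "green open cwl-2017.05.20 5 1 12345 0 1.2gb 600mb";
var indiceName = sampleRow.split(" ")[2];

var dateBaseline = new Date(2017, 5, 1); // stand-in for "now minus daysToKeep"

if (parseIndiceDate(indiceName) < dateBaseline) {
	console.log("Planning to delete indice: " + indiceName);
}
```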
<p><strong>Was this the best option?</strong></p>
<p>There are tools available for cleaning up old indices, including one Elastic themselves provide: <a href="https://github.com/elastic/curator">https://github.com/elastic/curator</a>. However, this requires additional boxes to run, hence the choice to keep it wrapped in a simple lambda.</p>
<p>Happy indexing!</p>
<p>The post <a rel="nofollow" href="https://blog.boro2g.co.uk/log-aggregation-aws-part-2-keeping-index-control/">Log aggregation in AWS &#8211; part 2 &#8211; keeping your index under control</a> appeared first on <a rel="nofollow" href="https://blog.boro2g.co.uk">blog.boro2g .co.uk</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://blog.boro2g.co.uk/log-aggregation-aws-part-2-keeping-index-control/feed/</wfw:commentRss>
			<slash:comments>4</slash:comments>
		
		
			</item>
	</channel>
</rss>
