Why is choosing a CMS so damn hard?

Imagine the scenario – you start on a new feature or project and there is a need for dynamic content. Sounds simple, right? Just pick a CMS platform, set up an account, update a bit of content, publish and you are done. Well, if only it were that simple! *

*Note – this post assumes that a platform like WordPress isn’t sufficient for your requirements

Where to start?

If you look at https://en.wikipedia.org/wiki/List_of_content_management_systems, it certainly won’t clear things up. There are a LOT of options! So, what sort of information should you use to feed into your decision process?

A few core CMS concepts

Before we go further, let’s define a few key concepts:

  • Headless Content Management System (CMS) – “A headless CMS is a content management system that provides a way to author content, but instead of having your content coupled to a particular output (like web page rendering), it provides your content as data over an API.” https://www.sanity.io/blog/headless-cms-explained
  • Digital Experience Platform (DXP) – “Gartner defines a digital experience platform (DXP) as an integrated set of technologies, based on a common platform, that provides a broad range of audiences with consistent, secure and personalized access to information and applications across many digital touchpoints.” https://www.gartner.com/reviews/market/digital-experience-platforms

It’s worth noting that certain vendors aim to fulfil both definitions above, whereas others operate purely as headless, cloud-native SaaS providers.

How to make a decision?

Ah, but what if the decision has already been made?

Within your team(s) or business(es), do you have an existing CMS? If so, can it be scaled or modified to serve your new needs? It’s worth considering that ‘scaled’ here covers many things – licensing, usability, modifiability, supportability, physical capacity and a raft more. This discussion often leads to some interesting outcomes – it can easily expose issues with existing tooling, or, conversely, reinforce a positive view of it.

Ok, so we already successfully use CMS X

We’re getting warmer, but I’d suggest you still need to answer a few more questions:

  • Is it fit for purpose?
  • Do its content delivery approaches fit your new requirements?
  • Will the team that uses the system be the same as the existing editors?

How to select a new CMS?

I’d recommend you build up your own criteria for assessing different tools; here are a few thought starters:

  • Cost
    • What are the license fees, and how do they scale?
      • Is it a consistent cost year by year?
      • What if you need more editors?
      • What if you need more content items, or media items?
      • What if you need to serve more traffic?
      • How much would a new environment cost?
    • How much does it cost to run and maintain the system?
      • What hosting costs will you incur?
      • How much does a release cost?
      • What costs come with your different disaster recovery (DR) options?
      • How will the infra receive security patches and software upgrades?
      • What does an upgrade of the tool look like? Is it handled for you, or do you need to own an upgrade?
        • Note: this has stung us hard in the past with certain vendors!
    • How much effort/cost is required to set it up before you can focus on delivery of features to the customer?
  • Features
    • Does the tool support the features you require?
    • Or, does the tool come with features you don’t require?
      • This is an interesting point – are you buying a Ferrari when all you need is a Ford?
    • Are your competitors using the same tool?
      • Does it suit your business model?
    • What multi-lingual requirements do you have?
      • And how does that map to content and presentation?
  • Technology constraints
    • Are there any technology restrictions imposed by the tool?
      • E.g. hosting options, language choices, CI/CD patterns, tooling constraints
      • Who owns the hosted platform, and how do backups work?
      • Does the location of data matter for your business?
  • Platform vs a tool
    • This ties into the concepts above – do you want a DXP or a headless CMS?
    • Is a composable architecture desirable for your team(s)?
  • Out of the box vs bespoke
    • What comes ‘for free’? And, do you even want the ‘free’ features?
      • If we think of enterprise platforms such as Sitecore, you get a lot out of the box (OTB) for free, e.g. the concept of sites, pipelines, commands and many more.
      • If you go down the headless route, this lies in your dev team’s hands.
  • Building a team
    • Can you even build a team around tool X?
    • Do you have in-house experience in the tool or associated tools?
  • Support
    • What if something goes wrong, what support can you get?
      • Note: I’d see support running from before you sign the contracts all the way through to ongoing post-live support
  • Scalability, performance and common NFRs
    • Will the tool scale and perform to your requirements?

It’s worth noting that this is not meant to be an exhaustive list – every project will have different requirements and metrics that get prioritized. The goal is to provide some thought starters in areas we’ve found useful in the past.

Finally, the fun part – rolling it out

Well, almost. Now for the fun / hard part (delete as appropriate :)).

You have your new tool, but how does it map to the business? How will the editors get on with it? What does multi-lingual design look like? What technology do you use to build the front ends? Where to start? What is the meaning of life?

Maybe that’s content for another blog post…

Happy editing!

Machathon 2021 – A realtime team mood board

The MACH Alliance are running a ‘Machathon’ at the start of 2021 – the theme is ‘how to help people get virtually un-stuck’.

It sounded like a great opportunity to get hands on with a variety of MACH technologies! I hope you enjoy the idea and demo.

Some key URLs:

The elevator pitch

Everyone is spending a lot more time isolated from their friends, family and colleagues – so why not make it as easy as possible for people to see how you are doing with a shared ‘mood board’?

Each team member can be authored in a CMS and then have their status and desired outcome updated in realtime based on their mood.

The realtime board is a web UI. There is also an Alexa skill to provide a summary of the team mood.

What makes a good outcome?

It’s really down to the user how much they want to share – but it could be something as simple as ‘please call me for a chat’, through to ‘it’s a head down kinda day’, or ‘today is a good day, can I help you?’.

I’d recommend some form of socially acceptable ego/gloat filter is applied to messages from the team, no-one likes a smarty pants or gloat monster in their face!

And now the fun bit, the tech!

The demo video shows all of this in action, complete with Alexa fails and a guided tour of how it glues together under the hood!

As a quick summary:

  • Content
    • Contentful
  • Cloud
    • AWS
      • S3 and CloudFront for the web UI
      • Lambda for API, Orchestration and Alexa skill
      • SNS for fan-out
      • Secrets Manager
      • CloudWatch
      • ECR for container images
    • Azure
      • Azure functions for SignalR trigger
      • SignalR service for realtime updates
  • CI / CD
    • GitHub Actions to trigger the SSG build
  • Home Assistant running on Raspberry PI

The underlying code makes use of a mixture of technologies:

  • Data access
    • Contentful Management API
    • GraphQL
  • UI
    • Nuxt and an SSG-generated version of the site
  • API
    • Node.js and containers running in Lambda
  • Orchestrator functions
    • A mixture of dotnetcore and .net5

Triggering behaviour

If you’ve not seen Home Assistant, I’d highly recommend it for anyone interested in tinkering with smart home / IoT appliances! It allows you to set up scripts, automations and a raft more, all linked to IoT devices.

In our setup the trigger is:

TRÅDFRI remote control – IKEA

The different buttons are mapped to different actions:

  • Top button = send ‘happy’
  • Middle = send ‘ok’
  • Bottom = send ‘sad’
  • Left button = change user
  • Right button = change user

Behind the scenes, a custom Lovelace UI is updated based on button presses, and finally it notifies a Lambda function to initiate the process.

What are we actually orchestrating?

When you press a button this does a few things:

  • Sets the user mood dropdown in the UI
  • Pings a message to the orchestration lambda (sketched below) which:
    • Updates Contentful with the mood and outcome
    • Raises an SNS event – the subscribers then:
      • Ping SignalR to update the UI in realtime
      • Ping GitHub Actions to trigger an SSG build
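None of the project’s code is reproduced in this post, but since the orchestrator functions are dotnetcore/.NET 5, here is a rough sketch of the shape of that orchestration lambda. The Contentful space, locale, field names, entry version handling and SNS topic ARN below are all placeholders, not the real implementation:

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;
using Amazon.SimpleNotificationService;
using Amazon.SimpleNotificationService.Model;

public class MoodOrchestrator
{
    private static readonly HttpClient Http = new HttpClient();

    public async Task UpdateMoodAsync(string entryId, int entryVersion, string mood, string outcome)
    {
        // 1. Update the team member's entry via the Contentful Management API.
        //    (A separate publish call would be needed to make the change live – omitted here.)
        var request = new HttpRequestMessage(
            HttpMethod.Put,
            $"https://api.contentful.com/spaces/my-space/environments/master/entries/{entryId}");
        request.Headers.Authorization = new AuthenticationHeaderValue(
            "Bearer", Environment.GetEnvironmentVariable("CONTENTFUL_MANAGEMENT_TOKEN"));
        request.Headers.Add("X-Contentful-Version", entryVersion.ToString());
        request.Content = new StringContent(
            "{ \"fields\": { \"mood\": { \"en-US\": \"" + mood + "\" }, \"outcome\": { \"en-US\": \"" + outcome + "\" } } }",
            Encoding.UTF8,
            "application/vnd.contentful.management.v1+json");
        (await Http.SendAsync(request)).EnsureSuccessStatusCode();

        // 2. Raise an SNS event – subscribers then ping SignalR and trigger the SSG build.
        using (var sns = new AmazonSimpleNotificationServiceClient())
        {
            await sns.PublishAsync(new PublishRequest
            {
                TopicArn = "arn:aws:sns:eu-west-1:123456789012:mood-updated",
                Message = "{ \"entryId\": \"" + entryId + "\", \"mood\": \"" + mood + "\" }"
            });
        }
    }
}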

What content to model

Contentful is used for several types of data in this setup, including basic data on the user, their mood, any custom outcomes they want and finally all the content for the Alexa skill.

Summary

I really hope you’ve found this a good use case of MACH technologies. What is refreshing about the approaches above is how quickly you can adapt and change. Making use of OTB cloud functionality provides a very rich toolset for multi-channel, multi-device applications. Oh, and it’s a lot of fun to play with 🙂

The downsides? Well, it’s a pretty complicated way to let people know you are having a bad day!

Monolith, microservice or something in-between

In a recent project we had an interesting challenge as to how we structured and architected our dotnetcore web APIs. We wanted development and deployment agility, but to maintain the flexibility that comes from micro(macro*)services.

* Arguably the term you use here depends on how you choose to cut your system and services

What to expect

Well, maybe let’s start with the opposite – what not to expect?

  • This isn’t aimed to be a starter kit; its goal is to provide examples
  • How you choose to cut up your APIs, well that’s one for you too – sorry
  • And finally, how you choose to name things (models vs data, services vs domain, core vs base, shared vs common) – yep, that’s also up to you

Now the good bits, what to expect?

The goal of the example project (https://github.com/boro2g/MonolithToMicroserviceApi) is to show working examples of the following scenarios:

  • Each WebApi project can run in isolation, without knowledge of the others
    • The same mindset applies to deployments – each WebApi can be deployed in isolation
  • All the WebApi projects can be run and deployed as a single application

Why add this extra complexity?

Good question. The key thing for me is flexibility. If it’s done right you give yourself options – you want to deploy as a monolith, no problem. You want to deploy each bit in isolation, well that’s fine too.

How does it look?

Some key highlights of the solution structure:

  1. MonolithToMicroserviceApi.WebApi
    1. This is the shared singular WebApi project that brings everything together
    2. You can run this via IISExpress, or IIS etc. and all the APIs from the other projects will work within it
  2. MonolithToMicroserviceApi.Search.WebApi
    1. This is the search micro(macro) service
    2. You can run this in isolation, much like you can the common one
  3. MonolithToMicroserviceApi.Weather.WebApi
    1. The same concept as Search, but with other example controllers and code
  4. MonolithToMicroserviceApi.Shared.*
    1. These libraries contain common functionality that’s shared between each WebApi

Adding a new WebApi

The search project has a good example of this – have a look in MonolithToMicroserviceApi.Search.WebApi.Startup.

You need to add the ApiConfiguration class itself (see the project for examples), add it to the startup’s collection of ApiConfigurations and then register them all.

Similarly, in the common project startup (MonolithToMicroserviceApi.WebApi.Startup), simply add each ApiConfiguration and register them.
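The repo’s exact code isn’t reproduced in this post, but the pattern looks roughly like the sketch below – the interface and class names are illustrative, so check the project for the real shape:

using Microsoft.Extensions.DependencyInjection;

// Each WebApi project exposes a configuration describing how to register itself.
public interface IApiConfiguration
{
    void ConfigureServices(IServiceCollection services);
}

// Lives in MonolithToMicroserviceApi.Search.WebApi.
public class SearchApiConfiguration : IApiConfiguration
{
    public void ConfigureServices(IServiceCollection services)
    {
        // Search-specific registrations go here, e.g.
        // services.AddSingleton<ISearchService, SearchService>();
    }
}

Each Startup (the search project’s own, or the shared ‘monolith’ one) then just holds the list of configurations it wants to host and calls ConfigureServices on each of them.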

The Api glue

So how does it all glue together? The key underlying code that allows you to pool controllers from one project into another is:
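The snippet itself isn’t included in this write-up, but a minimal sketch of the usual ASP.NET Core mechanism – application parts – looks like this (assembly names taken from the project names above):

using System.Reflection;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services
            .AddControllers()
            // Pool the controllers from each micro(macro) service assembly into this host.
            .AddApplicationPart(Assembly.Load("MonolithToMicroserviceApi.Search.WebApi"))
            .AddApplicationPart(Assembly.Load("MonolithToMicroserviceApi.Weather.WebApi"));
    }
}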

What issues might you run into?

  • Routing
    • There is a commented-out example of this – in the core project and the weather project we have a ‘WeatherForecastController’ – if both of these have the same [Route] attribute you will get an exception.
    • A simple fix is to ensure each controller has isolated routes (see the sketch after this list). I’m sure a cleverer approach could be used if you have LOTS of WebApi projects, but I’ll leave that for you to work out.
  • Dependency bleeding
    • I don’t feel like this approach introduces any more risk of either cyclic dependencies or ‘balls of mud‘ – IMO that comes down to the discipline of the team building your solutions.
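Purely as an illustration of the routing fix mentioned above (the controller contents are made up), giving each project its own route prefix keeps the two WeatherForecastControllers from colliding:

using Microsoft.AspNetCore.Mvc;

namespace MonolithToMicroserviceApi.Weather.WebApi.Controllers
{
    [ApiController]
    [Route("api/weather/[controller]")] // isolated under /api/weather rather than a shared default route
    public class WeatherForecastController : ControllerBase
    {
        [HttpGet]
        public IActionResult Get() => Ok(new { Summary = "Sunny" });
    }
}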

Summary

What I like about this approach is the flexibility. On day 1 you can deploy your common project to a single box and all your APIs are working in one place. Over time, as complexity grows or your dev teams evolve, different parts can be cut apart without any fundamental changes needed.

You need to scale your search API? Well, no problem – deploy it as a single API and scale as you need.

You need to push the weather API to multiple data centres for geo reasons? Cut it out and deploy as you want.

Another team needs to own search? Again, that’s fine – you could even pull it out to another solution, remove the ApiConfiguration and everyone is happy!? 🙂

I hope it provides some good inspiration. It really doesn’t take much code, or configuration to build what I’d consider to be a very flexible approach to structuring your dotnetcore WebApi projects.

Sitecore forms – custom form element save issue

In a recent project we needed to add some richer functionality to a form, so decided to wrap it up in a custom Vue.js component which we could then integrate into Sitecore forms. Sounds simple, right?

Building the component

Sitecore provides some good documentation on how to build different flavours of form rows – have a look at the walkthroughs in https://doc.sitecore.com/developers/93/sitecore-experience-manager/en/sitecore-forms.html if you are interested.

Saving your data

Assuming you want to build something a bit richer than the demo video component, chances are you want to actually store data that a user provides. In our use case, we used Vue.js to update a hidden input – under the hood we then save that data into the DB and also ping off to other save actions.

Simples? Well, not quite – unless you know where to set things up.

Configuring the form row

In Sitecore forms, a custom form row needs a few things: a template in master to represent the configuration of the form row, and a set of items in core to represent the UI for configuring it.

https://doc.sitecore.com/developers/93/sitecore-experience-manager/en/walkthrough–creating-a-custom-form-element.html

The importance of AllowSave

This is the key bit, and it took a fair amount of digging to find. I could see my custom data was being posted back to Sitecore, with the right data. But it was never getting saved in the database 🙁

To fix it, I needed to make sure that both the configuration in core and my custom template had AllowSave available.

  • In core, under ‘/sitecore/client/Applications/FormsBuilder/Components/Layouts/PropertyGridForm/PageSettings/Settings’ you create your custom configuration, including sub-items based off the template ‘FormSection’ (see ‘/sitecore/client/Applications/FormsBuilder/Components/Layouts/PropertyGridForm/PageSettings/Settings/SingleLineText/Advanced’ for reference)
    • Here is where you need to ensure you include ‘AllowSave’ in the ‘ControlDefinitions’ field for your custom item
    • This is enough to get the checkbox showing in the form builder UI, but not enough to get everything working
  • In master, under ‘/sitecore/templates/System/Forms/Fields’ you create the template to represent the configuration data being saved for your form element
    • Here is where you need to make sure the base templates contain ‘Save Settings’

Summary

Setting up a custom form row / element is generally pretty simple. However, the documentation doesn’t quite cover a key step – saving the data. It doesn’t take much additional configuration as long as you know the right place to make changes!

Happy saving.

Livestreaming with Serato, OBS, multiple webcams and a set of Novation Dicers

In the immortal words of Monty Python… And now for something completely different!

I wanted to have a play with some livestreaming so we could host a virtual birthday party for my wife. In the current climate this is all the rage, so I figured it should be pretty straightforward to set up. But how could we make it a bit more interactive?

action shot

Getting started

It’s really quick and easy to start streaming as long as you have a few things setup:

  • Good internet – you don’t want things cutting out because your bandwidth can’t cut it
  • Some streaming software – I’d recommend OBS
  • Some form of webcam. A lot of online retailers seem sold out of webcams at the moment, so you could always use your phone
    • I found #LiveDroid worked really well on my Android phone
  • A platform, or multiple platforms, to stream onto e.g. Facebook, Twitch, Mixcloud Live etc.

With all this setup you should be able to test and get your stream on. Nice!

Stepping things up a bit

Good start – you can now get streaming! But let’s have a bit more fun. What if you want to show multiple cameras, add fancy graphics, split up the stream UI or stream to multiple platforms all at once?

First up, I’d really recommend getting to know OBS – in particular how the different input sources can be used and layered. This allows you to mix and match multiple cameras, graphics and chunks of screen into one scene.

Next, look at scenes – here you can show different combinations of cameras/overlays/sources. You name it. Plus the fun part, you can use your keyboard, or even better a midi controller to toggle between scenes.

High class framing!

Here are a couple of rather stylish frame options

easter frame
urban frame

Get your midi on

I had an old set of Novation Dicers at home so wondered whether these could come in handy. I plugged them into my streaming computer and needed to wire up a few things.

  1. In OBS setup multiple scenes
  2. In the OBS settings, assign a key to each scene (Hotkeys)
    1. You can test switching scenes from the keyboard. I simply mapped 1,2,3,4,5 to each scene respectively
    2. This might be enough for you – but if you want the midi control read on…
  3. Plug in your midi controller
  4. Install midikey2key – this took a bit of getting used to – I’d recommend:
    1. Pressing ‘start’ in Start/Stop listening
    2. Turning on ‘Log to window’
    3. Turning on all ‘Channel listeners’
    4. Press the buttons on your midi controller
    5. When the signal shows in the log window, double click and assign the output you want
    6. For my dicers, this was also mapped to keyboard buttons 1,2,3,4,5
    7. Test it all 🙂
midi software

Here is the mapping config I used: dicer.txt (rename to ini)

Just use your phone

#LiveDroid allows you to set up your Android phone as a webcam. When you fire up the app and share the screen, it will stream the output via your network. Add this as a URL source in OBS.

If for some reason this doesn’t show, test the URL in a browser. If OBS then doesn’t show it, double-click the URL source and hit the reset button.

Multiple streams = more viewers?!?!

Well not quite, but using something like restream you can submit one output from OBS but then broadcast to multiple platforms.

Gotchas

All too easy huh! Well there are a few things to be careful of with all this:

  1. Midi presses have down and up events logged into midikey2key. I’d recommend mapping key presses to just one or the other, not both, otherwise you get some funky flashing behaviours.
  2. Test it all, including the sound!
    1. The first stream we did, the audio was sourced through a webcam so you could hear everything we said, cross fader clicks etc.
    2. The second stream we did, the audio doubled up from the mixer=>line in, and from the PC audio. Make sure that you just have one audio source enabled
  3. Some streaming platforms have rather zealous moderation/copyright restrictions. Chunks of some of our streams have been muted because of this.
    1. You could try pitching things up a semitone in e.g. pitch and time
    2. Or changing the pitch
    3. Or whacking the sampler / air horn until everyone is sick to death of it 🙂

Taking it to the next level

This is only scratching the surface of what you can do on livestreams. Custom overlays are easy to add, multiple feeds/scenes allow more interesting content. But, OBS also supports things like green screens.

DJ Yoda does a weekly instagram show where he is green screened on top of the video output from Serato.

A good friend, DJ Cheeba is doing some very cool green screen demos with the video overlayed on the decks! Check out http://djcheeba.com/lockdown-live-stream/ for some proper nerding out.

Summary

It’s really easy to get live streams up and running, all off open-source software. Do give it a go, and remember – have fun! Some of the best streams I’ve watched have been the ones that are fun and don’t take themselves too seriously 🙂

Customizing logging in a C# dotnetcore AWS Lambda function

One challenge we hit recently was how to build our dotnetcore lambda functions in a consistent way – in particular, how we would approach logging.

A pattern we’ve adopted is to write the core functionality for our functions so that it’s as easy to run from a console app as it is from a lambda. The lambda can then be considered only as the entry point to the functionality.

Serverless Dependency injection

I am sure there are different schools of thought here – should you use a container within a serverless function or not? For this post, the design assumes you do make use of the Microsoft.Extensions.DependencyInjection libraries.

Setting up your projects

Based on the design mentioned above, i.e. you can run the functionality as easily from a console app as you can from a lambda, I often set up the following projects:

  • Project.ActualFunctionality (e.g. SnsDemo.Publisher)
  • Project.ActualFunctionality.ConsoleApp (e.g. SnsDemo.Publisher.ConsoleApp)
  • Project.ActualFunctionality.Lambda (e.g. SnsDemo.Publisher.Lambda)

The actual functionality lives in the top project and is shared with both other projects. Dependency injection, and AWS profiles are used to run the functionality locally.

The actual functionality

Let’s assume the functionality for your function does something simple, like pushing messages into an SQS queue.
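The original gist isn’t included here; a minimal sketch of what that shared functionality might look like (class and method names are illustrative):

using System.Threading.Tasks;
using Amazon.SQS;
using Amazon.SQS.Model;
using Microsoft.Extensions.Logging;

public class MessagePublisher
{
    private readonly IAmazonSQS _sqs;
    private readonly ILogger<MessagePublisher> _logger;

    public MessagePublisher(IAmazonSQS sqs, ILogger<MessagePublisher> logger)
    {
        _sqs = sqs;
        _logger = logger;
    }

    public async Task PublishAsync(string queueUrl, string message)
    {
        await _sqs.SendMessageAsync(new SendMessageRequest(queueUrl, message));
        _logger.LogInformation("Messages sent");
    }
}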

The console app version

It’s pretty simple to get DI working in a dotnetcore console app.
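Again, the original gist isn’t reproduced – a rough equivalent using Microsoft.Extensions.DependencyInjection and a console logger (the queue URL is a placeholder, and the SQS client picks up credentials from your local AWS profile):

using System.Threading.Tasks;
using Amazon.SQS;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;

public static class Program
{
    public static async Task Main(string[] args)
    {
        var services = new ServiceCollection()
            .AddLogging(logging => logging.AddConsole())
            .AddSingleton<IAmazonSQS, AmazonSQSClient>()
            .AddSingleton<MessagePublisher>()
            .BuildServiceProvider();

        var publisher = services.GetRequiredService<MessagePublisher>();

        // Queue URL is illustrative only.
        await publisher.PublishAsync(
            "https://sqs.eu-west-1.amazonaws.com/123456789012/demo-queue",
            "Hello from the console app");
    }
}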

The lambda version

This looks very similar to the console version.
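A sketch of the lambda entry point, building the same container but swapping in the custom log provider (the handler signature and queue URL are illustrative, and the usual LambdaSerializer assembly attribute is omitted for brevity):

using System.Threading.Tasks;
using Amazon.Lambda.Core;
using Amazon.SQS;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;

public class Function
{
    public async Task FunctionHandler(string input, ILambdaContext context)
    {
        var services = new ServiceCollection()
            // Route all ILogger<T> output through the lambda's own logger.
            .AddLogging(a => a.AddProvider(new CustomLambdaLogProvider(context.Logger)))
            .AddSingleton<IAmazonSQS, AmazonSQSClient>()
            .AddSingleton<MessagePublisher>()
            .BuildServiceProvider();

        var publisher = services.GetRequiredService<MessagePublisher>();

        // Queue URL is illustrative only.
        await publisher.PublishAsync(
            "https://sqs.eu-west-1.amazonaws.com/123456789012/demo-queue",
            input);
    }
}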

The really interesting bit to take note of is: .AddLogging(a => a.AddProvider(new CustomLambdaLogProvider(context.Logger)))

In the actual functionality we can log in many ways:
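The gist isn’t shown here, but the options boil down to something like this sketch – the static LambdaLogger versus an injected ILogger<T>:

using Amazon.Lambda.Core;
using Microsoft.Extensions.Logging;

public class LoggingExamples
{
    private readonly ILogger<LoggingExamples> _logger;

    public LoggingExamples(ILogger<LoggingExamples> logger) => _logger = logger;

    public void LogInDifferentWays()
    {
        // Tied to the lambda runtime – works, but couples the code to AWS Lambda.
        LambdaLogger.Log("LambdaLogger Messages sent");

        // Runner-agnostic – works in the console app and the lambda alike.
        _logger.LogInformation("_logger Messages sent");
    }
}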

To make things lambda-agnostic, I’d argue injecting ILogger<Type> and then calling _logger.LogInformation("_logger Messages sent"); is the preferred option.

Customizing the logger

It’s simple to customize the dotnetcore logging framework – for this demo I set up two things: the CustomLambdaLogProvider and the CustomLambdaLogger.

And finally a basic version of the actual logger:
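Neither gist is reproduced in this write-up; a minimal sketch of what such a provider/logger pair might look like, assuming it simply forwards everything to the lambda’s ILambdaLogger:

using System;
using Amazon.Lambda.Core;
using Microsoft.Extensions.Logging;

public class CustomLambdaLogProvider : ILoggerProvider
{
    private readonly ILambdaLogger _lambdaLogger;

    public CustomLambdaLogProvider(ILambdaLogger lambdaLogger) => _lambdaLogger = lambdaLogger;

    public ILogger CreateLogger(string categoryName) => new CustomLambdaLogger(categoryName, _lambdaLogger);

    public void Dispose() { }
}

public class CustomLambdaLogger : ILogger
{
    private readonly string _categoryName;
    private readonly ILambdaLogger _lambdaLogger;

    public CustomLambdaLogger(string categoryName, ILambdaLogger lambdaLogger)
    {
        _categoryName = categoryName;
        _lambdaLogger = lambdaLogger;
    }

    public IDisposable BeginScope<TState>(TState state) => NullScope.Instance;

    public bool IsEnabled(LogLevel logLevel) => logLevel >= LogLevel.Information;

    public void Log<TState>(LogLevel logLevel, EventId eventId, TState state, Exception exception, Func<TState, Exception, string> formatter)
    {
        if (!IsEnabled(logLevel))
        {
            return;
        }

        // Write through the lambda's own logger so output still ends up in CloudWatch.
        _lambdaLogger.LogLine($"{logLevel}: {_categoryName}: {formatter(state, exception)}");
    }

    private sealed class NullScope : IDisposable
    {
        public static readonly NullScope Instance = new NullScope();
        public void Dispose() { }
    }
}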

Summary

The aim here is to keep your application code agnostic to where it runs. Using dependency injection we can share core logic between any ‘runner’, e.g. Lambda functions, Azure Functions, console apps – you name it.

With some small tweaks to the lambda logging calls you can ensure the OTB lambda logger is still used under the hood, but your implementation code can make use of injected ILogger<T> instances wherever needed 🙂

Automating a multi-region deployment with Azure DevOps

For a recent project we’ve invested a lot of time into Azure DevOps, and for the most part found it a very useful toolset for deploying our code to both Azure and AWS.

When we started on this process, YAML pipelines weren’t available for our source code provider – this meant everything had to be set up manually 🙁

However, recently this has changed 🙂 This post will run through a few ways you can optimize your release process and automate the whole thing.

First a bit of background and then some actual code examples.

Why YAML?

Setting up your pipelines via the UI is a really good way to quickly prototype things; however, what if you need to change the pipelines alongside code features? YAML allows you to keep the pipeline definition in the same codebase as the actual features – you deploy branch XXX and that can be configured differently to branch YYY.

Another benefit: the changes are then visible in your pull requests, so validating changes is a lot easier.

Async Jobs

A big optimization we gained was releasing to different regions in parallel. YAML makes this very easy via jobs – each job can run on its own agent and hence push to a different region in parallel.

https://docs.microsoft.com/en-us/azure/devops/pipelines/process/phases?view=azure-devops&tabs=yaml

Yaml file templates

If you have common functionality you want to reuse, e.g. ‘Deploy to eu-west-1’, templates are a good way to split your functionality. They allow you to group logical functionality you want to run multiple times.

https://docs.microsoft.com/en-us/azure/devops/pipelines/process/templates?view=azure-devops

Azure DevOps REST API

All of your builds/releases can be triggered via the UI portal; however, if you want to automate that process I’d suggest looking into the REST API. Via this you can trigger, monitor and administer builds, releases and a whole load more.

We use PowerShell to orchestrate the process.

https://docs.microsoft.com/en-us/rest/api/azure/devops/build/builds/queue?view=azure-devops-rest-5.1
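Our PowerShell scripts aren’t reproduced in this post, but purely as an illustration of the same REST call, here is a minimal C# sketch that queues a build – the organisation, project, definition id and personal access token are placeholders:

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

public static class QueueBuild
{
    public static async Task Main()
    {
        var personalAccessToken = Environment.GetEnvironmentVariable("AZDO_PAT");

        using var client = new HttpClient();
        // PAT auth – empty username, token as the password.
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(
            "Basic",
            Convert.ToBase64String(Encoding.ASCII.GetBytes($":{personalAccessToken}")));

        // Queue build definition 42 (placeholder id).
        var body = new StringContent("{ \"definition\": { \"id\": 42 } }", Encoding.UTF8, "application/json");

        var response = await client.PostAsync(
            "https://dev.azure.com/my-organisation/my-project/_apis/build/builds?api-version=5.1",
            body);

        Console.WriteLine($"Queued: {response.StatusCode}");
    }
}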

Variables, and variable groups

I have to confess, this syntax feels slightly cumbersome, but it’s very possible to reference variables passed into a specific pipeline along with global variables from groups you set up in the Library section of the portal.

Now, some examples

The root YAML file:

The ‘DeployToRegion’ template:

And finally, some PowerShell to fire it all off:

Happy deploying 🙂

JSS Blog post series

I’ve recently been working with the Marketing team within Valtech to get a series of JSS Blog posts published onto the Valtech site.

If anyone is interested you can access them via https://www.valtech.com/en-gb/insights/going-live-with-jss/

The topics cover things like what it’s like to move from being a traditional Sitecore dev to a JSS dev, how to get everything deployed, any gotchas we didn’t estimate for when we started and some key design decisions we made along the way.

I hope you find them useful 🙂

Setting a row colour in PowerShell | Format-Table

This is quite a quick post, but a useful tip. If you are setting up some data in PowerShell which you then fire at the console via | Format-Table, it can be useful to highlight specific rows.

Imagine you have a hashtable with the key as a string, and the value as a number. When you send to the console you will see the names and values set in a table.

Now, if you want to set John to be a certain colour, you can use the code below.
Note: for static values this doesn’t add much value – we use it for a table that is printed dynamically, e.g. based on a timer tick with a dynamic version of the Name.

This requires PowerShell 5.1 or later (check with $PSVersionTable.PSVersion) and doesn’t seem to play nicely with the PowerShell ISE; however, from a normal PowerShell window or VS Code it works a charm.

Happy colouring 🙂