Debugging Sitecore Marketing automation

Here are a few tips and tricks that should help if you want to start developing custom activities within the new Sitecore Marketing automation engine.

The docs

There is a pretty extensive guide on the Sitecore docs site: https://doc.sitecore.net/developers/xp/marketing-automation/activities/activity-types/create-an-activity-type.html

Alternatively, have a look at this really useful series of four blog posts: https://www.brimit.com/blog/sitecore-9-custom-marketing-automation-action
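For orientation, a custom activity is essentially just a class the engine can resolve and invoke. The sketch below is an approximation from memory – the exact interface, namespaces and result types vary between 9.x releases, so treat it as an assumption and follow the docs above for your version:

using System;
// Namespaces are approximate – check the Sitecore.Xdb.MarketingAutomation.Core assembly in your release.
using Sitecore.Xdb.MarketingAutomation.Core.Activity;
using Sitecore.Xdb.MarketingAutomation.Core.Processing.Plan;

public class LogContactActivity : IActivity
{
    public Guid Id { get; set; }

    public ActivityResult Invoke(ActivityContext context)
    {
        // The context exposes the contact currently moving through the plan.
        var contact = context.Contact;

        // Do your custom work here (enrich a facet, call an external API, ...).

        // Tell the engine to move the contact along the success path.
        return new SuccessMove();
    }
}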

Debugging your code

There are a few ways to run the engine – by default it gets installed as a Windows service. However, if you navigate to:

{xconnectdeployment}\App_data\jobs\continuous\AutomationEngine there is an exe (maengine.exe) you can run instead. Deploying your code involves copying the DLLs from your solution into that same folder.

To debug, run the console app and then attach to the maengine process within Visual Studio. When the plans run and your activity is triggered, your breakpoints should kick in.

If you want to make attaching to the engine simpler, in Visual Studio choose ‘Add Existing Project’ => select maengine.exe on disk => right-click => ‘Debug’ => ‘Start new instance’.

Some developer tips

If you are getting started and can’t seem to get your code to run, try setting up a very simple plan with a loose trigger – for example, a trigger where the month is October.

Also, check the custom config is in place to link your custom code with the GUID of the new activity within Sitecore. This needs to live in
{xconnectdeployment}\App_data\jobs\continuous\AutomationEngine\App_Data\Config\sitecore\PatchFolder – you can name the PatchFolder whatever you like.

And finally, if you want to see what Sitecore is storing under the hood when you save a plan, the plan items live in /sitecore/system/Marketing Control Panel/Automation Plans. You can always un-bucket the folder to see each plan.

Sitecore Symposium 2018 – Welcome to Sitecore 9.1

As the dust settles after this year’s annual Sitecore Symposium, I thought it would be good to share a few highlights – in particular what 9.1 has in store.

Sitecore 9.1

The big news this year was around the imminent release of Sitecore 9.1 and all the neat new features it offers.

  • JSS – JavaScript Services – the Layout Service and GraphQL integration
  • Cortex – A machine learning integration built into the xDB & personalization layers
  • Universal tracker – A way of sending data into xDB from any source
  • Sitecore Host – The new underlying frameworks for things like Sitecore Identity
  • Sitecore Identity – An integration with Identity Server to provide Single Sign-On throughout many apps. Eventually providing the ability to remove reliance on the Membership tables
  • Simplified and streamlined deployment packages for XM deploys – one goal here, to really help with startup times in the cloud

1. JSS

If you managed to miss any information on JSS, which sessions did you attend?! From the keynote to many breakout sessions, JSS and the headless revolution was one of the hot topics this year. It was announced that as of 9.1 it will no longer be in Technical Preview. Developers now have many new APIs to call on, including a GraphQL endpoint over the Sitecore databases. This opens up really interesting possibilities for the reach of personalized Sitecore content – you can easily consume any page or content within non-Sitecore websites, mobile apps, console apps – you name it.

One highlight: Alex Shyba took everyone through the whole workflow from designing a page in React Studio to a deployable connected app, all in 40 minutes!

2. The Universal tracker

If you look at the way Sitecore are approaching the move to things like dotnet core, it’s very much about splitting out functional areas of the application. Some examples are xConnect, the publishing service and marketing automation.

The universal tracker is another example – a new scalable microservice that allows analytics data from any source to be sent into xDB. You can add multiple databases to buffer data before it gets crunched and sent to xDB. There are pipelines that run during this process, so if you need to enrich any events or interactions it can be done.

From your external app or system you simply need to make some REST calls into the new service. At the moment this only allows pushing data into xDB – there is no concept of GETs yet.

3. Sitecore Host

Underpinning a lot of the new apps that Sitecore are developing is the Sitecore Host layer, a new dotnet core framework. The goal is to provide common features such as logging, messaging, config management, dependency injection and a raft more that can be shared across consuming applications.

A working example of this is the new Sitecore Identity layer, which provides Single Sign On built on Identity Server.

What’s really nice about this approach is your apps and features can cherry-pick the functionality they need. Then, if a specific feature doesn’t exist, you can include custom plugins to build out your own implementation.

What does the future hold?

The new framework is all based around dotnet core, which opens up very interesting possibilities around containerization, hosting and how you choose to run your application. Who knows – in the future you may no longer need to run your whole Sitecore stack on the same hosting, cloud provider or operating system!

Inspiring your customer – a new look for Inspire Me

We’ve recently finished the first phase of a redesign project for easyJet to rebuild their inspireme section of the site. An example of the new feature can be seen at https://www.easyjet.com/en/inspireme.

The aim of this post is to describe a few of the ways we’ve achieved this.

All the content for this section is pulled from Sitecore via some custom WebApi endpoints. This allows us to pull together content and media from Sitecore and enrich it with near-realtime pricing data from an import routine we run in the background.

The grid

This was one of the biggest changes – not only from a technical and UI perspective, but also from a content perspective.

The content challenge

easyJet serve content in over 15 languages, so curating content for 150+ destinations takes a lot of time.

We had a few options for where content was updated – in the end we decided to install the templates for the new fields into their production CMS so content could be uploaded. One downside: we had to sync the prod database into our test environments regularly so editors could check the content worked as expected.

The grid layout

Depending on the width of the viewport we needed to vary the number of tiles per row – largely because the maximum image size we could access was 550px. The grid uses a masonry layout with some customizations around the pattern of tiles we ask it to render.

This gives us the ability to generate a ‘random’ layout but within some pre-defined boundaries. Within the code there are some pre-canned row configurations that allow us to render combinations of large and small cells depending on the screen width.

The tile transitions

These are all achieved with a combination of CSS transitions and a very light smattering of jQuery. I won’t try to explain all the transitions here, but they are well worth researching if you want to start dabbling.

Lazy image loading

In order to prevent the browser downloading hundreds of images when you first load the page, the site lazy-loads them – images are only requested as they are needed.
You can see the delay in the images loading in the browser’s network timeline.

With the polyfill it even works in older versions of Internet Explorer!

Cell loading

The grid cells appear to become visible in order:

This is achieved by delaying the visibility of each cell. As we push cells into the masonry plugin, they are set to visibility:hidden;opacity:0;.
Depending on its cell index, we then offset how long before each cell transitions its opacity to 1. Simples 🙂

Feeding content into Angular

A lot of the UI throughout easyJet is built in Angular. There are many options for how you send Sitecore content to your UI. To avoid any lag or delay from ajax calls, we build up the content into JSON which is then embedded into the page markup – hence reducing the need for any additional requests.
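As a rough sketch of the idea – the property and variable names are made up, not easyJet’s actual markup – the Razor view can serialize the model once and drop it straight into the page for Angular to pick up on startup:

@* Embed the serialized content so the Angular app can bootstrap without an extra ajax round trip *@
<script>
    window.inspireMeContent = @Html.Raw(Newtonsoft.Json.JsonConvert.SerializeObject(Model.Destinations));
</script>

The Angular side then simply reads window.inspireMeContent when it starts up, rather than firing off another request.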

Summary

If you plan to take on a rich UI project, in particular one that involves a lot of JavaScript, then TypeScript is your friend! The way you approach your code will be much more similar to C#, plus you can actually com(trans)pile the whole thing.

Hope you like the new look and feel of inspireme. It was very interesting to work on something with such a UI focus that still needs to integrate with Sitecore.

Getting personal with Alexa

It’s only a couple of weeks now until Sugcon Europe – a definite highlight for any budding Sitecore developer. There are two days of amazing sessions lined up from a mixture of Sitecore employees and community members.

This year I’ve put together a talk all about integrating different channels with Sitecore – in this case Alexa. What better way to demonstrate the concept than to build a skill?

If you want to find out about the sessions, speakers and so on, you can download the skill for free at: https://www.amazon.co.uk/dp/B07C35NBYF/ref=sr_1_1

My particular favourite intent: play the speaker lottery.

It highlights some interesting challenges around creating chat interfaces, each of which will be covered in my talk at Sugcon. To hear the full talk, swing by the Main Stage around 13:45 on Tuesday 🙂

A couple of teasers:

  • Context is king – and why does ‘yes’ matter so much?
  • Why personalizing the content can have such positive – or negative – results

All the source code is available at https://bitbucket.org/boro2g/sugconalexa including a crude scraper to gather the info it needs from the Sugcon site.

Serving personalized content as JSON from Sitecore

As with many tools and approaches to solving technical issues, you can often find many ways to achieve the same output. The challenge I was up against was how to serve personalized Sitecore content as JSON.

The solution below is one way you can achieve this, undoubtedly there are many more!

The over-arching setup

Think of your JSON feed like a Sitecore page. You will need to break a rule of REST, as Sitecore personalization requires session state and therefore isn’t stateless. This will need to be reflected in your consuming app: if you don’t provide an identifier with every request, it will need to understand and persist cookies between requests.
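On the consuming side, a minimal .NET sketch of that cookie handling might look like the following – the feed URL and query string are made-up examples, with the device matched via a query string configured on the device item:

using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

public class FeedClient
{
    // Share one handler + client so the Sitecore analytics and session cookies
    // returned on the first call are sent back on every subsequent request.
    private static readonly HttpClientHandler Handler = new HttpClientHandler
    {
        UseCookies = true,
        CookieContainer = new CookieContainer()
    };

    private static readonly HttpClient Client = new HttpClient(Handler);

    public Task<string> GetFeedAsync()
    {
        return Client.GetStringAsync("https://example.com/personalized-feed?d=json");
    }
}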

First up you need to select a device, a layout and some renderings. None of this differs from normal Sitecore development. I’ve found that for debugging purposes using a new device works better, as you can view the content as a web page as well as JSON.

The layout

Assuming you’ve decided on a device, you will need to set up a layout:
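As a rough sketch – the view name and placeholder name here are made up – the layout can be little more than a JSON wrapper around a placeholder:

@using Sitecore.Mvc
@* JsonFeed.cshtml – the layout emits the outer JSON wrapper and delegates the body to whatever renderings sit in the placeholder *@
{
    "items": [
        @Html.Sitecore().Placeholder("json-content")
    ]
}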

The renderings
Again, much like you would for a page, you can create e.g. Controller Renderings which output the content as you need. One thing to note: these will want to render JSON rather than HTML, e.g.:
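A minimal sketch of one such controller rendering – the controller, action and field names are hypothetical, but the shape is standard Sitecore MVC; the action just writes a JSON fragment instead of HTML:

using System.Web.Mvc;
using Newtonsoft.Json;
using Sitecore.Mvc.Controllers;
using Sitecore.Mvc.Presentation;

public class PromoFeedController : SitecoreController
{
    public ActionResult Promo()
    {
        // The datasource resolves exactly as it would for a normal rendering,
        // so personalization rules can swap it per visitor.
        var datasource = RenderingContext.Current.Rendering.Item;

        var model = new
        {
            title = datasource["Title"],
            text = datasource["Text"]
        };

        // Emit a JSON fragment; the layout (and any sibling handling) takes care of the wrapper.
        return Content(JsonConvert.SerializeObject(model));
    }
}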

These components can have datasources set up as normal, and hence personalization is available within your JSON feed.

Via a browser you would then load the URL as normal, remembering to specify the device you’ve selected, and you should see content varying based on rules, user information, behaviours and more.

Taking it to the next level
The simpler approach assumes you have one component per page. Complexity comes in when you need to generate valid JSON from multiple controls – this can be achieved, but requires you either to configure things via rendering parameters or, at the point the page is rendered, to interrogate the counterpart presentation components and work out whether you have sibling controls.

If you find you have sibling components to render, you’d need to add commas after your controls to ensure the output is valid JSON.

Who makes the tea? Why not ask Alexa?

Following on from the previous post, AlexaCore, this post will explain some of the challenges you might encounter when launching Alexa Skills into the store. It will also cover some cool things you can do if you want to enrich the feedback users receive as they use your skill.

Time for a brew?

If you need to settle the debate of who makes the tea, why not check out Tea Round, recently published to the Skill Store.


First up, how can you find skills?

They are all available on the Amazon site, or accessible via the alexa.amazon.com portal.

Running test versions of your skill

This is pretty straightforward. You need to log in to developer.amazon.com and run through the wizard to create a skill, which includes pairing it up with an AWS Lambda (or your own endpoint). You should then see the newly created skill in your Alexa app, marked with a ‘dev’ flag.

Testing your skill

You have a few options: either talk to your Alexa with voice commands, or use the text-based test tool within the developer.amazon.com console.

Getting certified – gotchas

The certification process looks to validate and check a few things:

  • Are there bugs in the skill?
  • Do the descriptions and prompts align with your skill’s intents?
  • Do you leave the users hanging?

Things that caught me out during this process were:

  • Testing the skill where you skip past the launch intent
    • E.g. rather than asking ‘Alexa, open the tea round’ and then allowing the LaunchIntent to run, you can ask ‘Alexa, open the tea round and spin the wheel’. My logic around initializing the session originally ran in the LaunchIntent; some simple refactoring resolved this.
  • Leaving the users hanging – in my opinion this isn’t great UX but rules are rules
    • If you respond to a user without a prompt, i.e. without asking them anything, the rules say you should end the session. My AddIntent would respond to ‘Add Nick’ with ‘Ok, Nick is now in the team’. To get past certification it needed updating to ‘Ok, Nick is now in the team. Why not spin the wheel?’
  • Make sure the suggested prompts you include match up to the text set in your intents. The best bet here is to look at other skills and see how they phrase the prompts.

Saving data beyond a session

Much like a session in a web application, a session is persisted for the lifetime of a skill invocation. This can be used to store anything you want – a simple example would be an array of the names of the people in the Tea Round. That’s fine, but the next time someone loads the skill the session will be empty and they will need to re-add each person – with all the extra prompting needed for certification this could get painful.

AWS provide a document-model database, DynamoDB, that’s very well suited to this kind of thing. The Tea Round stores the permanent team in DynamoDB, updated from the Lambda function that sits behind the skill.
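As a rough illustration using the AWS .NET SDK’s object persistence model – the table and property names below are made up, not the actual Tea Round schema:

using System.Collections.Generic;
using System.Threading.Tasks;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.DataModel;

[DynamoDBTable("TeaRoundTeams")]
public class Team
{
    [DynamoDBHashKey]
    public string UserId { get; set; }

    public List<string> Members { get; set; }
}

public class TeamRepository
{
    private readonly DynamoDBContext _context = new DynamoDBContext(new AmazonDynamoDBClient());

    // Persist the team so it survives beyond a single skill session.
    public Task SaveAsync(Team team) => _context.SaveAsync(team);

    public Task<Team> LoadAsync(string userId) => _context.LoadAsync<Team>(userId);
}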

Understanding users’ names

This can be tricky, as subtle variations between names can lead to them being spelt and pronounced very differently – especially when regional dialect comes into play. The best success I’ve found is to provide as broad a set of example names as possible when setting up your {slots} in the intent.

Enriching the responses

A typical request & response cycle will send information that Amazon decodes from speech into your Lambda function. From there you can then return text that gets read out by your Alexa. By using Sitecore as a headless backend, the response text can be driven from the CMS – some recent updates to AlexaCore provide helpers for making these requests.
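Under the hood those requests are just HTTP calls from the Lambda to the CMS – a minimal sketch, where the endpoint is a made-up Sitecore API route rather than AlexaCore’s actual helpers:

using System.Net.Http;
using System.Threading.Tasks;

public class ResponseTextProvider
{
    private static readonly HttpClient Client = new HttpClient();

    // Pull the speech text for a given intent from the CMS so editors control the wording.
    public Task<string> GetSpeechTextAsync(string intentName)
    {
        return Client.GetStringAsync("https://example.com/api/alexa/responses?intent=" + intentName);
    }
}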

Where things then get interesting is that you can personalize messaging based on behaviour and user interactions. Big Brother is watching?!

Closing thoughts

Gasp, after all that I’m thirsty – time for a brew! (how English eh :))

If you fancy allowing Alexa into your tea-making process, have a look at Tea Round.

Newtonsoft – deserializing POCO objects that contain interfaces

Just a quick post here, but hopefully helpful if you hit the same issue. If you are dealing with JSON serialization, Newtonsoft.Json is one of our go-to options.

When deserializing JSON to POCOs, sometimes the structure of your POCOs requires a bit of extra setup to play nicely with Newtonsoft. Consider the following objects:
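To make the problem concrete, here’s a minimal reconstruction of the pattern – the class and property names are made up:

public interface IAddress
{
    string City { get; set; }
}

public class Address : IAddress
{
    public string City { get; set; }
}

public class Customer
{
    public string Name { get; set; }

    // The interface-typed property is what trips Newtonsoft up when deserializing.
    public IAddress Address { get; set; }
}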

So when this gets serialized, you’d end up with:
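{ "Name": "Nick", "Address": { "City": "London" } }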

If you then try to deserialize:
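var json = "{ \"Name\": \"Nick\", \"Address\": { \"City\": \"London\" } }";
var customer = JsonConvert.DeserializeObject<Customer>(json);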

then you will get a JsonSerializationException – Newtonsoft has no way of knowing which concrete type to instantiate for the interface.

The solution is to set up a converter:
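A minimal version of such a converter might look like this:

using System;
using Newtonsoft.Json;

public class InterfaceConverter<TInterface, TConcrete> : JsonConverter
    where TConcrete : TInterface
{
    public override bool CanConvert(Type objectType) => objectType == typeof(TInterface);

    public override object ReadJson(JsonReader reader, Type objectType, object existingValue, JsonSerializer serializer)
    {
        // Whenever the interface is encountered, deserialize into the concrete type instead.
        return serializer.Deserialize<TConcrete>(reader);
    }

    public override bool CanWrite => false;

    public override void WriteJson(JsonWriter writer, object value, JsonSerializer serializer)
    {
        throw new NotSupportedException("This converter is only used when reading.");
    }
}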

Which then gets passed into the deserialize call e.g.:
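var customer = JsonConvert.DeserializeObject<Customer>(
    json,
    new InterfaceConverter<IAddress, Address>());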

Then you can simply add as many InterfaceConverters as you need.

Sinj – scripted Sitecore changes

This post follows on from Migrating Sitecore content and looks to explain the way we migrate content between different environments.

Sinj is the framework we’ve put in place to facilitate a re-playable and scripted approach to any Sitecore changes. It enables you to create changesets via JSON/JavaScript which then get run against a Sitecore authoring environment and any counterpart publishing targets. The code can be found at https://github.com/tcuk/sinj.

For setup instructions have a look at the wiki on GitHub. It also contains examples of the different kinds of operations you can run against your content. As the JS layer talks into the Sitecore API, if you find there is an operation you need to perform but can’t, it can simply be extended to add the desired functionality.

An example script for creating a template would be:

Now, this may seem overly verbose; however, there are easy ways to speed up the JSON generation. I’ll cover these in a subsequent post.

For me the real advantages we gain from this approach are:

  • Changes are as granular as you want – they can be applied to specific fields in specific languages on specific versions if desired.
    • However, if you want to update all languages in one go, it’s simply a case of iterating through each language in a for loop
  • Bulk sets of changes can be applied in one go – simply include all the JS files you wish to deploy in one folder and they will all get applied in sequence
  • You can run the scripts to any database, avoiding any need to publish scattered areas of the tree
  • Changesets can be replayable and don’t have the somewhat confusing concept of overwrite/merge (and its options)
  • You can query the Sitecore tree to gather data to feed into other updates

In the next post I’ll show some examples of how creating Sinj scripts can become a lot simpler…

Migrating Sitecore content

It’s something everyone’s had to do at some point. How can you best migrate content from your dev machine to other environments, e.g. QA / UAT / live?

Before we carry on, let’s define exactly what we mean by content. I’m not talking about files here – let tools designed for deploying files handle that problem. By content I mean Sitecore items. These can be broken into two key areas:

  • content owned by content editors
    • typically items under /sitecore/content and /sitecore/media library
  • content owned by developers
    • typically items under /sitecore/layout, /sitecore/system and /sitecore/templates

In theory both can be migrated in exactly the same way using an out-of-the-box feature: packages. Via the desktop you can decide which items to package up and simply download a package file. You then install this package in other environments and you’ve migrated your content.

So, why a blog post on something you get for free?

A lot of the information here is based on experience of migrating content between lots of environments, content that’s been edited by lots of people in several different places. Trust me, packages work but can become somewhat cumbersome and error-prone as you scale things up.

Some key factors it’s worth considering for the following discussion:

  • How easy is it to build up your changeset?
  • How quick is it to install these changes?
  • How easy is it to see which versions of your changeset have been installed?
  • What happens if you get a ‘merge’ conflict?
    • E.g. what happens if an item in your changeset has been updated in the destination environment
  • How easily can your changeset be source-controlled?
  • How easily can your changeset be updated / versioned?
  • How easy is it to deploy your changeset to your publishing targets?
  • Can the installation be clever?
    • E.g. could you add logic to the install process, or even base the content it installs on existing content?

What options do you have?

Note, this list isn’t meant to be exhaustive, so apologies if you think items are missing – its aim is to highlight answers to the list of questions above. Several tools solve these issues in similar ways. The pros and cons are based on field experience of using each.

Sitecore packages:

These are available to use in an out-of-the-box Sitecore installation. Creating packages can be based on cherry-picking the items from a specific database that you know have changed, or on some dynamic rules (e.g. what’s changed recently).

Pros:

  • They are out the box and quick to get started with
  • You can open the output zip and peer in
  • It’s possible to save an xml file which represents the full content of a package

Cons:

  • Source controlling their content is tricky as they are output as a zip file
  • It can be tricky to get clear visibility of which version of packages have been installed to specific environments
  • Content still requires publishing if installed into master
  • Keeping them up-to-date with changes, especially with a large team can be laborious
  • Validating their content is slow
  • Installing lots of packages in one go is a painful process
  • Install options are somewhat unclear:
    • Overwrite can nuke existing content
    • Merge – does anyone really understand the 3 options?
  • Field level updates aren’t possible

Sitecore update files:

Much like packages, update files store a form of serialized content in zip files. There isn’t a way to generate update files out of the box, so I won’t dwell too much on this option. IMO they suffer many of the same issues as packages.

FYI TDS allows you to generate these files. 

Pros:

  • Partial item updates can be achieved
  • You get detailed installation history and (undocumented) rollback options in the /temp folder

Cons:

  • You can’t simply generate this type of file

Unicorn / TDS:

Unicorn and TDS take a slightly different approach in that they store a view of the world in your solution. Both rely on serialization to generate a view of configurable areas of the tree, Unicorn diverging slightly by using a custom YAML format for its files.

Installing each is slightly different: Unicorn hosts a custom page that allows manual or automatic syncing of files, TDS allows you to generate update files.

I’d argue both these approaches suit developer content well; I’ve struggled storing large amounts of content-editor content in both.

Pros:

  • Source control is your view of the truth – items can be branched & merged along with your code
  • The deployment process can be automated

Cons:

  • TDS does come with an additional cost
  • Deploying to all publishing targets requires the changeset to include content configured for each target
  • In TDS, building more complex installation rules is possible but difficult to visualize (note: I’ve not used the product for a couple of years now, so this may well be better)
    • Examples in mind would be: sync once, field level configuration

Scripting your changes:

You build up custom scripts / helper pages / ???? to allow changes to be made via the Sitecore APIs (or the database if you are feeling particularly Chuck Norris). Let’s assume we have a means for scripting these changes via some JSON configuration (see the summary :)).

Pros:

  • If done right you get an easily re-playable process that can update content in any publishing target
  • All the scripts are source-controlled
  • Scripts can base decisions on existing content
  • Scripts can be as granular or as coarse as you want – bulk updates on multiple items vs single field updates on specific items

Cons:

  • Every change requires ‘scripting’
  • A considerable shift in approach is required
  • A raft of external tooling is required to facilitate generating and installing scripts

Summary, or should it be sales pitch?

We use the last approach across most dev teams here, so it gets used for countless deployments per day. For us it works and is infinitely re-playable. Think of it like advanced config transforms for your content.

I’ll write up more details on the specifics of sinj in a later post – to get started have a look at https://github.com/tcuk/sinj

I’m hoping the information above gets you thinking – just because certain tools exist doesn’t always guarantee they are the best for the job!
