Sitecore forms – custom form element save issue

In a recent project we needed to add some richer functionality to a form, so we decided to wrap it up in a custom Vue.js component which we could then integrate into Sitecore forms. Sounds simple, right?

Building the component

Sitecore provides some good documentation on how to build different flavours of form rows – have a look at the walkthroughs in https://doc.sitecore.com/developers/93/sitecore-experience-manager/en/sitecore-forms.html if you are interested.

Saving your data

Assuming you want to build something a bit richer than the demo video component, chances are you want to actually store data that a user provides. In our use case, we used Vue.js to update a hidden input – under the hood we then save that data into the DB and also ping off to other save actions.
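As a rough sketch of that pattern (the markup and field name here are hypothetical, not lifted from the project) – Vue drives the richer UI and keeps a hidden input in sync, and the hidden input is what the form posts back:

```html
<!-- Hidden input that Sitecore Forms will post back; Vue keeps it in sync -->
<div id="rich-element">
  <input type="hidden" name="myCustomField" :value="selectedValue" />
  <button type="button" @click="selectedValue = 'option-a'">Pick option A</button>
</div>

<script>
  new Vue({
    el: "#rich-element",
    data: { selectedValue: "" }
  });
</script>
```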

Simples? Well, not quite – unless you know where to set things up.

Configuring the form row

In Sitecore forms, a custom form row needs a few things: a template in master to represent the configuration of the form row, and a set of items in core to represent the UI for configuring it.

https://doc.sitecore.com/developers/93/sitecore-experience-manager/en/walkthrough–creating-a-custom-form-element.html

The importance of AllowSave

This is the key bit, and it took a fair amount of digging to find. I could see my custom field was being posted back to Sitecore with the right data. But, it was never getting saved to the database 🙁

To fix this, I needed to make sure that both the configuration in core and my custom template had AllowSave available.

  • In core, under ‘/sitecore/client/Applications/FormsBuilder/Components/Layouts/PropertyGridForm/PageSettings/Settings’ you create your custom configuration, including sub-items based off the template ‘FormSection’ (see ‘/sitecore/client/Applications/FormsBuilder/Components/Layouts/PropertyGridForm/PageSettings/Settings/SingleLineText/Advanced’ for reference)
    • Here is where you need to ensure you include ‘AllowSave‘ in the ‘ControlDefinitions‘ field for your custom item
    • This is enough to get the checkbox showing in the form builder UI, but not enough to get everything working
  • In master, under ‘/sitecore/templates/System/Forms/Fields’ you create the template to represent the configuration data being saved for your form element
    • Here is where you need to make sure the base templates include ‘Save Settings’

Summary

Setting up a custom form row / element is generally pretty simple. However, the documentation doesn’t quite cover a key step – saving the data. It doesn’t take much additional configuration as long as you know the right places to make changes!

Happy saving.

Livestreaming with Serato, OBS, multiple webcams and a set of novation dicers

In the immortal words of Monty Python… And now for something completely different!

I wanted to have a play with some livestreaming so we could host a virtual birthday party for my wife. In the current climate this is all the rage, so I figured it should be pretty straightforward to set up. But how could we make it a bit more interactive?

[Image: action shot]

Getting started

It’s really quick and easy to start streaming as long as you have a few things set up:

  • Good internet – you don’t want things cutting out because your bandwidth can’t cut it
  • Some streaming software – I’d recommend OBS
  • Some form of webcam. A lot of online retailers seem sold out of webcams at the moment, so you could always use your phone
    • I found #LiveDroid worked really well on my android phone
  • A platform, or multiple platforms, to stream to, e.g. Facebook, Twitch, Mixcloud Live etc.

With all this setup you should be able to test and get your stream on. Nice!

Stepping things up a bit

Good start – you can now get streaming! But let’s have a bit more fun. What if you want to show multiple cameras, add fancy graphics, split up the stream UI or stream to multiple platforms all at once?

First up, I’d really recommend getting to know OBS – in particular how the different input sources can be used and layered. This allows you to mix and match multiple cameras, graphics and chunks of screen into one scene.

Next, look at scenes – here you can show different combinations of cameras/overlays/sources, you name it. Plus the fun part: you can use your keyboard, or even better a midi controller, to toggle between scenes.

High class framing!

Here are a couple of rather stylish frame options:

[Image: easter frame]
[Image: urban frame]

Get your midi on

I had an old set of Novation Dicers at home, so wondered whether they could come in handy. I plugged them into my streaming computer and needed to wire up a few things.

  1. In OBS setup multiple scenes
  2. In the OBS settings, assign a key to each scene (Hotkeys)
    1. You can test switching scenes from the keyboard. I simply mapped 1,2,3,4,5 to each scene respectively
    2. This might be enough for you – but if you want the midi control read on…
  3. Plug in your midi controller
  4. Install midikey2key – this took a bit of getting used to – I’d recommend:
    1. Press ‘Start’ in Start/Stop listening
    2. Turn on ‘Log to window’
    3. Turn on all ‘Channel listeners’
    4. Press the buttons on your midi controller
    5. When the signal shows in the log window, double click it and assign the output you want
    6. For my Dicers, this was also mapped to keyboard buttons 1,2,3,4,5
    7. Test it all 🙂
[Image: midi software]

Here is the mapping config I used: dicer.txt (rename to ini)

Just use your phone

#LiveDroid allows you to set up your android phone as a webcam. When you fire up the app and share the screen, it will stream the output over your network. Add this as a URL source in OBS.

If for some reason this doesn’t show, test the URL in a browser. If OBS still doesn’t show it, double click the URL source and hit the reset button.

Multiple streams = more viewers?!?!

Well, not quite, but using something like Restream you can send one output from OBS and broadcast it to multiple platforms.

Gotchas

All too easy huh! Well, there are a few things to be careful of with all this:

  1. Midi presses log both down and up events in midikey2key. I’d recommend mapping your key presses to just one or the other, not both, otherwise you get some funky flashing behaviour.
  2. Test it all, including the sound!
    1. The first stream we did, the audio was sourced through a webcam, so you could hear everything we said, cross fader clicks etc.
    2. The second stream we did, the audio doubled up from both the mixer => line in and the PC audio. Make sure you have just one audio source enabled
  3. Some streaming platforms have rather zealous moderation/copyright restrictions. Chunks of some of our streams have been muted because of this.
    1. You could try pitching things up a semitone in e.g. pitch and time
    2. Or changing the pitch
    3. Or whacking the sampler / air horn until everyone is sick to death of it 🙂

Taking it to the next level

This is only scratching the surface of what you can do on livestreams. Custom overlays are easy to add, and multiple feeds/scenes allow for more interesting content. OBS also supports things like green screens.

DJ Yoda does a weekly instagram show where he is green screened on top of the video output from Serato.

A good friend, DJ Cheeba, is doing some very cool green screen demos with the video overlaid on the decks! Check out http://djcheeba.com/lockdown-live-stream/ for some proper nerding out.

Summary

It’s really easy to get live streams up and running, all off open source software. Do give it a go, and remember – have fun! Some of the best streams I’ve watched have been great because they are fun and don’t take themselves too seriously 🙂

Customizing logging in a C# dotnetcore AWS Lambda function

One challenge we hit recently was how to build our dotnetcore lambda functions in a consistent way – in particular how would we approach logging.

A pattern we’ve adopted is to write the core functionality for our functions so that it’s as easy to run from a console app as it is from a lambda. The lambda can then be considered only as the entry point to the functionality.

Serverless Dependency injection

I am sure there are different schools of thought here – should you use a container within a serverless function or not? For this post, the design assumes you do make use of the Microsoft DependencyInjection libraries.

Setting up your projects

Based on the design mentioned above, i.e. you can run the functionality as easily from a console app as you can from a lambda, I often set up the following projects:

  • Project.ActualFunctionality (e.g. SnsDemo.Publisher)
  • Project.ActualFunctionality.ConsoleApp (e.g. SnsDemo.Publisher.ConsoleApp)
  • Project.ActualFunctionality.Lambda (e.g. SnsDemo.Publisher.Lambda)

The actual functionality lives in the top project and is shared with both other projects. Dependency injection, and AWS profiles are used to run the functionality locally.

The actual functionality

Let’s assume the functionality for your function does something simple, like pushing messages onto an SQS queue:
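Something along these lines – the class name and queue URL are placeholders of mine, not the original project’s code:

```csharp
using System.Threading.Tasks;
using Amazon.SQS;
using Microsoft.Extensions.Logging;

public interface IMessagePublisher
{
    Task PublishAsync(string message);
}

public class SqsMessagePublisher : IMessagePublisher
{
    // Placeholder queue URL - swap in your own
    private const string QueueUrl = "https://sqs.eu-west-1.amazonaws.com/123456789012/demo-queue";

    private readonly IAmazonSQS _sqs;
    private readonly ILogger<SqsMessagePublisher> _logger;

    public SqsMessagePublisher(IAmazonSQS sqs, ILogger<SqsMessagePublisher> logger)
    {
        _sqs = sqs;
        _logger = logger;
    }

    public async Task PublishAsync(string message)
    {
        // The functionality knows nothing about lambda - it just logs via the injected ILogger<T>
        await _sqs.SendMessageAsync(QueueUrl, message);
        _logger.LogInformation("_logger Messages sent");
    }
}
```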

The console app version

It’s pretty simple to get DI working in a dotnetcore console app:
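A minimal sketch – the default AmazonSQSClient constructor picks up your local AWS profile, which is what lets this run outside of AWS:

```csharp
using System.Threading.Tasks;
using Amazon.SQS;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;

public static class Program
{
    public static async Task Main()
    {
        var provider = new ServiceCollection()
            .AddLogging(a => a.AddConsole())   // console apps just log to the console
            .AddSingleton<IAmazonSQS, AmazonSQSClient>()
            .AddTransient<IMessagePublisher, SqsMessagePublisher>()
            .BuildServiceProvider();

        await provider.GetRequiredService<IMessagePublisher>().PublishAsync("Hello from the console app");
    }
}
```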

The lambda version

This looks very similar to the console version:
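A sketch of the lambda entry point, reusing the same SqsMessagePublisher from above:

```csharp
using System.Threading.Tasks;
using Amazon.Lambda.Core;
using Amazon.SQS;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;

[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.Json.JsonSerializer))]

public class Function
{
    public async Task FunctionHandler(string input, ILambdaContext context)
    {
        // Identical registrations to the console app, except for the logging provider
        var provider = new ServiceCollection()
            .AddLogging(a => a.AddProvider(new CustomLambdaLogProvider(context.Logger)))
            .AddSingleton<IAmazonSQS, AmazonSQSClient>()
            .AddTransient<IMessagePublisher, SqsMessagePublisher>()
            .BuildServiceProvider();

        await provider.GetRequiredService<IMessagePublisher>().PublishAsync(input);
    }
}
```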

The really interesting bit to take note of is: .AddLogging(a => a.AddProvider(new CustomLambdaLogProvider(context.Logger)))

In the actual functionality we can log in many ways:
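For illustration (assuming the lambda context and an injected logger are in scope – all three end up in CloudWatch when running in AWS):

```csharp
// Static helper from Amazon.Lambda.Core - works anywhere, but couples you to lambda
LambdaLogger.Log("static LambdaLogger message");

// Via the context - means passing ILambdaContext down through your code
context.Logger.LogLine("ILambdaContext logger message");

// Via an injected ILogger<T> - no lambda references in the functionality at all
_logger.LogInformation("_logger Messages sent");
```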

To make things lambda agnostic, I’d argue injecting ILogger<Type> and calling _logger.LogInformation("_logger Messages sent") is the preferred option.

Customizing the logger

It’s simple to customize the dotnetcore logging framework – for this demo I set up two things: the CustomLambdaLogProvider and the CustomLambdaLogger. First the provider:
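Its only job is to hand the logging framework a logger that wraps the lambda’s own ILambdaLogger – a basic version might look like this:

```csharp
using Amazon.Lambda.Core;
using Microsoft.Extensions.Logging;

public class CustomLambdaLogProvider : ILoggerProvider
{
    private readonly ILambdaLogger _lambdaLogger;

    public CustomLambdaLogProvider(ILambdaLogger lambdaLogger)
    {
        _lambdaLogger = lambdaLogger;
    }

    // Called by the framework once per category, e.g. for each ILogger<T>
    public ILogger CreateLogger(string categoryName)
        => new CustomLambdaLogger(categoryName, _lambdaLogger);

    public void Dispose()
    {
    }
}
```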

And finally a basic version of the actual logger:
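Something along these lines – the level filtering here is my choice, adjust as you need:

```csharp
using System;
using Amazon.Lambda.Core;
using Microsoft.Extensions.Logging;

public class CustomLambdaLogger : ILogger
{
    private readonly string _categoryName;
    private readonly ILambdaLogger _lambdaLogger;

    public CustomLambdaLogger(string categoryName, ILambdaLogger lambdaLogger)
    {
        _categoryName = categoryName;
        _lambdaLogger = lambdaLogger;
    }

    public IDisposable BeginScope<TState>(TState state) => null;

    public bool IsEnabled(LogLevel logLevel) => logLevel != LogLevel.None;

    public void Log<TState>(LogLevel logLevel, EventId eventId, TState state,
        Exception exception, Func<TState, Exception, string> formatter)
    {
        if (!IsEnabled(logLevel)) return;

        // Route everything through the OTB lambda logger so it lands in CloudWatch
        _lambdaLogger.LogLine($"[{logLevel}] {_categoryName}: {formatter(state, exception)}");
    }
}
```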

Summary

The aim here is to keep your application code agnostic to where it runs. Using dependency injection we can share core logic between any ‘runner’, e.g. Lambda functions, Azure functions, console apps – you name it.

With some small tweaks to the lambda logging calls you can ensure the OTB lambda logger is still used under the hood, while your implementation code injects things like ILogger<T> wherever needed 🙂

Automating a multi region deployment with Azure Devops

For a recent project we’ve invested a lot of time into Azure Devops, and for the most part found it a very useful toolset for deploying our code to both Azure and AWS.

When we started on this process, YAML pipelines weren’t available for our source code provider – this meant everything had to be set up manually 🙁

However, recently this has changed 🙂 This post will run through a few ways you can optimize your release process and automate the whole thing.

First a bit of background and then some actual code examples.

Why YAML?

Setting up your pipelines via the UI is a really good way to quickly prototype things, but what if you need to change those pipelines in step with code features? YAML allows you to keep the pipeline definition in the same codebase as the actual features: you deploy branch XXX, and that can be configured differently to branch YYY.

Another benefit: the changes are visible in your pull requests, so validating them is a lot easier.

Async Jobs

A big optimization we gained was releasing to different regions in parallel. YAML makes this very easy by using jobs – each job can run on its own agent and hence push to multiple regions at the same time.

https://docs.microsoft.com/en-us/azure/devops/pipelines/process/phases?view=azure-devops&tabs=yaml

Yaml file templates

If you have common functionality you want to reuse, e.g. ‘Deploy to eu-west-1’, templates are a good way to split things up. They allow you to group logical functionality you want to run multiple times.

https://docs.microsoft.com/en-us/azure/devops/pipelines/process/templates?view=azure-devops

Azure Devops rest API

All of your builds/releases can be triggered via the UI portal, however if you want to automate that process I’d suggest looking into the REST API. Via this you can trigger, monitor and administer builds, releases and a whole load more.

We use PowerShell to orchestrate the process.

https://docs.microsoft.com/en-us/rest/api/azure/devops/build/builds/queue?view=azure-devops-rest-5.1

Variables, and variable groups

I have to confess, this syntax feels slightly cumbersome, but it’s very possible to reference variables passed into a specific pipeline alongside global variables from groups you set up in the Library section of the portal.
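For illustration, mixing both flavours (the group name and variables here are made up):

```yaml
variables:
  - group: global-release-settings   # a variable group defined in Pipelines > Library
  - name: regionCount                # a pipeline-local variable
    value: 2

steps:
  - script: echo "Deploying $(regionCount) regions for $(environmentName)"
    # $(environmentName) would come from the variable group above
```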

Now, some examples

The root YAML file:
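A trimmed-down sketch – the region names and template path are examples:

```yaml
# azure-pipelines.yml
trigger:
  - master

jobs:
  # Each job gets its own agent, so the two regions release in parallel
  - template: deploy-to-region.yml
    parameters:
      name: EuWest1
      region: eu-west-1
  - template: deploy-to-region.yml
    parameters:
      name: UsEast1
      region: us-east-1
```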

The ‘DeployToRegion’ template:
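A sketch of the template – it takes the target region as a parameter and groups everything needed to release to it:

```yaml
# deploy-to-region.yml
parameters:
  name: ''
  region: ''

jobs:
  - job: Deploy${{ parameters.name }}   # job names can't contain hyphens
    displayName: Deploy to ${{ parameters.region }}
    pool:
      vmImage: windows-latest
    steps:
      - script: echo Deploying to ${{ parameters.region }}
        displayName: Placeholder for the real deployment steps
```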

And finally some powershell to fire it all off:
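A hedged sketch against the 5.1 queue builds API – the organisation, project, definition id and PAT are all placeholders:

```powershell
# Queue a pipeline run via the Azure Devops REST API
$organisation = "my-org"
$project      = "my-project"
$definitionId = 42
$pat          = $env:AZDO_PAT   # a personal access token with Build (read & execute) scope

# PATs use basic auth with an empty username
$headers = @{ Authorization = "Basic " + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$pat")) }
$body    = @{ definition = @{ id = $definitionId }; sourceBranch = "refs/heads/master" } | ConvertTo-Json

$url   = "https://dev.azure.com/$organisation/$project/_apis/build/builds?api-version=5.1"
$build = Invoke-RestMethod -Uri $url -Method Post -Headers $headers -Body $body -ContentType "application/json"

Write-Host "Queued build $($build.id) - status: $($build.status)"
```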

Happy deploying 🙂

JSS Blog post series

I’ve recently been working with the Marketing team within Valtech to get a series of JSS Blog posts published onto the Valtech site.

If anyone is interested you can access them via https://www.valtech.com/en-gb/insights/going-live-with-jss/

The topics cover things like what it’s like to move from being a traditional Sitecore dev to a JSS dev, how to get everything deployed, any gotchas we didn’t estimate for when we started and some key design decisions we made along the way.

I hope you find them useful 🙂

Setting a row colour in powershell | Format-Table

This is quite a quick post, but a useful tip. If you are setting up some data in PowerShell which you then fire at the console via | Format-Table, it can be useful to highlight specific rows.

Imagine you have a hashtable with the key as a string, and the value as a number. When you send it to the console you will see the names and values set out in a table.

Now if you want to set John to be a certain colour, you can use the code below.
Note: for static values this doesn’t add much, but we use it for a table that is printed dynamically, e.g. refreshed on a timer tick with the highlighted Name determined at runtime.
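A minimal sketch of the trick, embedding VT100 escape sequences via a calculated property (the hard-coded ‘John’ and the colour are just for the demo):

```powershell
$esc  = [char]27
$data = @{ John = 42; Jane = 37; Fred = 29 }

$data.GetEnumerator() | Format-Table @{
    Label      = 'Name'
    Expression = {
        # Wrap the row we care about in yellow escape codes, then reset the colour
        if ($_.Key -eq 'John') { "$esc[93m$($_.Key)$esc[0m" } else { $_.Key }
    }
}, @{ Label = 'Value'; Expression = { $_.Value } }
```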

This requires PowerShell 5.1 or later (check with $PSVersionTable.PSVersion) and doesn’t seem to play fair with the PowerShell ISE, however from a normal PowerShell window or VSCode it works a charm.

Happy colouring 🙂

Build yourself a JMeter load testing server

As you come close to launching your new web application, whether it be Sitecore, Node or plain ol’ HTML, it’s always good to validate how well it performs.

The cloud opens up lots of possibilities for how to approach this – including lots of online LTAAS (er, is load test as a service even a thing!?!? :))

Iteration 1 – LTAAS with Azure Devops

We are using Azure Devops within our current project, so thought it would be good to give their load testing features a blast. This came with mixed success, and a mixed $$$ cost.

Pros

  • You don’t need to manage any of the kit
  • The sky’s the limit with the number of concurrent machines you can run (< 25)
  • It supports various methods for building a script

Cons

  • The feedback loop can feel slow
  • You get limited support for JMeter scripts, and limited graphs of your results. Note, this could be down to our inexperience with the tool
  • It costs per minute of load test you run. We managed to unwittingly rack up quite a substantial bill with a misconfigured script.

Iteration 2 – DIY

Another approach is to set up the infrastructure yourself. For our capacity and requirements this ended up being a much more favourable option – once we’d managed to get the most out of our kit.

Pros

  • Assuming you use JMeter, you can quickly iterate through tests and get a wide spread of results as you go
  • If you need more grunt, you can always increase the box specs

Cons

  • You need to tune the box to get the most out of it
  • Large boxes in e.g. AWS cost $$$

Configuring things yourself

Here are a few steps to follow if you really want to max out your load test box, as well as your web infrastructure:

  • Pick a box with plenty of RAM – we opted for an AWS r5.2xlarge (8 cores and 64GB RAM)
  • Ensure JMeter can use as much of that RAM as possible. Within jmeter.bat you can set the heap size available to the program – by default this is 512MB. If you add set HEAP=-Xms256m -Xmx60g then JMeter will soak up as much of that 60GB as it needs
  • Ensure Windows can use as many TCP/IP connections as possible. Again, by default the limit is quite low. You need to set two registry keys – see http://docs.testplant.com/epp/9.0.0/ePP/advovercoming_tcpip_connection_li.htm for more details, and the sketch after this list
    • Until we’d set these values, our tests would bomb out after a couple of minutes as the box simply couldn’t open any more connections to our website.
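Here are both tweaks sketched out – to the best of my knowledge MaxUserPort and TcpTimedWaitDelay are the two keys the linked article covers; treat the numbers as starting points, not gospel:

```powershell
# 1. In apache-jmeter-x.x\bin\jmeter.bat, raise the JVM heap (default is ~512MB):
#      set HEAP=-Xms256m -Xmx60g

# 2. Raise the Windows TCP/IP connection limits (run as admin, then reboot):
$tcpip = 'HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters'
Set-ItemProperty -Path $tcpip -Name MaxUserPort       -Value 65534 -Type DWord
Set-ItemProperty -Path $tcpip -Name TcpTimedWaitDelay -Value 30    -Type DWord
```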

Other tips

JMeter has some really good plugins for modelling load, in particular around stepped load and realtime visualization of results.

I’d recommend checking out the JMeter Plugins project (https://jmeter-plugins.org/) – in particular the Stepping Thread Group for modelling ramped load, and the ‘over time’ listener graphs for watching results as they come in.

Some good additional reading

https://www.blazemeter.com/blog/9-easy-solutions-jmeter-load-test-%E2%80%9Cout-memory%E2%80%9D-failure

Happy testing! 🙂

Setting up JSS with Vue, Typescript and dependency injection

If JSS is a new term for you, I’d seriously recommend checking out the documentation that Sitecore have provided: https://jss.sitecore.com/.

By the end of this post we’ll have run through how you can get JSS up and running locally, with dependencies all wired together using a DI container and any functional aspects written in TypeScript. For us this was a key set of requirements – we’ve worked with many projects that have grown over several years, and putting some key rules and requirements in place up front should mean, with good discipline, that the codebase can scale over time.

Why JSS?

Imagine a standard Sitecore development team, typically built around C# developers and some front end devs. In the default approach to building a site, everyone contributes Razor files with the markup and associated styling and functionality. That’s the approach you’d probably have seen for several years, until the more recent demand for richer front end technologies – think Vue, Angular, React and so on. What if the front end devs could work natively in those frameworks, with Sitecore supplying the data?

This is what JSS facilitates.

Is this right for everyone?

Just because technologies exist, it doesn’t always make them the right platform to jump on. For example, if you have a very established Sitecore development team that doesn’t have the appetite for these front end technologies, then JSS might not be the thing for you.

Getting started

The quick start from the docs site provides 4 tasks to get you started:
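At the time of writing, the four tasks looked like this for a Vue app (check the docs for the current commands):

```bash
npm install -g @sitecore-jss/sitecore-jss-cli
jss create my-first-jss-app vue
cd my-first-jss-app
jss start
```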

Provided you have node installed, once you run the above you should see http://localhost:3000 fire up.

Why TypeScript?

I wouldn’t consider starting a new web project now without TypeScript, as it provides so many useful features for a codebase. Refactoring works just like it would in C#, variables have types, and classes can implement abstract classes or interfaces. If you’ve not used it before, I’d highly recommend it.

In terms of designing your application, another key factor to consider is the coupling between the different layers. Core functionality being one layer, your UI framework being another. If you structure things so that e.g. you can peel out Vue without too much trouble, moving up through different technologies or versions will be a breeze.

Changes to the default app

Here we’ll add things like some demo services, some DI registrations and a few other bits we’ll need.

1. First up, let’s include some extra dependencies:
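For the container I’ll assume InversifyJS – the post doesn’t mandate one, and any decorator-based DI container works the same way:

```bash
npm install inversify reflect-metadata --save
npm install typescript ts-loader --save-dev
```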

2. In src/AppRoot.vue, before the export default line add import "reflect-metadata"

3. Add a tsconfig.json file to the root folder (a level above src):
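A reasonable starting point – experimentalDecorators and emitDecoratorMetadata are the bits the DI container relies on:

```json
{
  "compilerOptions": {
    "target": "es5",
    "module": "esnext",
    "moduleResolution": "node",
    "strict": true,
    "experimentalDecorators": true,
    "emitDecoratorMetadata": true,
    "lib": ["dom", "es2015"],
    "types": ["reflect-metadata"]
  },
  "include": ["src/**/*.ts", "src/**/*.vue"]
}
```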

4. Update the webpack config – in the Vue world this is done in vue.config.js:
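Roughly like this – the key part is teaching webpack to run .ts files (and the TypeScript blocks inside .vue files) through ts-loader:

```js
// vue.config.js - a sketch; assumes ts-loader was installed above
module.exports = {
  chainWebpack: config => {
    config.resolve.extensions.prepend(".ts");
    config.module
      .rule("ts")
      .test(/\.ts$/)
      .use("ts-loader")
      .loader("ts-loader")
      .options({ appendTsSuffixTo: [/\.vue$/] });
  }
};
```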

5. Now add a vue-shim.d.ts (in the src folder):
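```ts
// src/vue-shim.d.ts - lets TypeScript import .vue single file components
declare module "*.vue" {
  import Vue from "vue";
  export default Vue;
}
```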

6. Next, some dummy TypeScript dependencies:
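For example – IService and ServiceA are demo names of mine:

```ts
// src/services/ServiceA.ts
import { injectable } from "inversify";

export interface IService {
  name(): string;
}

@injectable()
export class ServiceA implements IService {
  public name(): string {
    return "ServiceA";
  }
}
```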

7. And the DI container and keys:
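A single shared container instance, with a symbol key per service:

```ts
// src/container.ts - one InversifyJS container for the app
import { Container } from "inversify";
import { IService, ServiceA } from "./services/ServiceA";

export const TYPES = {
  IService: Symbol.for("IService")
};

export const container = new Container();
container.bind<IService>(TYPES.IService).to(ServiceA);
```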

8. Now a TypeScript enabled Vue component: /src/components/Hello.vue
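A sketch of the component – it resolves IService from the container and logs the result:

```vue
<template>
  <div>Hello from TypeScript</div>
</template>

<script lang="ts">
import Vue from "vue";
import { container, TYPES } from "../container";
import { IService } from "../services/ServiceA";

export default Vue.extend({
  name: "Hello",
  mounted() {
    // Resolve the dependency from the container and prove it works
    const service = container.get<IService>(TYPES.IService);
    console.log(service.name()); // "ServiceA"
  }
});
</script>
```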

9. And finally, to get it showing on a page, edit layout.vue to include your component:
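The relevant additions are the import and the components registration – the rest of your layout markup stays as it was:

```vue
<template>
  <div>
    <Hello />
    <!-- existing placeholder markup stays as it was -->
  </div>
</template>

<script>
import Hello from "./components/Hello.vue";

export default {
  name: "Layout",
  components: { Hello }
};
</script>
```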

After all that, you should see the homepage load up and “ServiceA” logged to the console. Not the most impressive output, but it shows the full flow of dependencies being configured and resolved, with all the code written in TypeScript 🙂

If you are using SSR Renderings, you’ll also need to add |ts into the list of rules that get ‘unshift’ed in /server/server.vue.config.js

Deploying custom code to xConnect and the Marketing Automation Engine

Over the last few years the deployment footprint of a fully functional Sitecore application has shifted hugely. It’s no longer as simple as one database server and a couple web nodes – now you need to consider all kinds of different infrastructure.

What are the different parts of Sitecore 9?

  • xConnect – a separate web application to your main site
  • AutomationEngine – this runs as a windows service
  • IndexWorker – this also runs as a windows service
  • Website – much like the good ol’ days 🙂

Adding your own customizations

It’s pretty simple to set up your own custom facets. What’s slightly harder is deploying them to all of the different functions above – if the dlls and configs don’t match between e.g. the website and xConnect, you will get errors in the logs. Luckily these do a good job of explaining the mismatch.
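For reference, here’s a minimal facet and model sketch – LoyaltyFacet and DemoModel are made-up names, but the builder API is the standard xConnect one:

```csharp
using Sitecore.XConnect;
using Sitecore.XConnect.Schema;

// A custom facet is just a class deriving from Facet
public class LoyaltyFacet : Facet
{
    public const string DefaultFacetKey = "Loyalty";
    public int Points { get; set; }
}

public class DemoModel
{
    public static XdbModel Model { get; } = BuildModel();

    private static XdbModel BuildModel()
    {
        var builder = new XdbModelBuilder("DemoModel", new XdbModelVersion(1, 0));

        // Reference the default collection model so the built-in facets still resolve
        builder.ReferenceModel(Sitecore.XConnect.Collection.Model.CollectionModel.Model);
        builder.DefineFacet<Contact, LoyaltyFacet>(LoyaltyFacet.DefaultFacetKey);

        return builder.BuildModel();
    }
}
```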

Sharing the love

In its simplest form, the process of deploying your custom facet relies on 2 things – the dll that contains the facet and a JSON representation of the facets. To generate the JSON you can lean on the serialization helpers that ship with xConnect.
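A small console snippet using the DemoModel sketched above:

```csharp
using System.IO;
using Sitecore.XConnect.Serialization;

// Produces e.g. 'DemoModel, 1.0.json' - the file all the xConnect services need to agree on
var model = DemoModel.Model;
var json = XdbModelWriter.Serialize(model);
File.WriteAllText(model.FullName + ".json", json);
```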

Automate the boring stuff

No one likes doing the same thing again and again, especially if you consider deploying something like this to multiple servers in the cloud.

For a recent demo I built a process that worked both locally and remotely. This was great as the octopus deploy step only had to run one exe and the whole deployment glued together as expected.

Just show me the codez!

Just before we do I’ll quickly explain the steps involved:

  1. Build the code (no shit sherlock)
  2. Write the json schema
  3. Deploy the model config (see the sitecore post earlier about this format)
  4. Deploy the dlls
  5. Deploy the patch configs
  6. Deploy the agent configs. Note these assume you are using Slow Cheetah to transform accordingly for each environment

Before you run it you need to:

  1. Correct the references to things like xConnect etc
  2. Correct the references to the dlls you want to include and set their names in the DeployDlls method (var dllsToCopy)
  3. Set the deploymentFolder

All the source code is available online here

One file of note is sc.MarketingAutomation.ActivityTypes.xml – this allows you to patch in things like custom MA actions, set up dependency injection within the MAEngine and a whole raft more.