If you read my previous blog post about new hosting options for Optimizely CMS 12+, you will know that it is now possible to run such a site on container infrastructure. Azure App Service is one such container platform in the public cloud.
Some companies may prefer to host their site in a private cloud, maybe using Docker Engine. The goals of this blog post are to walk through:
- Building a scalable container image
- Running the image as a scalable service
- Scaling the service up and down
Preparing for Docker hosting
Hosting a Docker image has some constraints:
- All container instances are temporary. Any changes to the file system will be lost when the instance is shut down.
- Every configuration setting must be added in code, in a configuration file or as an environment variable. It is not possible to log in to an instance and change its configuration (like we can with Windows/IIS).
- Updates to the application, its framework dependencies or the Linux base image have to be delivered as a new container image. On the upside, we do not have to maintain or update individual instances.
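The second constraint is less limiting than it sounds, because ASP.NET Core maps environment variables onto configuration keys (with `__` translating to `:`). As an illustration, a connection string could be supplied when the container starts; the server name and credentials below are placeholders:

```shell
# Override the ConnectionStrings:EPiServerDB configuration key at startup.
# ASP.NET Core translates "__" in environment variable names to ":".
docker run \
-e "ConnectionStrings__EPiServerDB=Server=sql.company.local;Database=OptimizelyCms;User Id=cmsuser;Password=*****" \
stefanolsen
```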
We also need to consider some differences between cloud hosting and usual self-hosting.
Where do we want the Optimizely blob files (media assets, certain index files etc.)?
All Docker containers are ephemeral by definition: a container starts from its image, and anything changed at runtime is forgotten when the container is restarted. This is very good for the application files, but we need somewhere to store all the Optimizely media assets (blob files) that editors upload.
On self-hosted Optimizely sites those files are usually stored on a file server and accessed through a shared network folder. On cloud-hosted sites they are usually stored in a blob storage container.
In this article I run all instances on the same machine, so I can simply prepare a data volume on my local machine, like this:
docker volume create stefanolsen-appdata
But for real hosting, Docker supports mounting CIFS (Windows) and NFS (Linux) shares inside the container OS. So, we can store the files exactly the same way as we did previously.
A CIFS volume can be created like this:
docker volume create \
--driver local \
--opt type=cifs \
--opt device=//fileserver.company.local/appdata \
--opt o=addr=fileserver.company.local,username=uxxxxxxx,password=*****,file_mode=0777,dir_mode=0777 \
stefanolsen-appdata
Now the file storage is ready to be mounted and used. More about that later.
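For completeness, an NFS volume (for a Linux file server) can be created in much the same way. The server name and export path below are assumptions for illustration:

```shell
# Create a volume backed by an NFS export on a Linux file server.
docker volume create \
--driver local \
--opt type=nfs \
--opt o=addr=fileserver.company.local,rw \
--opt device=:/exports/appdata \
stefanolsen-appdata-nfs
```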
How do we propagate event messages between site instances?
Whenever Optimizely persists data changes it sends a cache invalidation message to all participating servers.
For sending Optimizely CMS 12 event messages between on-premise servers, I suggested using UDP sockets in a previous article.
However, Docker's networks do not support sending UDP packets via multicast or broadcast. The closest we can get is sending UDP unicast packets, but then we would need to know the hostname or IP address of every server instance in advance. That is quite impractical in a containerized, scalable environment, where the instances do not know about each other in advance. So, we need to use a piece of message broker software instead.
Such a message broker could be:
- Redis with Pub/Sub
- RabbitMQ using Optimizely’s experimental provider
- Azure Service Bus
- Something else that you build a custom provider for
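The Pub/Sub idea itself is simple and can be tried out from the command line with `redis-cli`. The channel name and payload below are invented for illustration; a real setup would plug into Optimizely's event provider abstraction:

```shell
# In one terminal, subscribe to a shared channel
# (conceptually what every site instance does):
redis-cli SUBSCRIBE optimizely-events

# In another terminal, publish a message
# (conceptually what an instance does after persisting a change):
redis-cli PUBLISH optimizely-events "invalidate:content/42"
```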
Building a Docker container image
In order to run an Optimizely CMS 12 site in a Docker environment, the site needs to be built and packaged into a container image. This requires the definition of a Dockerfile, which is like a “build script” in Docker.
I decided to base the experiment on my own blog website, which is made up of a single VS solution containing a single web project. This results in a Dockerfile like this:
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src
COPY NuGet.config ./
COPY StefanOlsen/*.csproj ./
RUN dotnet restore
COPY StefanOlsen/. ./
RUN dotnet publish -c Release -o /app/out

FROM mcr.microsoft.com/dotnet/aspnet:6.0 AS run
WORKDIR /app
COPY --from=build /app/out .
ENTRYPOINT ["dotnet", "StefanOlsen.dll"]
What happens here is this:
- Using the (larger) .NET SDK base image, copy all relevant project files into the working directory.
- Run restore and publish on the project, storing the output in a folder called out.
- Using the smaller ASP.NET runtime base image, copy the files from the out folder into the working directory.
- Run the Kestrel server that hosts the site (as described in my previous blog post).
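One small detail worth adding: a .dockerignore file next to the Dockerfile keeps local build output and the Git history out of the build context, which makes builds faster and more reproducible. A minimal example:

```
bin/
obj/
.git/
```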
To build the container image, I then call the following command in a console window (where “stefanolsen” is the name of the new image):
docker build -t stefanolsen .
Now I have a container image on my local Docker instance. In a real setting, this would be done in a CI/CD pipeline. And the container image would be stored in a repository, as described in many other articles.
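In such a pipeline, publishing the image to a registry typically comes down to a tag and a push. The registry host and version tag below are placeholders:

```shell
# Tag the local image with the registry address and a version,
# then push it so the Docker hosts can pull it.
docker tag stefanolsen registry.company.local/stefanolsen:1.0.0
docker push registry.company.local/stefanolsen:1.0.0
```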
Deploying to Docker
In the following I assume you already have a Docker Swarm (a number of Docker servers in a cluster) created and initialized.
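For reference, a minimal Swarm is created by initializing one manager node and joining the other nodes to it. The IP address below is a placeholder, and the join token is printed by the init command:

```shell
# On the first (manager) node:
docker swarm init --advertise-addr 10.0.0.10

# On each additional node, using the token printed by "swarm init":
docker swarm join --token <worker-token> 10.0.0.10:2377
```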
To run a container image as a clustered service (rather than a regular container), call the following command:
docker service create \
--name stefanolsen-integration \
--mount source=stefanolsen-appdata-integration,target=/app/appdata \
-e "ASPNETCORE_ENVIRONMENT=Integration" \
-p 8080:80 \
--replicas 2 \
stefanolsen
Notice how I added:
- The data volume to a mount point
- The ASP.NET environment name that Kestrel needs to know
- The port mapping (8080 on host maps to 80 in the container)
- The number of instances (replicas) to start
This will instantly bring up 2 container instances based on the same image for an Integration environment. With some minor modifications, I could also easily spin up Preproduction and Production environments.
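To verify that both replicas came up (and see which node each one landed on), we can ask Swarm for the service's task list:

```shell
docker service ps stefanolsen-integration
```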
Now, imagine I have a container running the Integration site, and then I build the container image again with some changes. Instead of killing the service and starting a new one, I would much rather update it as a rolling update. Like this:
docker service update --image stefanolsen stefanolsen-integration
Docker will now update each instance of the service, one at a time.
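The rolling-update behavior can also be tuned, for example to update one task at a time with a pause between them. The image tag below is a placeholder:

```shell
docker service update \
--image registry.company.local/stefanolsen:1.0.1 \
--update-parallelism 1 \
--update-delay 10s \
stefanolsen-integration
```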
Scaling a Docker service
If we need to scale up a Docker service, we can simply call (as an example):
docker service scale stefanolsen-integration=5
This will increase the number of instances to 5 (or whatever number we enter). To scale down, enter a number smaller than the number of running instances.
So, you see Docker can definitely be a viable hosting platform for an Optimizely site.
Of course, there are some highly technical configurations and considerations that I have skipped over. Such as:
- Installing Docker Engine on a number of servers
- Configuring load balancing in front of Docker hosts
- Building and deploying container images from a CI/CD platform
Those are subjects you will need to find answers to, based on your own very specific situation.