Stefan Holm Olsen

Make better and richer logs with Application Insights

Logging is a must for all sites, especially for any site with even a little complexity. I suppose we can all agree on that.

However, collecting, parsing and searching log files has always been a pain, even with decent log analyzer tools.

To capture and analyze logs, Episerver DXC adds Application Insights to the mix. It is a really good tool, and I highly recommend it for on-premises and non-DXC sites as well.

Application Insights is a "freemium" cloud service from Microsoft Azure. Free for most users, and cheap for those with lots of log data. It features telemetry collection, indexing and analysis of log data, along with great monitoring, reporting and notification tools. And the user interface is both nice and powerful.

It really is a great platform that offers good value for money.

Basic log tracing

Without any special configuration or code, DXC will add a baseline integration to Application Insights. This includes:

  • Dependency tracing
  • Browser tracing
  • Unhandled exceptions
  • General performance tracking

But textual logs, from inside Episerver and from the website code, will not be captured by default. The logs are available from the DXC Service Management Portal (for download and streaming) but not in Application Insights. To also collect the logs there, along with all the performance and failure data, we need to add a NuGet package and some configuration.

To get started, first add the necessary NuGet package:

Install-Package Microsoft.ApplicationInsights.TraceListener
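In DXC, the connection to Application Insights is provisioned for you. If you follow along on an on-premises or non-DXC site, you will also need to point the SDK at your own Application Insights resource; a minimal sketch of an ApplicationInsights.config (the key below is a placeholder, use the key from your own resource):

```xml
<!-- ApplicationInsights.config: the instrumentation key is a placeholder. -->
<ApplicationInsights xmlns="http://schemas.microsoft.com/ApplicationInsights/2013/Settings">
  <InstrumentationKey>00000000-0000-0000-0000-000000000000</InstrumentationKey>
</ApplicationInsights>
```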

To have all ILogger events written to Application Insights, add the following entries to your web.config file.

  <system.diagnostics>
      <trace autoflush="true" indentsize="0">
          <listeners>
              <add name="myAppInsightsListener" type="Microsoft.ApplicationInsights.TraceListener.ApplicationInsightsTraceListener, Microsoft.ApplicationInsights.TraceListener" />
          </listeners>
      </trace>
  </system.diagnostics>

When starting the site in DXC, all log entries will now be sent to Application Insights and show up within a few minutes. The same entries will also still show up in the DXC Service Management Portal.

Enriching traces with metrics

This is all nice. Capturing unstructured, human-readable log entries to easily analyze events is very useful. What is also useful is tracing numeric metrics. Those can be used for reporting, charting or for setting up alarms and notifications, in case values go out of bounds.

Such metrics could be (examples only):

  • Execution time of a code piece (stopwatch)
  • Number of 404 errors
  • Number of incoming or outgoing API requests, grouped by response status code
  • Number of login attempts, both successful and failed ones
  • Number of entities affected or failed in a scheduled job or a Hangfire job
  • Number of products and variants imported, updated or deleted by a PIM integration

These are numbers we would usually log anyway, but as part of the log text. This would be hard to filter and group by, even with a nice log analyzer like Application Insights.
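The first bullet, timing a piece of code, could be sketched like this (DoExpensiveWork and the metric name "ImportDuration" are placeholders of mine; pick names that fit your domain):

```csharp
private readonly TelemetryClient _telemetryClient = new TelemetryClient();

public void RunImport()
{
    var stopwatch = Stopwatch.StartNew();
    DoExpensiveWork(); // Placeholder for the code being measured.
    stopwatch.Stop();

    // Track the elapsed time as a metric instead of burying it in a log text.
    _telemetryClient.GetMetric("ImportDuration").TrackValue(stopwatch.ElapsedMilliseconds);
}
```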

Here is an example in code:

private readonly TelemetryClient _telemetryClient = new TelemetryClient();

public async Task<SignInStatus> Login(string userName, string password)
{
    Metric loginsMetric = _telemetryClient.GetMetric(new MetricIdentifier("Authentication", "Logins", "Result"));

    // A very naïve login implementation.
    SignInStatus result = await _signInManager.PasswordSignInAsync(userName, password, isPersistent: false, shouldLockout: true);

    switch (result)
    {
        case SignInStatus.Success:
            loginsMetric.TrackValue(1, "Success");
            break;
        case SignInStatus.LockedOut:
            loginsMetric.TrackValue(1, "LockedOut");
            break;
        case SignInStatus.RequiresVerification:
            loginsMetric.TrackValue(1, "RequiresVerification");
            break;
        case SignInStatus.Failure:
            loginsMetric.TrackValue(1, "Failure");
            break;
    }

    return result;
}

Enriching traces with custom parameters

Besides tracing metrics, we can also add custom parameters to enrich the trace logs. This is also something we would usually put in the log text, but like metrics it would be difficult to extract and analyze properly.

Some examples:

  • For failed login attempts: add login method, error type and attempted user name.
  • For failed payments: add payment type, error code, transaction type etc.
  • For long-running jobs: log the number of entities that were affected or ignored.

A code sample:

private readonly TelemetryClient _telemetryClient = new TelemetryClient();

public void Execute()
{
    _telemetryClient.TrackTrace("Starting long-running job.", SeverityLevel.Information);

    int count = 100;
    if (count == 0)
    {
        _telemetryClient.TrackTrace("Nothing to update. Skipping long-running job.");
        return;
    }

    // Do something that takes a long time.

    _telemetryClient.TrackTrace(
        "Done updating a lot of entities.",
        new Dictionary<string, string>(1)
        {
            {"EntityCount", count.ToString()}
        });

    _telemetryClient.TrackTrace("Finished long-running job.", SeverityLevel.Information);
}


Implementing this cloud log tracing really is as easy as I demonstrated in this blog post.

Adding enriched events and traces to Application Insights makes it much easier to actively use and monitor the logs.

And by continuously monitoring the site on rich, quantitative metrics, we can act quickly when things deviate from the normal state. If an issue does occur, it is also much easier to gather evidence and understand the cause, the extent and the user journey.