Azure Monitor and Azure Log Analytics: When to Use Which
Monitoring your resources is vital for detecting issues and spotting opportunities for performance improvements. When it comes to Azure, the monitoring story can be a bit confusing, with multiple different services seeming to offer similar or related solutions. In particular, there is often confusion between two services: Azure Monitor and Log Analytics (part of the OMS suite). We're going to take a look at these two services and when you would use them.
Service Descriptions
Let's start by taking a look at what these services actually do.
Azure Monitor
Azure Monitor has been around for about a year and a half. Before it existed, every service implemented (or failed to implement) its own method of capturing and displaying metrics. Some services were better at this than others, and the overall approach was very inconsistent. Azure Monitor was created to provide a consistent way for resources (both IaaS and PaaS) to collect metrics and provide access to them.
Log Analytics
Log Analytics has been around (in various forms) for quite a while, and at its core, it is a log aggregation tool. Log Analytics collects and stores data from various log sources and allows you to query over it using a custom query language.
Where confusion arose in the past, especially before Azure Monitor existed, was that Log Analytics, and the OMS suite in general, were used as the primary means of both collecting metric data and alerting on it. It became a de facto monitoring solution as well as a log aggregation one. Features like the VM agent, which can collect Perfmon counters and event logs, and the pre-built "solutions" that ingest logs directly from some PaaS services, blurred the lines between monitoring and aggregation.
Recommended Approach
So given the confusion mentioned above, which of these should we be using and how should we use them? This is really going to depend on your requirements for monitoring and alerting and the scale of the Azure estate you want to monitor.
Azure Monitor on its own provides a great solution if you are looking for point-in-time or short-timescale metrics for a single resource. If you're having an issue with a web app and you want to look at its performance metrics, you can do this through Azure Monitor in the portal and see some great charts about what is happening right now.
I can pin this chart to my Azure Dashboard if I want. I can also use this data to create alerts on a specific resource using the Alerts feature in the portal. If I'm debugging a specific issue, or I have a small number of resources to look after, then this is great: Azure Monitor will do exactly what I need.
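The same point-in-time data you see in the portal charts can also be pulled with the Azure CLI. This is a minimal sketch, assuming an App Service web app; the resource ID and the metric name ("MemoryWorkingSet") are placeholders you would swap for your own resource and whatever metrics it exposes:

```bash
# Placeholder - replace with the ID of the resource you want to inspect
APP_ID="/subscriptions/<sub-id>/resourceGroups/my-rg/providers/Microsoft.Web/sites/my-web-app"

# List the last hour of memory metrics for this single resource,
# averaged per minute - the same data the portal charts are built on
az monitor metrics list \
  --resource "$APP_ID" \
  --metric "MemoryWorkingSet" \
  --interval PT1M \
  --aggregation Average \
  --offset 1h
```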
Where this falls down is when you want to monitor multiple resources. If you want to look across your estate of 100 web apps and determine which is using the most memory, then working through each site individually becomes a very arduous task. What you need is to be able to collate the data from all your sites and then filter and manipulate it. This is where Log Analytics comes in. By sending the data from each web app to Log Analytics, we can use its query engine to manipulate that data and get the information we need.
As with Azure Monitor, we can pin these charts to the Azure Dashboard. We can also configure alerts, but now we only need a single alert covering multiple resources; it will trigger when one or more of them breaches the threshold, rather than requiring an alert per resource.
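As a sketch of what that looks like in practice, assuming the web apps are already sending their metrics to a workspace and that those metrics land in the standard AzureMetrics table, a single Kusto query can rank all of them by memory use. The workspace GUID is a placeholder, and the query can be run from the Azure CLI as well as the portal:

```bash
# Placeholder - the workspace's customer ID (GUID), shown on its overview
# blade. The command may require the Azure CLI log-analytics extension.
WORKSPACE_ID="<log-analytics-workspace-guid>"

# Rank every web app sending metrics to this workspace by average
# memory working set over the last hour
az monitor log-analytics query \
  --workspace "$WORKSPACE_ID" \
  --analytics-query '
    AzureMetrics
    | where ResourceProvider == "MICROSOFT.WEB"
    | where MetricName == "MemoryWorkingSet"
    | where TimeGenerated > ago(1h)
    | summarize AvgMemory = avg(Average) by Resource
    | top 10 by AvgMemory desc'
```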
By collecting this data using Log Analytics, we also gain more functionality:
- Longer-term trend analysis (Log Analytics offers retention of up to two years).
- Combining metrics: We can query multiple different metrics and display them together to look for correlation (see the sketch after this list).
- Complex queries: Log Analytics has its own query language, which can be used to run complex queries over large data series.
- Query other data: Azure Monitor is focused on performance metrics; with Log Analytics, you can collect any sort of log data, including custom logs.
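To illustrate the trend-analysis and combining-metrics points, here is a hedged example of the kind of query you could run over the same AzureMetrics table, assuming your workspace retention covers the period. The metric names ("CpuTime" and "MemoryWorkingSet") are App Service examples; substitute whatever your resources actually emit:

```bash
# Placeholder workspace GUID, as before
WORKSPACE_ID="<log-analytics-workspace-guid>"

# Daily averages of two different metrics over 90 days, side by side,
# which makes it easy to chart them together and look for correlation
az monitor log-analytics query \
  --workspace "$WORKSPACE_ID" \
  --analytics-query '
    AzureMetrics
    | where TimeGenerated > ago(90d)
    | where MetricName in ("CpuTime", "MemoryWorkingSet")
    | summarize avg(Average) by MetricName, bin(TimeGenerated, 1d)'
```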
Considerations
There are some downsides to using Log Analytics, though, that should be borne in mind. The primary one is the time it takes for data to become available. With Azure Monitor and the new Near-Real-Time Alerts feature, it is possible to get an alert for a performance issue less than a minute after it occurs. With Log Analytics, because the data has to be ingested and then queried, it can take some time before an alert is triggered. Officially, the SLA for data getting into Log Analytics is a ridiculous six hours; in reality, it's more like five to 15 minutes before data is available and alerts are fired, so you do need to keep this in mind.

Additionally, Log Analytics can add extra cost. There is a free tier that supports up to 500MB of data ingestion per day, but if you need more than this, or you need to retain the data for more than a month, then there will be an extra cost on top of what you are paying for Azure Monitor.
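For comparison, this is roughly what a near-real-time metric alert looks like when created with the Azure CLI: it evaluates every minute over a one-minute window, which is the responsiveness you trade away when alerting from Log Analytics instead. The names, threshold, and action group ID below are all placeholders:

```bash
# Near-real-time metric alert on a single web app: checked every minute
# over a one-minute window. All IDs and the threshold are illustrative.
az monitor metrics alert create \
  --name "high-memory-alert" \
  --resource-group my-rg \
  --scopes "/subscriptions/<sub-id>/resourceGroups/my-rg/providers/Microsoft.Web/sites/my-web-app" \
  --condition "avg MemoryWorkingSet > 1073741824" \
  --window-size 1m \
  --evaluation-frequency 1m \
  --action "/subscriptions/<sub-id>/resourceGroups/my-rg/providers/microsoft.insights/actionGroups/my-action-group"
```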
Ingesting Data
So, hopefully, it is now clear that Azure Monitor is the tool that gets the data from your Azure resources, and Log Analytics is the tool to use when you want to query that data across multiple resources. Given that, how do we get the data into Log Analytics?
Fortunately, Azure Monitor comes with options for exporting its data. Nearly every resource will offer you the ability to export data to three destinations:
- A storage account.
- An Event Hub.
- A Log Analytics workspace.
This can be configured through the portal, under the Diagnostic settings tab for the resource you want to configure. Here, you choose which of the three sinks you want to send the data to and then what data you want to send. This will usually include options for both logs and metrics, and often the metrics option will just be "all metrics," which, as the name suggests, sends any metrics that are available for that service. Below are the options for Azure SQL.
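As a sketch, the same thing can be done for a single resource from the Azure CLI. Here an Azure SQL database is configured to send all of its metrics to a Log Analytics workspace; the resource and workspace IDs are placeholders:

```bash
# Placeholders - replace with your own database and workspace resource IDs
SQL_DB_ID="/subscriptions/<sub-id>/resourceGroups/my-rg/providers/Microsoft.Sql/servers/my-server/databases/my-db"
WORKSPACE_RESOURCE_ID="/subscriptions/<sub-id>/resourceGroups/my-rg/providers/Microsoft.OperationalInsights/workspaces/my-workspace"

# Create a diagnostic setting that sends all metrics to Log Analytics.
# Log categories can be added with a similar --logs argument listing the
# categories the resource exposes.
az monitor diagnostic-settings create \
  --name "send-to-log-analytics" \
  --resource "$SQL_DB_ID" \
  --workspace "$WORKSPACE_RESOURCE_ID" \
  --metrics '[{"category": "AllMetrics", "enabled": true}]'
```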
You can also configure these settings using PowerShell and the CLI, as well as in an ARM template. This can be a really useful option if you want to ensure that the resources you create are automatically configured at deployment time to send their data to Log Analytics. We'll cover how to do this in a future article.
Summary
Hopefully, that has cleared up what each of these two services is for and when you would use one or the other. If you think of Azure Monitor as the low-level collection tool and Log Analytics as the higher-level aggregation tool, it should be easy to decide which route to go down. If all you are interested in is some real-time data from individual resources, or you have a small number of resources to monitor, then Azure Monitor is probably enough for what you need; if you need to do anything more complex with this data, or query across multiple resources, then Log Analytics should be considered. Bear in mind that Log Analytics is not the only aggregation tool out there; other tools like Splunk, Logstash, etc. could also be used to aggregate this data, but Log Analytics does have the benefit of being integrated into the Azure platform and easy to configure.