Getting Started With Prometheus Workshop: Service Discovery
Interested in open-source observability? Learn about automating service discovery and how to scale your observability in dynamic cloud native environments.
Are you interested in open-source observability but lack the knowledge to just dive right in?
This workshop is for you, designed to expand your knowledge and understanding of open-source observability tooling that is available to you today.
Dive right into a free, online, self-paced, hands-on workshop introducing you to Prometheus. Prometheus is an open-source systems monitoring and alerting toolkit that enables you to hit the ground running with discovering, collecting, and querying your observability data today. Over the course of this workshop, you will learn what Prometheus is (and what it is not), how to install it and start collecting metrics, and everything you need to know to become effective at running Prometheus in your observability stack.
Previously, I shared an introduction to Prometheus, installing Prometheus, an introduction to the query language, exploring basic queries, using advanced queries, and relabeling metrics in Prometheus as free online labs. In this article, you'll learn all about discovering service targets in Prometheus.
Your learning path takes you into the wonderful world of service discovery in Prometheus, where you explore the more realistic, dynamic landscape of cloud-native services that automatically scale up and down. Note that this article is only a short summary, so please see the complete lab online to work through it in its entirety yourself:
The following is a short overview of what is in this specific lab of the workshop. Each lab starts with a goal. In this case, it is as follows:
This lab provides an understanding of how service discovery is used in Prometheus to locate and scrape targets for metrics collection. You'll learn by setting up a service discovery mechanism that dynamically maintains a list of scraping targets.
You start this lab by exploring the service discovery architecture Prometheus provides and how it supports all manner of automated discovery of dynamically scaling targets in your infrastructure. You also cover the basics of what service discovery needs to achieve: knowing what targets should exist, knowing how to pull metrics from those targets, and knowing how to use the associated target metadata. You then dive into the two options for installing the lab demo environment, either from source projects or in open-source containers, for the exercises later in this lab.
The Demo Environment
Whether you install it from source projects or containers, you'll be setting up the following architecture to support your service discovery exercises, using the services demo to ensure your local infrastructure contains:
- Production 1 running at http://localhost:11111
- Production 2 running at http://localhost:22222
- Development running at http://localhost:44444
Note that if you have any port conflicts on your machine, you can map any free port numbers you like, making this exercise very flexible across your available machines.
Next, you'll be setting up a file-based discovery integration with Prometheus that allows your applications and pipelines to modify a file for dynamic targeting of the infrastructure you want to scrape. This file (targets.yml) in our exercise will look something like this if you are targeting the above infrastructure:
- targets:
    - "localhost:11111"
    - "localhost:22222"
  labels:
    job: "services"
    env: "production"
- targets:
    - "localhost:44444"
  labels:
    job: "services"
    env: "development"
Configuring your Prometheus instance requires a new file-based discovery section in your workshop-prometheus.yml file:
# workshop config
global:
  scrape_interval: 5s

scrape_configs:
  # Scraping Prometheus.
  - job_name: "prometheus"
    static_configs:
      - targets: ["localhost:9090"]

  # File based discovery.
  - job_name: "file-sd-workshop"
    file_sd_configs:
      - files:
          - "targets.yml"
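For reference, if you are running the source-installed Prometheus binary, starting it with this configuration might look something like the following one-liner (the binary location is an assumption; adjust the path to match your own installation):

# Start Prometheus using the workshop configuration file.
./prometheus --config.file=workshop-prometheus.yml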
After saving your configuration and starting your Prometheus instance, you are then shown how to verify that the target infrastructure is now being scraped:
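One quick way to confirm it yourself is to run the up metric in the Prometheus expression browser; with the example targets.yml above, each services demo instance should carry the job and env labels you defined (the query below is an illustration based on those labels):

# up is 1 when the last scrape of a target succeeded; filter on the job label from targets.yml.
up{job="services"}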
Next up, you'll start adding dynamic changes to your target file and see that they are automatically discovered by Prometheus without having to restart your instance.
Exploring Dynamic Discovery
The rest of the lab walks through multiple exercises where you make dynamic changes and verify that Prometheus automatically adapts to the needs of your infrastructure. For example, you'll first change the deployed infrastructure by promoting the development environment to become the staging environment for your organization.
First, you update the targets file:
- targets:
    - "localhost:11111"
    - "localhost:22222"
  labels:
    job: "services"
    env: "production"
- targets:
    - "localhost:44444"
  labels:
    job: "services"
    env: "staging"
Then you verify that the changes are picked up, this time using a PromQL query and the Prometheus console without having to restart your Prometheus instance:
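While the lab walks you through the exact steps, a query along these lines (again assuming the example labels) makes the promotion easy to spot, since the target on port 44444 should now report env="staging" rather than env="development":

# Count the scraped services demo targets per environment label.
count by (env) (up{job="services"})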
Later in the lab, you are given exercises to fly solo and add a new testing environment, so that the end result of your dynamically growing observability infrastructure contains production, staging, testing, and your Prometheus instance:
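As a rough sketch of where that solo exercise lands (the port 55555 for the testing instance is purely an assumption for illustration; the lab provides the exact details), your targets.yml could grow along these lines:

- targets:
    - "localhost:11111"
    - "localhost:22222"
  labels:
    job: "services"
    env: "production"
- targets:
    - "localhost:44444"
  labels:
    job: "services"
    env: "staging"
# Hypothetical new testing instance; the port is illustrative only.
- targets:
    - "localhost:55555"
  labels:
    job: "services"
    env: "testing"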
Missed Previous Labs?
This is one lab in the more extensive free online workshop. Feel free to start from the very beginning of this workshop here if you missed anything previously:
You can always proceed at your own pace and return any time you like as you work your way through this workshop. Just stop and later restart Prometheus to pick up where you left off.
Coming Up Next
I'll be taking you through the next lab in this workshop, where you'll learn all about instrumenting your applications to collect Prometheus metrics.
Stay tuned for more hands-on material to help you with your cloud-native observability journey.
Published at DZone with permission of Eric D. Schabell, DZone MVB. See the original article here.