Knative Monitoring, Logging, and Tracing Explained
Learn how to set up performance monitoring, logging, and tracing for telemetry with Knative.
In this post, you will see the telemetry side of Knative and Istio for a Node.js app named Knative-node-app, published on IBM Cloud in the previous post, Install Knative with Istio and deploy an app on IBM Cloud.
As per the monitoring, logging, and tracing installation documentation of Knative:
Knative Serving offers two different monitoring setups: Elasticsearch, Kibana, Prometheus, and Grafana, or Stackdriver, Prometheus, and Grafana. You can install only one of these two setups, and side-by-side installation of the two is not supported.
We will stick to the Elasticsearch, Kibana, Prometheus, and Grafana stack and will also use Weavescope for in-depth visualization of containers and pods.
If you installed the serving component while setting up Knative, you should have the monitoring component already installed. To confirm Knative serving component installation, run the below command:
$ kubectl describe deploy controller --namespace knative-serving
To check the installation of the monitoring component, run the command below:
kubectl get pods --namespace monitoring
If you don’t see anything running, follow the steps here to set it up.
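If the monitoring namespace is empty, one way to get the bundle is to apply the monitoring manifest shipped with your Knative Serving release. This is only a sketch: the release tag and manifest name below are assumptions, so use whatever the installation steps linked above point you to.
kubectl apply --filename https://github.com/knative/serving/releases/download/v0.2.2/monitoring.yaml
kubectl get pods --namespace monitoring --watch
The second command watches the monitoring pods; wait until they all report Running before moving on.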
Grafana
You can access metrics through the Grafana UI. Grafana is the visualization tool for Prometheus.
To open Grafana, enter the following command:
kubectl port-forward --namespace monitoring $(kubectl get pods --namespace monitoring --selector=app=grafana --output=jsonpath="{.items..metadata.name}") 3000
Note: This starts a local proxy of Grafana on port 3000. For security reasons, the Grafana UI is exposed only within the cluster.
Navigate to the Grafana UI at http://localhost:3000.
You can also check the metrics for Knative Serving: scaling, deployments, pods, and so on.
The following dashboards are pre-installed with Knative Serving:
- Revision HTTP Requests: HTTP request count, latency, and size metrics per revision and per configuration
- Nodes: CPU, memory, network, and disk metrics at the node level
- Pods: CPU, memory, and network metrics at pod level
- Deployment: CPU, memory, and network metrics aggregated at the deployment level
- Istio, Mixer, and Pilot: Detailed Istio mesh, Mixer, and Pilot metrics
- Kubernetes: Dashboards giving insights into cluster health, deployments, and capacity usage
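These dashboards are backed by Prometheus, so if you want to run ad-hoc queries against the raw metrics, you can port-forward Prometheus itself. A minimal sketch follows; the app=prometheus selector and port 9090 are assumptions, so check the labels on your Prometheus pods (kubectl get pods --namespace monitoring --show-labels) before relying on it.
kubectl port-forward --namespace monitoring $(kubectl get pods --namespace monitoring --selector=app=prometheus --output=jsonpath="{.items[0].metadata.name}") 9090
Then open http://localhost:9090 to browse the metrics and run PromQL queries directly.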
Zipkin
To access request traces, you use the Zipkin visualization tool.
To open the Zipkin UI, enter the following command:
kubectl proxy
This command starts a local proxy of Zipkin on port 8001. For security reasons, the Zipkin UI is exposed only within the cluster.
Navigate to the Zipkin UI and click "Find Traces" to see the latest traces. You can search for a trace ID or look at traces of a specific application. Click on a trace to see a detailed view of a specific call.
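With kubectl proxy running, the Zipkin UI is reachable through the Kubernetes API server's service-proxy path. The namespace (istio-system), service name (zipkin), port (9411), and UI path below are assumptions based on a typical Knative monitoring install; adjust them to match your cluster.
curl -s "http://localhost:8001/api/v1/namespaces/istio-system/services/zipkin:9411/proxy/zipkin/" | head
If the curl check returns HTML, open the same URL in a browser to use the "Find Traces" view.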
Weavescope
While obtaining and visualizing uniform metrics, logs, and traces across microservices with Istio, I fell in love with Weavescope, so I decided to play with it to understand the processes, containers, hosts, and other components involved in my application.
Scope is deployed onto a Kubernetes cluster with the following command:
kubectl apply -f "https://cloud.weave.works/k8s/scope.yaml?k8s-version=$(kubectl version | base64 | tr -d '\n')"
To open Weavescope, run the command below and then open http://localhost:4040/.
kubectl port-forward -n weave "$(kubectl get -n weave pod --selector=weave-scope-component=app -o jsonpath='{.items..metadata.name}')" 4040
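When you are done exploring, Scope can be removed by deleting the same manifest that was applied earlier; this sketch assumes you installed it exactly as shown above.
kubectl delete -f "https://cloud.weave.works/k8s/scope.yaml?k8s-version=$(kubectl version | base64 | tr -d '\n')"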
Kibana + Elasticsearch
I tried to visualize the logs using the Kibana UI (the visualization tool for Elasticsearch), but got stuck on the following error while configuring an index pattern: "Unable to fetch mapping. Do you have indices matching the pattern?"
Since the "logging and monitoring" topics are being revised, as per this issue on the Knative GitHub repo, I will revisit logs in the future.