ingress-nginx Metrics on Grafana/K3D

Klaus Hofrichter
Nov 7, 2021


This article covers an extension to an earlier article about setting up a K3D Kubernetes development environment with Grafana, Prometheus, and Alertmanager. We enable metrics for ingress-nginx and show a way to have all the Grafana Dashboards use your timezone.

The aim of the article is to give you a place where you can do your own exploration — it creates a complete Kubernetes environment that is safe and free to work with. We reuse what we did before when the focus was on Alertmanager and Lens, and add the following:

  • Metrics for ingress-nginx and associated Grafana dashboards
  • A procedure to make all dashboards use your timezone

As always, extra thanks to those who provide the great resources making all of this happen, including the many contributors behind kube-prometheus-stack, and ingress-nginx.

Update February 13, 2022: There is a newer version of the software here with several updates.

What you need to bring

The setup is based on Docker 20.10, NodeJS 14.17, Helm 3, and K3D 5.0.1 (note that K3D version 5 introduced some changes in the configuration syntax). Before getting started, you need to have the components mentioned above installed by following their standard installation procedures. The article is based on Ubuntu 21.04 as the OS, but other OSs will likely work as well. A basic understanding of bash, Docker, Kubernetes, Grafana, and NodeJS is expected. You will need a Slack account for receiving alert messages.

All source code is available at Github via https://github.com/klaushofrichter/grafana-dashboards. The source code includes a few bash scripts and NodeJS code. As always, you should inspect code like this before you execute it on your machine, to make sure that no bad things happen.

TL;DR

The quickstart is this:

  • clone the repository and inspect the scripts (because you don’t want to run things like this without knowing what is happening).
  • Edit config.sh per instructions in the earlier article or following the comments in the config file itself.
  • ./start.sh
  • Check out the dashboards available at localhost:8080 by logging in, going to “manage”, and noticing the new ingress-nginx dashboards. Also, note that all dashboards show your browser timezone instead of the default UTC.
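
If you prefer a copy-and-paste version, the quickstart could look roughly like the sketch below; it assumes the defaults in config.sh are acceptable apart from your own Slack webhook.

# assumed quickstart; review the scripts before running them
git clone https://github.com/klaushofrichter/grafana-dashboards.git
cd grafana-dashboards
# edit config.sh (e.g. the Slack webhook) as described in the earlier article
./start.sh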

Setting up metrics for ingress-nginx

In order for metrics from ingress-nginx to show up in Grafana, a few things need to happen:

  • ingress-nginx needs to expose metrics
  • Prometheus needs to discover the source of the metric and scrape it
  • Grafana needs a dashboard with queries that make sense of the metrics

kube-prometheus-stack comes with a set of preloaded dashboards for some common Kubernetes components. But in our installation we added ingress-nginx, which has no preloaded dashboard in the stack, is not known to Prometheus, and does not expose metrics by default. So let’s get to work:

Expose Metrics in ingress-nginx

ingress-nginx implements a metrics API already, and the most convenient way to make it accessible is through a values file for the helm chart. See ingress-nginx-values.yaml:

controller:
  podLabels:
    prom: scrape
  metrics:
    enabled: true

This enables the metrics API (metrics: enabled: true) and adds a label (prom: scrape) to ingress-nginx. Prometheus will use this label to find the ingress-nginx pod. Note that there are many ways to do the discovery, e.g. using ServiceMonitor, but the use of labels is straightforward in our case and a simple solution for less complex setups.
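
For reference, applying such a values file by hand would look roughly like the following; the repository’s scripts take care of this for you, and the release name and chart repository shown here are assumptions, not taken from the repo.

# sketch only: install/upgrade ingress-nginx with the values file from the repo
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  -f ingress-nginx-values.yaml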

Discover the ingress-nginx metrics API in Prometheus

Here is what we do for Prometheus: there is a values file prom-values.yaml.template in the repository that is used to produce the values file for kube-prometheus-stack's Helm chart. Look for this section:

- job_name: ingress-nginx-pods
  scrape_interval: 15s
  relabel_configs:
    - source_labels: [__meta_kubernetes_pod_container_port_number]
      action: keep
      regex: 10254
  kubernetes_sd_configs:
    - role: pod
      namespaces:
        names:
          - ingress-nginx
      selectors:
        - role: "pod"
          label: "prom=scrape"

We give this job a name (ingress-nginx-pods) and define a scrape interval. Then we define a label filter: ingress-nginx opens several ports, and per the Prometheus implementation, they would all be scraped. With the relabel_configs section we exclude all ports except the one where the metrics are served, i.e. 10254. You can see the ports of ingress-nginx with this command:

kubectl describe pod ingress-nginx -n ingress-nginx
...
Ports: 80/TCP, 443/TCP, 10254/TCP, 8443/TCP
...

Then we tell Prometheus that this is a pod in the namespace ingress-nginx. Finally, we enable the discovery of the specific instance by repeating the label that we used in ingress-nginx-values.yaml.
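
If you want to confirm that the metrics endpoint is really there before Prometheus scrapes it, a quick manual check could look like the sketch below; the label and port come from the configuration above, while the /metrics path and the port-forward approach are assumptions for illustration.

# sketch: port-forward the labeled pod and fetch a few metrics lines
POD=$(kubectl get pods -n ingress-nginx -l prom=scrape \
  -o jsonpath='{.items[0].metadata.name}')
kubectl port-forward -n ingress-nginx "${POD}" 10254:10254 &
sleep 2
curl -s http://localhost:10254/metrics | head
kill $!   # stop the port-forward again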

You can visit the Prometheus targets page at http://localhost:8080/prom to see the job we defined:

Prometheus scrape jobs at http://localhost:8080/prom

myapp-pods and myapp-services are the custom metrics that come from our example NodeJS application server.js. The others are jobs discovered through ServiceMonitor CRDs (Custom Resource Definitions), which are another option to make metric sources discoverable.
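
For comparison, a ServiceMonitor-based setup for the same component might look roughly like the sketch below. This is purely hypothetical and not part of the repository; the label selector, port name, and any extra labels kube-prometheus-stack expects are assumptions you would need to verify.

# hypothetical ServiceMonitor; kube-prometheus-stack may additionally require
# a release label on this resource before it is picked up
cat <<'EOF' | kubectl apply -f -
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: ingress-nginx
  namespace: monitoring
spec:
  namespaceSelector:
    matchNames:
      - ingress-nginx
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
  endpoints:
    - port: metrics
      interval: 15s
EOF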

Add a dashboard to Grafana

Finally, we need a dashboard to make sense of the metrics and visualize them. The good people at ingress-nginx offer not one but two dashboards for that purpose:

  • NGINX Ingress Controller — includes Request Volume, CPU/Memory use, and SSL Certificate Validity
  • Request Handling Performance — includes Response Time and other performance metrics

The prom.sh script (which is called by start.sh) installs these dashboards; check these lines in prom.sh:

wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/grafana/dashboards/nginx.json
kubectl create configmap ingress-nginx -n monitoring --from-file="nginx.json"
kubectl patch configmap ingress-nginx -n monitoring -p '{"metadata":{"labels":{"grafana_dashboard":"1"}}}'

The wget call retrieves the dashboard’s JSON definition. The file is installed as a configmap through the kubectl create call, and we add the label grafana_dashboard=1 with a kubectl patch call. This label causes Grafana to load the dashboard. The same is done for the second dashboard, and that’s it: Grafana picks up the change automatically, and the dashboards should show up in your list of available dashboards.
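
To check that the dashboard configmaps are in place and carry the label Grafana watches for, a quick look like this should do; the command is an assumption for verification, not part of prom.sh.

# list all configmaps that Grafana will load as dashboards
kubectl get configmaps -n monitoring -l grafana_dashboard=1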

Sometimes it may be necessary to restart Grafana to see the new dashboards; a command to do that is shown below (but you should not need it):

kubectl rollout restart deployment.apps prom-grafana -n monitoring

Patching all Dashboards to use browser time

Pre-installed Grafana dashboards coming from kube-prometheus-stack use UTC to show the time, and there is a good reason: if everyone and everything uses UTC, there is less confusion when trying to match metrics shown in Grafana with logs coming from elsewhere. But there is a counter-argument that people who work only with visual dashboards should not be forced to use UTC instead of their local time.

Without solving this dilemma, here is one way to convert the dashboards in our installation to use the browser time instead of UTC. If you don’t want that, delete the “offending” lines from prom.sh.

The approach is simple: retrieve the configmaps that define the dashboards after installation of kube-prometheus-stack, use sed to change the timezone setting in each configmap, and write them back.

These lines in the prom.sh script do that:

MAPS=$(kubectl get configmaps -l grafana_dashboard=1 \
  -n monitoring | tail -n +2 | awk '{print $1}')
for m in ${MAPS}
do
  echo -n "Processing map ${m}: "
  kubectl get configmap ${m} -n monitoring -o yaml | \
    sed 's|"timezone": ".*"|"timezone": "browser"|g' | \
    kubectl apply -f - -n monitoring
done

The first statement fills the MAPS variable with a list of configmap names taken from the output of kubectl get configmaps. The label filter grafana_dashboard=1 makes sure that only configmaps that are Grafana dashboards are handled.

The second part extracts the configmaps one by one, applies sed to replace the existing timezone setting with "timezone": "browser", and writes them back. The use of kubectl apply in the pipeline avoids deleting and re-creating the configmaps, as apply can update existing resources. This may cause some warnings about missing annotations, but these can be ignored.

Instead of "timezone": "browser", you can also use other timezone identifiers, such as America/Chicago. That notation contains a slash /, which is the reason for using | instead of the more common / as the delimiter in the sed call.
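
As a sketch, the same substitution with a fixed timezone instead of browser would look like this; America/Chicago is just an example identifier.

# same sed call with a fixed timezone; | is used as the delimiter because the
# identifier itself contains a /
sed 's|"timezone": ".*"|"timezone": "America/Chicago"|g'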

start.sh

If you skipped the TL;DR section, it is now time to go back to it, clone the repository, do the customization in config.sh (possibly just the Slack webhook), and set everything up by calling ./start.sh. It will take a few minutes…

A set of URLs to localhost and some other useful information is displayed at the end of the start.sh run. See details about that in this or that earlier article.

Where to go from here

The setup described here should provide a nice playground for doing more with Grafana, Prometheus, and Alertmanager. We showed how to bring in metrics from third-party components such as ingress-nginx, on top of custom metrics where we have full control over the application code.

There is plenty more to explore, for example adding persistent volumes, or making more use of custom dashboards and alerts… try something yourself :-)
