Kubernetes Dashboard Deployment — one more time

Klaus Hofrichter
Jan 19, 2022 · 9 min read

There are already plenty of guides and tutorials to install the classic Kubernetes Dashboard. This article offers one more of these, using Helm and ingress-nginx with routing so that there is no need for a proxy.

This Kubernetes Dashboard installation is part of a larger system based on K3D, which already includes several components such as ingress-nginx, Prometheus, Grafana, and a small NodeJS application, all covered in earlier articles of this series.

Now we are adding the Kubernetes Dashboard via a Helm chart, with routing through ingress-nginx using a subpath, i.e. localhost:8080/dashboard/, so that no proxy is needed anymore. Note that some of the system features are disabled via feature flags in config.sh to simplify the first installation; this includes Grafana Cloud, Goldilocks, Slack, and Keda. You would need to set the related flags to “yes” to enable them, but you may want to start with a smaller setup.

Update February 13, 2022: There is a newer version of the software with several updates.

What you need to bring, how to install everything

If you are using Windows, you can use the Windows Subsystem for Linux (WSL) to get a clean installation. A recent Windows 10 or 11 system will do fine, or an Ubuntu-like Linux machine, preferably with 8 GB or more of RAM. A basic understanding of bash, Docker, Kubernetes, Grafana, and NodeJS is expected. For the full setup you will need a Slack account for receiving alert messages and a Grafana Cloud account, but these are optional and disabled by default.

All source code is available on GitHub at https://github.com/klaushofrichter/kubernetes-dashboard. The source code includes bash scripts and NodeJS code. As always, you should inspect code like this before you execute it on your machine, to make sure that nothing harmful happens.

The Windows Subsystem for Linux setup process is described in a previous article. You should follow the same steps for this setup, except that you use the newer repository. There is a new configuration option in config.sh, enabled by default:

export KUBERNETES_DASHBOARD_ENABLE="yes"  # or "no"

Setting KUBERNETES_DASHBOARD_ENABLE="yes" gets the Dashboard deployed. Please note that the default setup parameters are quite insecure: you should keep this setup local to your machine, and definitely harden it before using it elsewhere. The purpose of this cluster is educational, so you get full power over everything.

A few other feature flags are set to “no” by default to speed up the initial setup. You can always add or remove components later.

TL;DR

The super-short instruction is this: if you are using Windows, install Docker Desktop, use WSL to create a Linux environment, clone the repository, run setup.sh and start.sh on the Linux side, and that’s it. If you are on Linux, you only need Docker; you can also run setup.sh to install a few other tools like jq, nodejs, etc.
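
On the Linux side, the whole sequence looks roughly like this (a sketch, assuming the scripts sit at the repository root as described above):

git clone https://github.com/klaushofrichter/kubernetes-dashboard.git
cd kubernetes-dashboard
./setup.sh    # installs supporting tools such as jq and nodejs if they are missing
./start.sh    # creates the K3D cluster and deploys all enabled components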

Once the start.sh script finishes, you should see quite a few URLs on the screen that you can visit. The Kubernetes Dashboard uses this URL:

http://localhost:8080/dashboard/#/workloads?namespace=_all

You should be greeted with a login screen, where you can click “Skip”, and then you should see the Dashboard’s workload overview.

We don’t explain the various things you can do within the dashboard here; please check the official documentation and other tutorials for more information. In the next sections, we cover how the installation works.

Deployment Script

There are two scripts in the repository: kubernetes-dashboard-deploy.sh to deploy the Kubernetes Dashboard using Helm, and kubernetes-dashboard-undeploy.sh to remove it.

The helm install call in the deployment script includes the --version option, with that version defined in the config.sh file; look for the line KUBERNETESDASHBOARDCHART="5.1.1". You can check the chart repository for newer versions.
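
If you have the upstream chart repository added locally, Helm can list the available chart versions directly, for example:

$ helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
$ helm repo update
$ helm search repo kubernetes-dashboard/kubernetes-dashboard --versions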

Other than the essential helm install, the deployment script includes these sections:

  • kubectl apply -f kubernetes-dashboard.yaml: this manifest contains extra permissions for the default service account and a ServiceMonitor. We discuss authentication later in the text. The ServiceMonitor is actually supposed to be part of the Helm values file, but for some reason it does not work in chart 5.1.1, so we have an extra specification here.
  • kubectl rollout status deployment.apps kubernetes-dashboard: this just waits until the deployment is complete. You may not really need this, but as a matter of design, in this setup all deployments block until completion. This makes the overall installation slower than necessary, but you can be sure that one deployment is done before the next starts, and there is less risk of race conditions.
  • Token extraction: to log in, we need a token that is generated fresh for each deployment. Some bash scripting extracts that token for you to copy/paste when needed (a sketch of this follows below the list).
  • Goldilocks and resource patching: Goldilocks is a tool to estimate resource consumption, and resource patching is about enforcing resource settings for every workload in the system. Both settings are disabled in config.sh by default, but if enabled, some steps need to be executed for each workload. Details are in a separate article.
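
The token extraction can look roughly like this (a minimal sketch, assuming the chart's default service account and namespace are both named kubernetes-dashboard; the actual script in the repository may differ):

NAMESPACE="kubernetes-dashboard"
# the service account references the secret that holds the login token
SECRET=$(kubectl -n "${NAMESPACE}" get serviceaccount kubernetes-dashboard \
  -o jsonpath="{.secrets[0].name}")
# the token is stored base64-encoded inside that secret
TOKEN=$(kubectl -n "${NAMESPACE}" get secret "${SECRET}" \
  -o jsonpath="{.data.token}" | base64 --decode)
echo "Use this token to log in: ${TOKEN}"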

The undeploy script reverses all steps of the deployment, so that it is possible to clear the Dashboard out of the cluster, change settings, and deploy fresh again. This supports experimentation. The deployment script also calls the undeploy script first, so that every deployment starts fresh.
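
In essence, the undeploy script has to reverse the two main deployment steps, roughly like this (a sketch; the release name and namespace are assumptions, and the real script in the repository covers more details such as the optional feature flags):

kubectl delete -f kubernetes-dashboard.yaml --ignore-not-found
helm uninstall kubernetes-dashboard --namespace kubernetes-dashboard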

Helm Chart Values

Using Helm usually brings a lot of convenience to the deployment process. You don’t need to worry too much about matching names and picking the right labels or annotations. There is a loss of flexibility compared to managing the manifests directly, but well-designed Helm charts cover the majority of typical installations and make things easier.

You can see all values that can be customized for the Kubernetes Dashboard helm chart by calling this:

$ helm show values kubernetes-dashboard/kubernetes-dashboard

You can redirect this output to a file and then use the -f option to pass that file to the helm process. If you change values in the file, these will override the defaults.

Before applying the Helm chart, we run the values file through envsubst for some pre-processing, and then use -f - so that Helm reads the envsubst output instead of the file containing the variables. In this case, the pre-processing only inserts the cluster name into the chart. But there are other important items included in the values file that override default settings.
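
Put together, the install call in the deployment script follows this pattern (a sketch; the values file name, release name, and namespace are assumptions, while KUBERNETESDASHBOARDCHART comes from config.sh as mentioned above):

source ./config.sh
envsubst < kubernetes-dashboard-values.yaml | helm install kubernetes-dashboard \
  kubernetes-dashboard/kubernetes-dashboard \
  --namespace kubernetes-dashboard --create-namespace \
  --version "${KUBERNETESDASHBOARDCHART}" -f -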

Below we explain the overrides that come with the repository:

extraArgs:
  - --enable-insecure-login
  - --enable-skip-login
protocolHttp: true

These are arguments that are passed to the container executable, plus one setting. They are important, as they relate to the security of the installation: --enable-insecure-login is set because two lines down in the values file we disable HTTPS, reducing security but enabling better routing using ingress-nginx. Internal HTTPS is disabled here through the protocolHttp: true setting. It is quite possible to support HTTPS from the outside to the ingress-nginx endpoint, but that is not configured in this version, so we use http://localhost:8080 as the endpoint, not https.

Even less secure is the use of --enable-skip-login. This allows you to skip the login process, i.e. you don’t need to copy/paste the token for authentication. Doing so logs you in with the default service account kubernetes-dashboard when accessing the UI. You may want to take this line out and use the token that is shown when running start.sh or kubernetes-dashboard-deploy.sh as the credential.

To make the “easy login” path complete, we will give that default service account cluster-admin rights, which is like root in Linux. So, be aware of this, and keep your system local. You can (and should) experiment more by re-enabling the security measures, e.g. come up with specific access rights for the default service account (see next section about Authentication).

resources:
  requests:
    cpu: 50m
    memory: 100Mi
  limits:
    cpu: 150m
    memory: 150Mi

This section declares the CPU and memory requests and limits for the dashboard container. Following another article in this series, all workloads in this system have settings like that. The values here may need to be adjusted for your system; you can use Grafana’s Compute-Resource dashboard for this. Please note that we also install a metrics scraper as part of this Helm chart. The resource settings for the metrics scraper are not supported by the Helm chart, so there is a kubectl patch for that in kubernetes-dashboard-deploy.sh.
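
The patch in the deployment script follows this general pattern (a sketch only: whether the scraper runs as its own deployment or as an extra container in the dashboard deployment depends on the chart, so check kubectl -n kubernetes-dashboard get deployments first; the target name and the values below are assumptions):

SCRAPER_DEPLOYMENT="kubernetes-dashboard-metrics-scraper"   # adjust to what the chart created
kubectl -n kubernetes-dashboard patch deployment "${SCRAPER_DEPLOYMENT}" \
  --type=json \
  -p='[{"op": "add", "path": "/spec/template/spec/containers/0/resources",
        "value": {"requests": {"cpu": "50m", "memory": "100Mi"},
                  "limits": {"cpu": "100m", "memory": "150Mi"}}}]'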

service:
  type: ClusterIP
  externalPort: 9090
  clusterServiceLabel:
    enabled: false
    key: "kubernetes.io/cluster-service"
ingress:
  enabled: true
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
  className: nginx
  paths:
    - /dashboard(/|$)(.*)
  hosts:
    - localhost

This section causes the creation of a service for the deployment and defines an ingress for ingress-nginx. The service is needed for the ingress definition. The ingress definition includes a rewrite rule for ingress-nginx: we want the dashboard to be available under the subpath /dashboard, because Grafana already sits on the root of that port. You can get to the dashboard with http://localhost:8080/dashboard/. Note that the trailing / is relevant here, most likely because without it the browser resolves the dashboard’s relative asset URLs against the root path instead of /dashboard/.
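
You can verify the routing and the rewrite from the command line once the cluster is up (assuming ingress-nginx is exposed on localhost:8080 as in this setup); this should print 200 when the dashboard is reachable:

$ curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080/dashboard/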

metricsScraper:
  enabled: true

This causes the deployment of the metrics scraper, which feeds the Dashboard its data. As noted earlier, the resource settings for the scraper are handled in the deployment script.

Authentication

As mentioned above, the Helm chart includes the installation of a service account called kubernetes-dashboard. That service account is then associated with a ClusterRole when applying the YAML file kubernetes-dashboard.yaml:

$ kubectl apply -f kubernetes-dashboard.yaml

In this version, we are applying the role of cluster-admin to the default service account. This is the most powerful role there is, so we are effectively giving something like root permission to the kubernetes-dashboard. Check out the details with kubectl:

$ kubectl describe clusterrole cluster-admin
Name:         cluster-admin
Labels:       kubernetes.io/bootstrapping=rbac-defaults
Annotations:  rbac.authorization.kubernetes.io/autoupdate: true
PolicyRule:
  Resources  Non-Resource URLs  Resource Names  Verbs
  ---------  -----------------  --------------  -----
  *.*        []                 []              [*]
             [*]                []              [*]

This is basically saying that the service account kubernetes-dashboard, having the privileges of cluster-admin, can do anything with everything. As mentioned before, it’s OK for our experimental and local environment, but should be avoided anywhere else.
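
For reference, the RBAC part of kubernetes-dashboard.yaml is roughly equivalent to this kubectl command (a sketch; the repository uses a YAML manifest, and the binding name and namespace here are assumptions):

$ kubectl create clusterrolebinding kubernetes-dashboard-admin \
    --clusterrole=cluster-admin \
    --serviceaccount=kubernetes-dashboard:kubernetes-dashboard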

The kubernetes-dashboard.yaml file contains one more specification, the ServiceMonitor. The ServiceMonitor is a Prometheus CRD (Custom Resource Definition), and instances of it enable Prometheus to find Services that offer metrics APIs. The Helm chart is supposed to support the creation of a ServiceMonitor, but for some reason there is an error parsing the values file when this option is enabled. So we put the specification here instead, and Prometheus can find the metrics API right away. You, too, can find it:

$ curl localhost:8080/dashboard/metrics

This lists the output of the metrics API, which is also routed through the ingress. There is not too much relevant information there; it is mostly useful for debugging, as you can see memory usage details and protocol usage. So we are creating some overhead by letting Prometheus scrape it without really using the data.

Where to go from here

There you have it, another Kubernetes Dashboard tutorial. The difference from other tutorials may be that this one includes ingress-nginx integration using a subpath, and a very simple, but insecure, login.

Steps to take from here include of course trying other security settings, and figuring out what you can do with the dashboard based on changes in the permissions structure.

The Kubernetes Dashboard may have lost a bit of its relevance with the availability of tools like Lens. But the Kubernetes Dashboard is integrated with the cluster itself and provides quite convenient insight into the cluster when there is no easy access to the KUBECONFIG file that most people would use for authentication with Lens. Anyway, check out Lens as well.

Off-Topic: Followup on Health Endpoints

The previous article in this series discussed the use of health endpoints for the NodeJS application that comes with this installation. Towards the end of that article, we looked at the graceful shutdown of pods and verified that a pod receives a SIGTERM signal when it is supposed to shut down. However, after SIGTERM the pod still receives traffic from the load balancer: for a few more seconds, API calls keep coming through. During that period, the app still needs to function and cannot simply initiate its own shutdown.

There was a bit of uncertainty on how to deal with this situation. In this newer version of the NodeJS application, we do the following:

  • Start a timer when SIGTERM happens.
  • When the timer expires, set an internal isReady flag to false, and start a second timer to simulate graceful shutdown actions — in real life, you would be closing files or database connections.
  • Return 500 for API calls while isReady is false; otherwise, respond to the call normally.
  • Call exit() when the second timer expires or the graceful shutdown actions are completed. That should happen within 30 seconds of SIGTERM by default.

Doing this prevents any 500 responses from reaching the API clients, as long as the initial timer is long enough for the load balancer to stop sending traffic. The sources are updated in the repository; you can check out server.js for details.
