Using Windows Subsystem for Linux for Kubernetes

Klaus Hofrichter
6 min read · Dec 18, 2021

WSL on Windows 10 and 11 lets you develop Docker-based applications such as K3D without dedicating a Linux machine to it. It works surprisingly well, give it a try.


My series of articles about Kubernetes always included this line: “The setup was tested on Ubuntu 21.04”, implying that you should go out and get your own Linux machine, or at least a UNIX-type system. That’s a big limiting factor: not everyone has the time and resources to do that. But Windows Subsystem for Linux has been available for Windows 10 and 11 for a while now, and it makes getting on board with Linux really easy. So we are now “porting” the most recent K3D-based Kubernetes learning environment to WSL.

The environment that we are porting invites you to modify it and run your own tests.

The idea is that you get a reasonably complete local setup where you can do your own testing and experiments, specifically around metrics and logging. This article covers nothing substantially new other than running it all on WSL. There are some chart version updates and minor fixes compared to previous releases. It still works on “pure” Linux.

Update February 13, 2022: There is a newer version of the software here with several updates.

What you need to bring

A recent Windows 10 system will do; every other component you need to install is described here. A basic understanding of bash, Docker, Kubernetes, Grafana, and NodeJS is expected. You will need a Slack account to receive alert messages, and a Grafana Cloud account for the full setup.

All source code is available on GitHub at https://github.com/klaushofrichter/wsl-k3d. The source code includes bash scripts and NodeJS code. As always, you should inspect code like this before you execute it on your machine, to make sure that nothing bad happens.

This was tested on a 2012-era Windows 10 machine with 8 GB RAM and an i7-3770. It works reasonably well, but not great: swapping happens, and fast switching between applications is gone when everything is running. More memory is a good thing: 12 GB is better, and 16 GB is more than enough for the setup we have here.

Windows Side Setup

Two things are needed: WSL itself (specifically “version 2”) and Docker Desktop.

WSL-2

WSL is pre-installed on recent Windows releases. See the documentation from Microsoft here. It comes down to opening PowerShell or Windows Terminal and typing this:

wsl --install -d Ubuntu-20.04

If you do this for the first time, the command will request “elevation”, which means you need to open PowerShell or the Terminal as Administrator. Once elevated, run the same command again, and the installation starts. You will need to restart the PC, after which some more installation happens. Eventually, you can select a username and password. An extra window opens with bash running in it; you can close it after setting the user and password.

Once that setup is done, the command does not require Administrator access anymore, and the launch will be much faster without a reboot.
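Before moving on, it is worth confirming that the distribution actually runs under WSL version 2, which the Docker Desktop backend requires. A quick check from PowerShell (the example output is illustrative; your list may differ):

```powershell
# List installed distributions with their WSL versions
wsl --list --verbose

# If Ubuntu-20.04 reports VERSION 1, convert it in place:
wsl --set-version Ubuntu-20.04 2
```

The conversion can take a few minutes on a slow disk, but it preserves the files inside the distribution.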

Docker Desktop

For Kubernetes, we are using K3D, which is based on Docker, so you will need to set up Docker Desktop. Docker Desktop changed its terms of use recently, but for individuals, it remains available.
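Docker Desktop must use its WSL-2 backend, and the WSL integration for the Ubuntu-20.04 distribution needs to be enabled (Settings / Resources / WSL Integration). A quick sanity check from inside the WSL bash shell:

```shell
# If the integration is active, the docker CLI inside WSL talks to the
# Docker Desktop engine on the Windows side.
docker version
docker run --rm hello-world
```

If `docker` is not found inside WSL, the integration toggle for the distribution is the usual culprit.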

Optional: VS Code, Windows Terminal, etc

There are other packages that let you do more on the Windows side: for example, VS Code can run on Windows, as can the already mentioned Windows Terminal, which is IMHO easier to use than PowerShell. But none of this is needed at this time.

Linux Side Setup

Our project relies on a few development tools, such as NodeJS, K3D, helm3, and jq. We install all of that on the Linux side with the script setup.sh from the repository for this article. Before going there, open a Windows PowerShell or Terminal and check your WSL distributions:

C:\> wsl --list
Windows Subsystem for Linux Distributions:
Ubuntu-20.04 (Default)
docker-desktop
docker-desktop-data

If the default is not Ubuntu-20.04, you should change that:

C:\> wsl --set-default Ubuntu-20.04

After that, you can switch over to Ubuntu with a simple wsl command and find yourself in bash, most likely at your Windows home directory mounted at /mnt/c/Users/${USER}, depending on your Windows shell’s current directory. That’s a good thing, as your Linux-side home directory at /home/$USER may go away if the WSL distribution is ever re-initialized. Now let's clone the repository:

$ cd /mnt/c/Users/${USER}
$ git clone https://github.com/klaushofrichter/wsl-k3d.git

This repository is based on the earlier articles, including Grafana Cloud and Slack integration. For the full feature set, you will need to do some configuration, explained in this article for Grafana Cloud and this article for Slack. Both features are disabled by default in this version for simplicity, so if you want Grafana Cloud or Slack, you need to enable them in config.sh. You can do that later.
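As a sketch of what such toggles might look like in config.sh (the variable names here are hypothetical illustrations; check the actual file in the repository):

```shell
# Hypothetical illustration only: the real variable names are in config.sh.
GRAFANACLOUD="no"   # set to "yes" once the Grafana Cloud keys are configured
SLACK="no"          # set to "yes" once the Slack webhook URL is configured
```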

Again, it is a good idea to check out the scripts before running them, to make sure that nothing bad happens.

We now need to install K3D, NodeJS, helm, and jq (a JSON parser). Call ./setup.sh to do that.

$ cd wsl-k3d
$ ./setup.sh

This will ask for the sudo password, which is the password you selected for your user in the beginning.
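As a small aside about one of the tools just installed: jq is used by the scripts to pull individual fields out of JSON responses. A standalone illustration of the kind of call involved (the JSON here is made up, no cluster needed):

```shell
# Extract a single field from a JSON document; -r gives raw (unquoted) output.
echo '{"app":"myapp","version":"1.0.0"}' | jq -r '.app'
# prints: myapp
```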

The setup.sh script uses nvm (Node Version Manager), which needs to be loaded into your shell after the script has run, either by restarting the terminal or by executing these commands:

export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" # This loads nvm
[ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion"

Once setup.sh is done, you can either enable the external services in config.sh and do the needed configuration work, or get going without them by just launching start.sh:

./start.sh

After 10 minutes or so, this should finish with a list of URLs to visit. If things go well, you will see the services that are now accessible in the local K3D cluster:

==== ./start.sh: Various information
export KUBECONFIG=/home/${USER}/.k3d/kubeconfig-mycluster.yaml
Lens metrics setting: monitoring/prom-kube-prometheus-stack-prometheus:9090/prom
myapp info API: http://localhost:8080/service/info
myapp random API: http://localhost:8080/service/random
myapp metrics API: http://localhost:8080/service/metrics
influxdb ui: http://localhost:31080
prometheus: http://localhost:8080/prom/targets
grafana: http://localhost:8080 (use admin/operator to login)
alertmanager: http://localhost:8080/alert
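The KUBECONFIG export shown above only applies to the current shell. A small sketch to make it stick across WSL sessions by appending it to ~/.bashrc, assuming the default file name from the output above:

```shell
# Append the export to ~/.bashrc once, so every new shell can reach the cluster.
KCFG="$HOME/.k3d/kubeconfig-mycluster.yaml"
if ! grep -q "kubeconfig-mycluster" "$HOME/.bashrc" 2>/dev/null; then
  echo "export KUBECONFIG=$KCFG" >> "$HOME/.bashrc"
fi
```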

WSL forwards localhost between the Windows side and the Linux side, so you can use your Windows browser directly with these URLs.

If you want to use Lens, you can install it on Windows, copy the KUBECONFIG file content (on the Linux side: ~/.k3d/kubeconfig-mycluster.yaml), and paste it into the Lens application using the File/Add Cluster menu. You can also access the KUBECONFIG file through Windows Explorer at \\wsl$\Ubuntu-20.04\home. Then follow the article to set up the metrics display in Lens.

K3D is running in Docker. Therefore, you can see the K3D containers on the Windows side in Docker Desktop:

Docker Desktop on Windows shows K3D containers

Where to go from here?

WSL provides a pretty simple way for developers to live in two worlds at once. This certainly makes Linux more accessible to many. In terms of performance, more memory is a good thing, but for simple development tasks and testing, WSL is a great tool for those who don’t have an extra Linux machine at hand.

Looking at the performance, the existing system has everything local, and we actually disabled some external services like Grafana Cloud and Slack for simplicity. One can also go the other way: use more services like Grafana Cloud with reasonable “free” tiers to make the local system lighter by removing local instances of Grafana and Prometheus.

Also, check out the Basic Commands Overview about WSL. You can remove the Linux Subsystem with this:

c:\> wsl --unregister Ubuntu-20.04

But note that K3D runs in Docker, so even after removing the Linux subsystem, the cluster is still up. You can terminate the containers through Docker Desktop if you want to clean up.
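A cleaner way to tear things down is to delete the cluster with K3D itself before unregistering the distribution. Run this inside WSL; the cluster name mycluster matches the kubeconfig file name shown earlier:

```shell
# Remove the K3D cluster and its Docker containers in one step.
k3d cluster delete mycluster
```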
