I: Building a Kubernetes home lab with k3d — Cluster deployment
07.12.2020 - AYB - Reading time ~6 Minutes
Part I: Kubernetes homelab with k3d
Let’s assume that you’re a developer or a junior DevOps engineer who got the task “This thing must run in Kubernetes tomorrow”, but you have no clue what the hell Kubernetes is and have to learn it ASAP to still have your job the day after tomorrow.
*Disclaimer: Kubernetes is a multilayered abstraction realm for running various stuff isolated in sandboxes while making all that stuff communicate and collaborate at the same time. The user/admin has some control over Kubernetes and can ask it to do things in a desired way. The deeper you get into it, the scarier it becomes, but it’s OK - everyone gets completely pissed off by Kubernetes at least 9100 times while learning it.*
Getting your first Kubernetes cluster running
Prerequisites
- Linux/MacOS
- Docker
This guide may be used on Windows, but it will require some changes to the operations/commands that will NOT be highlighted here.
Deploying the lightweight dockerized k8s cluster using k3d
There are several ways to install k3d on your computer, and by default you’re offered the crazy popular method of
wget -q -O - https://something/install.sh | bash
. But it’s insanely stupid to use that method, for many quite obvious reasons. If you’re not aiming to become a somewhat good DevOps engineer, then just remember that this method fucks up all your security unless you are the author of the script.
Go to the Releases page and download the appropriate release for your OS/arch. Then chmod +x
the binary and put it somewhere in your $PATH
. Or put it anywhere you like and add that directory to $PATH
.
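For example, a manual installation on Linux might look roughly like this (the version and asset name below are assumptions - check the Releases page for the actual ones):
# grab the binary for your OS/arch (asset names like k3d-linux-amd64 are an assumption)
wget https://github.com/k3d-io/k3d/releases/download/v5.4.1/k3d-linux-amd64 -O k3d
chmod +x k3d
# move it somewhere that is already in your $PATH
sudo mv k3d /usr/local/bin/k3d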
Whichever installation method you choose, let’s start.
Deploying a control node with two workers:
k3d cluster create kube -p "80:80@loadbalancer" -p "443:443@loadbalancer" --agents 2
Note: With the parameters
-p "80:80@loadbalancer"
and -p "443:443@loadbalancer"
we’re telling k3d to occupy local ports 80 and 443 and route traffic to the cluster load balancer. These parameters are vitally important because otherwise you’d be unable to reach your applications in the cluster over the network. If you’re planning to have any services on other ports, just add the corresponding mappings to this command.
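If you’d rather keep this declarative, k3d can also read a config file instead of all the flags. A rough equivalent might look like this (a sketch assuming the k3d.io/v1alpha4 config schema that ships with k3d v5.4 - double-check against your k3d version):
# cluster.yml - declarative equivalent of the command above
apiVersion: k3d.io/v1alpha4
kind: Simple
metadata:
  name: kube
servers: 1
agents: 2
ports:
  - port: 80:80
    nodeFilters:
      - loadbalancer
  - port: 443:443
    nodeFilters:
      - loadbalancer
You would then create the cluster with k3d cluster create --config cluster.yml.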
Output
INFO[0000] portmapping '80:80' targets the loadbalancer: defaulting to [servers:*:proxy agents:*:proxy]
INFO[0000] portmapping '443:443' targets the loadbalancer: defaulting to [servers:*:proxy agents:*:proxy]
INFO[0000] Prep: Network
INFO[0000] Created network 'k3d-kube'
INFO[0000] Created image volume k3d-kube-images
INFO[0000] Starting new tools node...
INFO[0000] Starting Node 'k3d-kube-tools'
INFO[0001] Creating node 'k3d-kube-server-0'
INFO[0001] Creating node 'k3d-kube-agent-0'
INFO[0001] Creating node 'k3d-kube-agent-1'
INFO[0001] Creating LoadBalancer 'k3d-kube-serverlb'
INFO[0001] Using the k3d-tools node to gather environment information
INFO[0001] Starting new tools node...
INFO[0001] Starting Node 'k3d-kube-tools'
INFO[0002] Starting cluster 'kube'
INFO[0002] Starting servers...
INFO[0002] Starting Node 'k3d-kube-server-0'
INFO[0005] Starting agents...
INFO[0006] Starting Node 'k3d-kube-agent-1'
INFO[0006] Starting Node 'k3d-kube-agent-0'
INFO[0017] Starting helpers...
INFO[0018] Starting Node 'k3d-kube-serverlb'
INFO[0024] Injecting records for hostAliases (incl. host.k3d.internal) and for 5 network members into CoreDNS configmap...
INFO[0026] Cluster 'kube' created successfully!
INFO[0026] You can now use it like this:
kubectl cluster-info
$kubectl get nodes
NAME STATUS ROLES AGE VERSION
k3d-kube-server-0 Ready control-plane,master 44s v1.22.7+k3s1
k3d-kube-agent-0 Ready <none> 35s v1.22.7+k3s1
k3d-kube-agent-1 Ready <none> 35s v1.22.7+k3s1
Here we see that we got a control-plane (server) node plus the two worker (agent) nodes we asked for with --agents 2. The workers are where we’ll actually deploy stuff.
Check it
$k3d node list
NAME ROLE CLUSTER STATUS
k3d-kube-agent-0 agent kube running
k3d-kube-agent-1 agent kube running
k3d-kube-server-0 server kube running
k3d-kube-serverlb loadbalancer kube running
k3d-kube-tools kube running
Check docker containers status
$docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e2cdc91188d7 ghcr.io/k3d-io/k3d-tools:5.4.1 "/app/k3d-tools noop" 3 minutes ago Up 3 minutes k3d-kube-tools
f3c3cf1da08a ghcr.io/k3d-io/k3d-proxy:5.4.1 "/bin/sh -c nginx-pr…" 3 minutes ago Up 2 minutes 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp, 0.0.0.0:51772->6443/tcp k3d-kube-serverlb
1bca176b8b44 rancher/k3s:v1.22.7-k3s1 "/bin/k3d-entrypoint…" 3 minutes ago Up 3 minutes k3d-kube-agent-1
a57669c52085 rancher/k3s:v1.22.7-k3s1 "/bin/k3d-entrypoint…" 3 minutes ago Up 3 minutes k3d-kube-agent-0
94d7133d3d5c rancher/k3s:v1.22.7-k3s1 "/bin/k3d-entrypoint…" 3 minutes ago Up 3 minutes k3d-kube-server-0
Everything seems fine.
Tricking DNS
Now we need a small DNS trick to avoid future issues with addressing everything as localhost
— edit the /etc/hosts
file.
Run sudo vim /etc/hosts
and at the end of the line with 127.0.0.1 localhost
add k3d.local
so it becomes 127.0.0.1 localhost k3d.local
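A quick way to check that the name resolves:
# should get answers from 127.0.0.1 now
ping -c 1 k3d.local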
Deploying our first service
Disclaimer: at some point I got sick of typing all these
kubectl get ...
commands, so I made the aliases listed below and added them to my ~/.zshrc:
alias k=kubectl
alias kg="kubectl get"
alias kgp="kubectl get pods -o wide"
alias kgpa="kubectl get pods --all-namespaces"
alias kgs="kubectl get services -o wide"
alias kgsa="kubectl get services --all-namespaces -o wide"
alias kgi="kubectl get ingresses"
alias kgia="kubectl get ingresses --all-namespaces"
alias kd="kubectl describe"
alias kdp="kubectl describe pod"
alias kds="kubectl describe service"
alias kdd="kubectl describe deployment"
alias kdi="kubectl describe ingress"
The most popular test service is «Whoami» by Traefik Labs. On GitHub there is a very nice Kubernetes deployment example for that service.
git clone https://github.com/pacroy/whoami && cd whoami
Now change whoami.yml
so that line 6 reads replicas: 2
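For orientation, the Deployment part of whoami.yml looks roughly like this (a sketch, not the exact file contents - the image name in particular is an assumption):
# rough sketch of the Deployment in whoami.yml; replicas is the field we changed
apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami
spec:
  replicas: 2
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
        - name: whoami
          image: traefik/whoami
          ports:
            - containerPort: 80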
Now deploy the service
kubectl create ns whoami
kubectl apply -f whoami.yml -n whoami
Output:
namespace/whoami created
deployment.apps/whoami created
service/whoami created
Checking out the result:
$kgp -n whoami
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
whoami-75d5976d8d-k8jfq 1/1 Running 0 13s 10.42.1.3 k3d-kube-agent-0 <none> <none>
whoami-75d5976d8d-t92cj 1/1 Running 0 13s 10.42.2.3 k3d-kube-agent-1 <none> <none>
Now edit ingressroute.yml
so that line 9 looks like
- match: Host(`k3d.local`) && PathPrefix(`/whoami`)
and apply the route:
kubectl apply -f ingressroute.yml -n whoami
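For reference, the whole IngressRoute looks roughly like this (a sketch assuming the Traefik v2 IngressRoute CRD bundled with k3s; the service name and port are taken from the whoami example above):
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: whoami
spec:
  entryPoints:
    - web        # k3s' Traefik entrypoint for port 80
  routes:
    - match: Host(`k3d.local`) && PathPrefix(`/whoami`)
      kind: Rule
      services:
        - name: whoami
          port: 80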
Let’s test our brand-new deployment by opening http://k3d.local/whoami
in the browser. The output should look much like the following:
Hostname: whoami-75d5976d8d-zz4bf
IP: 127.0.0.1
IP: ::1
IP: 10.42.0.5
IP: fe80::dc98:ffff:fea4:c55
RemoteAddr: 10.42.2.4:55158
GET /whoami HTTP/1.1
Host: k3d.local
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:100.0) Gecko/20100101 Firefox/100.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8
Accept-Encoding: gzip, deflate
Accept-Language: en-CA,en-US;q=0.7,en;q=0.3
Dnt: 1
Sec-Gpc: 1
Upgrade-Insecure-Requests: 1
X-Forwarded-For: 10.42.2.1
X-Forwarded-Host: k3d.local
X-Forwarded-Port: 80
X-Forwarded-Proto: http
X-Forwarded-Server: traefik-56c4b88c4b-qqqsx
X-Real-Ip: 10.42.2.1
And since we’re running two replicas of the whoami
pod, we can refresh the page and watch the responding IP address and port change, which means we’re successfully reaching both of our pods through the service. Hooraaah!
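The same can be seen from the terminal; each request should report a different Hostname as requests are balanced across the two pods:
# hit the service a few times and watch the pod hostname alternate
for i in 1 2 3 4; do curl -s http://k3d.local/whoami | grep Hostname; done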
Now you have a fully functional Kubernetes cluster on your local machine.
Some useful notes
Install OpenLens — this is an awesome tool to manage your clusters and environments. I can’t recommend it enough.
This cluster comes with the Traefik ingress controller deployed by default. Traefik is a lightweight yet powerful router/proxy for Kubernetes, though it has a fairly steep learning curve. https://docs.traefik.io/ is your best friend there.