II: Building a Kubernetes home lab with k3d — Monitoring
22.01.2021 - AYB - Reading time ~1 Minute
Part II: Kubernetes observability with Prometheus, Loki and Grafana
Set up Helm
Helm is the package manager for Kubernetes: something like apt or brew, but for Kubernetes.
On macOS, run brew install helm
Linux users: follow the official installation docs for your distribution.
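If you’d rather not use a package manager at all, Helm’s official install script works on both Linux and macOS; a minimal sketch (the script URL is the one the Helm docs point to at the time of writing):
# Download and run the official Helm 3 install script
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod +x get_helm.sh
./get_helm.sh
# Verify the binary landed on your PATH
helm version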
Deploy a test Nginx server using Helm
Now we’re going to deploy a single-page web server that will respond to http://k3d.local requests at the root path.
First, add the Bitnami repo and update the local chart cache:
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
We’re using the Bitnami chart because we want only the Nginx server itself, without the ingress controller and the other components that the official Nginx chart deploys by default.
Note: documentation on the Bitnami Nginx chart is available in the bitnami/charts repository on GitHub.
Output:
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "bitnami" chart repository
Update Complete. ⎈Happy Helming!⎈
Deploying Nginx
helm install test-nginx bitnami/nginx \
  --set clusterDomain=k3d.local \
  --set replicaCount=2 \
  --set metrics.enabled=true \
  --set service.type=ClusterIP
metrics.enabled is set to true so we can scrape the Nginx metrics later on. service.type is set explicitly because the chart default is LoadBalancer, but we already have Traefik in front, so the default would make the deployment generate a lot of errors and leave it inoperable overall.
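As a side note, the same parameters can live in a values file instead of a chain of --set flags. A hypothetical values.nginx.yml (the keys mirror the flags above) would look like:
clusterDomain: k3d.local
replicaCount: 2
metrics:
  enabled: true
service:
  type: ClusterIP
And the install becomes helm install test-nginx bitnami/nginx -f values.nginx.yml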
If you decide later to change any parameters, the easiest way is to uninstall the chart and deploy it again: helm uninstall test-nginx. The right way is a rolling update via helm upgrade, which we’re not covering in detail here yet; a rough sketch follows below.
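Note that with plain --set flags, an upgrade replaces the previously set values with chart defaults, so either repeat them all or add --reuse-values. Bumping the replica count, for instance:
# Upgrade the release in place instead of uninstalling it
helm upgrade test-nginx bitnami/nginx \
  --set clusterDomain=k3d.local \
  --set replicaCount=3 \
  --set metrics.enabled=true \
  --set service.type=ClusterIP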
Create Ingress route for our Nginx
Copy the ingressroute.yml to a new file, e.g. ingressroute.nginx.yml, and make it look like this:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: test-nginx
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`k3d.local`)
      kind: Rule
      services:
        - name: test-nginx
          port: 80
Now save and apply it with k apply -f ingressroute.nginx.yml (k being the usual alias for kubectl).
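If you want to check the route before opening the browser, kubectl understands the IngressRoute custom resource once Traefik is installed, so something like this should do:
# Confirm the route and its backing service exist
k get ingressroute test-nginx
k get svc test-nginx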
Note that we didn’t specify a namespace in either the Helm deployment or the ingress route deployment, so our Nginx is running in the
default
namespace. For the purposes of this tutorial it’s OK, but you should consider specifying namespaces for everything you do in Kubernetes, because having everything in the default one will make your life tough and painful; a sketch of the namespaced approach follows.
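For illustration only, here’s what that could look like (web is a made-up namespace name):
# Install the chart into its own namespace, creating it if missing
helm install test-nginx bitnami/nginx --namespace web --create-namespace
# By default Traefik resolves the referenced service in the
# IngressRoute's own namespace, so deploy the route there too
k apply -f ingressroute.nginx.yml --namespace web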
Point your browser now to http://k3d.local/
and you should see the “Welcome to nginx!” page.
Setting up Prometheus and Grafana
Deploying Prometheus and Grafana to our test cluster
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install monitoring prometheus-community/kube-prometheus-stack
This may take a while, since your k3d cluster is pulling all the required images and they’re quite big.
Output:
NAME: monitoring
LAST DEPLOYED: Sun May 29 18:31:56 2022
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
kube-prometheus-stack has been installed. Check its status by running:
kubectl --namespace default get pods -l "release=monitoring"
Visit https://github.com/prometheus-operator/kube-prometheus for instructions on how to create & configure Alertmanager and Prometheus instances using the Operator.
kubectl --namespace default get pods -l "release=monitoring"
NAME READY STATUS RESTARTS AGE
monitoring-kube-state-metrics-56bfd4f44f-nmpr8 1/1 Running 0 2m44s
monitoring-prometheus-node-exporter-vf66j 1/1 Running 0 2m44s
monitoring-prometheus-node-exporter-h8qpr 1/1 Running 0 2m44s
monitoring-prometheus-node-exporter-jzxz2 1/1 Running 0 2m44s
monitoring-kube-prometheus-operator-5dbdd57558-ztd75 1/1 Running 0 2m44s
Now let’s check what’s up with the ports (kgs being shorthand for kubectl get service, with -o wide to show the selectors):
kgs -A -l "release=monitoring"
Output:
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kube-system monitoring-kube-prometheus-kube-controller-manager ClusterIP None <none> 10257/TCP 3m43s component=kube-controller-manager
kube-system monitoring-kube-prometheus-kube-etcd ClusterIP None <none> 2379/TCP 3m43s component=etcd
kube-system monitoring-kube-prometheus-kube-proxy ClusterIP None <none> 10249/TCP 3m43s k8s-app=kube-proxy
kube-system monitoring-kube-prometheus-coredns ClusterIP None <none> 9153/TCP 3m43s k8s-app=kube-dns
kube-system monitoring-kube-prometheus-kube-scheduler ClusterIP None <none> 10251/TCP 3m43s component=kube-scheduler
default monitoring-kube-prometheus-prometheus ClusterIP 10.43.184.203 <none> 9090/TCP 3m43s app.kubernetes.io/name=prometheus,prometheus=monitoring-kube-prometheus-prometheus
default monitoring-kube-prometheus-operator ClusterIP 10.43.153.170 <none> 443/TCP 3m43s app=kube-prometheus-stack-operator,release=monitoring
default monitoring-kube-prometheus-alertmanager ClusterIP 10.43.89.13 <none> 9093/TCP 3m43s alertmanager=monitoring-kube-prometheus-alertmanager,app.kubernetes.io/name=alertmanager
default monitoring-prometheus-node-exporter ClusterIP 10.43.147.53 <none> 9100/TCP 3m43s app=prometheus-node-exporter,release=monitoring
default monitoring-kube-state-metrics ClusterIP 10.43.145.194 <none> 8080/TCP 3m43s app.kubernetes.io/instance=monitoring,app.kubernetes.io/name=kube-state-metrics
By the way, if you have already installed Lens, you should now see all the fancy metrics about your cluster in sexy infographic form on the cluster dashboard.
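Prometheus itself doesn’t get an ingress route in this walkthrough, but if you want to peek at its UI you can port-forward the service from the table above:
# Forward local port 9090 to the Prometheus service,
# then open http://localhost:9090 in your browser
k port-forward svc/monitoring-kube-prometheus-prometheus 9090:9090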
Exposing Grafana to the browser
First, edit the localhost line in /etc/hosts so it looks like this:
127.0.0.1 localhost k3d.local grafana.k3d.local
Second, as we did before, copy ingressroute.yml to a new file (e.g. ingressroute.grafana.yml) and make it look like this:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: monitoring-grafana
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`grafana.k3d.local`)
      kind: Rule
      services:
        - name: monitoring-grafana
          port: 80
Now apply it: k apply -f ingressroute.grafana.yml
From this point you should be able to open the Grafana interface in your browser at http://grafana.k3d.local
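If the page doesn’t load, a quick curl helps tell DNS problems from routing problems; an unauthenticated Grafana typically answers the root path with a redirect to its login page:
curl -I http://grafana.k3d.local
# Expect an HTTP 302 with Location: /login if routing works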
For this stack Grafana default credentials are:
User: admin
Password: prom-operator
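These are the chart defaults; if they’ve been changed, you can read them back from the secret the stack created. Assuming our release name monitoring, the secret should be called monitoring-grafana:
k get secret monitoring-grafana -o jsonpath="{.data.admin-password}" | base64 -d
# On macOS, use base64 -D instead of -d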
In the dashboard browser you can look through the various pre-installed Kubernetes observability dashboards.
Customizing Grafana dashboards is another topic and will not be discussed here.
Previous: Cluster deployment
Next: Service mesh and Postgres