
persica cluster

This is a cluster of three identical nodes, named persica1/2/3

I last touched this in April 2023 and it was very annoying to get as far as I did. Next time I look at it, I think I will rebuild the cluster from scratch again, and use a different guide. Something with actual explanations and a few opinions, like this one: https://github.com/hobby-kube/guide

Another rebuild attempt in late 2023

A few changes for this one:

Prepare asval controller node

OS imaging

Using the Raspberry Pi Imager app, start with RPi OS Lite 64-bit, suitable for the RPi 3B+

It lets you make some customisations before flashing, which is really nice:

Config

TBC

TFTP server

TBC

k8s notes

Build notes

Per node

This was useful for figuring out the TFTP stuff for the first time: https://askubuntu.com/questions/1183487/grub2-efi-boot-via-pxe-load-config-file-automatically

Paths are hardcoded into the grubx64.efi binary, so the HDD and PXE versions aren't interchangeable. Make sure you put all the grub stuff in a grub/ directory. Check $prefix to see where it's searching:
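You can check it from the GRUB shell on a netbooted machine; the output below is illustrative rather than a real capture:

grub> echo $prefix
(tftp,192.168.1.x)/grub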

UEFI settings

Get to the UEFI

Record details

Change settings

Reboot and go back in again.

Ansible management after kickstart build

This covers getting everything to the state where I can bootstrap the cluster. I should ansible'ise everything, making minimal assumptions about the kickstart part of the process.

I'm keeping a simple ansible repo in ~/git/persica-ansible/

I have a basic set of roles to get the nodes into a workable state, right before I invoke kubeadm for the first time.

---
- name: Configure persica k8s cluster
  hosts: persica
  roles:
    - role: common
      tags: common
    - role: docker_for_kube
      tags: docker_for_kube
    - role: kube_daemons
      tags: kube_daemons
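To run it (the inventory and playbook filenames here are guesses, adjust for whatever the repo actually uses):

# from the checkout in ~/git/persica-ansible/
ansible-playbook -i hosts persica.yml
# or just one role at a time, e.g.
ansible-playbook -i hosts persica.yml --tags kube_daemons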

Initialise the control plane

This is manual of course, no ansible here.

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#initializing-your-control-plane-node

  1. This will be a single-node control plane, but we should specify --control-plane-endpoint anyway. persica1 is going to be our control plane.
  2. Our Pod network add-on will be Flannel. We can specify --pod-network-cidr but I'll try without first.
  3. It'll detect containerd.
  4. The default --apiserver-advertise-address will be fine, let it autodetect.

I added a custom CNAME record to local pihole (calico) and Gandi (public service), for persica-endpoint => persica1. Unlike the DHCP stuff, this is in the general DNS web interface, not a custom config file.
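Quick sanity check that the name resolves before pointing kubeadm at it:

# from any machine on the LAN
dig +short persica-endpoint.thighhighs.top CNAME
dig +short persica-endpoint.thighhighs.top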

After a bunch of faffing around to fix the firewall config, load the bridge filtering kernel module, and enable ipv4 forwarding, the init gets past the preflight checks and starts for real.
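Roughly what that faffing amounted to, as a sketch (the firewalld ports are just the two that the preflight warning below complains about):

# on each node, as root
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/k8s.conf
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system
firewall-cmd --permanent --add-port=6443/tcp --add-port=10250/tcp
firewall-cmd --reload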

[root@persica1 ~]# kubeadm init --control-plane-endpoint=persica-endpoint
[init] Using Kubernetes version: v1.27.1
[preflight] Running pre-flight checks
        [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W0415 03:43:19.958609   39430 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.1, falling back to the nearest etcd version (3.5.7-0)
W0415 03:43:52.646765   39430 checks.go:835] detected that the sandbox image "registry.k8s.io/pause:3.6" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.k8s.io/pause:3.9" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local persica-endpoint persica1] and IPs [10.96.0.1 192.168.1.31]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost persica1] and IPs [192.168.1.31 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost persica1] and IPs [192.168.1.31 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
W0415 03:44:21.781505   39430 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.1, falling back to the nearest etcd version (3.5.7-0)
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
        timed out waiting for the condition

This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
        - 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
        Once you have found the failing container, you can inspect its logs with:
        - 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

No worky :/

https://serverfault.com/questions/1116281/kubeadm-1-25-init-failed-on-debian-11-with-containerd-connection-refused

Maybe I need the control plane on a separate node after all. I'll try illustrious.

Now try kubeadm again.


Oh sonovabitch! Config not well described: https://github.com/containerd/containerd/issues/6964

Fixed config /etc/containerd/config.toml:

version = 2

disabled_plugins = []

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes]

  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
    base_runtime_spec = ""
    cni_conf_dir = ""
    cni_max_conf_num = 0
    container_annotations = []
    pod_annotations = []
    privileged_without_host_devices = false
    runtime_engine = ""
    runtime_path = ""
    runtime_root = ""
    runtime_type = "io.containerd.runc.v2"

  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    BinaryName = ""
    CriuImagePath = ""
    CriuPath = ""
    CriuWorkPath = ""
    IoGid = 0
    IoUid = 0
    NoNewKeyring = false
    NoPivotRoot = false
    Root = ""
    ShimCgroup = ""
    SystemdCgroup = true

# They suggest pinning this image, so we'll do that. This is the out-of-box default.
# https://kubernetes.io/docs/setup/production-environment/container-runtimes/#override-pause-image-containerd
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.9"
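After dropping that into place, restart containerd so the CRI picks up the new settings:

systemctl restart containerd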

We could/should be using kubeadm init with a configuration file: https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/

Apr 15 04:48:26 illustrious.thighhighs.top systemd[1]: Started kubelet: The Kubernetes Node Agent.
Apr 15 04:48:26 illustrious.thighhighs.top kubelet[12354]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 15 04:48:26 illustrious.thighhighs.top kubelet[12354]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.

But screw that. Because guess what, it's also poorly documented!
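For the record, if I ever go back to it, a config covering the same flags would look something like this (an untested sketch, not what I actually ran):

cat <<EOF > kubeadm-init.yml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
clusterName: "persica"
controlPlaneEndpoint: "persica-endpoint.thighhighs.top:6443"
networking:
  podSubnet: "10.244.0.0/16"
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
EOF

kubeadm init --config kubeadm-init.yml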

Initialising the control plane now actually works

kubeadm init --control-plane-endpoint=persica-endpoint

Set up my `~/.kube/` config stuff as directed. Apparently this is an uber-superuser credential, so I shouldn't be using it regularly. Oh.
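"As directed" being the usual steps from the end of the kubeadm init output:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Since that's the superuser credential, generate a scoped kubeconfig for day-to-day use instead: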

cat <<EOF > kubeconfig_example.yml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
# Will be used as the target "cluster" in the kubeconfig
clusterName: "persica"
# Will be used as the "server" (IP or DNS name) of this cluster in the kubeconfig
controlPlaneEndpoint: "persica-endpoint.thighhighs.top:6443"
# The cluster CA key and certificate will be loaded from this local directory
certificatesDir: "/etc/kubernetes/pki"
EOF

# on illustrious
kubeadm kubeconfig user --config kubeconfig_example.yml --client-name furinkan --validity-period 8760h
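That prints a kubeconfig to stdout; to use it from another machine, save it to a file, copy it over, and merge it into the local config (filenames here are made up):

# on suomi, after copying the generated file across
KUBECONFIG=~/.kube/config:persica-furinkan.kubeconfig kubectl config view --flatten > ~/.kube/config.new
mv ~/.kube/config.new ~/.kube/config
kubectl config get-contexts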

Now try adding a pod network. We'll use Flannel, and find the docs ourselves: https://github.com/flannel-io/flannel#deploying-flannel-manually

# from suomi
kubectl --context=persica-admin apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

kubectl --context=persica-admin get pods --all-namespaces
NAMESPACE      NAME                                                 READY   STATUS              RESTARTS      AGE
kube-flannel   kube-flannel-ds-zr6fb                                0/1     CrashLoopBackOff    1 (16s ago)   34s
kube-system    coredns-5d78c9869d-mp7p9                             0/1     ContainerCreating   0             66m
kube-system    coredns-5d78c9869d-tlsc6                             0/1     ContainerCreating   0             66m
kube-system    etcd-illustrious.thighhighs.top                      1/1     Running             1             66m
kube-system    kube-apiserver-illustrious.thighhighs.top            1/1     Running             1             66m
kube-system    kube-controller-manager-illustrious.thighhighs.top   1/1     Running             1             66m
kube-system    kube-proxy-5mntm                                     1/1     Running             0             66m
kube-system    kube-scheduler-illustrious.thighhighs.top            1/1     Running             1             66m

Doesn't work because Flannel's default podCIDR (10.244.0.0/16) doesn't match the cluster -- I skipped --pod-network-cidr at init, so kubeadm never set one. FFS!

https://devops.stackexchange.com/questions/5898/how-to-get-kubernetes-pod-network-cidr

Okay so I can either nuke the cluster and reinstantiate it with podCIDR, or just reinstall the network plugin or something. Let's try the latter.
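For the record, "reinstall the network plugin" roughly means pointing Flannel's net-conf at the right CIDR and bouncing its pods, something like this (a sketch, and spoiler: it didn't get me anywhere):

# edit the "Network" field in net-conf.json to match the cluster's pod CIDR
kubectl --context=persica-admin -n kube-flannel edit cm kube-flannel-cfg
kubectl --context=persica-admin -n kube-flannel rollout restart ds/kube-flannel-ds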

Yeah, that went nowhere.

Fukkit try again

# on illustrious
kubeadm reset
rm -rf /etc/cni/net.d/
rm -rf ~/.kube/

# fix the init: https://github.com/flannel-io/flannel/issues/728#issuecomment-308878912
kubeadm init --control-plane-endpoint=persica-endpoint.thighhighs.top --pod-network-cidr=10.244.0.0/16

# Fix up my kubectl creds again

# install flannel again
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

# is it working now?
kubectl get pods --all-namespaces

# IT FUCKING WORKS!!

Now we join some worker nodes to the cluster, finally.

# on persica1
kubeadm join persica-endpoint.thighhighs.top:6443 --token FOO.FOOFOOFOO \
        --discovery-token-ca-cert-hash sha256:BARBARBARBARBARBARBARBARBARBARBARBARBARBARBARBARBARBARBARBARBAR

[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

It's joined but apparently NotReady:

root@illustrious:~# kubectl get nodes
NAME                         STATUS     ROLES           AGE    VERSION
illustrious.thighhighs.top   NotReady   control-plane   17m    v1.27.1
persica1                     NotReady   <none>          2m7s   v1.27.0

Apparently coredns won't start because of taints, as described here:
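If it's the usual control-plane taint, the standard workaround from the kubeadm docs is to remove it so pods can schedule on illustrious; I can't say for sure that's what mattered here:

kubectl taint nodes --all node-role.kubernetes.io/control-plane-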

Fuck yoooooouuu, now the coredns containers are running. I probably shouldn't have jumped the gun and joined all the worker nodes... I need to kick them so they start properly.

root@illustrious:~# kubectl get pods --all-namespaces
NAMESPACE      NAME                                                 READY   STATUS              RESTARTS   AGE
kube-flannel   kube-flannel-ds-4p4wd                                0/1     Init:0/2            0          21m
kube-flannel   kube-flannel-ds-6qfrm                                0/1     Init:0/2            0          12m
kube-flannel   kube-flannel-ds-kb94w                                0/1     Init:0/2            0          12m
kube-flannel   kube-flannel-ds-vctrt                                1/1     Running             0          30m
kube-system    coredns-5d78c9869d-dqnkh                             1/1     Running             0          36m
kube-system    coredns-5d78c9869d-rbmhm                             1/1     Running             0          36m
kube-system    etcd-illustrious.thighhighs.top                      1/1     Running             2          36m
kube-system    kube-apiserver-illustrious.thighhighs.top            1/1     Running             2          36m
kube-system    kube-controller-manager-illustrious.thighhighs.top   1/1     Running             0          36m
kube-system    kube-proxy-8dl56                                     0/1     ContainerCreating   0          12m
kube-system    kube-proxy-dppxt                                     0/1     ContainerCreating   0          21m
kube-system    kube-proxy-ljk6c                                     1/1     Running             0          36m
kube-system    kube-proxy-t7gcn                                     0/1     ContainerCreating   0          12m
kube-system    kube-scheduler-illustrious.thighhighs.top            1/1     Running             2          36m

Try deleting and re-adding a node. From https://stackoverflow.com/a/54220808/806927

# on illustrious
kubectl get nodes
kubectl drain persica1
kubectl drain persica1 --ignore-daemonsets --delete-local-data
kubectl delete node persica1

# on persica1
kubeadm reset

then join again

Looks like the kube-proxy is having trouble starting on persica1. And while it's only a warning, I bet it's more significant than that.

root@illustrious:~# kubectl get pods --all-namespaces
NAMESPACE      NAME                                                 READY   STATUS              RESTARTS   AGE
kube-flannel   kube-flannel-ds-gjq5h                                0/1     Init:0/2            0          3m33s
kube-flannel   kube-flannel-ds-vctrt                                1/1     Running             0          41m
kube-system    coredns-5d78c9869d-dqnkh                             1/1     Running             0          47m
kube-system    coredns-5d78c9869d-rbmhm                             1/1     Running             0          47m
kube-system    etcd-illustrious.thighhighs.top                      1/1     Running             2          47m
kube-system    kube-apiserver-illustrious.thighhighs.top            1/1     Running             2          47m
kube-system    kube-controller-manager-illustrious.thighhighs.top   1/1     Running             0          47m
kube-system    kube-proxy-ljk6c                                     1/1     Running             0          47m
kube-system    kube-proxy-xpv58                                     0/1     ContainerCreating   0          3m33s
kube-system    kube-scheduler-illustrious.thighhighs.top            1/1     Running             2          47m

root@illustrious:~# kubectl get events --namespace=kube-system | grep pod/kube-proxy-xpv58
4m29s       Normal    Scheduled                pod/kube-proxy-xpv58   Successfully assigned kube-system/kube-proxy-xpv58 to persica1
9s          Warning   FailedCreatePodSandBox   pod/kube-proxy-xpv58   Failed to create pod sandbox: open /run/systemd/resolve/resolv.conf: no such file or directory


# on persica1
mkdir /run/systemd/resolve
ln -s /etc/resolv.conf /run/systemd/resolve/resolv.conf
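The symlink is a hack; tidier options would be either running systemd-resolved properly, or pointing kubelet at the real resolv.conf (neither tested here):

# option 1: let systemd-resolved own /run/systemd/resolve/resolv.conf
systemctl enable --now systemd-resolved

# option 2: set "resolvConf: /etc/resolv.conf" in /var/lib/kubelet/config.yaml, then
systemctl restart kubelet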

wtf now there's another error:

root@illustrious:~# kubectl get events --namespace=kube-system | grep pod/kube-proxy-grqhf
20s         Normal    Scheduled                pod/kube-proxy-grqhf   Successfully assigned kube-system/kube-proxy-grqhf to persica1
6s          Warning   FailedCreatePodSandBox   pod/kube-proxy-grqhf   Failed to create pod sandbox: rpc error: code = InvalidArgument desc = failed to create containerd container: create container failed validation: container.Runtime.Name must be set: invalid argument

Turns out I hadn't deployed the good containerd config everywhere yet. Deployed that, and suddenly the damn kube-proxy and kube-flannel containers are working.
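Something for the ansible repo so the config can't drift again; a sketch, with the role layout and filenames assumed:

# e.g. in roles/kube_daemons/tasks/containerd.yml
- name: Deploy containerd config
  ansible.builtin.copy:
    src: containerd-config.toml
    dest: /etc/containerd/config.toml
    mode: "0644"
  notify: Restart containerd

# and the matching handler in roles/kube_daemons/handlers/main.yml
- name: Restart containerd
  ansible.builtin.systemd:
    name: containerd
    state: restarted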

Now I can add the other two nodes, though I still need to apply the resolv.conf fix manually on each.

root@illustrious:~# kubectl get nodes -o wide
NAME                         STATUS   ROLES           AGE     VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                    KERNEL-VERSION                CONTAINER-RUNTIME
illustrious.thighhighs.top   Ready    control-plane   78m     v1.27.1   192.168.1.12   <none>        Ubuntu 22.04.2 LTS          5.15.0-69-generic             containerd://1.6.20
persica1                     Ready    <none>          21m     v1.27.0   192.168.1.31   <none>        AlmaLinux 9.1 (Lime Lynx)   5.14.0-162.6.1.el9_1.x86_64   containerd://1.6.20
persica2                     Ready    <none>          2m41s   v1.27.0   192.168.1.32   <none>        AlmaLinux 9.1 (Lime Lynx)   5.14.0-162.6.1.el9_1.x86_64   containerd://1.6.20
persica3                     Ready    <none>          33s     v1.27.0   192.168.1.33   <none>        AlmaLinux 9.1 (Lime Lynx)   5.14.0-162.6.1.el9_1.x86_64   containerd://1.6.20

Good enough for now!

Making ingress work

I don't understand this well enough, but I want to use ingress-nginx. Here's a page about it, albeit not using raw kubectl: https://kubernetes.github.io/ingress-nginx/kubectl-plugin/

Maybe this one too: https://medium.com/tektutor/using-nginx-ingress-controller-in-kubernetes-bare-metal-setup-890eb4e7772
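The path of least resistance is probably the stock bare-metal manifest, which exposes the controller as a NodePort service. The version in the URL below is from memory, so check the ingress-nginx docs for the current one:

kubectl --context=persica-admin apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.2/deploy/static/provider/baremetal/deploy.yaml
kubectl --context=persica-admin -n ingress-nginx get pods,svc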

Making load balancing work

I thought I wouldn't need it, but it looks like I do if I want genuinely useful functionality. Here's an explanation of why I want MetalLB, and why it's not just for BGP-based setups: https://github.com/kubernetes/ingress-nginx/blob/main/docs/deploy/baremetal.md

I'll use it in L2 mode with ARP/NDP I think. Just need to dedicate a bunch of IPs to it so it can manage the traffic to them.
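A sketch of what that looks like with the CRD-based config. The manifest version and the IP range are assumptions; the range just needs to be carved out of the LAN and kept away from DHCP:

# install MetalLB (check the docs for the current version)
kubectl --context=persica-admin apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml

# then define an L2 address pool and advertise it
cat <<EOF | kubectl --context=persica-admin apply -f -
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: persica-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.240-192.168.1.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: persica-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - persica-pool
EOF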
