* Running at home, general purpose server
* Intel NUC (BOXNUC7i3BNH)
* Core i3-7100U at 2.40GHz
* 500GB WD Blue SN550 NVMe M.2
* 8GB DDR4 2400MHz Kingston KVR24S17S8/8
* Ubuntu 21.10

= users =
* furinkan

= services =
* ssh
* http/s
 * https://thighhighs.top/
 * [[https://smokeping.thighhighs.top/smokeping/?target=AdonisAve| Smokeping]] (available on internal IP, and IPv6)

= docker containers =
* smokeping

= build notes =
On 2022-01-02, FDE on LVM, Ubuntu server 21.10

Had some problems with the curtin unpack of the base image; I changed the Ubuntu archive URL to Datamossa in AU and hey presto, it worked. Guess something was corrupted, shrug.

* Mostly default settings
* Enable full disk encryption with LVM, default settings
* Enable sshd during install

== basic env ==
* Install your authorized_keys for the `root` and `furinkan` users
* Enable NOPASSWD sudo
{{{
%sudo ALL=(ALL:ALL) NOPASSWD: ALL
}}}
* Set hostname:
{{{
hostnamectl set-hostname illustrious.thighhighs.top
}}}
* Set timezone
{{{
timedatectl set-timezone Australia/Sydney
}}}
* Set editor
{{{
echo "export EDITOR=vim" > /etc/profile.d/editor-vim.sh
}}}
* Disable `HashKnownHosts`
{{{
echo -e "Host *\n HashKnownHosts no" > /etc/ssh/ssh_config.d/99-global.conf
}}}
* Configure screen
{{{
curl -o ~/.screenrc https://gist.githubusercontent.com/barneydesmond/d16c5201ed9d2280251dfca7c620bb86/raw/.screenrc
}}}
* Configure top by entering this cheatcode
{{{
z x c b s 1.5 e 1 W q
}}}
* Fix locales, select en_AU.UTF-8: `dpkg-reconfigure locales`
* Disable console blanking, though it seems this is already done by default: `cat /sys/module/kernel/parameters/consoleblank`
 * Already set to zero, which means it shouldn't blank
* Disable wifi and bluetooth, we don't need them and they slow down boot
{{{
systemctl disable wpa_supplicant.service --now
systemctl disable bluetooth.target --now
}}}
* Install useful packages
{{{
apt update
apt install -y vim screen bash-completion lsof tcpdump netcat strace nmap less bsdmainutils tzdata whiptail netbase \
    wget curl python-is-python3 net-tools ack jq make elinks nmap whois ethtool bind9-dnsutils apt-utils man-db plocate
}}}
* Do a full upgrade, index the system for `locate`, then reboot
{{{
apt full-upgrade
updatedb
reboot
}}}

== Configure networking ==
Use netplan for this, it's convenient and easy.
{{{
cd /etc/netplan/
mv 00-installer-config.yaml 00-installer-config.yaml.disabled
vim 10-thighhighs.yaml

network:
  version: 2
  ethernets:
    eno1:
      critical: true
      dhcp-identifier: mac
      dhcp4: false
      dhcp4-overrides:
        use-dns: false
      dhcp6: true
      dhcp6-overrides:
        use-dns: false
      ipv6-privacy: false
      addresses:
        - "192.168.1.12/24"
        - "2404:e80:42e3:0:12:0:0:12/64"  # :12 for the .1.12 IPv4
      routes:
        - to: 0.0.0.0/0
          via: 192.168.1.1
          on-link: true
      nameservers:
        addresses:
          - 192.168.1.20
          - 192.168.1.24
          - fe80::e65f:1ff:fe1c:c6ea
          - fe80::ba27:ebff:fe8c:f4f8
        search:
          - thighhighs.top
}}}
Try applying it with `netplan try`, see if your SSH session still works, then go ahead and reboot if it's good.

== Setup clevis for automated decrypt on boot ==
* `apt install clevis-luks`
* Bind the volume to the Tang servers
{{{
clevis luks bind -d /dev/nvme0n1p3 sss '{"t": 1, "pins": {"tang": [{"url": "http://ocular.thighhighs.top:8888"},{"url": "http://funicular.thighhighs.top:8888"}]}}'
}}}
* `apt install clevis-initramfs`

Test by rebooting.
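That sss policy string is fiddly to type by hand. As a rough sketch (the `make_sss_policy` helper name is mine, not part of clevis), here's how the same JSON can be assembled in shell; `t=1` is the threshold, meaning any single Tang server is enough to unlock the volume:

```shell
# Hypothetical helper: build a clevis sss policy for a list of Tang
# servers. First argument is the threshold, the rest are server URLs.
make_sss_policy() {
  local threshold="$1"; shift
  local urls="" u
  for u in "$@"; do
    # Append each server as a tang pin entry, comma-separated
    urls="${urls:+$urls,}{\"url\": \"$u\"}"
  done
  printf '{"t": %s, "pins": {"tang": [%s]}}\n' "$threshold" "$urls"
}

# Produces the exact policy used in the bind command above
make_sss_policy 1 http://ocular.thighhighs.top:8888 http://funicular.thighhighs.top:8888
```

Bumping the threshold to 2 would require both Tang servers to be reachable at boot, which is worse for availability but stricter.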
== Docker Engine for services ==
Run with the official docs: https://docs.docker.com/engine/install/ubuntu/#install-using-the-repository

Prep repo
{{{
apt install ca-certificates curl gnupg lsb-release
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" > /etc/apt/sources.list.d/docker.list
}}}
Install packages
{{{
apt update
apt install docker-ce docker-ce-cli containerd.io
}}}
Test that it's working
{{{
docker run hello-world
}}}
Setup log rotation in `/etc/docker/daemon.json`
{{{
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "50m",
    "max-file": "10"
  }
}
}}}

=== Prepare space for volumes ===
We'll use this later for dockerised apps.
{{{
mkdir /data
}}}

=== Enable IPv6 for containers ===
This doesn't happen out of the box, so we need to do it ourselves. Here's the official docs: https://docs.docker.com/config/daemon/ipv6/

{{{#!wiki caution
'''The notes immediately below apply if you don't have IPv6 connectivity to the internet'''

If you have functioning IPv6, like with a delegated prefix, you'll need to choose a subnet inside that prefix. More notes below in the ndppd section.
}}}

Amend the config in `/etc/docker/daemon.json` like so, adding the IPv6 keys:
{{{
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "50m",
    "max-file": "10"
  },
  "ipv6": true,
  "fixed-cidr-v6": "fd21:1268:04a5:d0c::/64"
}
}}}
We're using a subnet with [[https://en.wikipedia.org/wiki/Unique_local_address| unique local addressing]]. I've carefully chosen a subnet that's unique, and ensured it's a ''good soize'' - the subnet needs to be /80 or larger, so docker can jam the 48-bit virtual MAC address into the end of it.
Here's some code to generate a good prefix:
{{{#!python
import random

# 40 random bits as 10 hex digits
h = random.randbytes(5).hex()
print(f"fd{h[0:2]}:{h[2:6]}:{h[6:10]}:d0c::/64")
# e.g. fd21:1268:04a5:d0c::/64
}}}
The `fc00::/7` space is recommended to be carved into /64 subnets for your site like so:
* Use fd00::/8 as the stem (the 'd'/1101 indicates locally-assigned addresses)
* Choose 40 bits using high quality randomness as the unique site code
* Then you have 16 bits to use as subnet IDs; I'm using `:d0c:` for obvious reasons.

Then `systemctl restart docker`, and now it should Just Work, I guess. Your docker0 bridge will get the additional /64 subnet on it, and your containers will get an autoassigned IPv6 address.

For local-only purposes this should be sufficient, but you probably won't be able to get off the docker host and reach other machines on the segment. To do that you either need to NAT (eww), or get the host to respond to NDP requests (the IPv6 equivalent of ARP) on behalf of the bridge subnet.

What I can see at this point is that packets go out fine, but can't make it back:
* ping6 requests leave the container with nexthop = fd21:1268:4a5:d0c::1
* docker host (fd21:1268:4a5:d0c::1) gets the packet and forwards it out eno1 towards the target
* Target receives ping request, sends reply to fd21:1268:4a5:d0c:0:242:ac11:2
* Reply is routed to the default gateway, and this is the sticking point

Now I think we need an NDP proxy...

=== NDP Proxy ndppd ===
The problem is as described here, same as above: https://forums.docker.com/t/ipv6-not-working/10171/7

And here's a guide mentioning using ndppd: https://medium.com/@skleeschulte/how-to-enable-ipv6-for-docker-containers-on-ubuntu-18-04-c68394a219a2
{{{
apt install ndppd
cp /usr/share/doc/ndppd/ndppd.conf-dist /etc/ndppd.conf
vim /etc/ndppd.conf
}}}
'''You need to select the correct subnet here.''' If you've got public routable address space like I do, you need to carve out a small subnet.
My LAN is a /64 subnet, so I'm using a /80 chunk of that and giving it to docker. You'll need to enter the same /80 subnet in `/etc/docker/daemon.json` as mentioned above.

Tweak up the rule for our subnet and interfaces:
{{{
proxy eno1 {
    rule fd21:1268:04a5:d0c::/64 {
        # Either of these two would work fine, the default is auto
        auto
        iface docker0
    }
}
}}}
Restart ndppd and it should answer NDP queries now. Whew! All this just so that smokeping can reach out to IPv6 hosts outside the docker host.

==== Problems you might have ====
I couldn't get this working at first. What I determined was going on is that other machines don't even bother doing an NDP query when I try to ping anything in the docker range. I think they look at the destination and realise it couldn't possibly be in the same subnet/LAN as them, so they go via layer 3 (the router) instead of sticking to layer 2.

Which means we need to be in the same /64 subnet as the rest of the LAN: a sibling to the docker host, but living inside the docker host. The containers will be hidden behind the bridge, and ndppd will make them visible to the rest of the network. I've grabbed `2404:e80:42e3:0:d0c::/80` and that works now; other hosts will find it with NDP.

The nice thing is that I could even have other docker hosts on the network and use the same config. They'll generate different virtual MACs for each container, and they'll live in the same /80 subnet. Containers on host A wouldn't be able to find containers on host B, but non-docker hosts would be able to find any docker container in that /80 subnet, because ndppd will only answer a query if it owns the container with the requested address.

== Network tuning ==
Caddy complains that it can't set a large receive buffer for network connections:
{{{
Nov 03 11:22:19 illustrious.thighhighs.top caddy[1601301]: {"level":"info","ts":1667434939.2639456,"msg":"failed to sufficiently increase receive buffer size (was: 208 kiB, wanted: 2048 kiB, got: 416 kiB).
See https://github.com/lucas-clemente/quic-go/wiki/UDP-Receive-Buffer-Size for details."}
}}}
So let's increase it. The sysctl.d file is applied automatically at boot; `sysctl --system` applies it immediately.
{{{
echo -e "# Increase max recv buffer to ~2.5MB\nnet.core.rmem_max=2500000" >> /etc/sysctl.d/20-network-recv-buffer.conf
sysctl --system
}}}

= Reverse proxy for services =
== Proxy software ==
Let's try out [[https://caddyserver.com/| Caddy]], I've been curious for a while now and it might meet all my needs.

Use the official docs for a repo-packaged version: https://caddyserver.com/docs/install
{{{
apt install -y debian-keyring debian-archive-keyring apt-transport-https
curl -1sLf https://dl.cloudsmith.io/public/caddy/stable/gpg.key > /etc/apt/trusted.gpg.d/caddy-stable.asc
curl -1sLf https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt > /etc/apt/sources.list.d/caddy-stable.list
apt update
apt install caddy
}}}
This uses a systemwide config in `/etc/caddy/Caddyfile`, and acts as a generic HTTP server initially. It's serving up a Caddy landing page from `/usr/share/caddy` at http://illustrious.thighhighs.top/

== SSL cert ==
Pop it in `/etc/ssl` like usual.
{{{
cd /etc/ssl/

# This is a one-time action
openssl dhparam -out dhparams.pem 4096

# Then copy the cert and key and intermediate CA chain here
cp KEY CERT /etc/ssl/
chgrp caddy /etc/ssl/STAR_*
}}}

= Smokeping =
Run using the docker container, it's more convenient and separates config+data from the installation.
https://hub.docker.com/r/linuxserver/smokeping

Prepare space for data and config using a logical volume
{{{
lvcreate -L 1G -n smokeping ubuntu-vg
mkfs.ext4 /dev/ubuntu-vg/smokeping
mkdir /data/smokeping

# Add to fstab
# Smokeping config and data
/dev/disk/by-uuid/a40142d8-06e0-44d7-b8bc-a3e20662cde2 /data/smokeping ext4 defaults 0 1

mount /data/smokeping
mkdir /data/smokeping/config
mkdir /data/smokeping/data
chown -R 1000:1000 /data/smokeping/config /data/smokeping/data
}}}
Run the container:
{{{
docker run -d \
  --name=smokeping \
  --hostname=illustrious \
  -e PUID=1000 \
  -e PGID=1000 \
  -e TZ=Australia/Sydney \
  -p 127.0.0.1:8000:80 \
  -v /data/smokeping/config:/config \
  -v /data/smokeping/data:/data \
  --restart unless-stopped \
  lscr.io/linuxserver/smokeping
}}}
Map it through with some caddy config
{{{
smokeping.thighhighs.top {
    reverse_proxy localhost:8000
    tls /etc/ssl/STAR_thighhighs_top.crtbundled /etc/ssl/STAR_thighhighs_top.key
}
}}}
Reload the config, and you should have a working smokeping again! \o/
{{{
systemctl reload caddy.service
}}}

= Web space for thighhighs domain =
* Create LV for data
{{{
lvcreate -L 1G -n www ubuntu-vg
mkfs.ext4 /dev/ubuntu-vg/www
mkdir /data/www

### Add to fstab (use the new LV, or its own UUID from blkid)
# webdir
/dev/ubuntu-vg/www /data/www ext4 defaults 0 1

mount /data/www
mkdir /data/www/illustrious
chown -R furinkan. /data/www/illustrious
}}}
* Throw some content in there
* Add a stanza to `/etc/caddy/Caddyfile`
{{{
*.thighhighs.top {
    root * /data/www/illustrious
    file_server
    tls /etc/ssl/STAR_thighhighs_top.crtbundled /etc/ssl/STAR_thighhighs_top.key
}
}}}
* Reload the config: `systemctl reload caddy`

= NFS mount from NAS =
Want to sort through my files, and the NAS is well set up for this usage.

1. `apt install nfs-common`
1. Let's mount it with systemd, create a new unit for the mount: `systemctl edit --force --full cargo.mount`
{{{
[Unit]
Description=cargo volume from iowa
After=network.target

[Mount]
What=iowa.thighhighs.top:/volume1/cargo
Where=/cargo
Type=nfs
Options=_netdev

[Install]
WantedBy=multi-user.target
}}}
1. Mount it: `systemctl start cargo.mount`
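One gotcha with mount units: systemd requires the unit's file name to match its `Where=` path after escaping, which is why the unit for `/cargo` must be called `cargo.mount`. On a real system `systemd-escape --path --suffix=mount /cargo` computes this for you; as a sketch, a plain-shell version for simple paths (helper name is mine):

```shell
# Derive the unit name systemd expects for a mount point.
# Handles simple absolute paths only; systemd-escape also deals with
# special characters, which this sketch ignores.
unit_for_mountpoint() {
  local p="${1#/}"                    # drop the leading slash
  printf '%s.mount\n' "${p//\//-}"    # remaining slashes become dashes
}

unit_for_mountpoint /cargo      # -> cargo.mount
unit_for_mountpoint /data/www   # -> data-www.mount
```

If the names don't match, `systemctl start` refuses the unit, so it's worth checking before creating one by hand.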