illustrious

  • Running at home as a general-purpose server
  • Intel NUC (BOXNUC7i3BNH)
    • Core i3-7100U at 2.40GHz
    • 500GB WD Blue SN550 NVMe M.2
    • 8GB DDR4 2400MHz Kingston KVR24S17S8/8
  • Ubuntu 21.10

Contents

  1. users
  2. services
  3. docker containers
  4. build notes
    1. basic env
    2. Configure networking
    3. Setup clevis for automated decrypt on boot
    4. Docker Engine for services
      1. Prepare space for volumes
      2. Enable IPv6 for containers
      3. NDP Proxy ndppd
    5. Network tuning
  5. Reverse proxy for services
    1. Proxy software
    2. SSL cert
  6. Smokeping
  7. Unifi controller
    1. Prepare space
    2. Run container
    3. Migration
    4. TLS cert for unifi
  8. Web space for thighhighs domain
  9. NFS mount from NAS

users

  • furinkan

services

  • ssh
  • http/s
    • https://thighhighs.top/

  • Smokeping (available on internal IP, and IPv6)

  • Unifi controller

docker containers

  • elasticsearch - for UMAD development
  • pihole-exporter - fetches pihole stats, publishes for prometheus: https://hub.docker.com/r/ekofr/pihole-exporter

  • prometheus - receive stats and store them
  • smokeping
  • unifi-controller

build notes

Built on 2022-01-02: FDE on LVM, Ubuntu Server 21.10

Had some problems with the curtin unpack of the base image; I changed the Ubuntu archive URL to the Datamossa mirror in AU and hey presto, it worked. Guess something was corrupted, shrug.

  • Mostly default settings
  • Enable full disk encryption with LVM, default settings
  • Enable sshd during install

basic env

  • Install your authorized_keys for the root and furinkan users

  • Enable NOPASSWD sudo

    %sudo   ALL=(ALL:ALL) NOPASSWD: ALL
  • Set hostname:

    hostnamectl set-hostname illustrious.thighhighs.top
  • Set timezone

    timedatectl set-timezone Australia/Sydney
  • Set editor

    echo "export EDITOR=vim" > /etc/profile.d/editor-vim.sh
  • Disable HashKnownHosts

    echo -e "Host *\n    HashKnownHosts no" > /etc/ssh/ssh_config.d/99-global.conf
  • Configure screen

    curl -o ~/.screenrc https://gist.githubusercontent.com/barneydesmond/d16c5201ed9d2280251dfca7c620bb86/raw/.screenrc
  • Configure top by entering this cheatcode

    z x c b s 1.5 <Enter>
    e <zero> 1 W q
  • Fix locales, select en_AU.UTF-8: dpkg-reconfigure locales

  • Disable console blanking; seems this is already done by default: cat /sys/module/kernel/parameters/consoleblank

    • It's already set to zero, which means it shouldn't blank
  • Disable wifi and bluetooth; we don't need them and they slow down boot

    systemctl disable wpa_supplicant.service --now
    systemctl disable bluetooth.target --now
  • Install useful packages

    apt update
    apt install -y vim screen bash-completion lsof tcpdump netcat strace nmap less bsdmainutils tzdata whiptail netbase wget curl python-is-python3 net-tools ack jq make elinks nmap whois ethtool bind9-dnsutils apt-utils man-db plocate
  • Do a full upgrade, index the system for locate, then reboot

    apt full-upgrade
    updatedb
    reboot

Configure networking

Use netplan for this; it's convenient and easy.

cd /etc/netplan/
mv 00-installer-config.yaml 00-installer-config.yaml.disabled
vim 10-thighhighs.yaml

network:
    version: 2

    ethernets:
        eno1:
            critical: true
            dhcp-identifier: mac
            dhcp4: false
            dhcp4-overrides:
                use-dns: false
            dhcp6: true
            dhcp6-overrides:
                use-dns: false
            ipv6-privacy: false
            addresses:
                - "192.168.1.12/24"
                # :12 for the .1.12 IPv4
                - "2404:e80:42e3:0:12:0:0:12/64"
            routes:
                - to: 0.0.0.0/0
                  via: 192.168.1.1
                  on-link: true
            nameservers:
                addresses:
                    - 192.168.1.20
                    - 192.168.1.24
                    - fe80::e65f:1ff:fe1c:c6ea
                    - fe80::ba27:ebff:fe8c:f4f8
                search:
                    - thighhighs.top

Try applying it with netplan try, see if your SSH session still works, then go ahead and reboot if it's good.
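
That flow, concretely (netplan try rolls the change back automatically if you don't accept it within the timeout):

netplan try
# Accept within the timeout if your SSH session still works
ip -br addr show eno1
reboot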

Setup clevis for automated decrypt on boot

  • apt install clevis-luks
  • Bind the volume to the Tang servers

    clevis luks bind -d /dev/nvme0n1p3 sss '{"t": 1, "pins": {"tang": [{"url": "http://ocular.thighhighs.top:8888"},{"url": "http://funicular.thighhighs.top:8888"}]}}'
  • apt install clevis-initramfs

Test by rebooting.
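
Before rebooting you can also check that the binding took; clevis-luks ships a list subcommand:

# Show the pins bound to the LUKS volume
clevis luks list -d /dev/nvme0n1p3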

Docker Engine for services

Run with the official docs: https://docs.docker.com/engine/install/ubuntu/#install-using-the-repository

Prep repo

apt install ca-certificates curl gnupg lsb-release
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" > /etc/apt/sources.list.d/docker.list

Install packages

apt update
apt install docker-ce docker-ce-cli containerd.io

Test that it's working

docker run hello-world

Setup log rotation in /etc/docker/daemon.json

{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "50m",
    "max-file": "10"
  }
}
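
Restart docker and confirm the logging driver took effect:

systemctl restart docker
# Should print: json-file
docker info --format '{{.LoggingDriver}}'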

Prepare space for volumes

We'll use this later for dockerised apps.

mkdir /data

Enable IPv6 for containers

This doesn't happen out of the box, so we need to do it ourselves.

Here's the official docs: https://docs.docker.com/config/daemon/ipv6/

The notes immediately below apply if you don't have IPv6 connectivity to the internet.

If you have functioning IPv6, like with a delegated prefix, you'll need to choose a subnet inside that prefix. More notes below in the ndppd section.

Amend the config in /etc/docker/daemon.json like so, adding the IPv6 keys:

{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "50m",
    "max-file": "10"
  },
  "ipv6": true,
  "fixed-cidr-v6": "fd21:1268:04a5:d0c::/64"
}

We're using a subnet with unique local addressing. I've carefully chosen a subnet that's unique, and ensured it's a good size: the subnet needs to be a /80 or larger, so Docker can jam the 48-bit virtual MAC address into the end of it.

Here's some code to generate a good prefix:

import random

# Five random bytes = 40 bits of site ID after the fd byte; :d0c: is the subnet ID
h = random.randbytes(5).hex()
print(f"fd{h[0:2]}:{h[2:6]}:{h[6:10]}:d0c::/64")

# Example output: fd21:1268:04a5:d0c::/64

The fc00::/7 space is recommended to be carved into /64 subnets for your site like so:

  • Use fd00::/8 as the stem (the 'd', binary 1101, indicates locally-assigned addresses)
  • Choose 40 bits of high-quality randomness as the unique site code
  • That leaves 16 bits to use as subnet IDs; I'm using :d0c: for obvious reasons.

Then systemctl restart docker, and now it should Just Work I guess. Your docker0 bridge will get the additional /64 subnet on it, and your containers will get an autoassigned IPv6 address.
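
A quick way to eyeball it (a sketch; assumes the stock busybox image, which includes the ip applet):

# Should show an inet6 address from fd21:1268:4a5:d0c::/64 on eth0
docker run --rm busybox ip -6 addr show dev eth0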

For local-only purposes this should be sufficient, but you probably won't be able to get off the docker host and reach other machines on the segment. To do that you either need to NAT (eww), or get the host to respond to NDP requests (the IPv6 equivalent of ARP) on behalf of the bridge subnet.

What I can see at this point is that packets go out fine, but can't make it back.

  • ping6 requests leave the container with nexthop = fd21:1268:4a5:d0c::1
  • docker host (fd21:1268:4a5:d0c::1) gets the packet and forwards it out eno1 towards the target
  • Target receives ping request, sends reply to fd21:1268:4a5:d0c:0:242:ac11:2
  • Reply is routed to the default gateway, and this is the sticking point now

I think we need an NDP proxy...
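
You can watch this failing from the docker host with tcpdump; neighbour solicitations are ICMPv6 type 135, which sits at byte 40 of the IPv6 packet:

# Watch for neighbour solicitations arriving on the LAN interface
tcpdump -n -i eno1 'icmp6 and ip6[40] == 135'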

NDP Proxy ndppd

The problem is as described here, and as above: https://forums.docker.com/t/ipv6-not-working/10171/7

And here's a guide to setting it up with ndppd: https://medium.com/@skleeschulte/how-to-enable-ipv6-for-docker-containers-on-ubuntu-18-04-c68394a219a2

apt install ndppd
cp /usr/share/doc/ndppd/ndppd.conf-dist /etc/ndppd.conf
vim /etc/ndppd.conf

You need to select the correct subnet here. If you've got public routable address space like I do, you need to carve out a small subnet. My LAN is a /64 subnet, so I'm using a /80 chunk of that and giving it to docker. You'll need to enter the same /80 subnet in /etc/docker/daemon.json as mentioned above.

Tweak up the rule for our subnet and interfaces:

  proxy eno1 {
    rule fd21:1268:04a5:d0c::/64 {
      # Either of these two would work fine, the default is auto
      auto
      iface docker0
    }
  }

Restart ndppd and it should answer NDP queries now.
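
A minimal check, assuming another LAN host whose interface is eth0 (ndisc6 is in the ndisc6 package, and the container address is the one from the ping trace above):

systemctl restart ndppd

# From the other host: send a neighbour solicitation and see who answers
ndisc6 fd21:1268:4a5:d0c:0:242:ac11:2 eth0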

Whew! All this just so that smokeping can reach out to IPv6 hosts outside the docker host.

Problems you might have

I couldn't get this working at first. What I determined was going on: other machines don't even bother doing an NDP query when I try to ping anything in the docker range. I think they look at the destination, realise it couldn't possibly be in the same subnet/LAN as them, and go via layer 3 (the router) instead of sticking to layer 2.

Which means the containers need to be in the same /64 subnet as the rest of the LAN: siblings of the docker host, but living inside it. They'll be hidden behind the bridge, and ndppd will make them visible to the rest of the network. I've grabbed 2404:e80:42e3:0:d0c::/80 and that works now; other hosts will find it with NDP.

The nice thing is that I could even have other docker hosts on the network using the same config. They'd generate different virtual MACs for each container, all living in the same /80 subnet. Containers on host A wouldn't be able to find containers on host B, but non-docker hosts would be able to find any docker container in that /80 subnet, because each ndppd only answers a query if it owns the container with the requested address.
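
So for the record, with routable space the two configs end up paired on the same /80 (values from above):

# /etc/docker/daemon.json (excerpt)
#   "ipv6": true,
#   "fixed-cidr-v6": "2404:e80:42e3:0:d0c::/80"

# /etc/ndppd.conf
proxy eno1 {
  rule 2404:e80:42e3:0:d0c::/80 {
    auto
  }
}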

Network tuning

Caddy complains that it can't set a large receive buffer for network connections:

Nov 03 11:22:19 illustrious.thighhighs.top caddy[1601301]: {"level":"info","ts":1667434939.2639456,"msg":"failed to sufficiently increase receive buffer size (was: 208 kiB, wanted: 2048 kiB, got: 416 kiB). See https://github.com/lucas-clemente/quic-go/wiki/UDP-Receive-Buffer-Size for details."}

So let's increase it.

echo -e "# Increase max recv buffer to about 2.5MiB\nnet.core.rmem_max=2500000" >> /etc/sysctl.d/20-network-recv-buffer.conf

Reverse proxy for services

Proxy software

Let's try out Caddy; I've been curious for a while now and it might meet all my needs.

Use official docs for a repo-packaged version: https://caddyserver.com/docs/install

apt install -y debian-keyring debian-archive-keyring apt-transport-https
curl -1sLf https://dl.cloudsmith.io/public/caddy/stable/gpg.key        > /etc/apt/trusted.gpg.d/caddy-stable.asc
curl -1sLf https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt > /etc/apt/sources.list.d/caddy-stable.list

apt update
apt install caddy

This uses a systemwide config in /etc/caddy/Caddyfile, and acts as a generic HTTP server initially. It's serving up a Caddy landing page from /usr/share/caddy at http://illustrious.thighhighs.top/

SSL cert

Pop it in /etc/ssl as usual.

cd /etc/ssl/
# This is a one-time action
openssl dhparam -out dhparams.pem 4096

# Then copy the cert and key and intermediate CA chain here
cp KEY CERT /etc/ssl/
chgrp caddy /etc/ssl/STAR_*
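
If you want to be sure the key matches the cert before Caddy picks them up, the usual trick is comparing modulus digests (assuming an RSA cert, filenames as used further below):

# These two digests should be identical
openssl x509 -noout -modulus -in /etc/ssl/STAR_thighhighs_top.crt | openssl md5
openssl rsa -noout -modulus -in /etc/ssl/STAR_thighhighs_top.key | openssl md5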

Smokeping

Run it as a docker container; it's more convenient and separates config and data from the installation.

https://hub.docker.com/r/linuxserver/smokeping

Prepare space for data and config using a logical volume

lvcreate -L 1G -n smokeping ubuntu-vg
mkfs.ext4 /dev/ubuntu-vg/smokeping
mkdir /data/smokeping

# Add to fstab
# Smokeping config and data
/dev/disk/by-uuid/a40142d8-06e0-44d7-b8bc-a3e20662cde2 /data/smokeping ext4 defaults 0 1

mount /data/smokeping
mkdir /data/smokeping/config
mkdir /data/smokeping/data
chown -R 1000:1000 /data/smokeping/config /data/smokeping/data

Run the container:

docker run -d \
  --name=smokeping \
  --hostname=illustrious \
  -e PUID=1000 \
  -e PGID=1000 \
  -e TZ=Australia/Sydney \
  -p 127.0.0.1:8000:80 \
  -v /data/smokeping/config:/config \
  -v /data/smokeping/data:/data \
  --restart unless-stopped \
  lscr.io/linuxserver/smokeping

Map it through with some caddy config

smokeping.thighhighs.top {
        reverse_proxy localhost:8000

        tls /etc/ssl/STAR_thighhighs_top.crtbundled /etc/ssl/STAR_thighhighs_top.key
}

Reload the config, and you should have a working smokeping again! \o/

systemctl reload caddy.service

Unifi controller

Move it to a slightly beefier machine, running via Docker for cleanliness. Ubiquiti's insistence on only supporting the Java 8 runtime is a nightmare, but that makes it perfect for Docker abstraction.

Prepare space

This is a somewhat bigger system that needs more disk space.

lvcreate -L 4G -n unifi ubuntu-vg
mkfs.ext4 /dev/ubuntu-vg/unifi
mkdir /data/unifi

## Add to fstab, use blkid to find the UUID
# Unifi controller data
/dev/disk/by-uuid/0a13b90e-904a-4803-896f-0f82e4a36518 /data/unifi ext4 defaults 0 1

mount /data/unifi
mkdir /data/unifi/config
chown -R 1000:1000 /data/unifi/config

Run container

docker run -d \
  --name=unifi-controller \
  -e PUID=1000 \
  -e PGID=1000 \
  -e MEM_LIMIT=1024 \
  -e MEM_STARTUP=1024 \
  -e TZ=Australia/Sydney \
  -p 192.168.1.13:8443:8443 \
  -p 192.168.1.13:3478:3478/udp \
  -p 192.168.1.13:10001:10001/udp \
  -p 192.168.1.13:8080:8080 \
  -p 192.168.1.13:1900:1900/udp \
  -p 192.168.1.13:8843:8843 \
  -p 192.168.1.13:8880:8880 \
  -p 192.168.1.13:6789:6789 \
  -p 192.168.1.13:5514:5514/udp \
  -v /data/unifi/config:/config \
  --restart unless-stopped \
  lscr.io/linuxserver/unifi-controller:latest

Migration

Import the backup from the old controller, then on the old controller switch the inform URL to the new IP address. We'll fix up DNS afterwards.

https://community.ui.com/questions/Replace-my-RPi-Controller-to-UC-CK-G2-plus-and-odd-behavior-in-the-Topology-view/36f5b1eb-ca15-46f2-a64b-c69d9628857e

Set the inform URL again on the new controller, because you've just restored a backup with the old IP.

TLS cert for unifi

Faff with the keystore so you can jam in your publicly signed cert. This is a script that I found and adapted.

Convert your normal PEM-format cert into a PKCS12 container. I don't understand all this but it works. This script needs to live inside a directory that's mapped into the dockerised unifi controller container (or at the very least, it needs to dump the .p12 file into a mapped directory).

#!/bin/bash
# Hacked together by Barney Desmond on 2022-11-03
#
# This assumes you're running a containerised Unifi Controller, but the SSL
# cert lives outside the container initially. We will repack it into a .p12
# file, then inside the container we'll import it into the Java keystore using
# the unifi-import-cert.sh script.

# This is the host-side of a Docker volume, you need to run this outside the
# container.
UNIFI_CONFIGDIR=/data/unifi/config
CERTFILE=STAR_thighhighs_top.p12

# Backup old .p12 file
cp -a "${UNIFI_CONFIGDIR}/${CERTFILE}" "${UNIFI_CONFIGDIR}/${CERTFILE}.backup.$(date +%F_%R)"

# Convert cert to PKCS12 format
# Ignore warnings
# Turns out we need to enable legacy mode, because Unifi's keytool can't read
# the new OpenSSL 3.0.2 encryption.
# https://community.ui.com/questions/New-Openssl-v3-may-break-your-controller-network-application-keystore/2e4133d9-d6dd-4a22-acfe-e5d671ffaee4
openssl pkcs12 -export -legacy \
        -inkey /etc/ssl/STAR_thighhighs_top.key \
        -in /etc/ssl/STAR_thighhighs_top.crt \
        -out "${UNIFI_CONFIGDIR}/${CERTFILE}" \
        -name unifi -password pass:unifi

cat <<EOF
Now go import the cert into the keystore, from inside the running container.
docker exec -it unifi-controller /bin/bash
EOF

Then you run this script inside the container. This script lives inside the mapped directory for convenience.

#!/bin/bash
# From https://util.wifi.gl/unifi-import-cert.sh which is now dead
# Modified by Barney Desmond on 2021-04-20 to just use a normal static paid-for cert.

# Author: Frank Gabriel, 01.01.2019
# Credits Kalle Lilja, @SprockTech and others
# Script location: /etc/letsencrypt/renewal-hooks/post/unifi-import-cert.sh (important for auto renewal)
# Tested with Debian 9 and UniFi 5.8.28, 5.9.22 and 5.9.32 - should work with any recent Unifi and Ubuntu/Debian releases

# This is where the keystore lives inside the container
UNIFI_DATADIR=/config/data

# Backup previous keystore
cp -a "${UNIFI_DATADIR}/keystore" "${UNIFI_DATADIR}/keystore.backup.$(date +%F_%R)"
#cp -a /var/lib/unifi/keystore /var/lib/unifi/keystore.backup.$(date +%F_%R)

# Install certificate
# Ignore warnings
keytool -importkeystore \
        -deststorepass aircontrolenterprise \
        -destkeypass aircontrolenterprise \
        -destkeystore "${UNIFI_DATADIR}/keystore" \
        -srckeystore STAR_thighhighs_top.p12 \
        -srcstoretype PKCS12 \
        -srcstorepass unifi \
        -alias unifi \
        -noprompt

Restart the docker container after running that, so it picks up the new cert and uses it:
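
docker restart unifi-controller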

Web space for thighhighs domain

  • Create LV for data

    lvcreate -L 1G -n www ubuntu-vg
    mkfs.ext4 /dev/ubuntu-vg/www
    mkdir /data/www
    
    ### Add to fstab
    # webdir
    /dev/disk/by-uuid/a40142d8-06e0-44d7-b8bc-a3e20662cde2 /data/www ext4 defaults 0 1
    
    mount /data/www
    mkdir /data/www/illustrious
    chown -R furinkan. /data/www/illustrious
  • Throw some content in there
  • Add a stanza to /etc/caddy/Caddyfile

    *.thighhighs.top {
            root * /data/www/illustrious
            file_server
            tls /etc/ssl/STAR_thighhighs_top.crtbundled /etc/ssl/STAR_thighhighs_top.key
    }
  • Reload the config: systemctl reload caddy

NFS mount from NAS

Want to sort through my files, and the NAS is well set up for this usage.

  1. apt install nfs-common

  2. Let's mount it with systemd; create a new unit for the mount: systemctl edit --force --full cargo.mount

    [Unit]
    Description=cargo volume from iowa
    After=network.target
    
    [Mount]
    What=iowa.thighhighs.top:/volume1/cargo
    Where=/cargo
    Type=nfs
    Options=_netdev
    
    [Install]
    WantedBy=multi-user.target
  3. Mount it: systemctl start cargo.mount
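
  4. Enable it at boot, since the unit already has an [Install] section: systemctl enable cargo.mount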
