roberta
- debian bullseye (11.0) x64
- Vultr VPS Cloud Compute: 1x CPU, 1gb RAM, 25gb disk, $5/mon
- IPv6 is autoconfigured to a static address using the enp1s0 MAC address, good for DNS
- 20gb of block storage attached, because the OS disk is too small for my site content
  - Official docs/guides here: https://www.vultr.com/docs/block-storage/
  - attached as /dev/vdb
  - formatted as GPT, single partition, ext4 filesystem, mounted at /data (rough setup sketch below)
{{{
UUID=def88a1b-85e9-461b-8833-ef1356de02fb /data ext4 defaults,noatime,nofail 0 0
}}}
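For the record, setting it up went roughly like this; a sketch reconstructed from the fstab entry above rather than a verbatim transcript (the Vultr doc linked above has the canonical steps):
{{{
# partition and format the volume (assumes it really is /dev/vdb; check lsblk first)
parted -s /dev/vdb mklabel gpt
parted -s /dev/vdb mkpart primary ext4 0% 100%
mkfs.ext4 /dev/vdb1

# grab the UUID for the fstab entry, add the line to /etc/fstab, then mount it
blkid /dev/vdb1
mkdir -p /data
mount /data
}}}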
what she hosts
- caddy, TLS frontend for:
  - blog.meidokon.net
  - astcd2.meidokon.net
  - mikanya.meidokon.net
  - despair.meidokon.net, despairfiles.meidokon.net
  - kesakoi.meidokon.net
  - caress.airtv.org, zalas.meidokon.net
  - 765.agency
  - tallgirls.info (expired domain)
  - bismrk.tallgirls.info
  - roberta.meidokon.net
  - furinkan.meidokon.net
  - moin.meidokon.net
  - wiki.meidokon.net
- docker
  - meidokon-moin
- php-fpm for the websites
  - astcd2
  - blog
- MariaDB for the websites
  - blog
  - despairworks_prod
  - mikanya_prod
build process
- Deploy it, collect SSH host keys, log in as root using your existing SSH key.
- Record IP addresses in DNS
Set timezone
timedatectl set-timezone Australia/Sydney
Set editor
echo "export EDITOR=vim" > /etc/profile.d/editor-vim.sh
Python
apt install python-is-python3
Disable HashKnownHosts
echo -e "Host *\n HashKnownHosts no" > /etc/ssh/ssh_config.d/99-global.conf
Make vim mouse-handling not annoying
{{{
cat <<EOF > /etc/vim/vimrc.local
syntax on
set background=dark
set modeline
set scrolloff=3
set mouse=
set ttymouse=
filetype plugin indent on
EOF
}}}
Install packages
apt install ack jq make elinks nmap whois screen
Configure screen
curl -o ~/.screenrc https://gist.githubusercontent.com/barneydesmond/d16c5201ed9d2280251dfca7c620bb86/raw/.screenrc
Set FQDN
hostnamectl set-hostname roberta.meidokon.net
updatedb and reboot
updatedb
reboot
tweak firewall
The ISP firewall will have things locked down already, but defence in depth is good.
ufw is already installed and permits only SSH, we need HTTP too.
{{{
ufw allow http
ufw allow https
ufw prepend allow from 2404:e80:42e3:0::/64 to any app SSH
ufw prepend allow from 87.121.72.135/32 to any app SSH
# Existing rule is too broad
ufw delete allow 22
}}}
install apps
Infra apps
apt install imagemagick
apt install mariadb-server
Caddy for HTTP, following official docs:
{{{
apt install -y debian-keyring debian-archive-keyring apt-transport-https
curl -1sLf https://dl.cloudsmith.io/public/caddy/stable/gpg.key > /etc/apt/trusted.gpg.d/caddy-stable.asc
curl -1sLf https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt > /etc/apt/sources.list.d/caddy-stable.list
apt update
apt install caddy
}}}
This uses a systemwide config in /etc/caddy/Caddyfile, and acts as a generic HTTP server initially. It's serving up a Caddy landing page from /usr/share/caddy at http://roberta.meidokon.net/
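For reference, the stock /etc/caddy/Caddyfile boils down to roughly this (trimmed; the packaged file is mostly comments):
{{{
:80 {
    root * /usr/share/caddy
    file_server
}
}}}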
create user account
{{{
useradd -m -s /bin/bash blog
su - blog
mkdir -p ~/public_html/blog.meidokon.net
echo "<?php phpinfo(); ?>" > ~/public_html/blog.meidokon.net/index.php
}}}
get php working
Install PHP packages, Debian 11 is on PHP 7.4
apt install php7.4-common php7.4-fpm php7.4-curl php7.4-mysql php7.4-xml php-imagick php7.4-cli php7.4-mbstring php7.4-zip php7.4-intl
- Create a PHP-FPM pool config
- cd /etc/php/7.4/fpm/pool.d
- cp www.conf blog.conf
Edit it up kinda like so:
{{{
[blog]
user = blog
group = blog
listen = /run/php/php7.4-fpm-blog.sock
listen.owner = caddy
listen.group = caddy
pm = dynamic
pm.max_children = 5
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3
}}}
Restart php-fpm: systemctl restart php7.4-fpm.service
Setup a vhost in /etc/caddy/Caddyfile above the default vhost
{{{
roberta.meidokon.net {
    root * /home/blog/public_html/blog.meidokon.net
    file_server
    php_fastcgi unix//run/php/php7.4-fpm-blog.sock
    log {
        output file /var/log/caddy/blog.log
    }
}
}}}
Reload the config: systemctl reload caddy
Now try reaching the domain; it should work, with TLS magically enabled.
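A quick sanity check from anywhere, assuming DNS is already pointing here:
{{{
# expect a 200, and the issuer should be Let's Encrypt or ZeroSSL (Caddy uses either)
curl -sI https://roberta.meidokon.net/ | head -n 1
curl -sv https://roberta.meidokon.net/ -o /dev/null 2>&1 | grep -i issuer
}}}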
Wordpress
Running a vanilla Wordpress 5.9 for https://blog.meidokon.net/
Wordpress has come a long way, and I'm genuinely impressed. The editor is fantastic now, and the new Content Blocks scheme makes it a contender against Squarespace in my eyes, except you can self-host instead of using their cloud. Full control over speed and caching is very, very nice.
Grab https://wordpress.org/latest.tar.gz and unpack it to ~blog/public_html/blog.meidokon.net
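Roughly, as the blog user; the tarball unpacks into a wordpress/ subdirectory, so the contents need shuffling into the webroot:
{{{
su - blog
cd ~/public_html
wget https://wordpress.org/latest.tar.gz
tar -xzf latest.tar.gz
rsync -a wordpress/ blog.meidokon.net/
rm -rf wordpress latest.tar.gz
}}}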
Setup mysql DB, this is all muscle memory now
{{{
CREATE USER 'blog'@'localhost' IDENTIFIED BY 'ASecurePassword';
CREATE DATABASE blog;
GRANT ALL PRIVILEGES ON blog.* TO 'blog'@'localhost' WITH GRANT OPTION;
}}}
Hit the domain and it'll ask you for setup credentials.
Trigger periodic backups
I'm using the "Backup Migration" plugin for Wordpress to get a periodic dump of the site: https://backupbliss.com/
It takes a weekly backup on Monday at 09:00 (Sydney time), and retains the last 8 backups. Backups are stored in ~blog/public_html/backups/
How does it work? It slides into the normal request flow and performs the necessary actions if it determines they're due. But what happens if your site isn't trafficked enough? Then it'll always be late.
This isn't really a problem, but I like my regular backups. I've set up a cronjob owned by the blog user to hit the frontpage every hour like so:
{{{
# m h dom mon dow command
0 * * * * wget -O /dev/null --quiet --timeout=60 --header='X-Purpose: trigger a periodic backup' https://blog.meidokon.net/
}}}
Tune PHP for uploads etc
Thanks to this page: https://www.kasareviews.com/fix-upload-max-filesize-wordpress-error/
Edit /etc/php/7.4/fpm/php.ini and set:
{{{
post_max_size = 32M
upload_max_filesize = 20M
}}}
Then restart the php-fpm service.
Importing a wordpress site
Doing the XML dump gets you most of the way there. Assuming the source site is still up, the importer will download all the media from the old site and bring it over. That rocks!
What you still need to do:
- Download themes and activate
- Customise the active theme
- Colours etc
- Assign menus (retained) to location in theme
- Install the same plugins
- Activate them and enable auto-updates
Import astcd2 wordpress
Create user account
{{{
useradd -s /bin/bash -m astcd2
su - astcd2
mkdir -p ~/public_html/mikanya.meidokon.net/
echo "<?php phpinfo(); ?>" > ~/public_html/mikanya.meidokon.net/index.php
}}}
- Copy password hash from old server
Copy authorized_keys from old server
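A sketch of those two copy steps; exact commands are assumed, and the hash is elided:
{{{
# on arkroyal: grep '^astcd2:' /etc/shadow and grab the hash field
# on roberta: apply it, single-quoted so the $ characters survive
usermod -p '$6$...' astcd2

# bring the SSH keys across and fix ownership/perms
rsync -avx arkroyal:/home/astcd2/.ssh/ /home/astcd2/.ssh/
chown -R astcd2:astcd2 /home/astcd2/.ssh
chmod 700 /home/astcd2/.ssh
}}}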
Setup new PHP FPM pool, duplicate existing file and change these settings
{{{
[poolname]
user =
group =
listen =
}}}
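For astcd2 that presumably ends up as something like this, with the socket path taken from the Caddy vhost below and listen.owner/group copied from the blog pool:
{{{
[astcd2]
user = astcd2
group = astcd2
listen = /run/php/php7.4-fpm-astcd2.sock
listen.owner = caddy
listen.group = caddy
}}}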
Reload the config: systemctl reload php7.4-fpm.service
Setup a vhost in /etc/caddy/Caddyfile above the default vhost
{{{
astcd2.meidokon.net {
    root * /home/astcd2/public_html/mikanya.meidokon.net
    file_server
    php_fastcgi unix//run/php/php7.4-fpm-astcd2.sock
    log {
        output file /var/log/caddy/astcd2.log
    }
}
}}}
Reload the config: systemctl reload caddy
- Setup mysql database for the site
Copy ~/.my.cnf from old server, chmod 0600
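If you're recreating it instead of copying, it's just the standard client credentials stanza, something like:
{{{
[client]
user = astcd2
password = ASecurePassword
}}}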
Create the DB with same credentials
{{{
CREATE USER 'astcd2'@'localhost' IDENTIFIED BY 'ASecurePassword';
CREATE DATABASE mikanya_prod;
GRANT ALL PRIVILEGES ON mikanya_prod.* TO 'astcd2'@'localhost' WITH GRANT OPTION;
}}}
- Take backups of the site on the old server, then rsync them to new server as astcd2
Import DB dump:
{{{
gunzip 2022-01-31_mikanya_db_backup.gz
mv 2022-01-31_mikanya_db_backup 2022-01-31_mikanya_db_backup.sql
mysql mikanya_prod < 2022-01-31_mikanya_db_backup.sql
}}}
Copy the web content over
rsync -avx arkroyal:/home/astcd2/public_html/mikanya.meidokon.net/ ./public_html/mikanya.meidokon.net/
- Patch up DNS to point at the new server
Go poke /wp-admin/ and make it happy. We need to fix image loading, because it's using the old domain name with http instead of https.
Also need to fix the Google Analytics snippet, which loads via http://www.blah; that should be protocol-relative, i.e. //www.blah
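One blunt way to do the domain/scheme fixup is straight in the database. Domains below are placeholders, and plain REPLACE() won't touch serialized data, so wp-cli's search-replace is the safer tool if it's installed:
{{{
UPDATE wp_posts
   SET post_content = REPLACE(post_content, 'http://old.example.net', 'https://new.example.net');
UPDATE wp_options
   SET option_value = REPLACE(option_value, 'http://old.example.net', 'https://new.example.net')
 WHERE option_name IN ('siteurl', 'home');
}}}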
765.agency
- User account furinkan already exists, su to them
Create webhome dir: mkdir -p ~/public_html/765.agency
Copy content
rsync -avx arkroyal:/home/furinkan/public_html/765.agency/ /home/furinkan/public_html/765.agency/
Copy cloudflare origin certs, as root
rsync -avx arkroyal:/etc/ssl/private/cloudflare* /etc/ssl/private/
- Update DNS for 765.agency at Cloudflare DNS mgmt
- Add domain to Caddy to serve up the homedir as static content
{{{
765.agency {
    root * /home/furinkan/public_html/765.agency
    file_server
    tls /etc/ssl/private/cloudflare_origin-765.agency.crt /etc/ssl/private/cloudflare_origin-765.agency.key
}
}}}
Fix cert perms so caddy can read them
{{{
chgrp caddy /etc/ssl/private/cloudflare_origin-765.agency.*
chmod g+r /etc/ssl/private/cloudflare_origin-765.agency.key
chgrp caddy /etc/ssl/private/
chmod g+rx /etc/ssl/private/
}}}
Reload the config: systemctl reload caddy
Trying docker for moinmoin wiki
apt install docker.io
Hack up the Dockerfile, and you know what, just fork and tweak the whole thing: https://github.com/barneydesmond/moinmoin-wiki-docker
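That assumes the fork is checked out locally first; the ~/git path is inferred from the make -C invocation further down:
{{{
mkdir -p ~/git
git clone https://github.com/barneydesmond/moinmoin-wiki-docker ~/git/moinmoin-wiki-docker
cd ~/git/moinmoin-wiki-docker
}}}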
make build
{{{
useradd -m -s /bin/bash moin
usermod -aG docker moin
su - moin
mkdir ~/meidokon_wiki
docker run -e TZ=Australia/Sydney -e MOIN_UID=1003 -e MOIN_GID=1003 -d -p 8000:80 -v /home/moin/meidokon_wiki:/usr/local/share/moin/data --name meidokon_wiki meidokon-moin
}}}
Migrate the content over now
[root@roberta:~] rsync -avx arkroyal:/home/moin/moin_instance/data/ /home/moin/meidokon_wiki/
Kill and rm the container, then run it again as moin user
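Which is roughly this, reusing the same run command as above but under the moin account this time:
{{{
docker kill meidokon_wiki
docker container rm meidokon_wiki
su - moin
docker run -e TZ=Australia/Sydney -e MOIN_UID=1003 -e MOIN_GID=1003 -d -p 8000:80 -v /home/moin/meidokon_wiki:/usr/local/share/moin/data --name meidokon_wiki meidokon-moin
}}}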
Yeah this seems to work. Jump into the container and clean the cache
moin --config-dir=/usr/local/share/moin/wikiconfig.py maint cleancache
Flip the DNS over to make it work, updating your caddy config and reloading it.
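On the Caddy side that's presumably just another vhost proxying to the container's published port; the hostname and log path here are assumptions:
{{{
wiki.meidokon.net {
    reverse_proxy localhost:8000
    log {
        output file /var/log/caddy/wiki.log
    }
}
}}}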
Yeah it looks surprisingly good, I even fixed up the light novel title generator too. You can reload the python runtime by doing:
{{{
root@roberta:~# make -C ~/git/moinmoin-wiki-docker/ shell
uwsgi --reload /run/uwsgi-moinmoin.pid
logout
}}}
Little Makefile for management
We want the process to run as the moin user, and it needs to come up after reboot.
{{{
IMAGENAME := meidokon-moin
DATADIR := /home/moin/meidokon_wiki
HTTP_PORT := 8000
RUNNING_CONTAINER_NAME := meidokon_wiki
TZ := Australia/Sydney
MOIN_UID := 1003
MOIN_GID := 1003

run:
	-docker kill $(RUNNING_CONTAINER_NAME)
	sleep 2
	-docker container rm meidokon_wiki
	docker run -e TZ=$(TZ) -e MOIN_UID=$(MOIN_UID) -e MOIN_GID=$(MOIN_GID) -d --restart unless-stopped -p $(HTTP_PORT):80 -v $(DATADIR):/usr/local/share/moin/data --name $(RUNNING_CONTAINER_NAME) $(IMAGENAME)

shell:
	docker exec -it meidokon_wiki /bin/bash

cleancache:
	# Get a shell with `make shell` then run:
	# moin --config-dir=/usr/local/share/moin/wikiconfig.py maint cleancache

reload-python-runtime:
	# Get a shell with `make shell` then run:
	# uwsgi --reload /run/uwsgi-moinmoin.pid
	# logout
}}}