
Docker and Server Apps #

The first part of this page is about Docker and my best practices. The second part is about the apps that I have installed on my server. I install almost all my apps via docker.

Docker #

Docker website

Install #

You can install docker via the official website or as packages from your package manager.

  • I would recommend going with the official website for docker engine.
  • There are also small differences in the commands, for example docker compose vs docker-compose if you use the version from your package manager.

64 bit

  • I use a 64 bit installation on my RasPi and consequently also use the 64 bit installation of docker.
  • This also has advantages, since some programs only run on 64 bit.
  • Consequently use the 64 bit debian repo guide - the convenience script also works on the RasPi if you have the 64 bit version

Steps, manually (sketched below):

  • add the repo
  • install docker via the repo
  • check the installation with docker run hello-world
  • You can add users to the docker group (usermod -aG docker pi) if users shall be able to execute docker without sudo. Note: This is not recommended, since it is dangerous: normal users can get root access this way! Better execute all docker commands as root only!
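
A sketch of these manual steps, roughly following the official Debian repo guide (check the official documentation for the current key and repository lines; this should also work on a 64 bit RasPi OS):

# Add Docker's official GPG key and apt repository
sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/debian/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/debian $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update

# Install docker engine and the compose plugin
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

# Check the installation
sudo docker run hello-world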

Docker compose vs Docker run #

Docker compose

  • Docker run is the command line version to start single containers via a command with flags added as arguments
  • Docker compose uses a docker compose file (docker-compose.yml) where your docker configuration is configured and stored
  • This has the advantage that you can
    • start multiple containers at once
    • have all container configurations stored permanently
    • easily update the containers (stop, update, start)

Converter to convert docker run command into docker compose script

I will only use docker compose scripts!
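
As a small illustration of the difference, this is roughly what the Portainer setup from the folder structure example below looks like as a single docker run command (the kind of command the converter translates into a compose script):

# Same idea as the compose file further below, but as one long docker run command
docker run -d \
	--name portainer \
	--restart always \
	-p 8000:8000 \
	-p 9443:9443 \
	-v /var/run/docker.sock:/var/run/docker.sock \
	-v "$(pwd)/portainer:/data" \
	portainer/portainer-ce:latest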

My Docker Folder Structure #

Also read the getting started

  • And the part about docker volumes
    • These are used to store data in your containers between restarts of your containers
    • You can think of it like the hard drive of your PC

To manage all my programs as containers in a convenient way, I use the following file structure:
  • I store all data related to my docker containers in the folder /opt/docker
  • Within this folder I create sub folders for different program groups, like a folder /opt/docker/info for system containers like #Portainer and #Docker Notify Diun
  • Within those sub folders, I have a docker compose file, which stores the configuration for this container group
  • In this folder are also further subfolders, which represent the volumes (“hard drives”) of my containers
  • This way all data related to these containers is stored within this folder
    • Except for some volume mounts, which are necessary if the container needs access to specific host OS folders; but these folders are normally only used to read data, not to write data
  • I also use environment files to store my passwords, such that I do not have to specify those within the docker compose file
    • So next to the docker compose file, I have a .env file, which contains my passwords like POSTGRES_PASSWORD=pw12345678...
    • In the docker compose file, I can then specify the password like this: ${POSTGRES_PASSWORD}
  • Finally, if I want to transfer this program to another system, I simply tar the whole folder and untar it on the other system; all data is preserved and transferred this way (see the sketch below) :)
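
A minimal sketch of such a transfer, assuming the container group lives in /opt/docker/info (the target host name is just an example):

# Stop the containers and pack the whole group folder (compose file, .env, volume folders)
cd /opt/docker/info
docker compose down
cd /opt/docker
tar czf info.tar.gz info

# Copy it over and unpack it on the other system
scp info.tar.gz root@other-server:/opt/docker/
ssh root@other-server 'cd /opt/docker && tar xzf info.tar.gz && cd info && docker compose up -d'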

Example of a docker compose file where one host OS folder is mounted, which is needed for some functionality, while the other folder is the data folder of the container, where real data is stored between restarts and which is mounted to a subfolder:

services:
	portainer:
		image: portainer/portainer-ce:latest
		container_name: portainer
		ports:
			- 8000:8000
			- 9443:9443
		volumes:
			- /var/run/docker.sock:/var/run/docker.sock
			- ./portainer:/data
		restart: always

Update Containers #

You can put the following script into your home directory on your RasPi and execute it every time you want to update all containers, using the docker compose functionality

  • You can get notifications if new container updates are available, see further below for how to do that #Docker Notify Diun
  • The script will cd to the paths where your docker compose files are located and then execute docker compose pull to get the latest images and docker compose up -d to update the containers; it will also display the container logs, filtered via grep for specific keywords like “error”
  • It will also start the package updates
#!/bin/bash

# Docker update function
docker_update() {
	echo ""
	echo "###"
	echo "Start docker update"
	echo "###"
	docker compose pull
	docker compose up -d
	sleep 5
	docker compose logs -t --since 168h | grep -i err | grep -v -e "PHP_ERROR_LOG" -e "OCSP stapling"
}

# Main

# Docker
cd /opt/docker/dns
docker_update
cd /opt/docker/info
docker_update
cd /opt/docker/apps
docker_update

echo ""
echo "###"
echo "Docker Prune"
echo "###"
docker image prune -f

# APT
echo ""
echo "###"
echo "APT"
echo "###"
apt update && apt upgrade && apt autoremove

You can also use this script to restart all containers, if there are some problems during the update

#!/bin/bash

# Docker update function
docker_restart() {
	echo ""
	echo "###"
	echo "Start docker restart"
	echo "###"
	docker compose down
	docker compose up -d
	sleep 1
}

# Main

# Docker
cd /opt/docker/dns
docker_restart
cd /opt/docker/info
docker_restart
cd /opt/docker/apps
docker_restart

Volume Backups #

Option 1 #

  • I use Kopia to do incremental backups of the whole container folders
  • If there are databases, I additionally dump those via the respective commands (see the sketch after this list) and then also use Kopia to back up the dumped files
  • You can find an explanation to this Kopia backup process here
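
A sketch of such dumps, assuming container, user and database names like in the compose files further below (the target folder is just an example):

# Dump a MariaDB database from its container (password is read from the container environment)
docker exec mariadb sh -c 'mysqldump -u nextcloud -p"$MYSQL_PASSWORD" nextcloud' > /opt/docker/backup/nextcloud-db.sql

# Dump a PostgreSQL database from its container
docker exec postgres pg_dumpall -U wikijs > /opt/docker/backup/wiki-db.sql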

Option 2 #

Theoretically, it is better to let docker handle everything and to only access the files the way the docker container sees them. This is more complicated, but better if the data is distributed somewhere on the host file system. Official guide

However, I do not do this, since it does not have an advantage for me.

Backup data example for vaultwarden and caddy:

docker run --rm --volumes-from vaultwarden -v /opt/docker/backup:/backup vaultwarden/server tar cvf /backup/backup_vaultwarden.tar /data

docker run --rm --volumes-from caddy -v /opt/docker/backup:/backup caddy:2 tar cvf /backup/backup_caddy.tar /data
  • mounts the volumes of the container vaultwarden
  • additionally mounts a backup folder (/opt/docker/backup here) into the new container
  • uses the same docker image as the container that shall be backed up
  • executes a command in the new container: tar, with the destination (inside the newly mounted backup volume) and the source (the data location of the old container that shall be backed up); a matching restore is sketched below
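
For completeness, a matching restore would extract the archive back into the data volume (a sketch; best done while the container is stopped):

docker run --rm --volumes-from vaultwarden -v /opt/docker/backup:/backup vaultwarden/server tar xvf /backup/backup_vaultwarden.tar -C /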

This script tries to automate this process:

#!/bin/bash
# This script allows you to backup a single volume from a container
# Data in given volume is saved in the current directory in a tar archive.
CONTAINER_NAME=$1
VOLUME_NAME=$2

usage() {
	echo "Usage (backup will be created in current folder): $0 [container name] [volume name]"
	exit 1
}

if [ -z "$CONTAINER_NAME" ]
then
	echo "Error: missing container name parameter."
	usage
fi

if [ -z "$VOLUME_NAME" ]
then
	echo "Error: missing volume name parameter."
	usage
fi

echo ""
echo "### Backup started for $CONTAINER_NAME : $VOLUME_NAME ###"
echo ""
sudo docker run --rm --volumes-from "$CONTAINER_NAME" -v "$(pwd)":/backup busybox tar cvf /backup/backup.tar "$VOLUME_NAME"

This script tries to automate the process of restoring a backup:

#!/bin/bash
# This script allows you to restore a single volume from a container
# Data is restored into the volume at the same path it was backed up from
NEW_CONTAINER_NAME=$1
BACKUP_NAME=$2

usage() {
	echo "Usage only from within the folder containing the backup: $0 [container name] [backup name]"
	exit 1
}

if [ -z "$NEW_CONTAINER_NAME" ]
then
	echo "Error: missing container name parameter."
	usage
fi

if [ -z "$BACKUP_NAME" ]
then
	echo "Error: missing backup name parameter."
	usage
fi

sudo docker run --rm --volumes-from "$NEW_CONTAINER_NAME" -v "$(pwd)":/backup busybox tar xvf "/backup/$BACKUP_NAME"

This script uses the previous backup scripts to do multiple backups of specific containers:

#!/bin/bash
# Backup all my containers

# ./docker_backup.sh container_name container_folder
# mv -f backup.tar backup_container_name_container_folder.tar

cd /opt/docker/backup

# Portainer
./docker_backup.sh portainer data
mv -f backup.tar backup_portainer.tar
# Vaultwarden
./docker_backup.sh vaultwarden data
mv -f backup.tar backup_vaultwarden.tar
# Caddy
./docker_backup.sh caddy etc/caddy/Caddyfile
mv -f backup.tar backup_caddy_file.tar
./docker_backup.sh caddy config
mv -f backup.tar backup_caddy_config.tar
./docker_backup.sh caddy data
mv -f backup.tar backup_caddy_data.tar
# Pi-hole
./docker_backup.sh pihole etc/pihole
mv -f backup.tar backup_pihole_etc.tar
./docker_backup.sh pihole etc/dnsmasq.d
mv -f backup.tar backup_pihole_dnsmasq.tar
# Diun
./docker_backup.sh diun etc/diun/diun.yml
mv -f backup.tar backup_diun_yml.tar
./docker_backup.sh diun data
mv -f backup.tar backup_diun_data.tar

My Docker Network Structure #

  • I have many containers, which somehow access the internet or provide sockets that shall be accessible via the internet (like Nextcloud and so on)
    • To manage the access between containers and the internet, I use one single entry point, my caddy web server (also see the caddy guide later), which is used as a reverse proxy to forward all queries to the correct containers
  • The different containers also have to communicate with each other

To solve these two problems, I use the docker network functionality

Container Networks #

If two containers have to interact with each other, I specify a network within the docker compose file. E.g. if one container needs access to a database container, both get the same network:

...
mariadb:
	...
	networks:
		- db
...

nextcloud:
	...
	networks:
		- db
...

networks:
	db:

And then there is one special case, which is my caddy reverse proxy server. Since I have plenty of different containers started from different docker compose files, I create an external network once manually, which every container that wants to be accessible from the internet via some domain will be part of.

  • Create the caddy network manually via a command:
    • docker network create --subnet=172.90.0.0/16 caddy
    • I also specified a subnet, which is sometimes necessary if some containers need a static IP address (this becomes interesting later for some containers)
  • And now I can specify within the docker compose files that these containers shall also be part of this network
...
mariadb:
	...
	networks:
		- db
...

nextcloud:
	...
	networks:
		- db
		- caddy
...

networks:
	db:
	caddy:
		external: true

Reverse Proxy #

Normally containers would open some ports to the host OS, such that they are accessible from the internet, like this:

...
nextcloud:
	...
	ports:
		- 80:80
		- 443:443
...

However, the only container with open ports that I have is my caddy container. Since the caddy container is in the same network as e.g. the Nextcloud container, it can still directly access the Nextcloud container without the need for open ports. Consequently, it can forward traffic arriving for a specific domain directly to the Nextcloud container on e.g. port 80. Check out the guide on caddy further below. To get an understanding, a caddy file would look like this:

https://cloud.my-domain.de:443 {
	# Use the ACME HTTP-01 challenge to get a cert for the configured domain.
	tls {$EMAIL}

	rewrite /.well-known/carddav /remote.php/dav
	rewrite /.well-known/caldav /remote.php/dav

	reverse_proxy nextcloud:80
}

We can see that I can directly forward all traffic arriving for cloud.my-domain.de to the container named nextcloud on port 80

There are also containers which shall only be accessible in my local network. Check out the guide about Pi-hole for how I use local DNS to access those containers nicely. Furthermore, I also use reverse proxies for those containers. This would look like this (but also check the caddy guide):

https://portainer.pi4.lan:443 {
	tls internal
	reverse_proxy portainer:9000
}

Other Network Information #

Some information I often forget.

How to create a network within docker compose and specify the subnet:

networks:
	my-network:
		ipam:
			config:
			 - subnet: 172.25.0.0/16

Docker Commands #

  • docker compose up -d
  • docker compose down
  • docker compose logs -f Display logs since start of container
  • docker compose pull

Commands in container

  • docker exec -it container_name ls Execute a command within the container
  • docker exec -it container_name /bin/bash Open a command shell on container
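
A few more everyday commands I find handy (the service name nextcloud is just an example):

docker ps                                  # list running containers
docker stats --no-stream                   # one-shot resource usage per container
docker compose logs --tail 100 nextcloud   # last 100 log lines of one service
docker compose restart nextcloud           # restart a single service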

Docker Images #

  • You can create your own docker images and modify existing ones
  • A use case might be that you want to add a specific program to a docker image
    • Like having ffmpeg installed in the nextcloud docker container, since you need it to display videos better
  • Little guide on modifying docker images
  • Manual for build in docker compose
  • I simply create a Dockerfile in a folder named image next to my docker-compose.yml file
  • Now I fill the dockerfile with content
FROM nextcloud:25

RUN apt-get update
RUN apt-get install -y ffmpeg
  • And adjust the docker-compose.yml file
    • replace the image: part with your own custom building
    build:
      context: ./image
      dockerfile: Dockerfile
      pull: true
  • pull: true is specified to always pull the latest version of the base image referenced in the Dockerfile
    • In this case nextcloud
  • Now you can start your program as usual with
    • docker compose up -d
    • it automatically pulls the base image and builds the new custom image (a quick verification is sketched below)
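
To check that the added program really ended up in the custom image, something like this should work (assuming the container name nextcloud from the compose file):

# Rebuild explicitly if needed (docker compose up -d also builds when required)
docker compose build --pull
docker compose up -d

# Verify that ffmpeg is available inside the container
docker exec -it nextcloud ffmpeg -version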

Apps #

This section documents the apps that I run on my RasPi servers via docker

  • Note: All containers that rely on being accessible from the internet are part of the caddy network; consequently, the presented compose files only work if you have the same setup as I described in #My Docker Network Structure.
  • I also add a small snippet of the relevant part of the caddy file to every app, such that you can easily set up your reverse proxies

Caddy #

Caddy is the webserver that I use to redirect all internet queries via reverse proxies. You can also use nginx or Apache, but I prefer caddy, since it is very user-friendly (it automatically handles SSL certificates via Let's Encrypt) and very well documented.

Extract of docker compose file:

services:

	caddy:
		image: caddy:2
		container_name: caddy
		ports:
			- 80:80	# Needed for the ACME HTTP-01 challenge.
			- 443:443
		volumes:
			- ./caddy/Caddyfile:/etc/caddy/Caddyfile
			- ./caddy/caddy-config:/config
			- ./caddy/caddy-data:/data
		environment:
			EMAIL: ${EMAIL} # The email address to use for ACME registration.
		networks:
			caddy:
				ipv4_address: 172.90.0.100
		restart: always
	 
networks:
	caddy:
		external: true
  • I give the caddy server a static IP within the external caddy network, since it is sometimes necessary to know the IP address of the reverse proxy server within other containers that rely on it

Caddyfile #

The caddy file defines the setup of your webserver

  • You can specify how specific queries are handled
  • I mainly use it for reverse proxies (to forward the traffic)
    • I have some domains, which shall be accessible from the internet
    • You have to have a domain pointing to your IP address
      • Your router must have open ports 80 and 443 for your server
      • In your caddy file you have to specify that you want to automatically let caddy handle everything, example:
https://your-domain.de:443 {
	# Use the ACME HTTP-01 challenge to get a cert for the configured domain.
	tls {$EMAIL}

	respond "Welcome to my website."
}
      • That is all, now you will receive an email if the generation was successful
      • You can also check the docker logs for errors: docker compose logs
      • Errors are most likely caused by your server being inaccessible from the internet (ports not open or domain not pointing to your server), check Network#Server Global Accessible again in this case
    • If you have domains, which shall only be accessible from the local network
      • You can use caddy to create self-signed certificates, like this:
https://pi4.lan:443 {
	tls internal
	respond "Welcome to my local only website."
}
      • Now you also have an encrypted connection to your website, which is only accessible from the local network
      • Note: This only works if you have a local DNS resolver like #Pi-hole and Cloudflared, which resolves something like pi4.lan for you. If that is not the case, you can simply replace the domain within the caddy file with the static IPv4 address of your server. This is also fine :)
  • I also have a simple caddy setup to serve some static sites (also see #Hugo which is my static site generator)
    • You can put your static sites for example into the caddy data vault caddy/caddy-data/websites/your-files, which corresponds to the following path within the caddy container: /data/websites/your-files
https://website.my-domain.de:443 {
	# Use the ACME HTTP-01 challenge to get a cert for the configured domain.
	tls {$EMAIL}
	
	root * /data/websites/your-files
	file_server
}

An example of how a caddy file could look later, if you have multiple containers running that need to be accessible from the internet:

https://cloud.my-domain.de:443 {
	# Use the ACME HTTP-01 challenge to get a cert for the configured domain.
	tls {$EMAIL}

	rewrite /.well-known/carddav /remote.php/dav
	rewrite /.well-known/caldav /remote.php/dav

	reverse_proxy nextcloud:80
}

https://my-domain.de:443 {
	# Use the ACME HTTP-01 challenge to get a cert for the configured domain.
	tls {$EMAIL}

	respond "Welcome to my website"
}

https://website.my-domain.de:443 {
	# Use the ACME HTTP-01 challenge to get a cert for the configured domain.
	tls {$EMAIL}
	
	root * /data/websites/your-files
	file_server
}

https://php.lan:443 {
	tls internal
	reverse_proxy phpmyadmin:80
}

https://portainer.pi4.lan:443 {
	tls internal
	reverse_proxy portainer:9000
}

Reload caddy file

docker exec -ti caddy caddy reload --config /etc/caddy/Caddyfile
  • This somehow does not work within the docker container for me…; I need to run docker restart caddy to make changes visible

Caddy directives for the caddyfile: https://caddyserver.com/docs/caddyfile/directives

Logging

  • Caddy logging
  • Since caddy is used as a reverse proxy for almost all my services, I can simply log everything via caddy
  • In all the caddyfiles, you can simply add a location where the logs shall be written to
my-domain.de {
	# Use the ACME HTTP-01 challenge to get a cert for the configured domain.
	tls {$EMAIL}

	respond "Welcome to my website"
	
	log {
		output file /var/log/caddy/main.log {
		}
	}
}

Docker Notify Diun #

Diun checks the docker registry for new versions of all containers and sends a notification by mail

services:
	diun:
		image: crazymax/diun:latest
		container_name: diun
		command: serve
		volumes:
			- "/var/run/docker.sock:/var/run/docker.sock"
			- "./diun/data:/data"
			- "./diun/diun.yml:/etc/diun/diun.yml"
		environment:
			- "TZ=Europe/Berlin"
			- "LOG_LEVEL=info"
			- "LOG_JSON=false"
			- "DIUN_WATCH_WORKERS=20"
			- "DIUN_WATCH_SCHEDULE=0 0 16 * * *"
			- "DIUN_PROVIDERS_DOCKER=true"
			- "DIUN_PROVIDERS_DOCKER_WATCHBYDEFAULT=true"
		dns:
			- 1.1.1.1
		restart: always
		
networks:
	caddy:
		external: true

and the config file ./diun/diun.yml (mounted to /etc/diun/diun.yml above)

notif:
	mail:
		host: smtp.strato.de
		port: 465
		ssl: true
		insecureSkipVerify: false
		username: webmaster@your-domain.de
		password: pw
		from: webmaster@your-domain.de
		to:
			- your-mail@provider.de
		templateTitle: "{{ .Entry.Image }} released for pi4"
		templateBody: |
			Docker tag {{ .Entry.Image }} which you subscribed to through {{ .Entry.Provider }} provider has been released.

Portainer #

You can use Portainer to manage docker via a GUI (wiki):

services:
	
	portainer:
		image: portainer/portainer-ce:latest
		container_name: portainer
		restart: always
		#ports:
			#- 8000:8000 #necessary for agent, if connect from other device
			#- 9000:9000
		volumes:
			- /var/run/docker.sock:/var/run/docker.sock
			- ./portainer/portainer_data:/data
		networks:
			- caddy
		
networks:
	caddy:
		external: true

Netdata #

  • You can use Netdata to monitor your server
    • It is running a website with all the monitoring content on your server, which you can access via the browser from your PC or phone
  • The installer is a single script, which will add the netdata repository and install the netdata package; it will also automatically start the webserver / website
  • It will run on http://your-server-ip:19999
  • You can also create a reverse proxy caddy file if you like to
    • I run netdata behind this reverse proxy via the caddy network gateway, to have access to the host's localhost from inside the caddy container
https://monitor.lan:443 {
	tls internal
	reverse_proxy 172.90.0.1:19999
}

Add more charts:

  • Sensors for system
  • Check that you have lm-sensors installed: sudo apt install lm-sensors
  • Then run sudo sensors-detect as explained in the sensors Ubuntu guide
  • Best restart afterwards to make sure that the new entries in /etc/modules (sudo nano /etc/modules) are loaded
  • Then check with sudo sensors
  • Also restart netdata sudo systemctl restart netdata.service

Add email notification for problems

  • sendmail needs to be set up: Linux Server#Sending Mails
  • Simply run /etc/netdata/edit-config health_alarm_notify.conf
  • And put in your mail at DEFAULT_RECIPIENT_EMAIL
    • Optionally specify to only get critical notifications your-mail|critical
  • Test via
# become user netdata
sudo su -s /bin/bash netdata

# send a test alarm
/usr/libexec/netdata/plugins.d/alarm-notify.sh test

High packet drop amount due to FritzBox

  • Github netdata issue
  • Check via tcpdump -i enp3s0 ether proto 0x8912
    • Packets are dropped every 2 seconds and thus netdata warns about this
  • Disable it
  • Use nftables
    • Only do this if you have no other firewall running
      • e.g. do not use ufw in parallel
      • ufw status
    • Docker normally adds its own rules to nftables automatically after a restart
  • Add to nano /etc/nftables.conf
    • replace enp3s0 with your interface name
table netdev filter {
        chain enp3s0input {
                type filter hook ingress device enp3s0 priority 0;
                ether type 0X8912 counter drop # FRITZBOX Powerline
        }
}

table netdev filter {
        chain enp3s0input {
                type filter hook ingress device enp3s0 priority 0;
                ether type 0X88e1 counter drop # FRITZBOX Powerline
        }
}

I removed the previous default values, so my full config:

#!/usr/sbin/nft -f

flush ruleset

#table inet filter {
#       chain input {
#               type filter hook input priority 0;
#       }
#       chain forward {
#               type filter hook forward priority 0;
#       }
#       chain output {
#               type filter hook output priority 0;
#       }
#}

table netdev filter {
        chain enp3s0input {
                type filter hook ingress device enp3s0 priority 0;
                ether type 0X8912 counter drop # FRITZBOX Powerline
        }
}

table netdev filter {
        chain enp3s0input {
                type filter hook ingress device enp3s0 priority 0;
                ether type 0X88e1 counter drop # FRITZBOX Powerline
        }
}
  • enable nftables

  • systemctl status nftables.service

  • systemctl enable nftables.service

  • systemctl start nftables.service

  • I did a reboot, such that docker adds its own custom rules to nftables after the restart

  • Then check if everything is running; you can also check with nft list ruleset that docker added its own rules

Pi-hole and Cloudflared #

Pi-hole can be used as a local DNS service, which has benefits like encrypting your DNS queries via Cloudflare and blocking ads on the web

Docker compose file:

services:
 
	cloudflared:
		container_name: cloudflared
		image: visibilityspots/cloudflared:latest
		environment:
			- 'ADDRESS=::'
		dns:
			- 1.1.1.1
		networks:
			- dns
		restart: always
		
	pihole:
		container_name: pihole
		image: pihole/pihole:latest
		# For DHCP it is recommended to remove these ports and instead add: network_mode: "host"
		ports:
			- "53:53/tcp"
			- "53:53/udp"
			#- "67:67/udp" # Only required if you are using Pi-hole as your DHCP server
			#- "8080:80/tcp" #Proxy pass
		environment:
			TZ: 'Europe/Berlin'
			WEBPASSWORD: ${WEBPASSWORD}
			FTLCONF_LOCAL_IPV4: '<local ipv4 address of your RasPi>'
			PIHOLE_DNS_: 'cloudflared#5054'
			VIRTUAL_HOST: 'pihole.lan'
		# Volumes store your data between container upgrades
		volumes:
			- './pihole/etc-pihole:/etc/pihole'
			- './pihole/etc-dnsmasq.d:/etc/dnsmasq.d'		
		#	 https://github.com/pi-hole/docker-pi-hole#note-on-capabilities
		#cap_add:
			#- NET_ADMIN # Required if you are using Pi-hole as your DHCP server, else not needed
		networks:
			- dns
			- caddy
		depends_on:
			- cloudflared
		restart: always
		
networks:
	dns:
	caddy:
		external: true
  • We have to open port 53, such that the DNS server can be accessed
  • Port 80 is not necessary, since we use caddy again to access the web view of our Pi-hole via a reverse proxy

Caddy file:
https://<local ipv4 address of your RasPi>:443 {
	tls internal
	reverse_proxy pihole:80
}
  • Now you can connect to the Pi-hole web interface
    • Check under settings and DNS that your upstream server is correctly set
      • We want to use cloudflared, and cloudflared#5054 should be automatically resolved to the IP address of the docker container
        • If there are problems, use a hard-coded IP address, which you have to specify in the docker compose file
    • Local DNS: If you would like to create domains, which should be resolved to IP addresses in your local network, you can specify those via this setting
      • I use it to be able to also access my local services via domains and this way I can also use caddy as a reverse proxy for these domains to forward the queries to the correct containers
      • So e.g. I add an entry mapping pi3.lan to 192.168.0.203 (the IP of my RasPi3) and so on
  • Now you also need to configure your devices to use your new Pi-hole as a DNS server, see Network#Change DNS for this (a quick resolution check is sketched after this list)
  • If your device is using your new Pi-hole as a DNS server, you can also replace the <local ipv4 address of your RasPi> in your Caddy file with your new local DNS name (e.g. pihole.lan), if you have set one up
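
A quick way to check that the Pi-hole answers DNS queries is dig (the IP is just an example, use the local IPv4 address of your RasPi):

# Query the Pi-hole directly
dig @192.168.0.204 example.com

# Check one of your local DNS entries
dig @192.168.0.204 pi4.lan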

Bitwarden / Vaultwarden #

Vaultwarden is Bitwarden for low resource servers like a RasPi. It can be used as a password manager

Docker compose script:

services:
	vaultwarden:
		image: vaultwarden/server:latest
		container_name: vaultwarden
		environment:
			DOMAIN: "https://vaultwarden.my-domain.de"
			WEBSOCKET_ENABLED: "true"	# Enable WebSocket notifications.
			SIGNUPS_ALLOWED: "false"
			SMTP_HOST: "smtp.strato.de"
			SMTP_FROM: "webmaster@my-domain.de"
			SMTP_FROM_NAME: "Vaultwarden"
			SMTP_PORT: 465
			SMTP_SECURITY: "force_tls"
			SMTP_USERNAME: "webmaster@my-domain.de"
			SMTP_PASSWORD: ${SMTP_PASSWORD}
			#ADMIN_TOKEN: ${ADMIN_TOKEN}
		volumes:
			- ./vaultwarden/vw-data:/data
		networks:
			- caddy
		restart: always
		
networks:
	caddy:
		external: true

Caddy file:

https://vaultwarden.your-domain.de:443 {
	log {
		level INFO
		output file /data/access.log {
			roll_size 10MB
			roll_keep 10
		}
	}

	# Use the ACME HTTP-01 challenge to get a cert for the configured domain.
	tls {$EMAIL}

	# This setting may have compatibility issues with some browsers
	# (e.g., attachment downloading on Firefox). Try disabling this
	# if you encounter issues.
	encode gzip

	# Notifications redirected to the WebSocket server
	reverse_proxy /notifications/hub vaultwarden:3012

	# Proxy everything else to Rocket
	reverse_proxy vaultwarden:80 {
			 # Send the true remote IP to Rocket, so that vaultwarden can put this in the
			 # log, so that fail2ban can ban the correct IP.
			 header_up X-Real-IP {remote_host}
	}
}
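
If you want to enable the admin page via the commented-out ADMIN_TOKEN line in the compose file above, any long random string works as the token; I would generate one like this and put it into the .env file:

# Generate a random value for ADMIN_TOKEN
openssl rand -base64 48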

WireGuard #

Better use tailscale Linux Server#Tailscale VPN

	wireguard:
		image: linuxserver/wireguard:latest
		container_name: wireguard
		cap_add:
			- NET_ADMIN
			- SYS_MODULE
		environment:
			- PUID=0
			- PGID=0
			- TZ=Europe/Berlin
			- SERVERURL=vpn.my-domain.de #optional
			- SERVERPORT=51820 #optional
			- PEERS=smartphone pc another_device #optional
			#- PEERDNS=auto #optional
			#- ALLOWEDIPS=0.0.0.0/0 #optional
			- LOG_CONFS=false #optional
		volumes:
			- ./wireguard/config:/config
			- /lib/modules:/lib/modules
		ports:
			- 51820:51820/udp
		restart: always
  • For new peers, change the PEERS variable; you can also delete the peer directories to force recreation

Ubuntu Connection #

Guide to connect via Ubuntu to Wireguard VPN

Symlink to fix dependencies:

ln -s /usr/bin/resolvectl /usr/local/bin/resolvconf

Then copy config to /opt/wireguard/ and start:

sudo wg-quick up /opt/wireguard/bergrunde.conf
sudo wg-quick down /opt/wireguard/bergrunde.conf

Nextcloud AIO #

Nextcloud All in One is a new way to easily install a whole Nextcloud instance with all recommended features like Talk and Office

  • Nextcloud AIO
  • Manual Nextcloud
  • I decided to use this setup method now; it is also based on caddy and docker containers, but you do not have to set up much yourself anymore
  • Basically just follow the instructions on their github page - it is really very easy
  • Only one important note
    • If you use caddy as a reverse proxy in a container (not running in host mode), like I do, take a look at my Caddy file entry below
    • You cannot access the Nextcloud container via localhost in that case; instead, you have to specify the gateway of your caddy docker network, e.g. 172.90.0.1

Some additional hints

  • I use their docker compose file instead of the run command
    • I use my normal docker structure for the master container
    • /opt/docker/nextcloud-aio
    • However, I added that I want to directly reverse proxy the nextcloud aio master container via my caddy server
    • I used some of their settings to be able to later use caddy to also reverse proxy the real Nextcloud container
      • The Apache port setting is important
    • Moreover, I added my external data dir
    • And adjusted some PHP settings
    • And added some packages for some nextcloud apps I use, like memories
volumes:
  nextcloud_aio_mastercontainer:
    name: nextcloud_aio_mastercontainer # This line is not allowed to be changed

services:
  nextcloud_aio_mastercontainer:
    image: nextcloud/all-in-one:beta # Must be changed to 'nextcloud/all-in-one:latest-arm64' when used with an arm64 CPU
    restart: always
    container_name: nextcloud-aio-mastercontainer # This line is not allowed to be changed
    volumes:
      - nextcloud_aio_mastercontainer:/mnt/docker-aio-config # This line is not allowed to be changed
      - /var/run/docker.sock:/var/run/docker.sock:ro # May be changed on macOS, Windows or docker rootless. See the applicable documentation. If adjusting, don't forget to also set 'DOCKER_SOCKET_PATH'!
    # ports:
    #   - 8080:8080
    environment: # Is needed when using any of the options below
      - APACHE_PORT=11000 # Is needed when running behind a reverse proxy. See https://github.com/nextcloud/all-in-one/blob/main/reverse-proxy.md
      # - APACHE_IP_BINDING=127.0.0.1 # Should be set when running behind a reverse proxy that is running on the same host. See https://github.com/nextcloud/all-in-one/blob/main/reverse-proxy.md
      # - COLLABORA_SECCOMP_DISABLED=false # Setting this to true allows to disable Collabora's Seccomp feature. See https://github.com/nextcloud/all-in-one#how-to-disable-collaboras-seccomp-feature
      # - DOCKER_SOCKET_PATH=/var/run/docker.sock # Needs to be specified if the docker socket on the host is not located in the default '/var/run/docker.sock'. Otherwise mastercontainer updates will fail.
      # - DISABLE_BACKUP_SECTION=false # Setting this to true allows to hide the backup section in the AIO interface.
      - NEXTCLOUD_DATADIR=/mnt/data/nextcloud # Allows to set the host directory for Nextcloud's datadir. See https://github.com/nextcloud/all-in-one#how-to-change-the-default-location-of-nextclouds-datadir
      # - NEXTCLOUD_MOUNT=/mnt/ # Allows the Nextcloud container to access the chosen directory on the host. See https://github.com/nextcloud/all-in-one#how-to-allow-the-nextcloud-container-to-access-directories-on-the-host
      - NEXTCLOUD_UPLOAD_LIMIT=10G # Can be adjusted if you need more. See https://github.com/nextcloud/all-in-one#how-to-adjust-the-upload-limit-for-nextcloud
      - NEXTCLOUD_MAX_TIME=3600 # Can be adjusted if you need more. See https://github.com/nextcloud/all-in-one#how-to-adjust-the-max-execution-time-for-nextcloud
      - NEXTCLOUD_MEMORY_LIMIT=4096M # Can be adjusted if you need more. See https://github.com/nextcloud/all-in-one#how-to-adjust-the-php-memory-limit-for-nextcloud
      # - NEXTCLOUD_TRUSTED_CACERTS_DIR=/path/to/my/cacerts # CA certificates in this directory will be trusted by the OS of the nexcloud container (Useful e.g. for LDAPS) See See https://github.com/nextcloud/all-in-one#how-to-trust-user-defiend-certification-authorities-ca
      # - NEXTCLOUD_STARTUP_APPS=deck tasks calendar contacts # Allows to modify the Nextcloud apps that are installed on starting AIO the first time. See https://github.com/nextcloud/all-in-one#how-to-change-the-nextcloud-apps-that-are-installed-on-the-first-startup
      - NEXTCLOUD_ADDITIONAL_APKS=imagemagick ffmpeg perl # This allows to add additional packages to the Nextcloud container permanently. Default is imagemagick but can be overwritten by modifying this value. See https://github.com/nextcloud/all-in-one#how-to-add-packets-permanently-to-the-nextcloud-container
      # - NEXTCLOUD_ADDITIONAL_PHP_EXTENSIONS=imagick # This allows to add additional php extensions to the Nextcloud container permanently. Default is imagick but can be overwritten by modifying this value. See https://github.com/nextcloud/all-in-one#how-to-add-php-extensions-permanently-to-the-nextcloud-container
      # - TALK_PORT=3478 # This allows to adjust the port that the talk container is using.
      # - SKIP_DOMAIN_VALIDATION=true
    networks:
      - caddy

networks:
  caddy:
    external: true
  • Caddy file for AIO
cloud-aio.lan {
	tls internal
	reverse_proxy nextcloud-aio-mastercontainer:8080 {
		transport http {
			tls_insecure_skip_verify
		}
	}
}
  • You also have to add the Caddy file entry for the real Nextcloud container running Apache
    • As mentioned earlier, since my caddy container is in its own network, I cannot use localhost directly but have to go through the gateway 172.90.0.1 of my caddy network
cloud.bergrunde.net {
	# Use the ACME HTTP-01 challenge to get a cert for the configured domain.
	tls {$EMAIL}
	
	header Strict-Transport-Security max-age=31536000;
	
	reverse_proxy 172.90.0.1:11000
	
	log {
		output file /var/log/caddy/cloud.log {
		}
	}
	
}
  • Now you can simply follow their guide to start the master container
    • Enter your domain and start everything
  • I also managed to migrate from my own docker setup to this AIO setup via their migration guide
  • I also use their implemented Backup method now via borg for the Nextcloud backups
  • Since the setup is also based on docker, you can still apply all the knowledge from the next section
  • Only note, that the volumes are now located under
    • /var/lib/docker/volumes/
    • So the config.php for example is under /var/lib/docker/volumes/nextcloud_aio_nextcloud/_data/config/
    • And occ commands work via
      • docker exec --user www-data -it nextcloud-aio-nextcloud php occ

Reset brute force prevention sudo docker exec --user www-data -it nextcloud-aio-nextcloud php occ security:bruteforce:reset ipaddress

The Borg Backups

  • Open Backups
# Mount the archives to /tmp/borg
sudo mkdir -p /tmp/borg && sudo borg mount "/mnt/backup/backup-nextcloud-aio" /tmp/borg

# Access all archives in /tmp/borg
# E.g. you can open the file manager on that location by running:
xhost +si:localuser:root && sudo nautilus /tmp/borg

# When you are done
sudo umount /tmp/borg
  • Delete Backups
# List all archives
sudo borg list "/mnt/backup/backup-nextcloud-aio"

# An example backup archive might be called 20220223_174237-nextcloud-aio
# Then you can simply delete the archive with:
sudo borg delete --stats --progress "/mnt/backup/backup-nextcloud-aio::20220223_174237-nextcloud-aio"

# clean up the freed space
sudo borg compact "/mnt/backup/backup-nextcloud-aio"
  • Run an integrity check in the AIO interface now

  • Sync backups

SOURCE_DIRECTORY="/mnt/backup/backup-nextcloud-aio"
TARGET_DIRECTORY="/mnt/backup-drive/borg"

touch "$SOURCE_DIRECTORY/aio-lockfile"

rsync --stats --archive --human-readable --delete "$SOURCE_DIRECTORY/" "$TARGET_DIRECTORY/"

rm "$SOURCE_DIRECTORY/aio-lockfile"
rm "$TARGET_DIRECTORY/aio-lockfile"

Nextcloud #

I use the Nextcloud AIO deployment method now - #Nextcloud AIO

Setup via Docker Compose #

An overview of the main “components”:

  • Containers:
    • MariaDB as a database
    • Redis
    • Nextcloud main container
    • Nextcloud cron container
      • Used for sync jobs
  • Data folder
    • I use an external drive, which I mount at /mnt/data of my RasPi
    • This folder gets then mounted as a vault, to be used as the data folder for Nextcloud
  • config.php file for all configurations
  • Apache server which is auto started within container
  • Reverse proxy via caddy
  • Optional: phpmyadmin to check data base
services:
	
	mariadb:
		image: mariadb
		container_name: mariadb
		command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW
		volumes:
			- ./mariadb/mysql:/var/lib/mysql
		environment:
			- MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
			- MYSQL_PASSWORD=${MYSQL_PASSWORD}
			- MYSQL_DATABASE=nextcloud
			- MYSQL_USER=nextcloud
		networks:
			- nextcloud
			- db
		restart: always

	redis:
		image: redis
		container_name: redis
		command: redis-server --requirepass "${REDIS_HOST_PASSWORD}"
		volumes:
			- ./redis/data:/data
		networks:
			- nextcloud
		restart: always

	nextcloud:
		image: nextcloud
		container_name: nextcloud
		volumes:
			- ./nextcloud:/var/www/html
			- /mnt/data/nextcloud:/nextcloud_data
		environment:
			- MYSQL_HOST=mariadb
			- MYSQL_USER=nextcloud
			- MYSQL_PASSWORD=${MYSQL_PASSWORD}
			- MYSQL_DATABASE=nextcloud
			- REDIS_HOST=redis
			- REDIS_HOST_PASSWORD=${REDIS_HOST_PASSWORD}
			- NEXTCLOUD_ADMIN_USER=admin
			- NEXTCLOUD_ADMIN_PASSWORD=${NEXTCLOUD_ADMIN_PASSWORD}
			- NEXTCLOUD_TRUSTED_DOMAINS=cloud.your-domain.de
			- NEXTCLOUD_DATA_DIR=/nextcloud_data
			- PHP_MEMORY_LIMIT=4096M #Set according to your available RAM
			- PHP_UPLOAD_LIMIT=4096M #Set according to your available RAM
		depends_on:
			- mariadb
			- redis
		networks:
			- nextcloud
			- caddy
		restart: always
		
	nextcloud_cron:
		image: nextcloud
		container_name: nextcloud_cron
		volumes:
			- ./nextcloud:/var/www/html
			- /mnt/data/nextcloud:/nextcloud_data
		environment:
			- PHP_MEMORY_LIMIT=4096M #Set according to your available RAM
		entrypoint: /cron.sh
		depends_on:
			- mariadb
			- redis
		networks:
			- nextcloud
		restart: always
		
	phpmyadmin:
		image: phpmyadmin
		container_name: phpmyadmin
		restart: always
		#ports:
			#- 8081:80
		networks:
			 - caddy
			 - db
		environment:
			- PMA_HOST=mariadb
 
networks:
	caddy:
		external: true
	nextcloud:
	db:

And set up your caddy file:

https://cloud.your-domain.de:443 {
	# Use the ACME HTTP-01 challenge to get a cert for the configured domain.
	tls {$EMAIL}
	
	header Strict-Transport-Security max-age=31536000;

	rewrite /.well-known/carddav /remote.php/dav
	rewrite /.well-known/caldav /remote.php/dav

	reverse_proxy nextcloud:80
}

https://php.lan:443 {
	tls internal
	reverse_proxy phpmyadmin:80
}

Post setup #

  • The owner of your data folder needs to be www-data:www-data, and others should have no read/write access (see the commands after this list)
  • Once the containers are running, you can try to visit the website and log in with your admin account
  • Note: All configuration can be done via the admin webinterface or via the nextcloudvault/config/config.php file. Also read the Nextcloud manual!
  • If the containers are started for the first time, the config.php file will be created, containing the settings, you specified in the environment variables of your Nextcloud container
  • After that, the environment variables will never overwrite the config.php file again, so further configuration cannot be done via the environment variables anymore
  • Fix the reverse proxy warning by adding this to the config:
'trusted_proxies' => 
	array (
		0 => '<ip-of-your-proxy-container>',
	),
  • setup mail account via admin panel or config
  • Check that this is set to your domain:
'overwrite.cli.url' => 'https://cloud.your-domain.de',
  • Insert the default phone region
'default_phone_region' => 'DE',
  • Check in the admin settings that cron job is selected
  • Install apps tasks, calendar, contacts, and everything else you want
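
A minimal sketch of the ownership and permission commands for the data folder, assuming /mnt/data/nextcloud from the compose file above:

# Hand the data folder to the web server user and remove access for others
sudo chown -R www-data:www-data /mnt/data/nextcloud
sudo chmod -R o-rwx /mnt/data/nextcloud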

OCC Commands #

https://docs.nextcloud.com/server/20/admin_manual/configuration_server/occ_command.html

# Shell
docker exec -u 33 -it nextcloud bash

# Scan
docker exec -u 33 -it nextcloud php occ files:scan --all --verbose
docker exec -u 33 -it nextcloud php occ files:scan --path="/User/files/Music" --verbose

# Maintenance
docker exec -u 33 -it nextcloud php occ maintenance:mode --on
docker exec -u 33 -it nextcloud php occ maintenance:mode --off

Other #

Mysql cheat sheet: https://www.mysqltutorial.org/mysql-cheat-sheet.aspx

Only Office Nextcloud #

Only Office can be used to edit documents online in Nextcloud

Turn Server #

A TURN server can be used to enable Nextcloud Talk even if a participant is behind a strict firewall. We use coturn as an open source TURN server

Coturn.conf

listening-port=3478
fingerprint
use-auth-secret
static-auth-secret=<COTURN_SECRET>
realm=your-domain.de
total-quota=0
bps-capacity=0
stale-nonce
no-multicast-peers

Docker compose file

services:
  coturn:
    image: coturn/coturn
    container_name: coturn
    volumes:
      - ./coturn.conf:/etc/coturn/turnserver.conf
    environment:
      - COTURN_SECRET=${COTURN_SECRET}
    ports:
      - 3478:3478
    networks:
      - caddy
    restart: always
    
 
networks:
  caddy:
    external: true
  • I added coturn to the caddy network, since it might be necessary that the coturn server is in the same network as the Nextcloud instance, but I am not sure
    • Otherwise one would somehow have to use coturn's --external-ip flag
  • This should be it, now you can use your turn server at
    • your-domain.de:3478
    • Note that the port must be open
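
The static-auth-secret in the config above (and the matching COTURN_SECRET in the .env file) can be any long random string, for example generated like this:

openssl rand -hex 32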

Wiki.js #

A wiki that can be self-hosted: wiki js

	postgres:
		image: postgres:11-alpine
		container_name: postgres
		environment:
			POSTGRES_DB: wiki
			POSTGRES_PASSWORD: ${DB_PASS}
			POSTGRES_USER: wikijs
		logging:
			driver: "none"
		networks:
			- wiki
		volumes:
			- ./postgres:/var/lib/postgresql/data
		restart: always

	wiki:
		image: requarks/wiki:2
		container_name: wiki
		depends_on:
			- postgres
		environment:
			DB_TYPE: postgres
			DB_HOST: postgres
			DB_PORT: 5432
			DB_USER: wikijs
			DB_PASS: ${DB_PASS}
			DB_NAME: wiki
		#ports:
		#	- "8089:3000"
		networks:
			- wiki
			- caddy
		restart: always
 
networks:
	caddy:
		external: true
	wiki:

Caddyfile:

https://wiki.my-domain.de:443 {
	# Use the ACME HTTP-01 challenge to get a cert for the configured domain.
	tls {$EMAIL}

	reverse_proxy wiki:3000
}

Syncthing #

Can be used to sync stuff

	syncthing:
		image: linuxserver/syncthing:latest
		container_name: syncthing
		hostname: pi4-syncthing
		environment:
			- PUID=0
			- PGID=0
			- TZ=Europe/Berlin
		volumes:
			- /opt/syncthing:/config
			- /opt/test:/data1
		ports:
			#- 8384:8384
			- 22000:22000/tcp
			- 22000:22000/udp
			- 21027:21027/udp
		networks:
			- caddy
		restart: always

Hugo #

A tool to create static websites

Note: Hugo has an integrated webserver, which can be used to immediately view the files during development - we use a local domain address (wiki.lan) (see #Pi-hole and Cloudflared) and reverse proxy it via caddy. You can then generate the html code and publish it via your real webserver - we use caddy to directly publish the sites via a global domain (your-domain.de).

Docker compose file:

services:
	hugo:
		image: klakegg/hugo:ext-ubuntu
		container_name: hugo
		command: server --appendPort "false" -b "https://wiki.lan" -p 443
		volumes:
			- "./hugo:/src"
		networks:
			- caddy
		restart: always

networks:
	caddy:
		external: true

Caddy file - local:

https://wiki.lan:443 {
	tls internal
	reverse_proxy hugo:443
}
  • the server will fail to start and the container will crash, if there is not a config.toml located inside the /src folder
    • consequently add the config.toml file with the following content to your volume ./hugo before starting the container
baseURL = 'https://your-domain.de/'
languageCode = 'en-us'
title = 'My New Hugo Site'
  • Now you can connect to the shell of the docker container
    • docker exec -ti hugo bash
  • From now on you can use all the hugo commands as described in the hugo guide
    • If you go with the quickstart guide, you can create the quickstart folder within the /src folder and afterwards move all content one layer higher (and overwrite the original config.toml) and remove the empty quickstart folder afterwards
    • Skip the step with enabling the server, since it is always running anyways and watching this folder for updates
    • After a theme is inserted, you can visit your website and see the results the first time
  • Note: The default hugo server (with pid 1) must always run within the container, which is started with the initial command in the docker compose file
    • If you stop the server, the container crashes
    • You can change the flags for this server in the docker compose file
    • With the current setup it is listening on port 443 and by default is always watching the /src folder for changes
    • I had problems when not using the same port as the one used when reverse proxying this server via caddy, since internal links will break then
      • so I directly use port 443, which is ok, since caddy can directly forward to this port
      • The port is not published to the host network and only containers within the caddy network can access the hugo server via 443

Now we also want to publish the content to the real server:

  • Simply call the hugo command within your docker container
    • First to a delete of the old files, if there are some: docker exec hugo rm -rf public
    • docker exec hugo hugo
  • This will create the public/ folder with the static website
    • Check docker exec hugo ls -la
  • Now you can paste the public folder into your webserver data folder (see here how to serve static files) and serve it. Caddyfile:
https://my-domain.de:443 {
	# Use the ACME HTTP-01 challenge to get a cert for the configured domain.
	tls {$EMAIL}
	
	root * /data/websites/public
	file_server
}

Hugo Guide #

This video guide about Hugo is very useful to understand the main concepts of the different directories located within the Hugo folder and how templates and themes work

  • Most useful were the videos 5, 11, 12 and 13

Hugo Theme: Book #

You can use the Hugo theme book for your website

  • Follow the installation instructions
  • Note, that there is currently a bug reported for this image
    • You might have to delete two files to make the theme work

Obsidian to Hugo #

  • I want to export my Obsidian vault to Hugo
  • You can use this script here for obsidian-to-hugo conversion
  • Basically:
    • pip3 install obsidian-to-hugo
    • python3 -m obsidian_to_hugo --obsidian-vault-dir=<path> --hugo-content-dir=<path>

Automatic Hugo Update #

I want to have my Obsidian vault locally and if I make changes, they should automatically be published in my Wiki

  • I use a git repository to accomplish this
    • The git repo will hold the complete hugo folder

Local Side #

  • My folder containing the vault, that shall be published is called public/
  • I add another folder named public-repo next to it
  • In the public-repo folder, I have some scripts

ini-repo.sh

  • This script basically initializes my repo
    • You have to manually set up this repo, such that your complete hugo folder will be inside the repo
#!/bin/bash
git clone --recursive git@gitlab.com:frederikb96/wiki.git ./local.wiki
echo "done"
read

update-repo.sh

  • This script automatically pulls and simply commits all new changes that I make in the local repo
#!/bin/bash
cd local.wiki
git pull --no-rebase
git stage --all
git commit -m 'Auto update'
git push
echo "done"

convert-content.sh

  • This script uses the #Obsidian to Hugo method to convert my vault content and add it to the content folder of the hugo repo
#!/bin/bash
python3 -m obsidian_to_hugo --obsidian-vault-dir=../public --hugo-content-dir=local.wiki/content

all.sh

  • This script simply calls the convert and afterwards the update script
#!/bin/bash
./update-repo.sh
./convert-content.sh
./update-repo.sh
echo "all done"

This way all your local vault content is always in sync with the git repo; you simply have to call the all.sh script every time you make a change that shall be visible in the git repo

Server Side #

Now we have to always be able to get updates on the server side

  • Go to your docker hugo folder and initialize the hugo vault as your hugo git repository (basically remove all content and replace it with the git repo, which has to contain the same content)
    • Basically: rm -rf hugo and git clone --recursive https://gitlab.com/frederikb96/wiki.git ./hugo
    • Note: If your repository is public, you can pull via the “https” link, such that you do not have to add your git ssh key to your server, which is enough if you only want to pull and not commit anything via the server side
  • Now you can always use a simple script to update your hugo repo on the server
    • In my setup, the script is located next to the docker-compose.yml file and is updating the hugo vault/repo
    • Note: It is only used to pull and not commit changes, so all local changes are always overwritten!
  • We also want to update the real website and not only the local website
    • So we also automate the publishing process
    • And move the directory to the location where caddy is serving the new files

update-repo.sh

#!/bin/bash
cd hugo
git reset --hard
git pull --no-rebase
echo "done"

all.sh

#!/bin/bash
./update-repo.sh
rm -rf hugo/public
docker exec hugo hugo && rsync -av --delete hugo/public/ /opt/docker/caddy/caddy/caddy-data/websites/hugo-wiki/ && rm -rf hugo/public
echo "all done"
  • You can also setup a crontab job to automatically update your git repo every hour or so:
  • crontab -e
  • 0 * * * * (cd /opt/docker/hugo && ./all.sh)

Both Sides #

If you want to make changes directly visible, you can simply combine both previous sides

  • You can also create launchers for this
[Desktop Entry]
Version=1.1
Type=Application
Name=Wiki Publish
Icon=
Exec=gnome-terminal -- bash -c 'cd /home/freddy/Nextcloud/Notes/Technical/public-repo && ./all.sh && ssh root@pi4.lan "(cd /opt/docker/hugo && ./all.sh)"; read'
Terminal=false

XWiki #

https://www.xwiki.org/xwiki/bin/view/Main/WebHome

Docker compose file:

services:
		
	xwiki:
		image: "arm64v8/xwiki"
		container_name: xwiki
		depends_on:
			- xwiki-db
		ports:
			- "8080:8080"
		environment:
			- DB_USER=xwiki
			- DB_PASSWORD=${MYSQL_PASSWORD}
			- DB_HOST=xwiki-db
		volumes:
			- ./xwiki:/usr/local/xwiki
		networks:
			- xwiki-db
			- caddy
			
	xwiki-db:
		image: "arm64v8/mysql"
		container_name: xwiki-db
		volumes:
			- ./db/xwiki.cnf:/etc/mysql/conf.d/xwiki.cnf
			- ./db/data:/var/lib/mysql
			- ./db/init.sql:/docker-entrypoint-initdb.d/init.sql
		environment:
			- MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
			- MYSQL_USER=xwiki
			- MYSQL_PASSWORD=${MYSQL_PASSWORD}
			- MYSQL_DATABASE=xwiki
		command: --character-set-server=utf8mb4 --collation-server=utf8mb4_bin --explicit-defaults-for-timestamp=1 --default-authentication-plugin=mysql_native_password
		networks:
			- xwiki-db
 
networks:
	caddy:
		external: true
	xwiki-db:

Matrix #

Config

  • First you need a homeserver.yaml config file, so generate it
    • This will create the config in the synapse folder (you have to give an absolute path) and auto-delete the container again: docker run -it --rm -v /absolute-path/synapse:/data -e SYNAPSE_SERVER_NAME=my-domain.de -e SYNAPSE_REPORT_STATS=yes matrixdotorg/synapse:latest generate
  • Manually add database info to the generated homeserver.yaml file
database:
	name: psycopg2
	args:
		user: synapse
		password: <pass>
		database: synapse
		host: postgres-synapse
		cp_min: 5
		cp_max: 10

Compose file

services:

	postgres-synapse:
		image: postgres:15
		container_name: postgres-synapse
		environment:
			POSTGRES_DB: synapse
			POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
			POSTGRES_USER: synapse
			POSTGRES_INITDB_ARGS: "--encoding='UTF8' --locale='C'"
		networks:
			- synapse-db
		volumes:
			- ./postgres-15:/var/lib/postgresql/data
		restart: always
		
	synapse:
		image: "matrixdotorg/synapse:latest"
		container_name: synapse
		#ports:
			#- "8008:8008"
		environment:
			- TZ=ECT
		volumes:
			- ./synapse:/data
		networks:
			- caddy
		restart: always
 
networks:
	caddy:
		external: true
	synapse-db:

Federation and Delegation #

We will also enable federation and delegation

  • Federation means that our server can be accessed from other synapse servers
  • Delegation means that we will be accessible at my-domain.de, but our server is actually running on matrix.my-domain.de
  • To make it work, we use the following caddyfile as explained in: guide for reverse proxy
    • So we have to add some information to our main server proxy too
    • This is an example of how I added the additional responses to my main domain, and how all other requests are served via my hugo server, which is normally running on my main domain
    • We also have another local domain, which can be used to access the admin API on /_synapse/admin/...
my-domain.de {
	# Use the ACME HTTP-01 challenge to get a cert for the configured domain.
	tls {$EMAIL}
	
	header /.well-known/matrix/* Content-Type application/json
	header /.well-known/matrix/* Access-Control-Allow-Origin *
	respond /.well-known/matrix/server `{"m.server": "matrix.my-domain.de:443"}`
	respond /.well-known/matrix/client `{"m.homeserver":{"base_url":"https://matrix.my-domain.de"}}`
	
	root * /data/websites/hugo-main
	file_server
}

matrix.my-domain.de {
	# Use the ACME HTTP-01 challenge to get a cert for the configured domain.
	tls {$EMAIL}
	
	reverse_proxy /_matrix/* synapse:8008
	reverse_proxy /_synapse/client/* synapse:8008
}

matrix.lan {
	tls internal
	reverse_proxy synapse:8008
}
  • Also add another line to your homeserver.yaml
public_baseurl: "https://matrix.my-domain.de"
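
To verify that the delegation responses are served as configured, you can simply query the well-known files (the domain is just an example):

curl https://my-domain.de/.well-known/matrix/server
curl https://my-domain.de/.well-known/matrix/client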

Mail #

Set up email account for synapse server in homeserver.yaml

email:
	smtp_host: smtp.strato.de
	smtp_port: 465
	notif_from: "%(app)s <webmaster@my-domain.de>"
	app_name: "Matrix My-Name"
	smtp_user: webmaster@my-domain.de
	smtp_pass: smtp_pw
	force_tls: true

Admin #

Generate an admin account

  • Help message
docker exec -it synapse register_new_matrix_user https://matrix.my-domain.de -c /data/homeserver.yaml --help
  • New admin user docker exec -it synapse register_new_matrix_user https://matrix.my-domain.de -c /data/homeserver.yaml -u admin -p pw -a

Use Admin API
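
The Admin API calls below need an access token. One way to get one is the standard client login endpoint; a sketch, assuming the admin user created above and my local matrix.lan domain:

# Log in as the admin user; the response contains an "access_token" field,
# which is then used as the Bearer token in the Admin API calls
curl --insecure -X POST https://matrix.lan/_matrix/client/v3/login \
	-d '{"type": "m.login.password", "identifier": {"type": "m.id.user", "user": "admin"}, "password": "pw"}'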

New Users #

Registration for new users

  • I require a mail address and a private token, which I can generate for new users, to be able to register on my synapse server
  • in homeserver.yaml
enable_registration: true
registration_requires_token: true
registrations_require_3pid:
	- email

So, you also need to generate those new tokens

  • Read all tokens
    • curl --insecure --header "Authorization: Bearer <your access token>" -X GET https://matrix.lan/_synapse/admin/v1/registration_tokens
  • Create a new default infinite one
    • curl --insecure --header "Authorization: Bearer <your access token>" -X POST https://matrix.lan/_synapse/admin/v1/registration_tokens/new -d {}
  • Create a limited one
    • curl --insecure --header "Authorization: Bearer <your access token>" -X POST https://matrix.lan/_synapse/admin/v1/registration_tokens/new -d '{"uses_allowed": 10}'
  • Delete a token
    • curl --insecure --header "Authorization: Bearer <your access token>" -X DELETE https://matrix.lan/_synapse/admin/v1/registration_tokens/token-id

PostgreSQL Upgrade #

  • PostgreSQL cannot automatically upgrade major versions like from 14 to 15

  • So you have to do the following, as described in this upgrade guide

  • cd into the synapse directory with the docker compose file

  • Stop the synapse container

    • docker stop synapse
  • Dump the PostgreSQL database

    • docker exec postgres-synapse pg_dumpall -U synapse > dump.sql
  • Stop the PostgreSQL container

    • docker stop postgres-synapse
  • Edit the docker compose file

    • nano docker-compose.yml
    • change the version of PostgreSQL in the docker image and also the mounted folder for the data volume
    • image: postgres:15 and - ./postgres-15:/var/lib/postgresql/data
  • Start the PostgreSQL container

    • docker compose up -d postgres-synapse
  • Import new data

    • docker exec -i postgres-synapse psql -U synapse < dump.sql
  • Start everything again

    • docker compose up -d
  • Check if it is working and delete old folder and dump file