Getting up and running: Mastodon on Fedora Server
I had a bit of bother getting Mastodon up and running on a Fedora box so figured I’d write up the main sticking points.
Preamble
The scope is to:
- Put most of this inside Docker
- Use AWS S3 for object storage
- Use AWS SES for mail
If you’ve got Postfix already running locally, or don’t want to run storage on S3, just ignore these bits.
Basics
AWS setup
AWS is used for email notifications[^1] and object storage.
Essentially, we want to:
- Create a bucket for S3
- Create an IAM user for S3
- Set up mail / DKIM and get our SMTP creds
1. Create your bucket via S3. Ensure bucket ACLs are permitted (Permissions > Object Ownership > Edit > ACLs enabled).

2. Create a user in IAM with the following inline policy (if you'd rather script this and the previous step, see the sketch just after this list):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::BUCKET_NAME"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject"],
      "Resource": "arn:aws:s3:::BUCKET_NAME/*"
    }
  ]
}

3. Set up SES for mail delivery, making sure your verified identity matches whatever domain you're putting your Mastodon instance on. There are plenty of guides on how to do this, so I won't repeat them here.
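For the scripting-inclined, here's an untested sketch of the first two steps using the AWS CLI. BUCKET_NAME, the region, and the mastodon-media user name are placeholders; policy.json is the policy document above:

aws s3api create-bucket --bucket BUCKET_NAME --region eu-west-2 \
  --create-bucket-configuration LocationConstraint=eu-west-2
# "ObjectWriter" ownership is what the console's "ACLs enabled" maps to
aws s3api put-bucket-ownership-controls --bucket BUCKET_NAME \
  --ownership-controls 'Rules=[{ObjectOwnership=ObjectWriter}]'
aws iam create-user --user-name mastodon-media
aws iam put-user-policy --user-name mastodon-media \
  --policy-name mastodon-s3 --policy-document file://policy.json
# note the AccessKeyId / SecretAccessKey in the output; you'll need them later
aws iam create-access-key --user-name mastodon-media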
Box setup
First, make sure firewalld is using iptables rather than nftables; Docker still doesn't behave nicely with nftables. In /etc/firewalld/firewalld.conf, you'll want:
# FirewallBackend
# Selects the firewall backend implementation.
# Choices are:
# - nftables (default)
# - iptables (iptables, ip6tables, ebtables and ipset)
# Note: The iptables backend is deprecated. It will be removed in a future
# release.
FirewallBackend=iptables
Then restart firewalld:

systemctl restart firewalld
The box will need Docker, git, redis, the usual:
yum -y install dnf-plugins-core
yum config-manager --add-repo \
https://download.docker.com/linux/fedora/docker-ce.repo
yum install git docker-ce docker-ce-cli containerd.io \
docker-compose-plugin nginx redis
Enable and start Docker; we're going to need it soon:
systemctl enable docker
systemctl start $_
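A quick sanity check that the daemon actually came up:

systemctl is-active docker   # prints "active" if all is well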
Also make sure Redis is contactable from the Docker containers; in /etc/redis/redis.conf:
bind 127.0.0.1 172.17.0.1 -::1
protected-mode no
Enable and start Redis:
systemctl enable redis
systemctl start $_
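You can check Redis is reachable on the Docker bridge address from the host (redis-cli ships with the redis package):

redis-cli -h 172.17.0.1 ping   # should reply PONG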
Now get a user set up and drop into its shell:
useradd mastodon
usermod -aG docker mastodon
su - mastodon
# check we have access to docker, if not go back and fix this
docker ps # should not return an error
git clone https://github.com/mastodon/mastodon.git ./server
Mastodon uses an .env file to configure everything. You can either copy the template (.env.production.sample) or use the one below, saving it to .env.production. Before we create the .env.production file, we need to generate some secrets.
Generating VAPID secrets
The following will output a keypair to your cwd:
$ mkdir scratch && cd $_
$ python -mvenv .
$ source bin/activate
(scratch) $ pip install py-vapid
(scratch) $ ./bin/vapid --gen
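As an aside, Mastodon can also generate these keys itself via a rake task once the web image is built; since compose wants an .env.production in place before it will run anything, you may have to circle back to this, but it saves the Python detour:

docker-compose run --rm web bundle exec rake mastodon:webpush:generate_vapid_key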
Then generate a couple of secrets:
$ openssl rand -hex 64 > secret1
$ openssl rand -hex 64 > secret2
Finally generate a sensible DB password:
openssl rand -base64 30 | tr -d "=+/" | cut -c-16 > pg-password
With these values in hand, we can create our .env.production:
# Federation
# This identifies your server and cannot be changed safely later
LOCAL_DOMAIN=YOUR_DOMAIN
# Redis
REDIS_HOST=172.17.0.1
REDIS_PORT=6379
# PostgreSQL
DB_HOST=db
DB_USER=mastodon
DB_NAME=mastodon_production
DB_PASS=PG_PASSWORD
DB_PORT=5432
# Secrets
SECRET_KEY_BASE=CONTENTS_OF_SECRET1
OTP_SECRET=CONTENTS_OF_SECRET2
# Web Push
VAPID_PRIVATE_KEY=CONTENTS_OF_PRIVKEY
VAPID_PUBLIC_KEY=CONTENTS_OF_PUBKEY
# Sending mail
SMTP_SERVER=email-smtp.eu-west-2.amazonaws.com
SMTP_PORT=587
SMTP_LOGIN=YOUR_AWS_ACCESS_KEY
SMTP_PASSWORD=YOUR_AWS_SMTP_PASSWORD
SMTP_FROM_ADDRESS=noreply@example.org
# File storage
S3_ENABLED=true
S3_BUCKET=example-files
AWS_ACCESS_KEY_ID=ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY=SECRET_ACCESS_KEY
## see S3 guide for following directive:
## https://docs.aws.amazon.com/AmazonS3/latest/userguide/VirtualHosting.html
S3_ALIAS_HOST=files.example.org
# change as need be
S3_REGION=eu-west-2
# IP and session retention
# -----------------------
# Make sure to modify the scheduling of ip_cleanup_scheduler in config/sidekiq.yml
# to be less than daily if you lower IP_RETENTION_PERIOD below two days (172800).
# -----------------------
IP_RETENTION_PERIOD=31556952
SESSION_RETENTION_PERIOD=31556952
## translation image
LIBRE_TRANSLATE_ENDPOINT=http://translate:5000
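Before moving on, it's worth grepping for any placeholders you forgot to swap out (this catches most, but not all, of the ones above):

grep -nE 'YOUR_|CONTENTS_OF|PG_PASSWORD' .env.production   # should print nothing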
Set up a test stack so we can start poking around and check everything is happy.
In ./docker-test.yml, something like the following:
version: '3'

services:
  db:
    restart: always
    image: postgres:14-alpine
    shm_size: 256mb
    networks:
      - internal_network
    healthcheck:
      test: ['CMD', 'pg_isready', '-U', 'postgres']
    volumes:
      - ./postgres14:/var/lib/postgresql/data
    environment:
      - 'POSTGRES_HOST_AUTH_METHOD=trust'
    expose:
      - 5432

  web:
    build: .
    image: tootsuite/mastodon
    restart: always
    env_file: .env.production
    command: bash -c "tail -f /dev/null"
    networks:
      - external_network
      - internal_network
    ports:
      - '127.0.0.1:3000:3000'
    volumes:
      - ./public/system:/mastodon/public/system

networks:
  internal_network:
    internal: true
  external_network:
Now build and run:
docker-compose -f docker-test.yml build
docker-compose -f docker-test.yml up
In a separate console, we should be able to connect to the web container and check that it can reach Postgres:

docker exec -it server_web_1 /bin/bash
## see if we're able to route to host:
cat </dev/tcp/db/5432 # control-c to exit
If that returned no error, you can reach the database OK, so now run:
bundle exec rails db:setup
This should exit with no errors. Assuming it does, we can bring down the test stack and set up the remaining services.
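To take the test stack down (using the same file name as above):

docker-compose -f docker-test.yml down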
In ./docker-compose.yml, put the following:
version: '3'

services:
  db:
    restart: always
    image: postgres:14-alpine
    shm_size: 256mb
    networks:
      - internal_network
    healthcheck:
      test: ['CMD', 'pg_isready', '-U', 'postgres']
    volumes:
      - ./postgres14:/var/lib/postgresql/data
    environment:
      - 'POSTGRES_HOST_AUTH_METHOD=trust'

  web:
    build: .
    image: tootsuite/mastodon
    restart: always
    env_file: .env.production
    command: bash -c "rm -f /mastodon/tmp/pids/server.pid; bundle exec rails s -p 3000"
    networks:
      - external_network
      - internal_network
    healthcheck:
      # prettier-ignore
      test: ['CMD-SHELL', 'wget -q --spider --proxy=off localhost:3000/health || exit 1']
    ports:
      - '127.0.0.1:3000:3000'
    environment:
      VIRTUAL_HOST: "example.org"
    depends_on:
      - db
    volumes:
      - ./public/system:/mastodon/public/system

  translate:
    image: libretranslate/libretranslate
    restart: always
    networks:
      - internal_network
    ports:
      - '127.0.0.1:5000:5000'

  streaming:
    build: .
    image: tootsuite/mastodon
    restart: always
    env_file: .env.production
    command: node ./streaming
    networks:
      - external_network
      - internal_network
    healthcheck:
      # prettier-ignore
      test: ['CMD-SHELL', 'wget -q --spider --proxy=off localhost:4000/api/v1/streaming/health || exit 1']
    ports:
      - '127.0.0.1:4000:4000'
    depends_on:
      - db

  sidekiq:
    build: .
    image: tootsuite/mastodon
    restart: always
    env_file: .env.production
    command: bundle exec sidekiq
    depends_on:
      - db
    networks:
      - external_network
      - internal_network
    healthcheck:
      test: ['CMD-SHELL', "ps aux | grep '[s]idekiq\ 6' || false"]

networks:
  external_network:
  internal_network:
    internal: true
Now we can re-run our build and - fingers crossed - bring up the instance.
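Since the file is now the default docker-compose.yml, no -f flag is needed:

docker-compose build
docker-compose up -d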
Check everything is up and running by connecting to the web instance and querying, say, the LibreTranslate container:
$ docker exec -it server_web_1 wget "http://translate:5000/translate" --post-data='q=hello&source=en&target=es' -q -O - | more
{"translatedText":"hola"}
Looking good!
Reverse proxy
The box I was putting this onto already had some other sites on it, so YMMV; some folks choose to put ONLY Mastodon onto a particular box.
To keep things simple, let’s just pull out the assets and packs and serve them via the host:
cd /home/mastodon/server/public
docker cp server_web_1:/opt/mastodon/public/assets/ .
docker cp server_web_1:/opt/mastodon/public/packs/ .
Then adapt the nginx config provided with Mastodon to serve via the Docker host:
map $http_upgrade $connection_upgrade {
  default upgrade;
  ''      close;
}

upstream backend {
  server 127.0.0.1:3000 fail_timeout=0;
}

upstream streaming {
  server 127.0.0.1:4000 fail_timeout=0;
}

proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=CACHE:10m inactive=7d max_size=1g;

server {
  listen 80;
  listen [::]:80;
  server_name example.org;
  root /home/mastodon/server/public;
  location /.well-known/acme-challenge/ { allow all; }
  location / { return 301 https://$host$request_uri; }
}

server {
  listen 443 ssl http2;
  listen [::]:443 ssl http2;
  server_name example.org;

  ssl_protocols TLSv1.2 TLSv1.3;
  ssl_session_cache shared:SSL:10m;
  ## the following params are set in the global nginx.conf as shipped
  ## by the Fedora maintainers, so are not required here
  #ssl_session_tickets off;
  #ssl_ciphers HIGH:!MEDIUM:!LOW:!aNULL:!NULL:!SHA;
  #ssl_prefer_server_ciphers on;
  ssl_certificate /etc/letsencrypt/live/example.org/fullchain.pem; # managed by Certbot
  ssl_certificate_key /etc/letsencrypt/live/example.org/privkey.pem; # managed by Certbot
  include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
  ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

  keepalive_timeout 70;
  sendfile on;
  client_max_body_size 80m;

  root /home/mastodon/server/public;

  gzip on;
  gzip_disable "msie6";
  gzip_vary on;
  gzip_proxied any;
  gzip_comp_level 6;
  gzip_buffers 16 8k;
  gzip_http_version 1.1;
  gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript image/svg+xml image/x-icon;

  location / {
    try_files $uri @proxy;
  }

  # If Rails (rather than the host) were serving these static files, the
  # `try_files $uri =404;` lines below would need to be
  # `try_files $uri @proxy;` instead.
  location = /sw.js {
    add_header Cache-Control "public, max-age=604800, must-revalidate";
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains";
    try_files $uri =404;
  }

  location ~ ^/assets/ {
    add_header Cache-Control "public, max-age=2419200, must-revalidate";
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains";
    try_files $uri =404;
  }

  location ~ ^/avatars/ {
    add_header Cache-Control "public, max-age=2419200, must-revalidate";
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains";
    try_files $uri =404;
  }

  location ~ ^/emoji/ {
    add_header Cache-Control "public, max-age=2419200, must-revalidate";
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains";
    try_files $uri =404;
  }

  location ~ ^/headers/ {
    add_header Cache-Control "public, max-age=2419200, must-revalidate";
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains";
    try_files $uri =404;
  }

  location ~ ^/packs/ {
    add_header Cache-Control "public, max-age=2419200, must-revalidate";
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains";
    try_files $uri =404;
  }

  location ~ ^/shortcuts/ {
    add_header Cache-Control "public, max-age=2419200, must-revalidate";
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains";
    try_files $uri =404;
  }

  location ~ ^/sounds/ {
    add_header Cache-Control "public, max-age=2419200, must-revalidate";
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains";
    try_files $uri =404;
  }

  location ~ ^/system/ {
    add_header Cache-Control "public, max-age=2419200, immutable";
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains";
    try_files $uri =404;
  }

  location ^~ /api/v1/streaming/ {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Proxy "";
    proxy_pass http://streaming;
    proxy_buffering off;
    proxy_redirect off;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains";
    tcp_nodelay on;
  }

  location @proxy {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Proxy "";
    proxy_pass_header Server;
    proxy_pass http://backend;
    proxy_buffering on;
    proxy_redirect off;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
    proxy_cache CACHE;
    proxy_cache_valid 200 7d;
    proxy_cache_valid 410 24h;
    proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
    add_header X-Cached $upstream_cache_status;
    tcp_nodelay on;
  }

  error_page 404 500 501 502 503 504 /500.html;
}
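Drop that into /etc/nginx/conf.d/ (Fedora's packaged include directory), check the syntax, and bring nginx up. This assumes certbot has already issued the certificates referenced above; run it first if not:

nginx -t
systemctl enable --now nginx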
and … er … that’s it.
Comments welcome via my profile.
[^1]: Once upon a time, when I ran my own mail servers, I’d have been less inclined to outsource this bit, but I’m long past the point of caring about delivery status notifications these days. Life is too short.