Administering my GoToSocial instance - monitoring and backup
As with the previous post about GoToSocial, you should not take what I write here as gospel. It is possible that I am not doing everything in the best way. One of the reasons I write these posts, apart from the need to share what I have learnt, is the hope that someone more knowledgeable than I am will point out my mistakes and provide useful feedback.
My GoToSocial instance has been running for more than a month now, and having gathered some experience in administering it, I can share it with you, dear readers.
fedi.stfn.pl has been running without issues so far. I have not lost any data, and the number of instances federated with it is constantly growing. The bots are doing what they are told, and I occasionally use it to toot updates, mostly meta toots about how the instance is doing. Ah, the Fediverse trend of tooting mostly about the Fediverse :)
The number of instances with which I am federating keeps growing.
In the previous blog post I talked about setting up and deploying a GoToSocial instance; here I want to talk about the day-to-day monitoring and backup chores.
Resource Usage
My instance does not see much traffic: there are four active accounts, three of them bots posting once a day, plus my personal account, to which I post once every few days. There are occasional spikes of traffic when larger accounts retoot my bots.
I’m running it on a VPS with 2 vCPUs and 3 GB of RAM. RAM usage is stable at around 600 MB, with spikes up to 800 MB. The typical load is around 0.1, with spikes rarely passing 1.0. Over that month I used 10 GB of transfer, including the usual Ubuntu package updates. The GTS media folder takes up around 500 MB of disk space. I can confirm what the GTS docs state: with light traffic, GTS will run on the smallest of VPSs, or on a Raspberry Pi, if you can expose your local SBC to the Internet.
Monitoring
For the basic “is my site up” monitoring I am using healthchecks.io. They provide free email and WhatsApp alerts when the site stops responding, and a nice badge to put somewhere.
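healthchecks.io expects a periodic ping and alerts when the pings stop arriving, so one common way to wire up this kind of uptime check is a cron job that pings it only when the instance answers. Here is a minimal sketch, using the public /api/v1/instance endpoint as the "is it alive" URL and a placeholder check UUID (an illustration, not my exact setup):

# /etc/cron.d/gts-healthcheck -- sketch; the hc-ping UUID is a placeholder
# Every 5 minutes: ping healthchecks.io only if the instance answers
*/5 * * * * root curl -fsS -m 10 https://fedi.stfn.pl/api/v1/instance > /dev/null && curl -fsS -m 10 --retry 3 https://hc-ping.com/your-check-uuid > /dev/null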
For detailed information on how the instance is doing I am using a combination of Prometheus, Node Exporter and Nginx Exporter.
I have all three running as systemd services. Prometheus scrapes itself, Node Exporter, and Nginx Exporter, and exposes the metrics. To reach the metrics from outside, UFW is configured to expose port 9090, but only over the Tailscale VPN interface. The metrics are presented as graphs in Grafana, running in my new, shiny k3s cluster. More on that cluster in a future blog post.
Part of the /opt/prometheus/prometheus.yml config file defining the metrics collection:
scrape_configs:
  - job_name: "prometheus"
    static_configs:
      - targets: ["localhost:9090"]
  - job_name: "node"
    static_configs:
      - targets: ["localhost:9100"]
  - job_name: "nginx"
    static_configs:
      - targets: ["localhost:9113"]
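With the exporters running as systemd services, a quick sanity check that each endpoint actually serves metrics can look something like this (assuming the default ports from the config above):

# confirm the exporters respond locally on their default ports
curl -s localhost:9100/metrics | head -n 3   # Node Exporter
curl -s localhost:9113/metrics | head -n 3   # Nginx Exporter
curl -s localhost:9090/-/healthy             # Prometheus itself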
And the settings for the UFW firewall, showing how Prometheus is only available through the VPN.
>>> sudo ufw status numbered
Status: active
     To                         Action      From
     --                         ------      ----
[ 1] OpenSSH                    ALLOW IN    Anywhere
[ 2] 443                        ALLOW IN    Anywhere
[ 3] 80/tcp                     ALLOW IN    Anywhere
[ 4] 9090 on tailscale0         ALLOW IN    Anywhere
[ 5] OpenSSH (v6)               ALLOW IN    Anywhere (v6)
[ 6] 443 (v6)                   ALLOW IN    Anywhere (v6)
[ 7] 80/tcp (v6)                ALLOW IN    Anywhere (v6)
[ 8] 9090 (v6) on tailscale0    ALLOW IN    Anywhere (v6)
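For reference, rules like these can be created with commands along these lines (a sketch of the general approach, not a transcript of what I actually ran):

sudo ufw allow OpenSSH
sudo ufw allow 80/tcp
sudo ufw allow 443
# Prometheus: only accept connections arriving on the Tailscale interface
sudo ufw allow in on tailscale0 to any port 9090
sudo ufw enable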
Backup
Backup is done using Borgmatic. Borgmatic is a tool that runs Borg on a schedule, and Borg is a tool for making incremental backups. Incremental meaning that the first backup backs up (wow, great language) the full data, and the subsequent ones only record the changes, saving disk space.
I configured Borgmatic according to the docs. For now, I am only backing up locally on the same VPS, but I have plans to set up automatic remote backups to my Hetzner Storage Box. I just hope I’ll make myself do it before I really need it :)
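As a rough illustration of the day-to-day usage (the exact subcommands and flags depend on your borgmatic version, so treat this as a sketch rather than a copy-paste recipe):

# run the backup as defined in /etc/borgmatic/config.yaml
sudo borgmatic create --verbosity 1 --stats
# list the archives stored in the repository
sudo borgmatic list
# periodically verify that the repository and archives are consistent
sudo borgmatic check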
Martin created an interesting alternative approach to GTS backup. He made a bash script that handles the backup process in a smart way. I recommend taking a look at the repo, and listening to the podcast in which he discusses his approach.
Who is on my instance?
stfn - my account for tooting about the instance
astrobin_iotd - a bot that every day toots the Astrobin Image of the Day
stacjadnia - a bot that every day toots a link to a random railway station in Poland
naszpapa - a bot that every day at 21:37 toots an emoji with the face of the late pope John Paul II (that is a Polish inside joke, don’t ask).
Bottom Line
I am very much enjoying having my own instance in the Fediverse; I feel that I have reached another level in my online presence. It’s a learning opportunity, and also a way to have my own part of the Fediverse. Even if the instances of my main accounts go down (I do hope it won’t happen, those are great instances with very nice admins), I will have a place to go back to. And I have a place for testing, for investigating, for my silly bots.
I might even consider inviting other people to use my instance, but only if they acknowledge that I am actually a noob “cosplaying as a sysadmin” ((c) Jeff Geerling) and they won’t mind that there is a non-zero chance that I do something stupid and lose all their data.
BTW, a few days ago the admin of the late botsin.space instance released a “postmortem”, writing in detail about his experience of running a large Mastodon instance. I found it a very interesting read, and it says a lot about the state of Mastodon and the Fediverse in general. Reading it made me even more thankful for the creation of GoToSocial, a simpler and lighter way than Mastodon to have your own instance.
Thanks for reading! If I find something worth mentioning about my GTS instance, there will be another blog post in this series.
If you are planning to host your own instance, there’s still my affiliate link for a cheap VPS on RackNerd. If you use this link and buy a server, I will get a tiny commission.
If you enjoyed this post, please consider helping me make new projects (and pay for the servers!) by supporting me on the following crowdfunding sites: