17 Jun 2025
Hosting my side projects the easy way
Choosing boring technology
I’m writing a bunch of software for myself as a hobby. Most of this software requires a server to function, be it to store data or to serve a web application. I’ve gone through a couple of iterations on my hosting setup over the years and finally, I’ve arrived at one I’m quite content with. In this note I will take you down my path of simplification.
Few dependencies
Software like mine has no special requirements; many have built similar software before, and many will after. There are dozens of ways to deploy and host backend servers, databases and static web applications. Many of those ways try to abstract complexity away from you, pushing it down into an infrastructure layer. That sounds appealing, but if you give in to the temptation, you’re dependent on that machinery. I like to avoid that and make sure I control the whole stack. This doesn’t have to be as hard as cloud vendors make it sound. They have an interest in making you believe you can’t administer a database by yourself, after all.
To get more specific, I don’t like platforms-as-a-service like Vercel. They promise to get you started quickly, but in the process you lose flexibility, independence and your money. Instead, I’m opting for a virtual private server, a bare Linux box if you will. It represents the foundational unit to run software on. You can spin up a VPS wherever you want, even on your own hardware, so there is no lock-in.
Further, I’ve largely stopped using auxiliary services to enable deployments, backups or whatever else you may want to do. Again, there are platforms that promise you a seamless experience, for just a little bit of your hard-earned money (or data). But I’ve found that for my purposes I can do without.
Architecture
My setup is entirely dockerized. I run Caddy as my central web server that fulfils multiple roles: TLS termination, static site serving and reverse-proxying. All my frontend-only projects are simply bundled and chucked into a folder for Caddy to serve. All my backends are dockerized and spun up alongside Caddy, which proxies requests appropriately. All my domains have automatic SSL certificates issued by Let’s Encrypt, all managed by Caddy.
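To illustrate, the Caddy configuration for a setup like this stays remarkably small. The following is a hypothetical Caddyfile sketch with made-up domains, not my actual config: one site block serves static files, another reverse-proxies to a dockerized backend, and Caddy provisions Let’s Encrypt certificates for both on its own.

# Hypothetical Caddyfile; domains and backend name are placeholders.
# HTTPS is automatic: Caddy obtains Let's Encrypt certificates per site.
example.dev {
    root * /srv/example.dev
    file_server
}

api.example.dev {
    # "backend" resolves via the shared docker network
    reverse_proxy backend:8080
}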
It’s quite simple, really.
Deployment
The only real question is how my software ends up there. Previously, I was manually copying compose files, .env files etc. onto the server and ssh-ing in to issue docker commands. But I’ve found a better way, enabled by docker contexts.
Let me bring you up to speed in case you’re unfamiliar. The docker CLI tool we use to interact with containers is not doing most of the work itself, it’s instead sending commands to the docker daemon running in the background. Now, with docker contexts we can instruct docker to send commands not to the local daemon, but instead to a remote one, reachable via ssh for instance.
So for my purposes, I’ve set up a docker context that points to my VPS via ssh. This is entirely seamless, docker will automatically use the configured ssh key.
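Creating such a context is a one-liner. The user and hostname here are placeholders for your own server:

docker context create arm --docker "host=ssh://user@my-vps.example.com"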
Now, I can bring up this very website with the following command:
docker --context arm compose --file docker-compose.prod.yml up -d
arm is the name of the context (the VPS is an ARM server), the docker-compose.prod.yml is a local file in this repo. By the way, for reasons, this website is not a naked bundle of static assets, but instead a Rust application that serves said bundle.
And the interesting thing here is that if I want to reference secrets from a .env file to provide as environment variables to the running container, I can do just that. The local .env file is read and passed along in the raw docker commands sent over the network to the daemon on arm. This way, no files have to be manually managed on the server, neither compose files nor .env files.
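As a sketch of what that looks like in practice, here is a hypothetical fragment of a compose file. The API_TOKEN variable is an invented example; compose interpolates it from the local .env file before the commands ever leave my laptop:

# docker-compose.prod.yml (hypothetical fragment)
services:
  website:
    image: "ghcr.io/beingflo/marending-dev:0.4.2"
    environment:
      # Substituted locally from .env, never stored on the server
      API_TOKEN: ${API_TOKEN}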
This means that I can set up a new remote context, and immediately bring up my applications with a single command, given that the ssh target has a docker daemon running. Lovely.
Now you might think I would have a CI/CD pipeline that gets triggered when I push changes to some branch and executes this command to update the deployment. You would have been correct some time ago, but not anymore. As outlined above, I want to avoid third-party services as much as possible. So instead, I’ve started writing shell scripts to streamline the process of locally building docker images and installing them on the server. Shocking, I know. This is straight out of a 1997 nerd magazine. But it works perfectly for my use cases. I can utilize the docker layer cache to make builds blazing fast, I can separate commits logically from units of deployment, and it feels good to have such flexibility.
Here’s the full deployment script that I invoke every time I redeploy this website:
#!/usr/bin/env bash
die() { echo "$*" 1>&2 ; exit 1; }

echo -e "Deploying marending.dev to production!"

# Refuse to deploy from a dirty working tree
[ -z "$(git status --porcelain)" ] || die "There are uncommitted changes"

# Read the current version from the Cargo manifest
cd service; version=$(cargo metadata --format-version=1 --no-deps | jq '.packages[0].version' | tr -d '"'); cd ..
echo "Latest version: ${version}"

# Default to a patch increment, but let me override interactively
next_version=$(echo ${version} | awk -F. -v OFS=. '{$NF += 1 ; print}')
read -p "Enter version to be deployed [${next_version}]: " new_version
new_version=${new_version:-${next_version}}

cd service; cargo set-version "${new_version}" || die "Failed to set version in Cargo.toml"; cd ..

# Build and push the image, then point the compose file at the new tag
docker buildx build -t "ghcr.io/beingflo/marending-dev:${new_version}" . || die "Failed to build docker image"
docker push "ghcr.io/beingflo/marending-dev:${new_version}" || die "Failed to push docker image"
sed -i '' -e "s/image: \"ghcr.io\/beingflo\/marending-dev:${version}\"/image: \"ghcr.io\/beingflo\/marending-dev:${new_version}\"/" ./docker-compose.prod.yml || die "Failed to write new version to docker compose file"

# Pull and restart on the server through the remote docker context
docker --context arm compose --file docker-compose.prod.yml pull || die "Failed to pull new image"
docker --context arm compose --file docker-compose.prod.yml up -d || die "Failed to bring compose up"

# Record the release in git
git commit -am "Release ${new_version}"
git tag "${new_version}"
git push origin --tags
Some notes: the die function is just a handy way to exit early. The version handling queries the current version from cargo, then presents a choice to the user: either the default of a patch increment on the current version, or a user-provided version. This new version is then written to the Cargo.toml file and used to tag the resulting image appropriately.
Then the docker image is built, pushed to the GitHub Container Registry, and subsequently pulled on the server and spun up. This script took some tinkering, but now I use a version of it in every one of my projects.
I could have also avoided my use of a container registry by directly pushing the image to the server, but as docker registries are a rather standardized commodity at this point I don’t worry about vendor lock-in too much on this front.
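For the curious, that registry-free variant could look something like the following, streaming the image over ssh through the same context. This is a sketch of the alternative, not the workflow I actually use:

# Export the local image and load it straight into the remote daemon
docker save "ghcr.io/beingflo/marending-dev:${new_version}" | docker --context arm load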
For static frontend-only applications my deployment script looks significantly simpler:
#!/usr/bin/env bash
die() { echo "$*" 1>&2 ; exit 1; }

echo -e "Deploying rest.quest to production!"

# Refuse to deploy from a dirty working tree
[ -z "$(git status --porcelain)" ] || die "There are uncommitted changes"

npm run build || die "Build failed"

# Copy the build output into the running Caddy container on the VPS
docker --context arm cp dist caddy:/srv/rest.quest || die "Failed to copy files to container"
Here I’m copying the dist folder after an npm run build straight into Caddy’s docker container.
I also have a separate deployment repo that serves as the home for all the infrastructure-related files and configs, like the Caddy config, the compose file and the deployment script itself to roll out a new config to the running Caddy container.
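That rollout is, in spirit, just two more remote docker commands. The paths here are assumptions based on the official Caddy image’s defaults, not a verbatim copy of my script:

# Replace the Caddyfile inside the container, then reload without downtime
docker --context arm cp Caddyfile caddy:/etc/caddy/Caddyfile
docker --context arm exec caddy caddy reload --config /etc/caddy/Caddyfile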
Conclusion
This is a rather old-school way of doing things. I would have scoffed at this not too long ago, but I’ve come to appreciate the simplicity. I understand every part of this setup and can fix any issues. I can’t say the same about the network layer in a Kubernetes cluster, for instance.
Further, this setup is efficient. I’m building docker images locally on my laptop in seconds, rather than spinning up a cloud VM somewhere that takes minutes to build from scratch.
Lastly, I want to emphasize that this is very much designed for my personal use in hobby projects. I would not recommend it for a team; there, the imposed rigor of an automatic CI run is desirable. But since I’m just having fun here, I might as well deviate from the “best practices” I’d follow at work.