The original post: /r/selfhosted by /u/schizovivek on 2024-11-10 14:38:13.
Edit: the images seem to have come out too large. Apologies for that.
This is my current setup:
I currently have 4 devices.
- 2 Pis
  - 1 Pi currently just runs AdGuard Home.
  - 1 Pi currently just runs RetroPie.
- 1 mini PC with a USB 5-bay SATA enclosure (so that's the NAS).
  - The mini PC is the "powerhouse"; it handles everything else (Emby, Immich, Caddy, the arr suite, etc.).
- 1 PC that I work on and use to manage everything.
  - Honestly, now that I think about it, I only use the PC as an interface and SSH into the mini PC to manage everything.
Managing this was easy enough, since all my deploys currently live on my mini PC. Recently, though, I started thinking about splitting things up a little.
I'd like to split services into:
- lightweight with 24-hour availability, e.g.:
  - changedetection
  - ntfy
  - caddy
  - adguardhome
  - prowlarr
- high availability, but needs more processing power, e.g.:
  - immich
  - emby
  - photoprism
  - gitea
  - jdownloader
  - mediacms
- emulation only
  - I'd like to have something like romm running on it to manage the game collection.
With the emulation part nothing really changes. Overall, I'd like to be able to manage everything from my PC so that I don't have to log into each machine to deploy something on it.
Currently the compose files and config files are handled via a docker-stacks folder on my mini PC, which I stow into my dockerapps folder; that way all my Docker configuration is versioned in Gitea. Once a stack is stowed into the dockerapps folder, I run an alias for the docker compose up command that handles the compose file living inside another folder. E.g. for Caddy, all I do is `dcf caddy` and it picks up the env file and the compose file.
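For illustration, the alias boils down to something like this shell function (a sketch with simplified paths, not my exact alias):

```sh
# Sketch of a dcf-style helper (simplified; not my exact alias).
# Runs docker compose for an app whose compose and env files live
# in their own subfolder under ~/dockerapps.
dcf() {
  local app="$1"
  local dir="$HOME/dockerapps/$app"
  docker compose \
    --project-directory "$dir" \
    --env-file "$dir/.env" \
    -f "$dir/compose.yaml" \
    up -d
}
```

So `dcf caddy` resolves the right compose and env files without me having to cd around.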
I started looking into how I could deploy somewhere remotely and found Docker contexts. To test it out, I created a context called `pi` on my mini PC, and then to deploy I had to stow the compose file on my mini PC as well as on my Pi.
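For anyone who hasn't used contexts, the setup is roughly this (user and hostname are placeholders):

```sh
# Create a context that points at the Pi's Docker daemon over SSH
# (user@host is a placeholder for my actual Pi).
docker context create pi --docker "host=ssh://pi@raspberrypi.local"

# Any compose command can then target the Pi from the mini PC:
docker --context pi compose -f "$HOME/dockerapps/caddy/compose.yaml" up -d
```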
I currently handle the stowing part with a makefile that I've configured to stow to a specific location, so that's not much work (now that I've figured it out :-P).
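Under the hood the makefile target is essentially just a stow call with a pinned source and target, roughly like this (folder names simplified):

```sh
# Roughly what the makefile target wraps (folder names simplified):
# symlink the "caddy" package from ~/docker-stacks into ~/dockerapps.
stow --dir="$HOME/docker-stacks" --target="$HOME/dockerapps" caddy
```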
With this setup I tested something as simple as Caddy, and saw that I sadly needed two stows, for the compose file as well as the Caddyfile: one stow on the mini PC so I can run the docker compose up command, and the other on the Pi so that the Caddyfile lands where the app is actually deployed.
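In other words, the duplication looks roughly like this (hostname is a placeholder, and it assumes the repo is cloned on the Pi too):

```sh
# On the mini PC: stow so docker compose can find the compose file.
stow --dir="$HOME/docker-stacks" --target="$HOME/dockerapps" caddy

# On the Pi: stow again so the Caddyfile exists where the container's
# bind mount expects it.
ssh pi 'stow --dir="$HOME/docker-stacks" --target="$HOME/dockerapps" caddy'
```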
As I type, I'm just jotting down what I feel I can do better:
- Now that I think about contexts, I guess I can manage all the compose files from my PC rather than maintaining them on my mini PC.
- I'd then need to create another repo to maintain the configuration files, which I could clone and stow onto the specific machines.
- I guess I can update the makefile to handle the SSH part of doing things on the specific boxes (see the sketch after this list).
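A rough sketch of what that per-box deploy could look like; the repo names, paths, and hostname are all placeholders:

```sh
# 1. Refresh and stow the config repo on the remote box, so bind-mounted
#    files (e.g. the Caddyfile) exist where the containers expect them.
ssh pi 'git -C "$HOME/docker-configs" pull && stow --dir="$HOME/docker-configs" --target="$HOME/dockerapps" caddy'

# 2. Bring the stack up against the remote daemon; the compose file
#    itself only has to live on my PC thanks to the context.
docker --context pi compose -f "$HOME/docker-stacks/caddy/compose.yaml" up -d
```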
I'm pretty sure there are better ways to do this, and I'm willing to give something new a try, hence this post. I wanted to hear how others manage their stacks. Basically, I'm wondering how I can control the compose files and configurations from a central location; appdata would still have to live on the specific devices, which is fine. (Apologies in advance, since I'm not that great at articulating my thoughts :-|)
Screengrab of what my docker-stacks folder looks like
Screengrab of my docker-stacks makefile