I’ve been slowly moving along in this self-hosting journey and now have a number of services that I regularly use and depend on. Of course I’m backing things up, but I still worry about screwing up my server and having to roll back, rebuild, or fix whatever got messed up.

I’m just curious: for those of you with home labs, do you use a testing environment of some kind, or do you just push whatever you’re working on straight to "production"?

  • edit: grammar
  • lorentz@feddit.it

    I don’t have a testing environment, but essentially all my services run in Docker and save their data in a directory mounted from the local filesystem. The Dockerfile reads the SHA of the image from an env file. I have a shell script which (rough sketch after the list):

    1. Triggers a new btrfs snapshot of the volume containing everything
    2. Pulls the new Docker images and stores their hashes in the env file
    3. Restarts all the containers.
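
    Something like the sketch below, purely as an illustration of the idea: the paths, the single nextcloud service, and the NEXTCLOUD_IMAGE variable are made-up examples, and it assumes a docker-compose setup where the compose file takes its image reference from the .env file (image: ${NEXTCLOUD_IMAGE}).

    ```sh
    #!/bin/sh
    # Hypothetical update script; paths and service names are examples, not the author's setup.
    set -eu

    DATA_VOL=/srv/docker          # btrfs subvolume holding the compose file and all service data
    SNAP_DIR=/srv/snapshots       # where read-only snapshots go
    ENV_FILE="$DATA_VOL/.env"     # the compose file reads image digests from here

    # 1. Read-only snapshot of the whole volume before touching anything
    btrfs subvolume snapshot -r "$DATA_VOL" "$SNAP_DIR/docker-$(date +%F-%H%M)"

    # 2. Pull the current tag and record its digest in the env file
    docker pull nextcloud:latest
    DIGEST=$(docker image inspect --format '{{index .RepoDigests 0}}' nextcloud:latest)
    printf 'NEXTCLOUD_IMAGE=%s\n' "$DIGEST" > "$ENV_FILE"   # e.g. nextcloud@sha256:...

    # 3. Recreate the containers on the pinned digests
    cd "$DATA_VOL"
    docker compose up -d
    ```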

    If a new image version is broken, rolling back is as simple as copying the old version back into the env file and recreating the container. If data gets corrupted I can just copy the last working state back from an old snapshot.
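
    With the same made-up layout as above, a rollback could look roughly like this (the digest, the snapshot date, and the nextcloud-data directory are placeholders):

    ```sh
    cd /srv/docker

    # Image rollback: put the previous digest back into the env file and recreate the container
    printf 'NEXTCLOUD_IMAGE=nextcloud@sha256:<previous digest>\n' > .env
    docker compose up -d nextcloud

    # Data rollback: stop the service and copy its directory back out of a read-only snapshot
    docker compose stop nextcloud
    rm -rf nextcloud-data
    cp -a --reflink=always /srv/snapshots/docker-<date>/nextcloud-data .
    docker compose start nextcloud
    ```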

    The whole OS is on a btrfs volume which is snapshotted regularly, so ideally, if an update fucks it up beyond recovery, I can always boot from a rescue image and restore an old snapshot. But I honestly feel this is an extra precaution: in the years I’ve been running Debian on all my computers, it has never reached the point of being unbootable.
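
    Purely as a sketch of what that restore might look like, assuming a root subvolume named @, snapshots kept in a top-level snapshots subvolume, and /dev/sda2 as a placeholder device; the exact steps depend on how fstab and the bootloader reference the root subvolume:

    ```sh
    # Run from a rescue/live image
    mount -o subvolid=5 /dev/sda2 /mnt                        # mount the top level of the btrfs filesystem
    mv /mnt/@ /mnt/@.broken                                   # keep the broken root around, just in case
    btrfs subvolume snapshot /mnt/snapshots/@-<date> /mnt/@   # writable copy of a known-good snapshot
    umount /mnt
    reboot
    ```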