I’ve been slowly moving along on this self-hosting journey and now have a number of services that I regularly use and depend on. Of course I’m backing things up, but I still worry about screwing up my server and having to roll back/rebuild/fix whatever got messed up.

I’m just curious: for those of you with home labs, do you use a testing environment of some kind, or do you just push whatever you’re working on straight to "production"?

  • edit: grammar
  • morpheus17pro@lemmy.ml · 16 minutes ago

    In my case, yes. My setup is managed with Ansible playbooks, so I have a dev inventory and a playbook that spins up a virtualized environment mimicking my home lab as closely as possible (a few details cannot be fully replicated).

    That way, I usually prepare my new setups in dev, then deploy to my prod setup and test the few aspects I can’t reproduce in dev.
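
    A minimal sketch of what that flow could look like, assuming an inventories/dev and inventories/prod layout and a top-level site.yml (names are illustrative, not necessarily the commenter’s):

    ```bash
    # Rehearse the change against the virtualized dev environment first.
    ansible-playbook -i inventories/dev site.yml --check --diff  # dry run
    ansible-playbook -i inventories/dev site.yml                 # apply to dev

    # Once dev looks good, run the same playbook against the real lab.
    ansible-playbook -i inventories/prod site.yml
    ```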

    Finally, I have everything backed by a (private) git repo.

  • HumanPerson@sh.itjust.works · 3 hours ago

    Eh, I sometimes spin up a temporary Docker container for some nonsense on a separate computer. I usually just go for it after checking that no one is online and backing up the necessary data.
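
    The throwaway pattern might look like this; --rm deletes the container when it exits, so nothing lingers on the test machine (image and port are just examples):

    ```bash
    # Disposable container: --rm cleans it up automatically on exit.
    docker run --rm -it -p 8080:80 nginx:latest

    # Or a quick interactive shell to poke at something:
    docker run --rm -it debian:stable bash
    ```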

  • Avid Amoeba@lemmy.ca · 13 hours ago

    Sir, every professional developer knows there’s never enough time or people to maintain the testing environment, so testing is done in production! That testing environment you’re dreaming of is missed shareholder value.

  • ambitiousslab@lemmy.ml · 10 hours ago

    For services only I depend on, I have production only, since I can only inflict damage on myself and can often work around problems.

    For the XMPP server my friends and family also depend on, I have a dedicated nonprod VPS. My services are driven by Ansible playbooks, so I’ll tweak the playbook until whatever change I want to make works in nonprod, before running the same playbook against prod.
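
    That loop might look roughly like this (inventory and playbook names are hypothetical):

    ```bash
    # Iterate against the nonprod VPS until the change works...
    ansible-playbook -i inventory.yml xmpp.yml --limit nonprod

    # ...then run the identical playbook against the production server.
    ansible-playbook -i inventory.yml xmpp.yml --limit prod
    ```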

    Whenever there’s a new Debian Stable release, I’ll rebuild the servers completely, to try and prevent “drift” between the nonprod and prod versions (not that I change things often enough for this to become a big problem). This is also the big test of my backups, which so far haven’t been needed in a “real” emergency 🤞

  • notabot@lemm.ee · 10 hours ago

    I manage all my homelab infra via Ansible and run services on Kubernetes. All the Ansible playbooks are in git, so I can roll back if I screw something up, and I test on a sacrificial VM first when I can. Running services on Kubernetes means I can spin up new instances and test them before putting them live.
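
    As a sketch, spinning up a test instance before going live could look like this (namespace and manifest names are assumed):

    ```bash
    # Stand up a disposable copy of the service in its own namespace.
    kubectl create namespace svc-test
    kubectl apply -n svc-test -f deployment.yaml

    # Poke at it before it goes anywhere near the live instance.
    kubectl -n svc-test get pods
    kubectl -n svc-test port-forward deploy/myservice 8080:80

    # Happy? Apply to the live namespace and tear down the test copy.
    kubectl apply -n production -f deployment.yaml
    kubectl delete namespace svc-test
    ```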

    Working like that makes it all a lot more relaxing as I can be confident in my changes, and back them out if I still get it wrong.

  • JovialSodium@lemmy.sdf.org · 13 hours ago

    Nope. I fiddle until it does what I want. If the thing I’m working on is complex, or I’m struggling with it, I’ll keep versions of configs. And I back up working configs via an rsync job. It isn’t a particularly robust solution, but I’m content with it for my needs.
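
    Such an rsync job can be as small as this (paths and host are illustrative):

    ```bash
    # Mirror working configs to another machine.
    # -a preserves permissions/timestamps, -z compresses in transit,
    # --delete keeps the mirror from accumulating stale files.
    rsync -az --delete /etc/myservices/ backuphost:/backups/configs/
    ```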

  • N0x0n@lemmy.ml · 14 hours ago

    Production is my testing lab, but only in my homelab! I guess I don’t care to perfectly secure my services (really dumb and easy passwords, no 2FA, passwords sitting in plain sight…) because I’m not directly exposing them to the web; I access them externally via WireGuard! That’s really bad practice, and I’ll probably clean up that mess sometime soon, but right now I can’t, I have to cook some eggs…

    There are two things, though, where I actually have a more complex workflow:

    • A rather complex automated incremental backup script for my Docker container volumes, databases, config files, and compose files (see the sketch after this list).

    • A self-hosted mini-CA, so I can access all my services via a nice .lab domain and get rid of that pesky certificate warning on my devices.
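
    The backup half could be approximated with rsync’s hard-link snapshots; this is a simplified sketch with made-up paths, not the actual script:

    ```bash
    #!/usr/bin/env bash
    # Incremental backup via hard-linked snapshots: unchanged files are
    # hard links into the previous run, so every snapshot browses like a
    # full copy but only costs the space of what changed.
    set -euo pipefail

    SRC=/var/lib/docker/volumes
    DEST=/mnt/backup/docker
    TODAY=$(date +%F)

    rsync -a --delete --link-dest="$DEST/latest" "$SRC/" "$DEST/$TODAY/"

    # Repoint "latest" at the snapshot we just made.
    ln -sfn "$DEST/$TODAY" "$DEST/latest"
    ```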

    I always test on a VM on my personal desktop computer whether my backups are actually working, because no backups would mean all those years of tinkering were for nothing… That would bring on some nasty depression…

    Edit: I have a rather small homelab, everything on an old laptop, but I’m still quite happy with the result and it works as expected.

  • Zwuzelmaus@feddit.org · 14 hours ago

    No testing environment in my home lab so far.

    But on the other hand, no planned builds either. Just fiddling around until it works.

    I am currently planning for new hardware, and then I’ll do the whole setup from scratch with build scripts, as fully automated as possible. But for that I need to do some learning first.

    So the new hardware is going to be its own test environment for a good while, until it turns into production.

  • beerclue@lemmy.world · 14 hours ago

    I personally use my home lab to test and learn, and I try to mimic a corporate environment. I have multiple instances of DNS, proxies, etc., and I have a “prod” and a separate “staging” k8s environment. As much as possible, without going nuts about it, I test updates and potentially breaking changes in the staging cluster first.
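
    With two clusters, that workflow often reduces to switching kubectl contexts (context names assumed):

    ```bash
    # Roll the change out to staging first...
    kubectl config use-context staging
    kubectl apply -f new-version.yaml

    # ...let it soak, then promote the same manifest to prod.
    kubectl config use-context prod
    kubectl apply -f new-version.yaml
    ```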

  • themoonisacheese@sh.itjust.works · 12 hours ago

    My latest project runs on a VM that I edit with VS Code’s SSH editing feature. I edit the only copy of the file in existence (I have made no backup and there is no version control) and then I restart the systemd service.

    So what if I mess it up? Big deal. The discord bot goes down for a few minutes and I fix it.

    Same goes for the machine configs. Ideally the machines are stable, the critical ones get backups, and if they aren’t stable then I suppose the best way to fix it would be in prod (my VMs run Debian; they’re stable).

  • lorentz@feddit.it · 13 hours ago

    I don’t have a testing environment, but essentially all my services run in Docker, saving their data in a directory mounted from the local filesystem. The compose file reads each image’s SHA digest from an env file. I have a shell script (sketched after this list) which:

    1. Triggers a new btrfs snapshot of the volume containing everything
    2. Pulls the new docker images and stores their hashes in the env file
    3. Restarts all the containers.
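
    A condensed sketch of such a script, assuming the compose file references images as e.g. image: ${NGINX_IMAGE} (paths, image names, and the env file layout are guesses):

    ```bash
    #!/usr/bin/env bash
    set -euo pipefail

    # 1. Read-only btrfs snapshot of the data volume, named by date.
    btrfs subvolume snapshot -r /srv/appdata "/srv/.snapshots/appdata-$(date +%F)"

    # 2. Pull fresh images and pin their digests in the env file.
    for img in nginx:latest postgres:16; do
        docker pull "$img"
        digest=$(docker image inspect --format '{{index .RepoDigests 0}}' "$img")
        var="$(echo "${img%%:*}" | tr 'a-z-' 'A-Z_')_IMAGE"   # nginx -> NGINX_IMAGE
        sed -i "s|^$var=.*|$var=$digest|" .env
    done

    # 3. Recreate any container whose pinned image changed.
    docker compose up -d
    ```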

    If a new image version is broken, rolling back is as simple as copying the old hash into the env file and recreating the container. If data gets corrupted, I can just copy the last working state from an old snapshot.

    The whole OS is on a btrfs volume which is snapshotted regularly, so if an update fucks it up beyond recovery I can always boot from a rescue image and restore an old snapshot. But I honestly feel this is an extra precaution: in the years I’ve run Debian on all my computers, it has never become unbootable.
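
    For reference, that rescue-image rollback would look roughly like this on a typical btrfs layout (subvolume names vary by distro; these are illustrative):

    ```bash
    # From a rescue/live image: mount the btrfs top level...
    mount -o subvolid=5 /dev/sda2 /mnt

    # ...move the broken root subvolume aside and restore a snapshot.
    mv /mnt/@ /mnt/@broken
    btrfs subvolume snapshot /mnt/.snapshots/@-2025-01-01 /mnt/@
    reboot
    ```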