Currently, I run Unraid and have all of my services set up there as Docker containers. While this is nice and easy to set up initially, it has some major downsides:

  • It’s fragile. Unraid is prone to bugs/crashes with Docker that take down my containers. It’s also not resilient, so when things break I have to log in and fiddle.
  • It’s mutable. I can’t use any infrastructure-as-code tools like Terraform, and configuration sort of just exists in the UI. I can’t really roll back or recover easily.
  • It’s single-node. Everything is tied to my one big server that runs the NAS, but I’d rather have the NAS as a separate, fairly low-power appliance and a separate machine to handle things like VMs and containers.

So I’m looking ahead and thinking about what the next iteration of my homelab will look like. While I like Unraid for the storage side, I’m a little tired of wrangling it into a container orchestrator and hypervisor, and I think this year I’ll split that job out to a dedicated machine. I’m comfortable with, and in fact prefer, IaC over fancy UIs, so I would love to be able to use Terraform or Pulumi or something like that. I would prefer something multi-node, as I want to be able to tie multiple machines together. And I want something that is fault-tolerant, as I host services for friends and family that currently require a lot of manual intervention to fix when they go down.

So the question is: how do you all do this? Kubernetes, docker-compose, HashiCorp Nomad? Do you run k3s, Harvester, or what? I’d love to get an idea of what people are doing and why, so I can get some ideas as to what I might do.

  • monkeyman512@lemmy.world · 10 months ago

    I would stay away from Kubernetes/k3s/k8s. Unless you want to learn it for work purposes, it’s so overkill that you can spend a month before you get things running. I know from experience. My current setup gives you options and has been reliable for me.

    NAS box: TrueNAS Scale. You can have Unraid fill this role.

    Services hosting: Proxmox. I can spin up any VMs I need, and there’s lots of info online for things like hardware passthrough to VMs.

    Containers: Debian VM. Debian makes a great server environment as it’s stable and well supported. I just make this VM a Docker Swarm host. I manage things with Portainer for a web interface.

    I keep data on the NAS and have containers access it over the network, usually via an NFS share.
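
    For illustration, a minimal compose sketch of that pattern; the NAS address and export path here are placeholders, so adjust to your environment:

# docker-compose.yml -- NFS-backed named volume (hypothetical host/paths)
services:
  jellyfin:
    image: jellyfin/jellyfin
    volumes:
      - media:/media              # container sees the NAS export at /media

volumes:
  media:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=192.168.1.10,soft"   # NAS IP is a placeholder
      device: ":/mnt/tank/media"    # NFS export path, also a placeholder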

    • Scrubbles@poptalk.scrubbles.tech · 10 months ago

      Seconding this. Running Kubernetes at home is great - to learn it for work.

      If you don’t need to use it for work, then you’re going to spend weeks if not months setting it up, for very little payoff at home.

    • nopersonalspace@lemmy.world (OP) · 10 months ago

      How do you manage your services on that - docker compose files? I’m really trying to get away from the workflow of clicking around in some UI to configure everything, only for it to glitch out and disappear, leaving me to try to remember what to click to get it back. That was my main problem with Portainer and why I moved away from it. (I have separate issues with docker-compose, but that’s another thing.)

      • hi_its_me@lemmy.world · 10 months ago

        I have a similar setup to the above. Personally I use Docker Compose and back up my compose files to the NAS.

      • khorak@lemmy.dbzer0.com · 10 months ago

        I personally stepped away from compose. You mentioned that you want a more declarative setup. Give Ansible a try. It is primarily for config management, but you can easily deploy containerized apps and correlate configs, hosts, etc.

        I usually write roles for the more specialized setups, like my HTTP reverse proxy, the arrs, etc. Then I keep everything in my inventory and var files. I’m really happy with it, and I really can tear things down and rebuild quickly. One thing to point out is that the compose module for Ansible is basically unusable. I use the docker container module instead. It works well so far and keeps my containers running without restarting them unnecessarily.
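
        As a rough sketch (the role layout, names, and version variable are hypothetical), a task using community.docker.docker_container looks like this:

# roles/reverse_proxy/tasks/main.yml
- name: Run the reverse proxy container
  community.docker.docker_container:
    name: nginx-proxy
    image: "nginx:{{ nginx_version }}"   # version pinned in var files
    state: started
    restart_policy: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /srv/nginx/conf.d:/etc/nginx/conf.d:ro

        The module compares the running container against the declared parameters and only recreates it when something actually changed, which is presumably why it avoids unnecessary restarts.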

  • Toribor@corndog.social · 10 months ago

    In my opinion, trying to set up a highly available, fault-tolerant homelab adds a large amount of unnecessary complexity without an equivalent benefit. It’s good to have redundancy for essential services like DNS, but otherwise I think it’s better to focus on a robust backup and restore process, so that if anything goes wrong you can just restore from a backup or start containers on another node.

    I configure and deploy all my applications with Ansible roles. It can programmatically create config files, pass secrets, build or start containers, cycle containers automatically after config changes, basically everything you could need.
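
    For example, a sketch of the config-cycling part (file names and the app are made up): a template task notifies a handler that bounces the container whenever the rendered config changes.

# roles/myapp/tasks/main.yml
- name: Render the app config
  ansible.builtin.template:
    src: config.yml.j2
    dest: /srv/myapp/config.yml
  notify: restart myapp

# roles/myapp/handlers/main.yml
- name: restart myapp
  community.docker.docker_container:
    name: myapp
    image: ghcr.io/example/myapp:latest   # hypothetical image
    state: started
    restart: true                         # force a stop/start cycle
    volumes:
      - /srv/myapp/config.yml:/app/config.yml:ro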

    Sure it would be neat if services could fail over automatically but things only ever tend to break when I’m making changes anyway.

    • CubitOom@infosec.pub · 10 months ago

      I would say that if you are going to host it at home, then Kubernetes is more complex. Bare-metal Kubernetes control plane management has some pitfalls. But if you were to use a cloud provider like Linode or DigitalOcean and use their managed Kubernetes service, then the only real extra complexity is learning how to manage Kubernetes, which is minimal.

      There is a decent hardware investment needed to run Kubernetes if you want it to be fully HA (which I would argue means a minimum of 2 clusters of 3 nodes each, on different continents), but you could run a single-node cluster with autoscaling at a cloud provider if you don’t need HA. I will say it’s nice not to have to worry about a service failing periodically, as it will just transfer to another node in a few seconds automatically.

    • Lem453@lemmy.ca · 10 months ago

      This. I used to have a Kubernetes setup, but how much redundancy can you really have at home? Do you have a generator? Multiple internet lines?

      The fact is most hardware is highly reliable. Having good backups to restore from is all you need and you gain a huge improvement in simplicity which adds reliability in and of itself.

    • nopersonalspace@lemmy.world (OP) · 10 months ago

      Yeah, I guess that’s true. I do think the other part, having configs done programmatically, is a lot more important anyway. If things go down but all it takes to get them back is to re-run the configs from files, then it’s not so bad.

      • taladar@sh.itjust.works · 10 months ago

        More importantly, if you do things programmatically, you will still have a record of how you did it last time when you next need to move to a new major version of something. That is particularly important in a home setting, where you don’t do tasks like that often.

  • sabreW4K3@lemmy.tf · 10 months ago

    I can’t remember what I was watching, but they said Kubernetes is designed for systems so large in scale that the only reason most people have heard of it is that some product manager asked what Google uses and then demanded they use it to replicate Google’s success. Hobbyists followed suit, and now a bunch of people are using something that’s poorly optimized for such small-scale systems.

    • nopersonalspace@lemmy.world (OP) · 10 months ago

      Haha, yeah, true. But it does come with the advantage that it’s super prevalent and so has a lot of tools and docs. Nearly every self-hosted service I use has a docs page for how to set it up with Kubernetes. (Although it’s not nearly as prevalent as plain Docker.)

      • CubitOom@infosec.pub · 10 months ago

        With a basic understanding of how k8s works and an already-running cluster, all one needs to know is how to run a service as a Docker container to have it also run in k8s.
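
        To illustrate the translation (the image and ports are just examples), something like docker run -p 8080:80 nginx becomes a Deployment plus a Service:

# deployment.yaml -- minimal sketch, not production-ready
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 1
  selector:
    matchLabels: { app: web }
  template:
    metadata:
      labels: { app: web }
    spec:
      containers:
        - name: web
          image: nginx
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector: { app: web }
  ports:
    - port: 8080       # cluster-facing port
      targetPort: 80   # container port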

  • FooBarrington@lemmy.world · 10 months ago

    I am happy with my simple docker-compose setup: one root folder with one subfolder per project, containing the compose file and any configuration mounted into the container. Traefik automatically exposes all services I want under a well-known URL using a single line in each compose file. Watchtower updates the containers.
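
    That single line is a Traefik router label. A sketch of one project’s compose file (the hostname is made up, and this assumes Traefik’s Docker provider is watching the Docker socket):

# whoami/docker-compose.yml
services:
  whoami:
    image: traefik/whoami
    labels:
      - "traefik.http.routers.whoami.rule=Host(`whoami.home.example`)"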

    This has been running stably for over two years with probably 2-3 reboots in between. If my current NUC ever breaks, I’ll set it up again using Podman instead of Docker, but aside from that I couldn’t be happier!

    • nopersonalspace@lemmy.world (OP) · 10 months ago

      This seems like a sensible choice, but it would be a bit messy for multi-node, which is the direction I’m heading in.

  • forwardvoid@feddit.nl · 10 months ago

    Portainer + Caddy + Watchtower. This will give you the benefits of containers without the complexity of Kubernetes. As someone who works with Kubernetes professionally, I agree with what other people have said here: “only run it if you want to learn it for professional use”.

    Portainer is a friendly UI for running containers. It supports docker compose as well. It helps with observability and ops.
    Caddy is an easy proxy with automatic Let’s Encrypt support.
    Watchtower will update and restart your containers if there’s an update.
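
    As a sketch, that whole stack fits in one compose file (ports and paths are assumptions):

# docker-compose.yml -- Portainer + Caddy + Watchtower
services:
  portainer:
    image: portainer/portainer-ce
    ports:
      - "9443:9443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - portainer_data:/data

  caddy:
    image: caddy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro    # site config; HTTPS is automatic

  watchtower:
    image: containrrr/watchtower               # watches the socket, updates containers
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock

volumes:
  portainer_data: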

  • superpants@lemmy.world · 10 months ago

    A plug for the pro-Kubernetes crowd:

    I run microk8s on a 3-node cluster, using FluxCD to deploy and manage my services. I also work with Kubernetes at work, so I’m very familiar with the concepts, and I will never use anything else.

    If you want maximum control and flexibility, learn Kubernetes. For a lot of people (myself included) it’s overkill, but IMO it’s the best.

    My main gripe with docker-compose, which is what I used to use, is that service changes require access to the machine: I have to run commands on the host to alter services. With Kubernetes, and more precisely a GitOps model, you can just make a commit to a Git repo and it will roll out.
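
    With FluxCD, for instance, the wiring is roughly two objects (the repo URL and paths here are hypothetical): a GitRepository that tracks the repo, and a Kustomization that applies a folder from it.

# flux-sync.yaml -- sketch of the GitOps hookup
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: homelab
  namespace: flux-system
spec:
  interval: 1m
  url: https://github.com/example/homelab
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: services
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: homelab
  path: ./services
  prune: true    # delete cluster objects that were removed from the repo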

    • Lem453@lemmy.ca · 10 months ago

      For your last point, Portainer fixes that. I use Portainer to pull compose files from my Gitea instance. There is an option to auto-update on Git commit, but I prefer to press the button to update.

      I write the compose files in VS Code and push them to my repo.

    • atzanteol@sh.itjust.works · 10 months ago

      FWIW, I manage docker compose files with Ansible. It allows me to centrally manage them without needing to log in to multiple VMs. I also create a systemd service file to start/stop the containers (also managed with Ansible).
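
      A sketch of that arrangement (the app name, paths, and unit contents are made up):

# tasks: push the compose file, install a unit that runs it, enable it
- name: Copy the compose file
  ansible.builtin.copy:
    src: files/myapp-compose.yml
    dest: /srv/myapp/docker-compose.yml

- name: Install a systemd unit that runs the stack
  ansible.builtin.copy:
    dest: /etc/systemd/system/myapp.service
    content: |
      [Unit]
      Description=myapp compose stack
      After=docker.service

      [Service]
      WorkingDirectory=/srv/myapp
      ExecStart=/usr/bin/docker compose up
      ExecStop=/usr/bin/docker compose down

      [Install]
      WantedBy=multi-user.target

- name: Enable and start the service
  ansible.builtin.systemd:
    name: myapp
    enabled: true
    state: started
    daemon_reload: true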

      That said, I’m starting to switch over to k8s as well (also with microk8s, which has been the easiest to work with). Definitely overkill, but I want to learn it.

    • nopersonalspace@lemmy.world (OP) · 10 months ago

      Yes, very true. I really would much prefer GitOps, as I feel… uneasy about how hand-wired and ephemeral my current setup is, and I’d love it to be more declarative and idempotent. It does seem like Kubernetes is the way to do that.

  • CubitOom@infosec.pub · 10 months ago

    You should try out all the options you listed and the other recommendations and find what works best for you.

    I personally use Kubernetes. It can be overwhelming, but if you’re willing to learn some new jargon, then try a managed Kubernetes cluster, like AKS or DigitalOcean Kubernetes. I would avoid managing a Kubernetes cluster yourself.

    Kubernetes gets a lot of flak for being overly complicated, but what that overlooks is all the things Kubernetes does for you.

    If you can spin up Kubernetes with cert-manager, external-dns, and an ingress controller like Istio, then you’ve got a whole automated data center for your Docker containers.
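
    The payoff is that routing, DNS, and certificates all hang off one object. A hypothetical Ingress (the hostname and issuer name are placeholders; external-dns picks the host up automatically, and cert-manager fills in the TLS secret):

# ingress.yaml -- sketch
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt   # assumes a ClusterIssuer with this name
spec:
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp
                port:
                  number: 80
  tls:
    - hosts: [myapp.example.com]
      secretName: myapp-tls    # created by cert-manager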

    • seang96@spgrn.com · 10 months ago

      I agree with this, though it’s not the easiest option. Ceph is amazing for storage. I am using mini-PC NUCs for my cluster, and to expand storage I simply plan to add nodes with the largest SATA and M.2 SSDs I can get.

      I recently set up Velero for automated backups. It’s even configured to dump my HA Postgres DB and then back up the dump folder.
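
      A sketch of that shape (namespace, schedule, and commands are assumptions): a Velero Schedule for the nightly backup, plus a pre-backup hook annotation on the database pod to dump first.

# velero-schedule.yaml
apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: nightly
  namespace: velero
spec:
  schedule: "0 3 * * *"            # cron syntax
  template:
    includedNamespaces: ["apps"]

# and on the Postgres pod template, a pre-backup hook:
#   metadata:
#     annotations:
#       pre.hook.backup.velero.io/command: '["/bin/sh", "-c", "pg_dump -U app app > /dump/app.sql"]'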

      Other cool experiences I can think of: Netdata with Prometheus metrics auto-scraping pods by their annotations to pull their statistics in. And nothing is more satisfying than making a service highly available.

    • nopersonalspace@lemmy.world (OP) · 10 months ago

      Thanks. Yeah, I’m tempted to try Kubernetes because of what you mentioned. I really like that every part I need (ingress controller, certs, etc.) is considered part of the core service and built in. Right now I have to run that stuff as its own service and wire everything up by hand. I don’t think I mind the extra overhead of Kubernetes either; I love to tinker with that sort of thing anyway!

      I think I will try a couple of things though. Maybe find a set of services to deploy with each and compare the experiences.

      • CubitOom@infosec.pub · 10 months ago

        Well, the Kubernetes API mostly has all the necessary parts built in, although sometimes you may want to install a custom resource, which often comes with more complex service installs.

        But I think the biggest strength of Kubernetes is all the FOSS projects available for it - specifically external-dns, cert-manager, and Istio. These are separate projects and have to be installed after the cluster is up.

        You can also look at the Cloud Native Computing Foundation’s list of projects. It’s a good list of things that work well.

        Caution: not all cloud providers support Istio. I know that Google’s GKE doesn’t; they make you use their own fork of it.

        I would also recommend you avoid Helm if possible, as it obfuscates what the cluster is doing and might make learning harder. Try to just stick to using kubectl if possible.

        I have heard good things about Nomad too, but I have yet to try it.

  • grehund@lemmy.world · 10 months ago

    Proxmox. Currently considering upgrading from a single node to a 3-node cluster for Ceph.

  • vegetaaaaaaa@lemmy.world · 10 months ago

    Podman pods + systemd units to manage the pods’ lifecycle. Ansible to deploy the base OS requirements, the ancillary services (SSH, backups, monitoring…), and the pods/containers/services themselves.
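
    A sketch of how those pieces can meet (container name, image, and paths are hypothetical; assumes the containers.podman collection is installed):

# run a container with Podman and have it managed by a generated systemd unit
- name: Run myapp under Podman
  containers.podman.podman_container:
    name: myapp
    image: docker.io/library/nginx:stable
    state: started
    generate_systemd:
      path: /etc/systemd/system
      restart_policy: always

- name: Enable the generated unit
  ansible.builtin.systemd:
    name: container-myapp      # default name Podman gives the unit
    enabled: true
    daemon_reload: true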

  • Nico@r.dcotta.eu · 10 months ago

    I see no one else has commented with my stack, so I suggest:

    Nomad for managing containers, if you want something highly available. Essentially the same as k8s but much, much simpler to deploy, learn, and maintain. Perfect for homelabs, imo. Most of the concepts of Nomad translate well to k8s if you do want to learn it later. It integrates really well with Terraform too, if you are also hoping to learn that, but that’s not a requirement.

    NixOS for managing the bare metal. It’s a lot more work to learn than, say, Debian, but it is just as stable, and all configuration is defined as code, down to the bootloader config (no bash scripts!). This makes it super robust. You can also deploy it remotely. Once you grow beyond a handful of nodes it’s important to use a config management tool, and Nix has been by far my favourite so far.

    If you really want everything to be infra-as-code, you can manage cloud providers via Terraform too.

    For networking I use WireGuard, and configure it with NixOS. Specifically, I have a mesh network where every node can reach every other node without extra hops. This is a requirement if you don’t want a single point of failure (as in hub-and-spoke) disconnecting your entire cluster.

    Everything in my setup is defined as code, immutable, and multi-node (I have 7 machines), which seems to be what you want, from what you say in your post. I’ll leave my repo here, and I’m happy to answer questions!

    My opinions on the alternatives:

    Docker compose is great but doesn’t scale if you want high availability (i.e., having a container rescheduled on node failure). If you don’t want higher availability, anything more than Docker might be overkill.

    Ansible and Puppet are alright, but they are super stateful and require scripting. If you want immutability you will love Nix/NixOS.

    k8s works (I use it at work) but is extremely hard to get right, even for well-resourced infra teams. Nomad achieves the same, but with the learnings of having come afterwards, and without the historical baggage.

    • nopersonalspace@lemmy.world (OP) · 10 months ago

      Thanks for this. I’ve been sort of interested in both Nomad and NixOS, for the exact reasons it seems like you use them. Thanks for linking that repo, I’ll check it out for inspiration!

      Do you find that you sometimes struggle to get things working in Nomad? My one worry is that, because it’s not as well established as Kubernetes or Docker, there won’t be good compatibility or documentation. For example, most services’ docs will show how to deploy with Kubernetes or Docker, but rarely Nomad. Do you find it easy enough to translate those instructions that it doesn’t matter?

      • Nico@r.dcotta.eu · 10 months ago

        Good question! It depends, but TL;DR: imo it’s worth it, or at least fine, and it’s easy to try yourself and see.

        most services in their docs will show how to deploy with kubernetes or docker, but rarely Nomad

        You are absolutely correct, but I do find that for the large majority of things, either you can find a Nomad config online, or the Nomad config is easy enough to translate from Docker compose. Only some complicated larger deployments (think Immich) are harder to translate, but even then it just takes some trial and error. I really do think the extra trouble of translating is very much worth the pain you save yourself by not deploying k8s. You might spend a bit longer typing out the Nomad job file yourself, but in exchange you are thankfully not maintaining a k8s cluster.

        As far as Nomad-specific documentation goes, I think the official docs are more than good enough.

        You mentioned compatibility. So far I have not found anything I really wanted that was not possible to set up in Nomad. Nomad does CNI and CSI, which are the same APIs k8s uses, so things that work there will work for Nomad. Other things you would use with docker compose or k8s don’t work with Nomad, but you don’t need them (for example, Portainer or metrics exporters) because Nomad has them natively already (this blog discusses that).

        As you can see, I am pretty opinionated towards Nomad - I used it in prod at my previous job, and I’ve run it in my homelab for a year now, and I am very happy with it. If you would like to read more, I recommend this blog post. For Nomad on NixOS I wrote this one.

        For now my advice is: just try Nomad yourself (as simple as running nomad agent -dev on your laptop), run through the tutorial, and see if it was easy enough that you can see yourself using it for the rest of your containers. If you need more help you are welcome to DM me :)

    • jkrtn@lemmy.ml · 10 months ago

      Could you give a quick example of using NixOS configuration to launch a machine, or of deploying something remotely? I’m just starting to move beyond a single machine at home, and I’d really like to transition to infra-as-code.

      • Nico@r.dcotta.eu · 10 months ago

        I recommend starting with ZeroToNix’s docs and then moving on to nixos.wiki, but here is a minimal working example that I could deploy to a Hetzner VPS that has only nix and ssh installed:

{ config, pkgs, ... }: {
  # generated, this will set up partitions and bootloader in a separate file
  imports = [ ./hardware-configuration.nix ];
  zramSwap.enable = true;
  networking.hostName = "miki";
  # configures SSH daemon with a public key so we can ssh in again
  services.openssh.enable = true;
  users.users.root.openssh.authorizedKeys.keys = [ ''ssh-ed25519 AAAAC3NzaC1lNDI1NTE5AAAAIPJ7FM3wEuWoVuxRkWnh9PNEtG+HOcwcZIt6Qg/Y1jka'' ];
  # creates a timmy user with sudo access (via wheel) and wget installed
  users.users.timmy = {
    isNormalUser = true;
    extraGroups = [ "networkmanager" "wheel" ];
    packages = with pkgs; [ wget ];
  };
  # open up the SSH and HTTP(S) ports
  networking.firewall.allowedTCPPorts = [ 22 80 443 ];
  # accept the Let's Encrypt terms so the vhost below can obtain a certificate
  security.acme.acceptTerms = true;
  security.acme.defaults.email = "admin@example.org";  # placeholder address
  # start nginx, assumes HTML is present at `/var/www`
  services.nginx = {
    enable = true;
    virtualHosts."miki.example.org" = {  # placeholder domain that must resolve to this host
      enableACME = true;          # Obtain a Let's Encrypt certificate
      forceSSL = true;            # Redirect HTTP clients to an HTTPS connection
      default = true;             # Always use this host, no matter the host name
      root = "/var/www";          # Set the web root to serve
    };
  };
  system.stateVersion = "22.11";
}
        

        This sets up a machine, configures the usual stuff like the SSH daemon, creates a user, and sets up an nginx server. To deploy it you would run nixos-rebuild --target-host root@10.0.0.1 switch. Other tools exist (I use colmena, but the idea is the same). Note how easy it was to set up nginx! If I were setting up Nomad, I would just add services.nomad.enable = true.

        As you can see, there are some things you will have to learn (the Nix language, what the configs are…), but I think it is worth it.

        • nopersonalspace@lemmy.world (OP) · 10 months ago

          This is awesome; ZeroToNix is exactly what I was looking for. I’ve been interested in trying NixOS for a while, but I always found the documentation obtuse (extensive, which is great, but not super beginner-friendly). I’ll give it a try!

          • Nico@r.dcotta.eu · 10 months ago

            Good luck on your Nix journey! Happy to help if you have questions.

            Of all the tech I use, I think Nix is the most ‘avant-garde’, in that it is super different from the usual methods (scripting, stateful systems), but it works very well once you’re past the paradigm shift and the learning curve that entails.

        • jkrtn@lemmy.ml · 10 months ago

          This is such a wealth of information, thank you! I’m really excited to try this out.

    • johntash@eviltoast.org · 10 months ago

      Hey, your stack is pretty similar to mine. One thing I recently started testing is Seaweedfs; I saw it listed in your repo too. How are you liking it so far? And do you use it on all of your nodes?

      • Nico@r.dcotta.eu · 10 months ago

        I struggled a bit to get it up and running well, but now I am happy with it. It’s not too hard to deploy (at least easier than the alternatives), it has a CSI driver, which for me was big, and it has erasure coding. The dev that maintains it (yes, the one dev) is very responsive.

        It has trade-offs, so I recommend it depending on your needs. Backing store for stateful workloads like Postgres DBs? Absolutely not. Large S3 store (with an option for filesystem mount) for storing lots of files? Yes! In that regard it’s good for stuff like Lemmy’s pictrs or Immich. I use it as my own Google Drive. You can easily replicate it in your own cluster, or back it up to an external cloud provider. You can mount it via FUSE on your personal machine too.

        Feel free to browse through my setup - if you have specific questions I am happy to answer them.

        • johntash@eviltoast.org · 10 months ago

          Thanks! I’ll do some testing over the weekend and see how it goes.

          While I’d love to be able to use it for Postgres, I figured that wouldn’t work out well, so I probably won’t try it any time soon. I do have several apps that use SQLite databases though; do you think those would have any issues? E.g. Trilium, ntfy, Ghost.

          The main downside to most of the distributed/clustered storage that I’ve tried is that it always seems to corrupt SQLite db files, due to not supporting locking or some other POSIX feature. Reading through some older GitHub issues, it looks like that is something the dev of seaweedfs fixed, hopefully.

          • Nico@r.dcotta.eu · 10 months ago

            The problem with using seaweedfs to back your DBs is more about the filesystem than the implementation of POSIX features. If you are writing to a file and the connection to seaweedfs breaks (container restart, wifi, you name it), then you might end up with a half-written file. If you upload pictures, this is unlikely, but DBs usually do several writes per second, so it is much more likely one of those gets interrupted. In my case, my Grafana SQLite DB would get corrupted every other week.

            What I recommend instead is keeping DBs on your node’s native filesystem and backing them up to seaweedfs periodically. That way your DBs just work, you can get them running again, and the backup is replicated in the distributed filesystem.

            • nopersonalspace@lemmy.world (OP) · 10 months ago

              That’s an interesting issue. Do you think the problem would be the same for any CSI plugin? I’m thinking of using my NAS as the storage brains of the operation and hooking it up with NFS or something, but would that have issues with stateful stuff like DBs too?

              • Nico@r.dcotta.eu · 10 months ago

                I have never used NFS, but I think it would fare much better than seaweedfs, which uses FUSE to implement its CSI. For NFS I would expect the protocol to account for half-finished writes.

                would be the same for any CSI plugin

                No, it would depend on the CSI plugin and how it is implemented. Ceph, for example, has several, and cloud providers offer CSI volumes for their block storage (AWS EBS, GCP PD), and they will all perform differently. See this comment from a seaweedfs issue:

                […] It is always better to run databases on host volumes if you can (or on volumes provided by AWS EBS or similar). But with Seaweedfs especially if you are running postgres with seaweedfs-csi volume be prepared for data corruption. Seaweefs-csi uses FUSE, if anything happens to seaweedfs-csi (Nomad client restart, docker restart, OOM) mount will be lost and data corruption will happen.

                Running on CEPH (since CEPH CSI using Kernel driver not FUSE) is acceptable if you fine with low TPS.

                I found it easier to make recoverable, backed-up host volumes than to make DBs run on highly available filesystems like seaweedfs (I admit I have not tried Ceph - the deployment looked a bit complicated/overkill for a homelab).

                Postgres and SQLite are just not made for that environment. To run a high-availability DB, it is better to run a distributed DB made for that (think etcd, Cassandra) than to run a non-distributed DB on top of a distributed filesystem.

                Good luck! :)

            • johntash@eviltoast.org · 10 months ago

              What I do right now is have an rclone sidecar container that uploads the files in a directory every few seconds, plus another init sidecar that runs before the main application and downloads those files (incl. SQLite DBs) to the normal disk. This works okay but feels pretty clunky, and it can still result in corruption, because I’m just copying the db files rather than using any sqlite commands to back the db up to another file that isn’t in use first.

              How do you handle a job going from one Nomad node to another? Or do you pin jobs like Grafana to specific hosts?

              • Nico@r.dcotta.eu · 10 months ago

                Nomad has host volumes: you can tell it to mount a folder from the machine into the container, and it will only schedule that container on machines that have that folder. So yes, effectively you pin the workload, thus introducing a SPOF. I do not love it, but Grafana only supports SQLite and Postgres, so making those HA would require failover setups, which is a bit much for a homelab :')

                For backing up, you can run the sqlite backup command periodically (via a cron job or a Nomad periodic job) and then upload the result to some external, safe storage (could be seaweedfs or S3!). For Postgres you can use something like this.

  • Decronym@lemmy.decronym.xyz (bot) · 10 months ago

    Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

    DNS: Domain Name Service/System
    Git: Popular version control system, primarily for code
    HA: Home Assistant automation software, or High Availability
    HTTP: Hypertext Transfer Protocol, the Web
    NAS: Network-Attached Storage
    NUC: Next Unit of Computing, a brand of Intel small computers
    SSH: Secure Shell for remote terminal access
    VPS: Virtual Private Server (as opposed to shared hosting)
    k8s: Kubernetes container management package
    nginx: Popular HTTP server

    [Thread #417 for this sub, first seen 10th Jan 2024, 04:15]

  • MSgtRedFox@infosec.pub · 10 months ago

    I really enjoy these types of conversations; I learn a lot.

    Since you’ve gotten lots of good advice on container managers, I’ll encourage your desire for IaC/DevOps-style config management.

    I believe all the leading config management choices support what you’re wanting to do. I can’t guide you on which one to choose, but just browse through the options and functions your favorite offers for whichever Kubernetes-flavored container solution you go with.

    I use Salt because of Security Onion, the open-source IDS. I have all my *nix systems babysat by Salt, and I can have a new *arr media server, NGINX, blog, etc. running in the time it takes to deploy the template (I use vSphere) and have Salt apply the desired state. Back up and restore a mount folder, no problem. IaC is only limited by your imagination. I have Salt specifying all the containers I have running, defining the config files, etc. Basically a poor man’s/simpleton’s Kubernetes.

    I suspect you already know this, but if there isn’t a module that directly does what you want, like running SQL-specific functions, you can just have it run CLI commands on the host, or in the container, for you.

    I am in the process of moving my IaC code from the manager’s file system to GitLab. I imagine you’d do this from jump street. Have fun.

  • Samsy@lemmy.ml · 10 months ago

    I used to just organize my docker-compose containers without any frontend. But then I discovered CasaOS, which makes things pretty simple. An app store and an SMB-shared file manager gave me a really good workflow. Things that aren’t in the app store can be handled outside of Casa, too.

    PS: Never make the mistake of importing containers managed outside Casa into it; that messes things up.

    • nopersonalspace@lemmy.world (OP) · 10 months ago

      Thanks, yeah, I’ve heard good things about CasaOS. I think I’m trying to move in the other direction though: fewer UIs, more CLIs and configuration files.

  • corsicanguppy@lemmy.ca · 10 months ago

    First, hire a team of energetic full-time container bros. Half of them will help architect your setup, and the other half will focus entirely on supporting the container cult.