I really want to run Ceph because it fits a number of criteria I have: gradually adding storage, mismatched disks, fault tolerance, erasure coding, encryption, and out-of-the-box support from other software (like Incus).

But then I look at the hardware suggestions, and they seem like an up-front investment and ongoing cost to keep at least three machines evenly matched on RAM and physical storage. I also want more of a single-box NAS.

Would it be idiotic to put a Ceph setup all on one machine? I could run three mons on it, each backed by a separate physical device, so a single disk failure doesn't take them all out. I'm not too concerned about speed or network partitioning; this would be lukewarm storage for me.
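
To make it concrete, this is the kind of thing I'm imagining once the cluster is up (pool/profile names are made up and I'm going off the docs, so treat it as a sketch):

  # Single box: tell CRUSH to treat individual OSDs (disks) as the failure
  # domain instead of hosts, since there's only one host to place copies on.
  ceph osd crush rule create-replicated by-osd default osd
  ceph osd pool set mypool crush_rule by-osd

  # Or, for erasure coding across the disks of one host (k/m are placeholders):
  ceph osd erasure-code-profile set ec-single-host k=4 m=2 crush-failure-domain=osd
  ceph osd pool create ecpool erasure ec-single-host

The three mon data directories would each live on a different physical disk.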

  • TedvdB@feddit.nl · 8 months ago

    Why not just use ZFS or BTRFS, then? Way less overhead.

    Ceph’s main advantage is the distribution of storage over multiple nodes, which you’re not planning on doing?

    • jkrtn@lemmy.ml (OP) · 8 months ago

      I mean, yeah, I'd prefer ZFS, but unless I am missing something, it is a massive pain to add disks to an existing pool. You have to buy a whole new set of disks and create a new pool to transition from RAIDz1 to RAIDz2. That's basically the only reason it fails the criteria I have. I think I'd also prefer erasure coding over RAIDz2, but it seems like regular scrub operations could keep it reliable.
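
      (By regular scrubs I just mean something like a monthly cron entry, e.g.:)

        # /etc/cron.d/zfs-scrub -- kick off a scrub of pool 'tank'
        # (placeholder name) at 03:00 on the 1st of every month
        0 3 1 * * root /usr/sbin/zpool scrub tank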

      BTRFS sounds like it has too many footguns for me, and its raid5/6 equivalents are “not for production at this time.”

      • catloaf@lemm.ee · 8 months ago

        LVM, mdraid, dm-crypt? LVM will let you make volumes and pools of basically any shape or size.
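
        Roughly this kind of stack, as a sketch (device names, sizes and labels are placeholders):

          # RAID6 across four disks, encryption on top, LVM on top of that
          mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sd[b-e]
          cryptsetup luksFormat /dev/md0
          cryptsetup open /dev/md0 cryptpool
          pvcreate /dev/mapper/cryptpool
          vgcreate storage /dev/mapper/cryptpool
          lvcreate -L 2T -n media storage
          mkfs.ext4 /dev/storage/media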

      • Avid Amoeba@lemmy.ca · 8 months ago

        Adding new disks to an existing ZFS pool is as easy as figuring out what redundancy scheme you want for them, then adding them to the pool with that scheme. E.g. you have an existing pool with a RAIDz1 vdev of three 4TB disks. You find some cheap recertified disks and want to expand with more redundancy to mitigate the risk. You buy four 16TB disks, create a RAIDz2 vdev from them and add that to the existing pool. The pool grows by whatever space the new vdev provides.

        Critically, pools are JBODs of vdevs. You can add any number or type of vdevs to a pool, and the redundancy is handled at the vdev level. Thus you can have a pool with a mix of any RAIDzN and/or mirrors. You don't create a new pool and transition to it; you add another vdev with whatever redundancy topology you want to the existing pool and keep writing data to it. You don't even have to take it offline. If you add a second RAIDz1 to an existing RAIDz1, you'd get similar redundancy to moving from RAIDz1 to RAIDz2.
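
        In command terms the whole thing is roughly this (pool and device names are placeholders):

          # 'tank' already contains the raidz1 vdev of 3x4TB disks;
          # bolt a new raidz2 vdev of 4x16TB disks onto the same pool
          zpool add tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde
          zpool status tank   # both vdevs now show up under the one pool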

        Finally, if you have some even stranger hardware lying around, you can combine it into appropriately sized volumes via LVM and give those to ZFS, as someone already suggested. I used to have a mirror with one real 8TB disk and one 8TB LVM volume made of 1TB, 3TB and 4TB disks. Worked like a charm.
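
        That mixed-disk mirror goes together more or less like this (just a sketch; device names are placeholders):

          # glue 1TB + 3TB + 4TB disks into a single ~8TB LVM volume
          pvcreate /dev/sdb /dev/sdc /dev/sdd
          vgcreate jbod /dev/sdb /dev/sdc /dev/sdd
          lvcreate -l 100%FREE -n big8tb jbod

          # then mirror it against the real 8TB disk
          zpool create tank mirror /dev/sda /dev/jbod/big8tb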

        • jkrtn@lemmy.ml (OP) · 8 months ago

          “As easy as buying four same-sized disks all at once” is kinda missing the point.

          How do I migrate data from the existing z1 to the z2? And then how can I re-add the disks that were in z1 after I have moved the data? Buy yet another disk and add a z2 vdev with my now 4 disks, I guess. Unless it is possible to format and add them to the new z2?

          If the vdevs are not all the same redundancy level, am I right that there's no guarantee which level of redundancy any particular file is getting?

          • Avid Amoeba@lemmy.ca · 8 months ago

            You don’t migrate the data from the existing z1. It keeps running and stays in use. You add another z1 or z2 to the pool.

            > If the vdevs are not all the same redundancy level, am I right that there's no guarantee which level of redundancy any particular file is getting?

            This is a problem: you don't know which file ends up on which vdev. If you only use mirror vdevs, you can remove vdevs you no longer want and ZFS will transfer the data from them to the remaining vdevs, assuming there's space. As far as I know you can't remove vdevs from pools that contain RAIDz vdevs; you can only add vdevs. So if you want guaranteed 2-drive failure tolerance for every file, then yes, you'd have to create a new pool with RAIDz2 and move the data to it. Then you could add your existing drives to it as another RAIDz2 vdev.
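
            The move itself is basically snapshot + send/receive, something like this (pool and device names are placeholders):

              # build the new pool from the four new disks
              zpool create newtank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde

              # copy all datasets, snapshots and properties across
              zfs snapshot -r oldtank@migrate
              zfs send -R oldtank@migrate | zfs receive -F newtank

              # once verified, retire the old pool and reuse its three disks
              # (plus one extra) as a second raidz2 vdev
              zpool destroy oldtank
              zpool add newtank raidz2 /dev/sdf /dev/sdg /dev/sdh /dev/sdi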

            Removing RAIDz vdevs might become possible in the future. There's already a feature that allows expanding existing RAIDz vdevs, but it's fairly new, so I'm personally not considering it in my expansion plans.
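
            (For reference, that expansion feature is invoked roughly like this on OpenZFS builds that ship it, one disk at a time:)

              zpool attach tank raidz1-0 /dev/sdf   # grow the existing raidz1 vdev by one disk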

          • bastion@feddit.nl · 8 months ago

            No matter what setup you use, if you want redundancy, it will cost space. In a perfect world, giving up 30% of your capacity to redundancy would let you lose up to 30% of your disks and still be OK.

            …but that extra percentage of used space is the intrinsic cost.

          • Avid Amoeba@lemmy.ca · 8 months ago

            What you lose in space, you gain in redundancy. As long as you're not looking for the absolute least redundant setup, it's not a bad tradeoff. Running a large stripe array with only a single redundancy disk typically isn't a great idea. And if you're running mirrors anyway, expanding with more mirror vdevs doesn't increase the share of space you lose to redundancy.