Hi, so I want to build a PC for a home server (?) or NAS. I don't really know what the most appropriate term is, but what I intend to build is one PC for my whole household. Currently my requirements are: one work 'PC' capable of heavy 3D modeling, one light work PC, and two 4K gaming TVs (they most likely won't be used at the same time).

My knowledge of technical stuff is pretty basic, so please be patient with me.

Before, I used my Steam Deck to stream my work PC using Parsec, but I thought I'd just go all in on Linux and use VMs to run the more niche 3D software.

My budget is flexible as long as I don't need to use enterprise hardware. Also, I heard NVIDIA is not good on Linux, so I'd like to confirm whether that is still the case, since I'm thinking of using a 5090. If it is, I hope AMD releases an equivalently capable card, as my quick research suggests there isn't one right now.

As for Linux, the only distro (?) I've ever used is the Steam Deck one, and I love it. I'm not a programmer or even remotely capable of becoming one, so I'd like to avoid anything that requires manually typing commands into a terminal, but I'm open to surface-level tinkering.

thank you for your time

  • monovergent 🛠️@lemmy.ml · 8 hours ago

    It’s certainly doable and something like that was my setup for a few years. There isn’t much in the way of distros or software packages that provide such a ‘personal multiseat’ configuration out of the box.

    I wanted bare metal GUI access, so instead of using Proxmox, I went about configuring Debian to the task. This might not directly answer any questions, but here's an idea of what it looked like.

    Hardware

    • i7, 48 GB RAM, 500 W PSU
    • GTX 1650 (passed through to VM), Radeon R5 340X (basic bare metal output)
    • 60 GB SSD boot disk
    • 1 TB SSD for VM images
    • 2 x 4 TB HDD for NAS
    • 1 TB HDD for testing, “overflow”, etc.

    Boot disk

    • Debian stable with XFCE
    • Virtual machines set up through virt-manager, with each service port forwarded to LAN (a sketch of the forwarding is below this list)
    • unattended-upgrades, ufw / iptables firewall
    • GUI more for ease of management, software on bare metal kept to a minimum
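
    For the port forwarding, a minimal sketch of what that can look like with iptables on the host, assuming a NAT'd VM at 192.168.122.10 serving Samba on port 445 (address and port are placeholders, not my exact setup):

    ```
    # Forward TCP 445 arriving at the host to the NAT'd VM (hypothetical IP)
    iptables -t nat -A PREROUTING -p tcp --dport 445 -j DNAT --to-destination 192.168.122.10:445
    # Let the forwarded traffic through the FORWARD chain
    iptables -I FORWARD -d 192.168.122.10 -p tcp --dport 445 -j ACCEPT
    ```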

    Virtual machines / (RAM allotment)

    • Desktop (10 GB): I would use this VM while seated at the machine for productivity and web browsing.
    • NAS / media server (4 GB): both 4 TB HDDs passed through to this VM, which hosted a Samba file server and Jellyfin. Also served as file storage for a couple of other VMs via internal connections. Usable capacity was 4 TB rather than 8, since I set it to rsync to the second drive at 02:30 every morning (see the cron sketch after this list).
    • Misc. services (4 GB): second Samba file server for devices I wanted to sync but didn’t trust with access to my full 4 TB library. Also an Apache server to host a couple of HTML pages on LAN. Various other services tested here as well.
    • Windows (8 GB)
    • GPU access (16 GB): GTX 1650 forwarded here. Intended for gaming, but ended up using it for Stable Diffusion and LLMs for reasons below.
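
    That nightly mirror can be as simple as a root crontab entry on the NAS VM. A sketch, with made-up paths:

    ```
    # m h dom mon dow  command
    30 2 * * *  rsync -a --delete /srv/nas/ /mnt/mirror/ >> /var/log/nas-mirror.log 2>&1
    ```

    Note that --delete keeps the second drive an exact copy, so this guards against drive failure, not against accidental deletions you only notice a day later.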

    I’d suggest starting with anything graphically intensive running on bare metal and setting up a VM with virt-manager / VirtualBox / etc. for the NAS part. Get a couple of disks specifically to pass through to the NAS VM, forward its ports to LAN, and connect to them on the host as you would any other machine. For a desk further away, you may be able to get away with a KVM extender, but I can’t say I have any experience with them.
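
    If you go the virt-manager route, passing a whole disk through to the NAS VM looks roughly like this from the command line (the VM name and disk ID are hypothetical; virt-manager's "Add Hardware" dialog can do the same thing):

    ```
    # Hand an entire HDD to the NAS VM, addressed by its stable by-id path
    virsh attach-disk nas-vm /dev/disk/by-id/ata-EXAMPLE_DISK_SERIAL vdb --persistent
    ```

    Using /dev/disk/by-id rather than /dev/sdX means the mapping survives the drives getting reordered across reboots.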

    If you try to virtualize everything like I did, there are a couple of hurdles:

    • Much time and manual configuration in the command line is needed
    • Atrocious graphical and input latency on remote connections
    • Very high RAM usage
    • Input glitches and general slowness on the VM with GPU passthrough; this remained unresolved despite scouring tutorials from people who somehow managed to get buttery-smooth gaming in a VM (the usual tuning they suggest is sketched after this list)
    • Lots of bandwidth used while updating all of the VMs. Probably optimizable, but not out of the box.
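
    For reference, the tuning those tutorials revolve around is mostly CPU pinning. A sketch, with a hypothetical VM name and core numbers; in my case it never got things smooth:

    ```
    # Pin each vCPU of the passthrough VM to a dedicated host core
    virsh vcpupin gpu-vm 0 2
    virsh vcpupin gpu-vm 1 3
    # Keep QEMU's emulator threads off those cores
    virsh emulatorpin gpu-vm 0-1
    ```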

    Go for AMD if you can, but NVIDIA hasn’t given me much trouble either. Make sure to install the driver from your distro’s repo, not NVIDIA’s website. IMO, this is less of an issue if you decide to pass through the GPU to a VM since any NVIDIA driver shenanigans will be contained to the VM.
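
    On Debian, for example, that means something like this (assuming the non-free components are enabled in your apt sources):

    ```
    # Install the packaged NVIDIA driver instead of the .run installer from nvidia.com
    sudo apt update
    sudo apt install nvidia-driver
    ```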

    • blinx615@lemmy.ml · edited · 6 hours ago

      I got used EPYC stuff and a 3090, but basically the same template; just a few more resources.

      • CPU: AMD EPYC 7542 (32 cores / 64 threads)
      • Motherboard: Supermicro H12SSL-i
      • Memory: Samsung DDR4 8×32GB
      • GPU: EVGA RTX 3090 FTW3 24GB

      However, I haven’t run into some of the issues you had. With the Proxmox host on wired Ethernet and my laptop on 5 GHz Wi-Fi about 10 ft from the access point, I can easily play Rocket League at 1440p 120 Hz with no noticeable latency. I’m using Sunshine on a Windows VM and Moonlight on Fedora. It did, indeed, take a crapload of fiddling, and I consider myself pretty adept at these things, but it can be done. :D

      I also swap the GPU between two VMs. I have an Ubuntu VM I use for fiddling around with AI workloads; on that one, I just SSH in and the GPU is 100% devoted to AI. Planning to add another GPU in the future (or a few).
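
      The swap itself is just making sure only one VM holds the card at a time: both VMs have the same PCI device in their passthrough config, so I shut one down before starting the other. On Proxmox that's roughly the following (VM IDs are made up):

      ```
      # Hand the GPU from the gaming VM to the AI VM; only one can hold it at a time
      qm shutdown 101    # Windows gaming VM releases the card
      qm start 102       # Ubuntu AI VM grabs it via its passthrough config
      ```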

      Can’t speak to remote connections, but my previous experience with cloud providers tells me it might be good enough for slow-paced games but will fail horribly on anything really latency-dependent. Best case scenario, the latency is off by just enough to make you lose your mind; or worse, you get used to the weird remote latency and then get all screwed up when you play at home.