Adding to this: doesn’t CAD usually want 3D acceleration? I would definitely try running the CAD software with the same VM configuration you plan to use on your Proxmox VPS before progressing, to make sure it (a) works at all and (b) is responsive enough. You could even try nesting Proxmox in Proxmox to emulate the kind of performance you’d have on a VPS.
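If you want to try the Proxmox-in-Proxmox route, nested virtualization has to be on at the host and the inner VM needs the host CPU type. A rough sketch for an Intel box (the VM ID 100 is a placeholder; on AMD it’s kvm-amd instead):

```
# On the outer Proxmox host: check whether nested virtualization is enabled
cat /sys/module/kvm_intel/parameters/nested   # "Y" or "1" means enabled

# If not, enable it and reload the module (with no VMs running)
echo "options kvm-intel nested=1" > /etc/modprobe.d/kvm-nested.conf
modprobe -r kvm_intel && modprobe kvm_intel

# Give the inner Proxmox VM the host CPU so it sees the virtualization flags
qm set 100 --cpu host
```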
For assets, SnipeIT just cares about serial numbers, models, and manufacturers (you can just reuse the serial number as the asset tag), and I think consumables drop a bunch of those requirements. You might be able to put groceries under consumables? I’m less familiar with consumables in SnipeIT, to be honest.
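If you do go the consumables route and want to script the data entry, SnipeIT has a REST API. A minimal sketch with curl (the URL, token, and category_id are placeholders, and the exact required fields can vary by version, so check your instance’s API reference):

```
curl -X POST "https://snipeit.example.com/api/v1/consumables" \
  -H "Authorization: Bearer $SNIPEIT_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Accept: application/json" \
  -d '{"name": "Coffee beans", "qty": 10, "category_id": 3}'
```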
SnipeIT is really good and supports SSO including via LDAP.
They don’t need to be interested though. You could conceivably dump all the passwords you collect in an attack and just start trying them automatically, like you would with any other breach. Find a bunch of bank accounts and your chances of getting away with millions are high. Not to mention: a breach like this means changing all your saved passwords to re-secure them, which is a multi-day affair.
Self-hosting removes the risk of somebody compromising Bitwarden’s servers and adding malicious JavaScript that sends your master password off to a bad actor instead of just processing it locally like it’s designed to.
I don’t think ZFS can do anything for you if you have bad memory, other than help with diagnosis. I’ve had two machines running ZFS where the memory went bad; every disk in the pool showed data corruption errors for the affected writes, so the data was unrecoverable. Memory was later confirmed to be the problem with a Memtest run.
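For anyone hitting something similar, the per-device error counters are the quickest tell: if every disk in the pool racks up CKSUM errors at the same time, suspect RAM or the controller rather than the disks. Rough commands (the pool name is a placeholder):

```
zpool status -v tank   # per-device READ/WRITE/CKSUM counters, plus files with permanent errors
zpool scrub tank       # re-read everything and verify it against checksums
```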
What distro and version of that distro are you using? Did you install gpg from the repository or elsewhere? What version of gpg are you running?
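To save a round trip, something like this answers all three (the apt line assumes a Debian-family distro):

```
lsb_release -a       # distro and version
command -v gpg       # which gpg binary is on your PATH
gpg --version        # gpg version
apt policy gnupg     # Debian/Ubuntu: whether gpg came from the repo
```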
The OOM killer is particularly bad with ZFS since the kernel doesn’t by default (at least on Ubuntu 22.04 and Debian 12 where I use it) see the ZFS ARC as cache, and so thinks it’s out of memory when really ZFS just needs to free up some of its cache, which happens after the OOM killer has already killed my most important VM. So I’m left running swap to avoid the OOM killer going around causing chaos.
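What also helped me was capping the ARC so there’s always headroom for the VMs. A sketch for Ubuntu/Debian (the 8 GiB cap is just an example; tune it to your RAM):

```
# Persistent: cap the ARC at 8 GiB (value is in bytes)
echo "options zfs zfs_arc_max=8589934592" >> /etc/modprobe.d/zfs.conf
update-initramfs -u    # apply the option from early boot onwards

# Or apply it live without a reboot
echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max
```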
The problem is that if the anti-cheat doesn’t have full access but the cheat does, the cheat can just hide itself. Same for anti-virus vs viruses. It’s particularly nasty in free-to-play games, where ban evasion really just means getting a new e-mail address. It’s the same reason some anti-cheats block running games in VMs. Is it foolproof? Hell no! Does it deter anybody not willing to buy hardware to evade VM detection, or to run the cheat on completely separate hardware? Yes.
Personally, I’d prefer a stake/reputation system where I can argue I can be trusted with weaker anti-cheat, because if you do detect cheating then I lose multiplayer/trading/cosmetics on an account I’ve spent $80 USD or more on, effectively making the cost of cheating $80 minimum for each failed attempt. Haven’t spent $80 yet? Then use the aggressive anti-cheat.
I think that also causes issues for roaming profiles and folder redirection. If roaming is turned on, then everything under AppData\Roaming (%APPDATA%) is synced to a server; AppData\Local (%LOCALAPPDATA%) is not. So if your app is using %APPDATA% for temporary data, you’re causing a whole bunch of unnecessary IO. Same for using Documents, since that is often synced too.
Invidious still seems to work for VODs provided the instance doesn’t get restricted. Livestreams have been broken for ages though.
I don’t really see the advantage here besides orchestration tools, unless the top-secret cloud machines can still share their resources with the public cloud to recoup costs?
So much better than my FunnelWAP. Best it can do is 100 KillerBytes. :(
Could it be fear of a software patent relating to the design? Back in the day, Apple had one for slide-to-unlock that prompted Android vendors to use different unlock patterns.
Mentioning Iceweasel in 2024?! Where did you find this meme?! Debian stable?!
Last time they’ll ever do that! Pass the buck of hosting web-facing Plex servers onto somebody else.