  • Ah - yeah I’ve got a trunk to each of the machines in my clusters, 9 VLANs total, and of course I can add more whenever I want this way. I’m a bit of a glutton for naming and numbering structure too, so the purpose of a service determines which VLAN it’s on. Like Home Assistant has just about its own VLAN, with the sensors and misc tools supporting it all there. A different one for third-party IoT devices (which I will never trust with internet access, so on the firewall it’s initiate-from-another-VLAN only, nothing on that VLAN can initiate outbound, etc - roughly the policy sketched below), one for work that’s part of a site-to-site with work, with a few switch ports allocated so I can just plug in ad hoc, etc.

    Definitely helps to have the range to play it this way!
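
    Roughly, the policy boils down to something like this (VLAN names are made up for illustration; the real enforcement is stateful rules on the FW, not Python):

    ```python
    # Minimal sketch of the IoT-VLAN policy described above, written as plain
    # decision logic rather than actual firewall rules. VLAN names are
    # hypothetical placeholders, not the real ones.

    TRUSTED_VLANS = {"home", "home-assistant", "work"}  # assumed names
    IOT_VLAN = "iot-third-party"
    WAN = "wan"

    def allow_new_connection(src: str, dst: str) -> bool:
        """Return True if a brand-new flow from src to dst should be allowed.
        Replies on established/related flows are assumed to be permitted
        separately (standard stateful firewall behaviour)."""
        if src == IOT_VLAN:
            # IoT devices can never initiate anything - not to the internet,
            # not to other VLANs.
            return False
        if dst == IOT_VLAN and src in TRUSTED_VLANS:
            # Other VLANs may reach into the IoT VLAN (e.g. polling sensors).
            return True
        if dst == WAN and src in TRUSTED_VLANS:
            # Trusted VLANs get normal outbound access.
            return True
        return False

    assert allow_new_connection("home-assistant", "iot-third-party") is True
    assert allow_new_connection("iot-third-party", "wan") is False
    ```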


  • In my case, I don’t need the isolation of a VM; really I’m just looking to separate each service I run into something manageable and easy to move between hosts. I could do a VM for each, but I’d be adding overhead and power requirements without much benefit.

    And really that’s all it comes down to for me. Each service is its own LXC, from the stuff most self-hosters run to random one-offs I write. Managing it all stays in Ansible, and the structure is quite a bit simpler.

    When I do want to bring something elsewhere, I can package it up clean and drop it on a new LXC somewhere else quickly (a rough sketch of what that can look like is below) - like onto an 80-core monster with $16k in GPUs that’s already getting pushed hard, knowing it will have almost no impact on that box’s main job while picking up the service it needs.

    I do still have VMs, but those are for things like dealing with Windows - especially specific versions, like a piece of software for some work stuff that requires XP or Server 2008 specifically. It’s pretty isolated though, not even allowed network access out. Any writes go to a thumb drive when I need to get something out of it (and that’s uniquely set as the only thumb drive the VM is allowed to see).

    So, nothing I couldn’t do a bunch of other ways; this is just the structure that’s working best for me.
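
    The packaging step can look something like the sketch below if the hosts happen to be Proxmox VE (vzdump + pct) - the comment only says LXC, so the tooling, container IDs, hostname, and storage name here are placeholders; plain LXC without Proxmox would do the same thing with a tarball of the rootfs and config instead:

    ```python
    # Rough sketch: back up an LXC container on one node and restore it on
    # another, assuming Proxmox VE tooling. Run as root on the source node.

    import subprocess
    from pathlib import Path

    def export_container(ctid: int, dump_dir: str = "/tmp/ct-dumps") -> Path:
        """Snapshot-backup an LXC container with vzdump; return the archive path."""
        Path(dump_dir).mkdir(parents=True, exist_ok=True)
        subprocess.run(
            ["vzdump", str(ctid), "--mode", "snapshot",
             "--compress", "zstd", "--dumpdir", dump_dir],
            check=True,
        )
        # vzdump names archives vzdump-lxc-<ctid>-<timestamp>.tar.zst; the
        # timestamp sorts lexicographically, so the last one is the newest.
        archives = sorted(Path(dump_dir).glob(f"vzdump-lxc-{ctid}-*.tar.zst"))
        return archives[-1]

    def restore_on_new_host(archive: Path, new_ctid: int,
                            host: str = "gpu-monster") -> None:
        """Copy the archive to another node and restore it as a new container.
        Hostname and storage name are placeholders."""
        subprocess.run(["scp", str(archive), f"root@{host}:/tmp/"], check=True)
        subprocess.run(
            ["ssh", f"root@{host}",
             "pct", "restore", str(new_ctid), f"/tmp/{archive.name}",
             "--storage", "local-lvm"],
            check=True,
        )
    ```

    In practice this sort of thing would presumably live in the same Ansible setup mentioned above rather than as one-off scripts.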


  • Tiny/mini/micro.

    You can grab a used box for under $200. Most I’ve picked up have been around $100-$125; then I drop in a new M.2 for the host OS and maybe add/change RAM depending on what it came with.

    Data lives on the NAS (really multiple NASes for me, but that’s beside the point here), and you’ll get waaaayyyyy more compute from a USFF PC like that than from a Pi or from what a NAS can offer. They also run really light on power when you aren’t putting the CPU to work, so they’re budget friendly in a bunch of ways.

    I’ve got a goal, after a move my wife and I are planning, to run the whole shebang on solar, with battery and a cutover to utility power. I’ve got 10 of these little monsters now, after a recent addition, and from my measurements of actual power usage it’s quite doable (rough numbers sketched below).

    Which is a really long way of saying you may want to look at some tiny/mini/micro PCs.
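
    For anyone curious about the solar sizing, the back-of-the-envelope version looks something like this - the wattage here is a placeholder (typical idle-ish draw for 1-litre USFF boxes), not my actual measurements, so plug in your own:

    ```python
    # Back-of-the-envelope sizing for running ~10 mini PCs on solar + battery.
    # Wattage figures below are assumed typical values, not measured ones.

    NODES = 10
    AVG_WATTS_PER_NODE = 15      # assumed average draw, mostly-idle workload
    HOURS_PER_DAY = 24
    BATTERY_AUTONOMY_HOURS = 12  # how long the battery alone should carry the load

    avg_load_w = NODES * AVG_WATTS_PER_NODE           # 150 W continuous
    daily_wh = avg_load_w * HOURS_PER_DAY             # 3600 Wh = 3.6 kWh per day
    battery_wh = avg_load_w * BATTERY_AUTONOMY_HOURS  # 1800 Wh = 1.8 kWh of storage

    print(f"Average load:  {avg_load_w} W")
    print(f"Daily energy:  {daily_wh / 1000:.1f} kWh")
    print(f"Battery ({BATTERY_AUTONOMY_HOURS} h autonomy): {battery_wh / 1000:.1f} kWh")
    ```

    With numbers in that ballpark, a modest panel array, a couple of kWh of battery, and a transfer switch back to utility power covers it.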


  • Mostly equivalent.

    I’ve had a slightly higher failure rate with the Dells, but the sample size is too small to mean anything.

    The Lenovos, more often than the others, I’ve found outfitted with a dGPU, which comes in handy in some scenarios - but I think that comes down more to which enterprises buy Lenovos and spec the dGPU, and that’s just what I’ve come across in used/decommissioned territory.

    Short answer - they are basically all the same.