this post was submitted on 07 Jul 2023
142 points (98.0% liked)

Selfhosted

A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

Rules:

  1. Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.

  2. No spam posting.

  3. Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.

  4. Don't duplicate the full text of your blog or github here. Just post the link for folks to click.

  5. Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).

  6. No trolling.

[–] jordanwhite1@lemmy.world 53 points 1 year ago (1 children)

I would document everything as I go.

I am a hobbyist running a Proxmox server with a Docker host for a media server, a Plex host, a NAS host, and a Home Assistant host.

I feel that if it were to break, it would take me a long time to rebuild.

[–] bmarinov@lemmy.world 21 points 1 year ago (1 children)

Ansible everything and automate as you go. It is slower, but if it's not your first time setting something up, it's not too bad. Right now I literally couldn't care less if the SD card on one of my Raspberry Pis dies, or if my monitoring backend needs to be reinstalled.
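The "Ansible everything" approach can start very small. A minimal sketch, assuming a hypothetical `pis` host group and made-up role names:

```yaml
# playbook.yml -- sketch only; hosts, roles, and layout are illustrative
- hosts: pis
  become: true
  roles:
    - common        # users, SSH keys, unattended upgrades
    - monitoring    # e.g. a node exporter

# inventory.ini (same directory):
# [pis]
# pi1.lan
# pi2.lan
```

After a dead SD card, reflashing the OS and re-running `ansible-playbook -i inventory.ini playbook.yml` gets the host back to its declared state.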

[–] Notorious@lemmy.link 10 points 1 year ago (2 children)

IMO Ansible is overkill for my homelab. All of my Docker containers live on two servers, one remote and one at home. Both are built with Docker Compose and are backed up, along with their data, weekly to both servers and a third-party cloud backup. In the event one of them fails, I have two copies of the data and could have everything back up and running in under 30 minutes.

I also don’t like that Ansible is owned by RedHat. They’ve shown recently they have zero care for their users.

[–] echo@sopuli.xyz 3 points 1 year ago

If by "their users" you mean people who use rebuilds of RHEL, I guess.

[–] TechieDamien@lemmy.ml 24 points 1 year ago (5 children)

I would have taken a deep dive into docker and containerised pretty much everything.

[–] Toribor@corndog.uk 4 points 1 year ago

Converting my environment to be mostly containerized was a bit of a slow process that taught me a lot, but now I can try out new applications and configurations at such an accelerated rate it's crazy. Once I got the hang of Docker (and Ansible) it became so easy to try new things, tear them down and try again. Moving services around, backing up or restoring data is way easier.

I can't overstate how impactful containerization has been to my self hosting workflow.

[–] AdmiralAckbar@lemmy.world 3 points 1 year ago

Same here. Now I'm half docker and half random other stuff.

[–] spez_@lemmy.world 2 points 1 year ago* (last edited 1 year ago)

I'm mostly Docker. I want to self-host Lemmy, but there's no one-click Docker Compose / Portainer installer (for SWAG / Nginx Proxy Manager) yet, so I won't until it's ready.

[–] tejrik@lemmy.sdf.org 16 points 1 year ago* (last edited 1 year ago) (1 children)

I wouldn't change anything. I like fixing things as I go; doing things right the first time is only nice when I know exactly what I'm doing!

That being said, in my current environment I made a mistake when I discovered Docker Compose. I saw how wonderfully simple it made deployment, and how it helped with version control, and decided to dump every single service into one singular docker-compose.yaml. Next time I would separate services into at least their relevant categories, for ease of making changes later.

Better yet, I would automate deployment with Ansible... but that's my next step in learning, and I can fix both mistakes as I go next time!
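Splitting that single file by category can be as simple as one compose file per directory; the layout below is just one hypothetical way to slice it:

```yaml
# ~/stacks/media/docker-compose.yml
services:
  jellyfin:
    image: jellyfin/jellyfin
    restart: unless-stopped

# ~/stacks/monitoring/docker-compose.yml holds grafana, prometheus, etc.
# Each category is then managed independently:
#   cd ~/stacks/media && docker compose up -d
```

Each stack can be pulled, restarted, or torn down without touching the others, which is exactly the per-category split described above.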

[–] conrad82@lemmy.world 3 points 1 year ago (1 children)

I do the same. I use the Caddy reverse proxy, and find it useful to address each container by its container name, with no ports exposed.
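A minimal sketch of that Caddy setup, assuming Caddy shares a Docker network with the services; the domain and container name are examples only:

```
# Caddyfile
jellyfin.example.com {
        reverse_proxy jellyfin:8096   # container name resolves on the shared Docker network
}
```

Because Caddy reaches the service over the internal network, the container needs no `ports:` mapping at all.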

What is the benefit of separate files when making changes?

[–] wraith@lemm.ee 4 points 1 year ago (1 children)

If you have related containers (e.g. the *arr stack), then you can bring all of them up with a single docker compose command (or pull fresh versions, etc.). If everything is in a single file, you have to manually pull/start/stop each container, or else do it to everything at once.

[–] ThorrJo@lemmy.sdf.org 14 points 1 year ago (1 children)

Go with used & refurb business PCs right out of the gate instead of fucking around with SBCs like the Pi.

Go with "1-liter" aka Ultra Small Form Factor right away instead of starting with SFF. (I don't have a permanent residence at the moment so this makes sense for me)

[–] Toribor@corndog.uk 13 points 1 year ago* (last edited 1 year ago)

I should have learned Ansible earlier.

Docker compose helped me get started with containers but I kept having to push out new config files and manually cycle services. Now I have Ansible roles that can configure and deploy apps from scratch without me even needing to back up config files at all.

Most of my documentation has gone away entirely; I don't need to remember things when they are defined in code.

[–] brad@toad.work 11 points 1 year ago

For me:

  • Document things (configs, ports, etc) as I go
  • Uniform folder layout for everything (my first couple of servers were a bit wild-westy)
  • Choosing and utilizing some reasonable method of assigning ports to things. I do not even want to explain what I need to do when I forget what port something in this setup is using.
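One hypothetical way to make port assignment systematic rather than memorized is to derive each port deterministically from the service name. This is only an illustration of "some reasonable method", not anyone's actual setup:

```python
import hashlib

def assign_port(service: str, base: int = 20000, span: int = 10000) -> int:
    """Map a service name to a stable port in [base, base + span).

    Deterministic: the same name always yields the same port, so the
    mapping never needs to be memorized. Collisions are possible and
    should be checked whenever a new service is added.
    """
    digest = int(hashlib.sha256(service.encode("utf-8")).hexdigest(), 16)
    return base + (digest % span)
```

A plain text registry file achieves the same thing; the point is having one written-down rule instead of a memory.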
[–] stanleytweedle@lemmy.world 9 points 1 year ago (1 children)

Buy an actual NAS instead of a rat's nest of a USB hub and drives. But now it works, so I'm too lazy and cheap to migrate off it.

[–] Anarch157a@lemmy.world 8 points 1 year ago (3 children)

I already did, a few months ago. My setup was a mess: everything tacked onto the host OS, some stuff installed directly, others as Docker containers, and the firewall was just a bunch of hand-written iptables rules...

I got a newer motherboard and CPU to replace my ageing i5-2500K, so I decided to start from scratch.

First order of business: Something to manage VMs and containers. Second: a decent firewall. Third: One app, one container.

I ended up with:

  • Proxmox as VM and container manager
  • OPNsense as firewall. The server has 3 network cards (1 built-in, 2 on PCIe slots); the 2 add-ons are passed through to OPNsense, and the built-in one is for managing Proxmox and for the containers.
  • A whole bunch of LXC containers running all sorts of stuff.

Things look a lot more professional and clean, and it's all much easier to manage.

[–] das@lemellem.dasonic.xyz 8 points 1 year ago

I would have gone with an Intel CPU to make use of iGPU for transcoding and probably larger hard drives.
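For the Intel iGPU route, hardware transcoding in a containerized media server usually only needs the render device passed through. A sketch with an example service:

```yaml
# docker-compose.yml fragment -- Intel Quick Sync passthrough
services:
  jellyfin:
    image: jellyfin/jellyfin
    devices:
      - /dev/dri:/dev/dri   # iGPU render nodes for VAAPI/QSV transcoding
```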

I also would have written down my MariaDB admin password... Whoops

[–] Showroom7561@lemmy.ca 8 points 1 year ago (2 children)

Instead of a 4-bay NAS, I would have gone with a 6-bay.

You only realize just how expensive it is to expand on your space when you have to REPLACE HDDs rather than simply adding more.

[–] billm@lemmy.oursphere.space 6 points 1 year ago

Yes, but you'll be wishing you had 8 bays when you fill the 6 :) At some point you have to replace disks to really increase space, so don't make your RAID volumes consist of more disks than you can reasonably afford to replace at one time.

Second lesson: if you have spare drive bays, use them as part of your upgrade strategy, not as additional storage. I started this last iteration with a 6x 3TB raidz2 vdev, then opted to add another 6x 3TB vdev instead of biting the bullet and upgrading. Now, to add more storage, I need to replace 6 drives. Instead, I built a second NAS to back up the primary and am pulling all 12 disks and dropping back to 6. If/when I increase storage, I'll drop 6 new ones in and MOVE the data instead of adding capacity.

[–] lemmy@lemmy.nsw2.xyz 7 points 1 year ago

Set up for high availability. I have a hard time taking things down now, since other people rely on my setup being up.

[–] nick@nickbuilds.net 7 points 1 year ago

Actually plan things and do research. Too many of my decisions come back to bite me because I don't plan out stuff like networking, resources, and hard drive layouts...

also documentation for sure

[–] clavismil@lemmy.world 6 points 1 year ago

Make sure my proxmox desktop build can do GPU passthrough.

[–] misaloun@reddthat.com 6 points 1 year ago (6 children)

I always redo it lol, which is kind of a waste but I enjoy it.

Maybe a related question is what I wish I could do if I had the time (which I will do eventually. Some I plan to do very soon):

  • self-host WireGuard instead of using Tailscale
  • self-host an ACME-like setup for self-signed TLS/HTTPS certificates
  • self-host an encrypted Git server for private stuff
  • set up a file watcher on clients to sync my notes on save automatically using rsync (yes, I know I can use Syncthing. Don't wanna!)
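For the self-hosted WireGuard item, the server-side config is short; the keys are placeholders and the subnet is an arbitrary example:

```ini
# /etc/wireguard/wg0.conf
[Interface]
Address = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
# one [Peer] block per client device
PublicKey = <client-public-key>
AllowedIPs = 10.8.0.2/32
```

Brought up with `wg-quick up wg0`. The trade-off versus Tailscale is that NAT traversal and key distribution become your problem.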
[–] KitchenNo2246@lemmy.world 6 points 1 year ago (1 children)

I'd have stuck with ZFS.

[–] pinkolik@random-hero.com 2 points 1 year ago

That was my mistake when I tried to host literally everything on an Orange Pi, which has only 2 GB of RAM.

[–] Nitrousoxide@lemmy.fmhy.ml 5 points 1 year ago

I'm generally pretty happy with it, though I'd have used podman rather than docker if I were starting now.

[–] thejevans@lemmy.ml 5 points 1 year ago (4 children)

My current homelab is running on a single Dell R720xd with 12x 6TB SAS HDDs. I have ESXi as the hypervisor, with a pfSense gateway and a TrueNAS CORE VM. It's compact, has lots of redundancy, can run everything I want and more, and has IPMI and ECC RAM. Great, right?

Well, it sucks back about 300w at idle, sounds like a jet engine all the time, and having everything on one machine is fragile as hell.

Not to mention the Aruba Networks switch and Eaton UPS that are also loud.

I had to beg my dad to let it live at his house because no matter what I did: custom fan curves, better c-state management, a custom enclosure with sound isolation and ducting, I could not dump heat fast enough to make it quiet and it was driving me mad.

I'm in the process of doing it better. I'm going to build a small NAS using consumer hardware and big, quiet fans. I have a fanless N6005 box as a gateway, and I'm going to convert my old gaming machine to a hypervisor using Proxmox, with each VM managed with either docker-compose, Ansible, or NixOS.

...and I'm now documenting everything.


Not accidentally buy a server that takes 2.5 inch hard drives. Currently I'm using some of the ones it came with and 2 WD Red drives that I just have sitting on top of the server with SATA extension cables going down to the server.

[–] exu@feditown.com 4 points 1 year ago

I'd use Terraform and Ansible from the start. I'm slowly migrating my current setup to these tools, but that's obviously harder than starting from scratch. At least I did document everything in some way. That documentation plus state on the server is definitely enough to do this transition.
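The Terraform side of that split boils down to declaring servers in code and letting Ansible configure whatever Terraform provisions. A sketch; the provider, image, and server type below are placeholders for whichever host you use:

```hcl
# main.tf -- illustrative only (this example uses Hetzner's provider)
resource "hcloud_server" "app" {
  name        = "app-1"
  image       = "debian-12"
  server_type = "cx22"
}

output "app_ip" {
  value = hcloud_server.app.ipv4_address
}
```

The output feeds the Ansible inventory, so rebuilding a dead server is `terraform apply` followed by the usual playbook run.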

[–] alteredEnvoy@feddit.ch 3 points 1 year ago

Get a more powerful but quieter device. My 10th gen NUC is loud and sluggish when a mobile client connects.

[–] Still@programming.dev 3 points 1 year ago

I'd put my storage in a proper NAS machine rather than having 25TB strewn across 4 boxes.

[–] wgs@lemmy.sdf.org 3 points 1 year ago* (last edited 1 year ago) (3 children)

I already have to do it every now and then, because I insisted on buying bare-metal servers (at Scaleway) rather than VMs. These things die very abruptly, and I learnt the hard way how important backups and config management systems are.

If I had to redo EVERYTHING, I would use terraform to provision servers, and go with a "backup, automate and deploy" approach. Documentation would be a plus, but with the config management I feel like I don't need it anymore.

Also I'd encrypt all disks.

[–] vegetaaaaaaa@lemmy.world 2 points 1 year ago* (last edited 1 year ago) (1 children)

Also I’d encrypt all disks.

What's the point on a rented VPS? The provider can just dump the decryption key from RAM.

bare-metal servers (at Scaleway) rather than VMs. These things die very abruptly

Had this happen to me with two Dedibox (scaleway) servers over a few months (I had backups, no big deal but annoying). wtf do they do with their machines to burn through them at this rate??

[–] wgs@lemmy.sdf.org 4 points 1 year ago (2 children)

I don't know if they can "just" dump the key from RAM on a bare metal server. Nevertheless, it covers my ass when they retire the server after I used it.

And yeah, I've had quite a few servers die on me (usually the hard drive). At this point I'm wondering if it isn't planned obsolescence to force you into buying their new hardware every now and then. Regardless, I'm slowly moving off Scaleway, as their support is now mediocre in these cases and their cheapest servers no longer support console access, which means you're bound to using their distros.


I would spend more time planning and understanding docker. My setup works, but it's kinda messy

[–] apigban@lemmy.dbzer0.com 3 points 1 year ago

I'd make my own NAS.

[–] kevin@lemmy.sdf.org 2 points 1 year ago

I'd plan out what machines do what according to their drive sizes, rather than finding out the hard way that the one I used as a mail server only has a few GB spare. I'd certainly document what I have going: if my machine Francesco explodes one day, it'll take months to remember what was actually running on it.

I'd also not risk years of data on a single SSD that just stopped functioning for my "NAS" (it's not really a true NAS, just a shitty drive with a terabyte), and have a better backup plan.

[–] blackstrat@lemmy.fwgx.uk 2 points 1 year ago

I have ended up with 6x 2TB disks, so if I was starting again I'd go 2x10TB and use an IT mode HBA and software RAID 1. I'd also replace my 2x Netgear Switches and 1x basic smart TP-Link switch and go full TP-Link Omada for switching with POE ports on 2 of them - I have an Omada WAP and it's very good. Otherwise I'm pretty happy.

[–] m1st3r2@butts.international 2 points 1 year ago

Not go as HAM on commercial server hardware. iLO is really nice for management though...
