I don't know if I can claim a spare hard drive hooked up to a Raspberry Pi as a NAS, but it's what I have - and it works quite well for my single-user use case.
I’ve got a Synology 918+ with 16TB in RAID 10.
Of the Synology software, I regularly use: Photos (photo backup and organization tool), Drive (a private “cloud” sync like Dropbox), the contacts and calendar services, and Surveillance Station, their security camera monitor/recorder. Via Docker, I also run DokuWiki, Gitea, draw.io, MinIO, Postgres, FreshRSS, Firefly III, Calibre, and a few others. Like others, Time Machine backups of laptops and backups of non-Apple hardware use a lot of the space.
I also still have my older Synology 213 running, just as a place to back up important stuff from the primary.
On Reddit, we had a r/selfhosted. Do we have something like that here, in the new frontier?
There's !selfhost@lemmy.ml (and apparently also !selfhosted@lemmy.ml and !selfhosting@slrpnk.net which popped up in the autocompleter).
Thank you! When I tried looking for them last night, I couldn't find anything, so this is very much appreciated!
I'd like to build a NAS. Does anyone have a simple guide I could follow? I do have experience building my personal computers. I could search online for a guide, but a lot of the time small communities like this will have the end-all be-all guide that isn't well known.
I don't have one off hand but a NAS at homelab level is not that different from a server.
I have had success with getting a second-hand server with a moderately powerful processor (an old i5 maybe?), a good 1/10GbE network card (which can be set up with bonding if you have multiple ports), and lots of SATA ports or a RAID card (you need PCIe slots for the cards as well).
I would even go with a lower-power processor if power savings matter to you. ECC RAM would be great, especially for ZFS/btrfs/XFS.
I used a Raspberry Pi 4 to make a Pi-hole ad blocker. I then learned you could just plug a hard drive into it (USB) to make a very simple NAS. I bought an SSD-to-USB case/adapter, and with basic tutorials online I now have a network drive. To reiterate, I have no programming skills. Both the Pi-hole and now the NAS came from copy-paste command-line walkthroughs.
It's not a fancy "NAS" as far as redundancy or backup, but now all 4 of the gaming PCs (wife and kids have their own) can connect to the same drive for sharing stuff. I also use it to manually back up all our photos/videos. I love it.
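For anyone wanting to replicate the Pi-plus-USB-drive approach, the sharing part is usually just Samba; here's a minimal sketch, assuming the drive is mounted at /mnt/usbdrive and shared to a user called pi (both are placeholders, not this poster's setup):

```
# Install Samba and mount the USB drive (check the device name with lsblk first)
sudo apt install samba
sudo mkdir -p /mnt/usbdrive
sudo mount /dev/sda1 /mnt/usbdrive

# Append a simple share definition to the Samba config
cat <<'EOF' | sudo tee -a /etc/samba/smb.conf
[shared]
   path = /mnt/usbdrive
   browseable = yes
   read only = no
EOF

# Give the pi user a Samba password and restart the service
sudo smbpasswd -a pi
sudo systemctl restart smbd
```

The share then shows up as `\\raspberrypi\shared` from Windows machines (or `smb://raspberrypi/shared` elsewhere).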
I've got a 'NAS' set up on my desktop computer/server. I use it for almost everything. It runs VMs and games and self-hosted servers, etc. It runs Arch Linux but does it all: Plex/Sonarr/Radarr/qBittorrent.
24 TB of HDD in RAID 10.
I haven't found a good reason to keep a separate computer/server. It pretty much just always complicates the setup. If I need more separation, a VM is usually a better answer in most cases as far as I can see.
Desktop PC running Proxmox with a bunch of VMs, mostly focused on hosting Plex but some other stuff as well. Below are some of my VMs. All are running Ubuntu Server, btw.
- HDDs get passed into this VM, which uses mergerfs to pool them all together. Then I'm running an NFS server to share the drives with the other VMs that need access (see the sketch after this list).
- Torrent client, sonarr, radarr, etc. To automatically acquire content.
- Plex VM
- Gaming servers (hosts Minecraft, valheim, etc servers)
- externally exposed nginx instance, hosts sites such as overseer.
- internally exposed nginx instance, allows for https access to all internal services (sonarr, radarr, flood, etc).
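For anyone curious, here is a rough sketch of the mergerfs + NFS part described above; mount points, pool path, and subnet are placeholders, not the poster's actual config:

```
# /etc/fstab on the storage VM: pool the passed-through disks into one mount with mergerfs
/mnt/disk1:/mnt/disk2:/mnt/disk3  /mnt/pool  fuse.mergerfs  defaults,allow_other,category.create=mfs  0 0

# /etc/exports on the storage VM: export the pool to the other VMs (example subnet)
/mnt/pool  192.168.1.0/24(rw,sync,no_subtree_check)
```

After `sudo exportfs -ra`, the Plex or download VMs just mount it with `mount -t nfs storage-vm:/mnt/pool /mnt/pool`.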
I have an older R710 running TrueNAS right now with 1TB of (usable) flash storage and a 10Gb connection to the rest of my lab.
I have another TrueNAS instance running as a Proxmox VM with a Lenovo SA120 DAS attached to it, which has 2x 10TB drives in mirror mode for mass storage. It's also technically connected via 10Gb to the rest of my lab.
It's something I've always wanted to set up for my personal files, docs, media, etc., but I get dissuaded once I see Synology costs, hard drive requirements, RAID setup options, and just generally the power draw / heat & noise generation. Looking forward to the answers here; I'd be very happy to get off cloud storage, but not if it's a second job setting it up and maintaining it.
I'm using a Synology setup. I thought I'd grab an off-the-shelf option as I have a habit of going down rabbit holes with DIY projects. It's working well, doing a one-way mirror of my local storage with nightly backups from the NAS to a cloud server.
I use Synology. I’ve done FreeNAS, OpenFiler, even just straight ZFS/Linux/SMB/iSCSI on Ubuntu and others. Synology works well and is quite easy to set up. I let the NAS do file storage and tie other computers to it (namely SFF Dell machines) to do the other stuff, like Pi-hole or Plex. Storage is shared from the NAS via CIFS/SMB or iSCSI.
Synology also has one of the best backups for home use, IMHO, with Active Backup for Business. It can do VMware, Windows, Mac, Linux, etc. I actually have an older second NAS for that alone. But you can do it all in one easily.
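For reference, consuming storage shared that way from one of the client boxes is basically a one-liner; a sketch, with hostname, share name, and credentials as placeholders:

```
# Needed once on the client
sudo apt install cifs-utils

# One-off SMB/CIFS mount from a Linux client
sudo mount -t cifs //nas/media /mnt/media -o username=nasuser,password=secret

# Or persistently via /etc/fstab, with credentials kept out of the file
//nas/media  /mnt/media  cifs  credentials=/etc/cifs-creds,_netdev,nofail  0 0
```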
I've just been using an old laptop with jellyfin, radarr, sonarr and transmission.
Custom low power build:
- Case: some old 2U Supermicro case with 6 HDD bays that got thrown out at work
- Mainboard: ASRock J4105M Micro ATX
- CPU: Intel Celeron J4105
- RAM: 8 GB DDR4 (CPU doesn't support more)
- RAID controller: LSI 9212-4i
- System SSDs: 2x 128 GB Intenso 2.5" SATA SSD (mounted into the first two bays with 3D-printed 2.5" to 3.5" adapters)
- Data HDDs: 2x Seagate Ironwolf 4TB, 2x Seagate Exos X16 14TB, combined into an 18 TB zfs pool
- PSU: PicoPSU
My main goal was to build a 4 HDD NAS that can run at very low power and without active cooling most of the time (because it sits under my desk) but can spin up fans if needed.
On the software side I run Ubuntu 22.04, Docker, and Jellyfin as a media server. The J4105's integrated Intel graphics handle hardware video encoding (Quick Sync).
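For anyone copying a setup like this, the usual way to let a Jellyfin container use the iGPU is to pass /dev/dri through; a sketch, with host paths as placeholder examples:

```
# Pass the Intel iGPU into the container so Jellyfin can use Quick Sync for transcoding
docker run -d --name jellyfin \
  --device /dev/dri:/dev/dri \
  -v /srv/jellyfin/config:/config \
  -v /srv/media:/media:ro \
  -p 8096:8096 \
  --restart unless-stopped \
  jellyfin/jellyfin
```

Hardware transcoding then still has to be switched on under the Playback settings in the Jellyfin dashboard.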
There are plenty of replies with decent, current NAS setups - so I'll reply with my first NAS instead...
You could start with a Pi-NAS to save a lot of $$coin$$... start with a Raspberry Pi 4 8GB; it has gigabit ethernet, so it meets that baseline... since you'll be running over the USB 3 bus regardless, you can get away with buying cheap USB drives; there are many brands, but Western Digitals are pretty cheap... they come in much larger capacities nowadays, but 4TB drives are only $100 or so... I went with two 8TB drives. It's better, IMO, to go with the larger 3.5" versions because they come with external power supplies. I found that with the smaller 2.5" drives, the Pi could only power one of them over USB...
I used no RAID, as you have to jump through a few extra hoops to get RAID set up over drives on the USB 3 bus... backup was done through my Proxmox PBS server - but we're not here for the safe backup talk, right?
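For the curious, those extra hoops mostly amount to doing the RAID yourself with mdadm; a minimal mirror over two USB drives might look roughly like this (device names are examples, check yours with lsblk):

```
# Create a RAID1 mirror across two USB drives
sudo apt install mdadm
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
sudo mkfs.ext4 /dev/md0
sudo mount /dev/md0 /mnt/raid

# Record the array so it reassembles on boot
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u
```

The fiddly part is that USB enumeration order can change between boots, which is part of why many people skip RAID over USB entirely.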
All this was running OpenMediaVault, which is pretty decent NAS software. It has support for all the connection types you'd want - and believe it or not, I also ran Plex in Docker and got decent results; while I wasn't able to do any transcoding, wireless playback worked quickly enough for me - and I could even watch movies remotely...
I mention this setup b/c a 16TB Pi-NAS can be had for $300, all in... you can see speeds of 100MB/s but I found 40-50MB/s was an average because of WiFi or other bottlenecks.
It's cool to have options when building a NAS; I've since moved my NAS to a Proxmox VM on my Dell PowerEdge server, but the Pi-NAS ran without fail for four years...
- pAULIE42o
I have a Synology NAS, two bays with 4TB in RAID.
Mostly used for Plex (as a Netflix alternative and for music streaming), but I also find these useful:
- Vaultwarden (password manager)
- Virtual machine (for torrenting)
- Sonarr (torrent indexer)
- Radarr (torrent indexer)
- Synology Photos (photo backup)
- Synology Drive (personal cloud storage)
- Joplin (notebook)
Probably some other stuff as well. I highly recommend mariushosting.com if you have or end up using a Synology NAS. Amazing tutorials for just about anything.
I've got an HP DL360 G9 running Ubuntu Server LTS and ZFS on Linux with 8x 1.2TB 10k disks, and an external enclosure (connected by external SAS) with 8x 2TB (3.5" SATA) disks. The 1.2TB disks are in a ZFS RAID10-style array (striped mirrors) which holds all our personal and shared documents, photos, etc. The 2TB disks are in a raidz2 and store larger files.
It uses a stupid amount of power though (mainly the 10k disks), so it's going to be replaced this year with something newer; not sure what that will look like yet.
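For anyone wanting to reproduce a layout like this, the two pools map onto plain zpool commands; a sketch with placeholder device names (use your own /dev/disk/by-id paths):

```
# Striped mirrors ("RAID10" equivalent) for the fast 10k disks
sudo zpool create fastpool \
    mirror /dev/disk/by-id/DISK1 /dev/disk/by-id/DISK2 \
    mirror /dev/disk/by-id/DISK3 /dev/disk/by-id/DISK4

# Double-parity raidz2 for the bulk 2TB disks (survives two disk failures)
sudo zpool create bulkpool raidz2 \
    /dev/disk/by-id/DISK5 /dev/disk/by-id/DISK6 \
    /dev/disk/by-id/DISK7 /dev/disk/by-id/DISK8
```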
I use mine for file storage, Pi-Hole, and Jellyfin mostly.
Computer with Ubuntu Server, with a Ryzen APU (3400g), 16GB DDR4 RAM, and 2 x 4TB WD Red CMR Drives.
Use it as a media server for Jellyfin, and also as a file server using NFS. Works super awesome and I wish I had done this sooner
Currently running an R710 in RAID6 with 32TB usable, but between the data on Plex and backups of things in the rack, I'm low on space.
I'm looking at getting 8 Odroid HC4s and some refurbished 20TB drives to build a GlusterFS cluster that will host all of my VM disks and backups. With that I'll have 80-120TB depending on how much fault tolerance I want. Because they have two HDD slots, I can double my storage when it gets low and just add more boards to expand the array when I'm tight on space again.
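In case it helps anyone planning something similar, a distributed-replicated Gluster volume over boards like that takes only a few commands, and growing it later is just add-brick; hostnames and brick paths below are placeholders:

```
# From one node, after installing glusterfs-server everywhere and peering the boards
sudo gluster peer probe hc4-02
sudo gluster peer probe hc4-03
sudo gluster peer probe hc4-04

# replica 2: every file lives on two bricks, so one board/disk can fail
sudo gluster volume create vmstore replica 2 \
    hc4-01:/data/brick1 hc4-02:/data/brick1 \
    hc4-03:/data/brick1 hc4-04:/data/brick1
sudo gluster volume start vmstore

# Later, expand by adding bricks in replica-sized sets
sudo gluster volume add-brick vmstore hc4-05:/data/brick1 hc4-06:/data/brick1
```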
I don't have any experience with the Odroid HC4, but I used to have an N2, and while I am sympathetic towards Odroid, I can't help but feel their software/firmware support is lacking. I always had issues with the GPU driver, and there was either a hardware or firmware fault with the USB controller which led to random access errors.
Oh I'm not going to use the trash OS Odroid supplies. I'm going to use Armbian which is much more stable and has better support for the tooling I want to use
I use an old computer that is reasonably energy efficient and have Unraid. 3 NAS hard drives, 1 SSD for cache, a USB for the OS, and that's it.
Using Docker containers from the built-in app store, I have Filebrowser, Syncthing, qBittorrent, and Mumble as the most used. Filebrowser faces outward and is a simple replacement for Nextcloud.
I'm not technologically minded, but it wasn't hard to set up or use. It mostly stores backups of photos.
I have a mini ThinkCentre. I used to use TrueNAS Scale, but switched to Ubuntu Server due to having tons of issues.
I run Jellyfin, Radarr/Sonarr, maybe a Minecraft server, and a few other things.
I got a DS920+ and have been using it for file storage, backups, Plex, and running Docker for all my *arrs. I really like Synology as an entry level; it got me to dig deeper and learn more. I'm behind a CGNAT, so setting up a VPN solution that would work was a pain on DSM. I'm in the process of setting up my own homelab and building a TrueNAS box as I learn more about ZFS.
Mine currently runs on an old Pi 3 with an external hard drive plugged in via a powered USB hub. I'm using OpenMediaVault at the moment, but I'm probably going to swap it over to just NFS when I get the chance. I'm also planning to swap out the single external drive for 4 drives in a soft RAID through LVM.
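For reference, the soft-RAID-through-LVM plan needs nothing beyond stock lvm2; here's a sketch with four example drives (the RAID level and size are illustrative, not the poster's actual plan):

```
# Turn the four drives into physical volumes and one volume group
sudo pvcreate /dev/sda /dev/sdb /dev/sdc /dev/sdd
sudo vgcreate nasvg /dev/sda /dev/sdb /dev/sdc /dev/sdd

# Create a RAID5 logical volume striped across three data disks plus parity
sudo lvcreate --type raid5 -i 3 -L 10T -n nasdata nasvg
sudo mkfs.ext4 /dev/nasvg/nasdata
```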
I'm lucky enough to have a Kobol Helios64, but unfortunately the small company that made these shut down. It's fine for the time being but I'm going to have to pay attention to the NAS market to be ready to replace it one day... my main goal is low power, so I'm not sure if it's worth it to go to a more commercial option like Synology or if I should be building something.
As appliance NAS tend to be, the actual SBC in the Helios64 is pretty slow so I minimize what I run on it. It does have Plex server, but most everything else runs on another ARM machine that mounts from it by SMB.
@Gaywallet @technology I have a synology and I use it only for NAS type things and run minio on it via docker. It's been up and running fine for about 8 years but now that I want to upgrade and add things like 2.5g ethernet, it is a pain. My upgrade path is getting a SFF case and building my own NAS with off the shelf components. It should be rock solid with FreeNAS/TrueNAS/UnRaid and easy to upgrade and tinker with over time.
I have a $60 ASRock mobo + Intel Celeron quad-core combo. Stuck 16GB of RAM in it and two SATA controllers. I think the RAM was the most expensive part, and the computer now costs $150 without the drives. I have 8x 4TB drives and a 60GB SSD running TrueNAS. That gives me 24TB of space and two redundant drives for failure tolerance. From there I run jails (FreeBSD containers) for Nextcloud, MiniDLNA, and Transmission.
I bought a 2-bay DS220+ with 2x 4TB drives. Been happy with it so far. I've got Jellyfin on there and use Synology Photos and Drive to back up stuff. I also use AdGuard Home; this has been amazing and has blocked many weird Microsoft and Amazon pings. Yes, it's proprietary, but when I was building it out, it seemed to be a decent choice and had lots of support. As I get more experience, I will probably build my own NAS.
I built a massively overkill NAS with the intention of turning it into a full-blown home server. That fizzled out after a while (partially because the setup I went with didn't have GPU power options on the server PSUs, and finagling an ATX PSU in there was too sketchy for me), so now it's a power hog that just holds files. I just turn it on to use the files, then flip it back off to save on its ridiculous idle power costs.
In hindsight I'd have gone with a lighter motherboard/CPU combo and kept the server-grade stuff for a separate unit. The NAS doesn't need more than a beefy NIC and a SAS drive controller, and those are only x8 PCIe slots at most.
Also, I use TrueNAS Scale; it's more work to set up than Unraid, but the ZFS architecture seemed too good to ignore.
After running two different Synology units at home (years ago) I finally just moved everything to a mix of USB SSDs and cloud storage. I really wasn't using many of the more advanced NAS features very often. They were expensive toys if I'm being honest with myself.
I needed lots of storage, primarily, and there are cheaper options for that now. SSDs are also far more reliable than they used to be. I keep the "large but not critical" files on SSD and the "critical" files in the cloud.
Not trying to discourage anyone from running their own NAS. It can do more than just store files for you, and they are fun to mess around with. But if all you really need is reliable storage, my advice would be to shop around for other solutions.
Yes, SSDs are definitely a viable option now
Assembled a server with a Supermicro X9DRL-IF board and an old case. I figured I wanted ECC to safeguard against corruption on top of using btrfs. Harvested old laptop drives to build out a decent-sized storage array for my purposes. The rationale overall: cheaper to assemble than a purpose-built NAS with ECC and moderate compute power, plus more storage expansion options.
99% of the function has been serving Samba/SFTP shares. Started using Nextcloud with Memories to sync and tag photos from mobile. Standardized everything I run on openSUSE Tumbleweed/MicroOS.
I had a 24 bay SAS racked external system piped into a racked server I had for many years. Experimented with tons of filesystems (played with ZFS so much), had a gazillion hard drives spinning, and ran Plex. I learned a ton from the experience, and lost many drives over the years (average 2 a year. I was rough on them.)
Nowadays, an Intel NUC with an external hard drive. Using Jellyfin and the *arrs for the obvious, then a small Kubernetes cluster for learning.
Used a QNAP for a bit when I was in a motorhome full-time, but found it wasn't powerful enough for more than filesharing and went to the NUC instead.
I've got a Synology with 2x 4TB as main storage for backups, photos, and Time Machine for my Macs.
Otherwise I run a mini PC (Dell OptiPlex, i5 8th gen, 32GB RAM, 2TB SSDs) with Proxmox hosting multiple VMs for Pi-hole, Home Assistant, Zigbee2MQTT, Node-RED, and some other stuff, partly in Docker, partly in VMs.
And lastly I run my old gaming pc as a development / tinker server with proxmox, currently playing around with bare metal Kubernetes and some other stuff.
edit: updated specs of mini pc
I have one Synology DS220+ at my parents' house which I use mostly for offsite backup of the computers and phones. But it also runs Synology Photos, which does face recognition, which I then use once an hour to fetch 20 pictures of the family to show on the TV in the living room. I have pictures there dating back to 2004, which makes it a very cool walk down memory lane. And I can use the TV as a big interactive picture frame when nobody is watching anything but we're at home.
With a similar script I also post pictures like "Today n years ago" in two of my family chats, which is even cooler because there my siblings and parents also get one picture from n years ago every morning when they wake up.
Here is the script; the bad part is that it has to have access to the PostgreSQL database on the NAS, so it's a bit tricky to set up, but once it runs it's awesome! https://github.com/jeena/synology-pictures
I run everything on a lean Ubuntu server install. My Ansible playbooks then take over and set up ZFS and docker. All of my hosted services are in docker, and their data and configs are contained, regularly snapshotted, and backed up in ZFS.
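The snapshot-and-backup part of a setup like that can be as small as two zfs commands on a timer; here's a sketch with placeholder dataset, host, and snapshot names (not this poster's actual layout):

```
# Snapshot the dataset holding docker data/configs (run from cron or a systemd timer)
zfs snapshot tank/docker@$(date +%Y-%m-%d)

# Send snapshots incrementally to a backup pool on another host
zfs send -i tank/docker@previous-snap tank/docker@latest-snap | \
    ssh backuphost zfs receive -u backup/docker
```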
I run basically all of the *arr stack, Plex (friendlier to my less tech-savvy family than my preferred solution, Jellyfin), HAss, Frigate NVR, Obsidian LiveSync, a few Minecraft worlds, Docspell, Tandoor Recipes, Gitea, Nextcloud, FoundryVTT, an internet radio station, Syncthing, WireGuard, ntfy, Calibre, Wallabag, Navidrome, and a few pet projects.
I also store or backup all of the important family documents and photos, though I haven't implemented Immich just yet, waiting for a few features and a little more development maturity.
About 30TB usable right now.
I used to have a NAS, ended up moving towards a mini server instead. The flexibility has been really worth it for me, and I run a JBOD enclosure for extra disks so that I can handle backups and media files.
I'm running an ASRock Deskmini. It has a variety of flaws, but it works surprisingly well and has been basically stable. It's tiny, and it can get a decent heatsink and fan upgrade on top. I'm running a Ryzen CPU in it. It runs all of my self-hosting stuff.
Using an old Netgear ReadyNAS 102 with 4.5TB of usable storage in RAID 0.
I used to run all kinds of services on the NAS itself via SSH access, but I've since moved those to separate Raspberry Pis. The Pis use the NAS as networked storage.
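For reference, having the Pis consume the NAS is typically a single fstab line per Pi (hostname and export path below are placeholders):

```
# /etc/fstab on each Pi: mount the NAS export at boot, don't block boot if it's down
nas.local:/export/data  /mnt/nas  nfs  defaults,_netdev,nofail  0 0
```

A quick `sudo mount -a` tests it without rebooting.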
I run a webserver, music server, matrix server and torrent client seeding ubuntu images.
I want to make a storage cluster using Ceph in the future, but I've not found any suitable small computers that I could use with that.
I've got an old office PC running ESXi with TrueNAS Core virtualized. It runs solidly, but I wish I'd used Proxmox for my hypervisor.
I built my own. All drives are used for data storage. I use a decommissioned server power supply and 2 PCIe expansion cards for connectivity. My case is a large plastic container with a lid from a dollar store. Regardless of its contents, each drive is encrypted, and each has its own mirror at a remote site. Every drive, regardless of location, has its own on/off switch. Cheap, somewhat primitive, but secure.
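Per-drive encryption like that is usually LUKS under the hood; a minimal sketch for a single drive, with the device and mapper names as placeholders:

```
# One-time setup: format with LUKS, open it, and create a filesystem inside
sudo cryptsetup luksFormat /dev/sdb
sudo cryptsetup open /dev/sdb drive1
sudo mkfs.ext4 /dev/mapper/drive1

# Each session: unlock, mount, and close again when done
sudo cryptsetup open /dev/sdb drive1
sudo mount /dev/mapper/drive1 /mnt/drive1
sudo umount /mnt/drive1
sudo cryptsetup close drive1
```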
I built my own, using XigmaNAS and RAIDZ3 across 15x 2TB disks for 22TB usable. Backup is now my main PITA - I lost a previous array with RAIDZ2 by losing 3 disks while trying to rebuild, so I should have known better: RAID is not a backup. However, finding large external disks with any sort of reliability seems hard - so many bad reviews of 12TB etc. disks. For now, I'm actually only using about 3.2TB and so can back up to Jottacloud, but it's slow with my internet. I think the first full backup took 3 months or something. Luckily I use something that is all incremental after that, but I probably actually need to set up a 20TB disk or something to just make full copies occasionally.
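The poster doesn't name their incremental tool; as one illustration of the approach, restic does a full first run and afterwards only uploads changed data (repository and source paths below are placeholders):

```
# Initialize a repository on a big external disk, then back up the pool
restic -r /mnt/external/restic-repo init
restic -r /mnt/external/restic-repo backup /tank/data

# Trim old snapshots so the repository doesn't grow forever
restic -r /mnt/external/restic-repo forget --keep-weekly 8 --keep-monthly 12 --prune
```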
No dedicated NAS. I have a main Linux system that's always-on for other purposes so that also serves as main storage. Remote access is entirely via ssh-based methods: sshfs, TRAMP in Emacs, git, occasional copying stuff around.