Hey all, I’ve been doing a bunch of research on self-hosting the last few weeks, as I’d love to lean on more open source projects for my daily productivity & entertainment. My main goal is to back up all my personal documents, photos, and videos (around 1TB so far over ~5 years, so not too demanding) and host a few services: some for accessing files on local storage (Immich, Jellyfin) and some personal ones (paperless-ngx, Home Assistant, morss). Although I’m not afraid to mess around learning Docker, I’d like to balance low maintenance against relatively low long-term cost, so that I don’t run into an issue that takes more than a day to restore access to my files/backups. I’d rather save that time for the fun stuff, like endlessly configuring HA automations.
All that said, I figure a decent solution would be to run a local NAS in RAID 6, with a cold storage HDD to swap in whenever I transfer a bunch of files from my camera for local backup, and a remote backup at either my parents’ home or maybe eventually on a friend’s NAS. The main thing I’m wondering right now is whether a prebuilt NAS (Synology, Asustor, etc.) is worth it compared to a custom-built system for simple maintenance, reliable and low-bandwidth remote backup and recovery, and solid file sharing options for friends and family. I’ve heard SFTPGo is a great project for file transfers if going custom-built, so I’m not too worried about the last point, but it’d still be a nice bonus not to have to worry about another service.
My greatest fear is having to explain to my parents what a terminal is, so I’d like something reliable at a good price which I can hopefully maintain without crossing that bridge. I know most prebuilt NAS systems aren’t as cost effective or flexible for hosting a bunch of services either, so if I did go with a prebuilt, I would probably pick up a micro PC like a NUC or an old Dell Optiplex to network with the NAS for Immich, and maybe use some internal storage to keep some movies to stream with Jellyfin (unless there’s a limitation I’m not considering). Any advice?
I went down the rabbit hole a while back. I have the space, so I went with an old Dell R720 rack server with 24 cores/48 threads and something like 128 GB of RAM for $300 off eBay.
I flashed the RAID controller to IT mode using this guide.
The perk of going this route is that I can run Unraid, which has an awesome web interface for creating Docker containers and content servers.
At the same time, you get the ability to add drives over time without having to rebuild your array. I started with a cache drive, a parity drive, and one storage drive. Over time I have added an additional parity drive and 6 more storage drives.
With this setup (and similar ones) you can also use SAS drives. Used helium-filled enterprise drives are around $80 for 10TB.
I run a Plex server with mostly 4K content, game servers, WordPress, Pi-hole, media grabbers (the *arr stack), a seedbox, home NAS duties, and countless other containers, basically 24/7.
It works incredibly well especially for the price, but it is large. If you have space I highly recommend it. I run mine in an insulated crawlspace lol.
Damn that R720 sounds like a great all-in-one solution. Is the power draw manageable?
Also, woah! Helium-filled drives? What’s the lifespan/risk on those when they’re bought used with some of their rated life already behind them?
My R720xd is fully loaded with 12 HDDs, 2 SFP+ DACs, 2 SSDs, 2 SD cards, 128GB RAM, and 2 of the higher-end CPUs available for the platform. Running ESXi with a bunch of VMs including TrueNAS, pfSense, and Plex + the *arr stack, I average about 250W-320W, and it’s loud as hell.
Average draw is similar to the other person’s: 200-300 watts under load, and about 140 watts idle. My server is really only super loud on boot; the noise levels are a non-issue for me.
Most of the drives I get, I scan the SMART data, and most have nearly no usage on them. The drives are cheap enough, and running parity, I’m not too worried about data loss. I’ve been running the server for 2.5 years now and have yet to lose a drive.
I run these guys, and with an SSD cache I really have zero complaints. Like I said, my main priority was 4K video, and they handle even the largest files streaming without issue, although I try my best to avoid transcoding and use a Shield to minimize that.
In terms of performance and flexibility, building your own is better. Depends on what you want out of it.
If all you want is an easy-to-set-up NAS with no bells and whistles, get a Synology. If you want to build a server that also acts as a NAS, or you want to be in control of the software, build your own.
You don’t even need server hardware. I used an older desktop computer with an HBA card. It’s also less noisy and much smaller.
Buy a NAS; you’ll be up and running much quicker. For services, build a separate server instead: look for low-powered Intel NUCs and run Portainer or Proxmox, or both. Use rsync or NFS to back up relevant data to the bought NAS, and use infrastructure as code/GitOps to configure the NUC.
I went this route - Synology NAS and a couple of HP Mini G2 800s running Proxmox for my compute loads. And I would recommend that arrangement for someone just getting started in self-hosting. Get going quickly and safely and put your effort into the cool stuff.
That said, I’ve drunk the ZFS kool-aid and have learned enough along the way to consider moving to TrueNAS or similar on some sort of low power setup in the future. I’m in no hurry.
Jeaaaah, I made the mistake of building everything myself. 1.5 years and counting, and I still have no working environment due to free-time constraints.
There are a few people saying that a synology NAS may not do everything you’d ever want, but there’s an underlying assumption there that you should run everything on a single device. There’s value in isolating functions to their dedicated device, especially when the alternative means a guaranteed compromise.
What compromise are you talking about? My NAS runs everything I need just fine, and I don’t think adding another device would improve anything.
The only limiting factors I can think of are performance or memory constraints, but since I don’t use all the services at the same time there is no issue.
single point of failure
That only makes sense when you’re talking about adding redundancy imo, because multiple devices also add more sources of failure. Personally I’d rather have everything failing all at once every 20 years (with backups ofc) than something different breaking all the time.
I have a Synology prebuilt. Self-hosting on it is doable, but I found it very limiting because of all the packages that don’t exist for its custom distro. Eventually I got a new gaming PC and converted my old one to a more standard Linux distro because of this.
This was back before I knew anything about Docker. You could probably get around some of the package limitations by using Docker; in fact, I have done this. I’m using rsnapshot in a container to back up my server, because rsnapshot is not available on Synology.
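A rough sketch of that pattern: since there’s no official rsnapshot image, `my-rsnapshot-image` below is a placeholder you’d build yourself (e.g. an Alpine Dockerfile with `apk add rsnapshot`), and the Synology volume paths are assumptions:

```shell
# Run rsnapshot from a container so the backup tool doesn't depend on
# the Synology package ecosystem. Mount the snapshot destination and a
# config file in; the image name is a placeholder, not a real published image.
docker run --rm \
  --volume /volume1/backups:/backups \
  --volume /volume1/docker/rsnapshot.conf:/etc/rsnapshot.conf:ro \
  my-rsnapshot-image \
  rsnapshot -c /etc/rsnapshot.conf daily
```

You’d then trigger this from DSM’s Task Scheduler instead of cron, so the NAS itself stays stock.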
I would go custom and use hardware that you can re-configure and re-use in the future. If you pick up a Synology now and wind up feeling restricted by it in 2 years, it might become useless e-waste. If you have anything laying around, put that to use while you’re getting your feet wet - you probably don’t know what hardware configurations you’ll end up wanting in a year, and you don’t want to underbuy/overbuy.
You can also test self-hosting without any real hardware by spinning up a VM and passing in “fake” hard-drives to it. Try setting up a RAID6 in this fashion and see what happens. After you’ve played around enough you can just export all your Docker data etc onto real hardware.
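For the fake-drive experiment, loopback files work even without a full VM. A sketch for any Linux box, where the file names, sizes, and `/dev/md0` are all illustrative, and `losetup`/`mdadm` need root:

```shell
# Create four 1 GiB sparse files to stand in for physical disks.
for i in 0 1 2 3; do truncate -s 1G "/tmp/disk$i.img"; done

# Attach them as loop block devices and collect the device names.
loops=$(for i in 0 1 2 3; do sudo losetup --find --show "/tmp/disk$i.img"; done)

# Assemble a RAID 6 array out of them (survives any two-disk failure).
sudo mdadm --create /dev/md0 --level=6 --raid-devices=4 $loops
cat /proc/mdstat

# Tear down when you're done experimenting:
# sudo mdadm --stop /dev/md0
# for l in $loops; do sudo losetup -d "$l"; done
```

From there you can pull a loop device out with `mdadm --fail`/`--remove` and practice a rebuild, which is exactly the drill you’d want muscle memory for before a real drive dies.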
I haven’t used any of the prebuilt things so I’m not sure how user-friendly they are compared to normal solutions, but I’d find it hard to believe that they offer anything truly unique in terms of being accessible for normies. Assuming you’re going to be the only one taking care of the NAS administration, there’s likely an accessible webUI for every public service you want to offer to your friends/family.
I ended up going with a Synology NAS as I didn’t need a high-performance CPU and wanted a turnkey solution. For what you get hardware-wise, it’s low value, but if you factor in software and support, it works out to OK value.
You mentioned your parents will be using this. What services are you hoping to host? Outside network access is another rabbit hole.
Check the hardware requirements of the services you plan to host, but from what it sounds like, you’d likely be better served by a decent PC (8th-gen Intel or newer) with the storage either in a 4-bay NAS or internal to the PC.
I suggested 8th-gen Intel as a minimum for video transcoding (if needed).
My parents won’t necessarily be using the NAS; I’d just be using some kind of system (maybe even just a Raspberry Pi) as a remote backup, with a WireGuard tunnel back to my local NAS. But if a drive fails, I’d be about 700 miles away from managing it.
If it were a perfect world, I’d just ship a new drive to my parents, tell them to unplug the failing one and plug in the new one, then manage the rest automatically/remotely myself, but I assume that’s a pipe dream.
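The drive swap may indeed need hands, but the WireGuard leg is the easy part. A minimal sketch of the offsite box’s config, where every key, address, port, and the dynamic-DNS hostname are placeholders for your own values:

```ini
# /etc/wireguard/wg0.conf on the offsite backup box (all values illustrative)
[Interface]
PrivateKey = <offsite-box-private-key>
Address = 10.8.0.2/24

[Peer]
# The NAS at home, reachable via a dynamic DNS name (hostname is an assumption)
PublicKey = <nas-public-key>
Endpoint = home.example.org:51820
AllowedIPs = 10.8.0.1/32
# Re-ping every 25s so the tunnel survives the parents' NAT/router
PersistentKeepalive = 25
```

Bring it up with `wg-quick up wg0` and enable it at boot; the `PersistentKeepalive` line is what lets the box behind your parents’ router stay reachable without any port forwarding on their end.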
I went with a Synology and have been very happy with it. Easy to use, very nice GUI, yet quite powerful with the features provided.
From there I moved on to NUC. I used to host several things through Docker on the Synology but I’m now moving many of those things to the NUC.
I used QNAP NASes for more than 10 years. It was a great product; not anymore. Feature bloat took its toll: it can do a lot, but does it badly. So if you go for a prebuilt, avoid QNAP. Build your own.
If you enjoy researching, tinkering and customizing everything exactly how you envision it then build a custom one. If you “just” want to use the thing and run some docker containers then buy a NAS. From what you wrote I think a NAS is what you are looking for, especially the low maintenance part. Just make sure it’s not the most basic one, so it actually has the power to run what you need.
The one great thing about Synology NAS is that most things are right there in the UI or package center. You can just install them without researching 100 different alternatives, and configure them in the UI instead of config files. What’s not there can be installed just like on a custom server, because it is just a regular server after all. You also get good customer support if something doesn’t work, especially useful when you’re not as knowledgeable in everything yet.
If power usage and/or noise are concerns, I would steer clear of enterprise gear.
I started out with a Synology NAS, which died and took my data with it because of their proprietary software RAID. I think you don’t need to worry about that these days, but I haven’t looked into it much. I haven’t gone back to a prebuilt NAS since.
Currently, my production setup consists of a Dell R720xd that runs pretty much everything, and a Dell R710 that runs as a backup TrueNAS server. It’s loud, sucks back about 550W, produces a ton of heat, and takes up a good deal of space once you add in the rack-mount switch and UPS. I just moved pretty far away, and I decided to leave my homelab at my dad’s house instead of taking it with me.
My plan is to migrate to a more reasonable setup incrementally. I’m currently building a Proxmox VE host out of my old gaming PC (Ryzen 2700X + GTX 1060). I added 2x 10TB drives, made a mirrored ZFS pool, and I’m running an OpenMediaVault VM to share it on the network. I have another VM for Home Assistant, another for Matrix/Jitsi/Etherpad, another for Jellyfin/the *arr stack/SABnzbd with the GPU passed through for transcoding, another for SWAG/paperless-ngx/Immich, and a final one for the MASH Ansible playbook. And I have a small fanless AliExpress PC running pfSense as a router/gateway.
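The mirrored-pool step above boils down to a couple of commands. A sketch assuming ZFS is installed and the two new drives show up as `/dev/sdb` and `/dev/sdc` (device names are illustrative, and `zpool create` destroys whatever is on them):

```shell
# Mirror the two disks into a pool named "tank" -- either disk can die
# without data loss, at the cost of half the raw capacity.
sudo zpool create tank mirror /dev/sdb /dev/sdc

# Carve out a dataset for shared media, with cheap lz4 compression enabled.
sudo zfs create -o compression=lz4 tank/media

# Sanity-check the layout and health.
sudo zpool status tank
```

In practice you’d create the pool by `/dev/disk/by-id/` paths rather than `sdb`/`sdc`, since the letter names can shuffle between boots.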
The “ideal” final setup is to basically build another machine to put TrueNAS onto that will replace my openmediavault setup. I’m aiming for total average power draw to be under 100W.
My suggestion, given my experience with different hardware, is to scrape together whatever you can for cheap, run Proxmox with OpenMediaVault, and first build the VMs for services whose data you don’t care much about; then build a dedicated NAS running TrueNAS. The NAS doesn’t have to be fancy, and it doesn’t need ECC RAM. You could probably build a competent, compact NAS for about $400 without HDDs. Once you have the NAS, build out services like Nextcloud, Immich, and paperless-ngx, where losing the data would suck. And then think about a backup solution for that data.