Thanks @nlippman. This is great. I think I have all I need to get started. Going to start placing my orders this week.
I have an old Synology with lots of bays - 5 built-in and another 5 in an expansion unit I bought a few years later used from eBay. The base unit seemed expensive when I bought it, but it has run flawlessly for about a decade, so I’m a big fan. I didn’t spend lots on drives initially, I’ve just gradually added or replaced them over time. There are some pretty old drives in there alongside some shiny new ones. I get regular reports on the states of the disks and am confident that I can replace one without downtime or loss of data if it starts to show errors, gets too old, runs at too high a temperature or is just too small. I’ve done this several times.
More recently I bought a cheaper second Synology, with just two (larger) drives, and that’s at another location. My main one does regular BTRFS snapshots to the remote one of all the important data. (So I have multiple backups, not just a clone, and in two locations.) I’m not sure if anyone else has mentioned this ability to snapshot and replicate the filesystem, but it’s very valuable and efficient if you want a proper backup.
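For anyone curious what snapshot replication amounts to under the hood: Synology drives all of this through its Snapshot Replication app, but on a plain Btrfs system the equivalent is a read-only snapshot streamed with `btrfs send` into `btrfs receive` on the remote box. This is just an illustrative sketch; the paths, host name, and helper functions are assumptions, not anything Synology exposes.

```python
# Hypothetical sketch of Btrfs snapshot replication. The host names,
# paths, and helpers here are illustrative assumptions; Synology wraps
# this workflow in its Snapshot Replication GUI.
import subprocess

def snapshot_cmd(subvolume, snapshot_path):
    # Create a read-only snapshot ("-r" is required before btrfs send).
    return ["btrfs", "subvolume", "snapshot", "-r", subvolume, snapshot_path]

def replicate_cmds(snapshot_path, remote_host, remote_path):
    # Stream the snapshot to the remote machine, where btrfs receive
    # recreates it as a subvolume. Only changed blocks travel if you
    # pass a parent snapshot with "-p" (omitted here for simplicity).
    send = ["btrfs", "send", snapshot_path]
    receive = ["ssh", remote_host, "btrfs", "receive", remote_path]
    return send, receive

# Example (not executed here):
# subprocess.run(snapshot_cmd("/volume1/data", "/volume1/.snaps/data-2024-01-01"))
```

The efficiency the post mentions comes from the incremental case: with a parent snapshot, only the blocks that changed since the last replication cross the wire.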
In addition, I do back up really key stuff to B2. The system comes with a range of included backup apps, generally pretty good, and I use one of those.
On the Synology I store stuff that I don’t have space for on my internal SSD, Time Machine backups from our various Macs, and things that I don’t want to trust to a single drive. I also use their Synology Drive software to give me, effectively, my own Dropbox.
Oh, and it does video recording from my security cameras too…
So all this works very well, and though it was initially expensive, I’ve got a lot of mileage from it. The only thing that makes me sometimes question the costs is the power consumption. It’s not at all large when compared with a typical server, but the cost of electricity in the UK at present means I could probably buy quite a lot of cloud storage for the annual electricity bill…
Not sure if it was already mentioned but if not, consider unRAID. It’s a $59 one-time license (last I checked) and is very low maintenance. It’s really pretty easy to set up and the community is helpful. I have a dedicated PC and use it for file storage/sharing and also as a Plex server. It can do LOTS more if you want it to. For cloud backup I use a separate service that pulls from my unRAID server.
I went through a number of setups years ago (~2014/15), starting with what I thought would be the easiest, a Synology NAS. I didn’t want to spend many thousands on one so got one of their less expensive models and was shocked at how slow it was. Painfully slow. I eventually ditched that and went with an old laptop running Ubuntu and used that with external storage connected via USB. That worked better, but I am well-versed in Linux, as it was my primary OS for a lot of years.
Eventually, when I switched from Linux as my primary desktop OS to macOS, I bought a one-time license to unRAID and set it up on a desktop PC. That was 4 years ago and it has been running flawlessly since with little maintenance - only the occasional update/upgrade, which is done quickly via the simple web-based UI.
They’ve since added “unRAID Connect” and other features that enable you to connect to it remotely (i.e. outside your local network) if desired. It has really powerful features like Docker support (that’s what I use to run Plex). Also virtual machines. It’s amazing and I am always surprised how little press/discussion it gets. I suspect because of the overly-techy name.
Other NAS solutions often require significant up-front cost (like having to purchase identical hard drive models/sizes), whereas unRAID can handle mixed commodity hardware if you like.
Great suggestion, @mark2741. If someone doesn’t mind getting their feet a little wet in the Linux world and has the hardware available, it could be a great solution. (Obviously if you get a Synology you are in the Linux world, but it is pretty well hidden unless you go looking for it.)
There are relatively inexpensive small-form-factor PCs available that could, with the addition of unRAID, be a good solution here.
I just bought a Synology DS1522+ a couple of weeks ago. Here’s a suggestion based on everything I’ve read recently: consider getting fewer larger drives initially instead of filling up all five bays with 4TB drives.
If you fill up all your bays from the start, your setup is less flexible. Assuming you use SHR and fill up all 5 bays, in future you will need to swap 2 of the drives with larger ones to get an increase in capacity. You can demonstrate this with Synology’s RAID calculator. The 1st larger drive you swap in for one of your smaller drives does nothing to increase your RAID volume size; you have to add a 2nd drive before your RAID capacity expands. Plus, the rebuild time when you swap in another drive is significant. In this case you’d have to rebuild twice before you see any benefit. If you instead start with 2 or 3 larger drives, you can add a 3rd or 4th drive and immediately get a capacity increase.
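The expansion behaviour described above can be sanity-checked with a few lines of code. This is a simplification that models SHR-1 the way Synology’s RAID calculator does (usable capacity is the sum of all drives minus the largest drive); the function name is my own, not anything from Synology.

```python
# Rough model of SHR-1 usable capacity: total of all drives minus the
# largest drive (one drive's worth of space goes to redundancy).
# This mirrors Synology's online RAID calculator; it is an approximation.
def shr1_usable_tb(drives):
    """Usable TB for an SHR-1 array, given a list of drive sizes in TB."""
    return sum(drives) - max(drives)

print(shr1_usable_tb([4, 4, 4, 4, 4]))  # 16 - five 4TB drives
print(shr1_usable_tb([8, 4, 4, 4, 4]))  # 16 - first 8TB swap adds nothing
print(shr1_usable_tb([8, 8, 4, 4, 4]))  # 20 - capacity only grows after the second swap
```

The middle line is the trap: after one swap and a full rebuild, usable capacity hasn’t moved at all.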
I bought 3 Seagate IronWolf 12TB drives for my NAS. In practice, each drive offers 10.9TB of usable space, so 3 drives gives me 21.8TB of usable space once SHR redundancy takes its share. I get a max of about 120MB/s performance out of this, and usually closer to 100MB/s - and that’s only if my Mac is on Ethernet. (My NAS is of course on Ethernet, but I only have Ethernet in my office, not in the rest of the house.)
This has been a great thread! I enjoy reading detailed posts like these.
I wish I’d thought of this before I ordered yesterday. I could return the drives, but I’ll have a fair bit of space with the 5x4TB drives — enough that I don’t think I’ll need to upgrade for a few years at least.
You have got me mulling it all over, though.
Edit: the 8TB drives were on sale yesterday. Today they are $60 more expensive. I’m more than happy to hang on to the 4TB drives for now.
I got the Synology but haven’t set it up yet. What’s the best way to set it up for the following:
- SHR with one redundancy drive (16TB remaining)
- 6TB Time Machine backup for one machine
- 2TB Time Machine backup for another
- 8TB archive space
I’d like the space to be limited, so the TM backups can never get larger than their allotted space. Is that a bad thing to do with a Synology?
Edit: my sincere apologies, Synology has a very good kbase article that explains exactly this.
The Mac Mini arrives on Monday so I should have everything set up in the next week or so.
I did not know this, and now have to look into it. Thanks for the heads up!
I have finally completed setting this up. Here’s what I’ve done:
- Everything is set up on a Synology DS1522+. I’ve got Time Machine running from a couple machines to the Synology, and I’ve got the remainder of the space set up as archival storage.
- I’ve got a Mac Mini with a hard drive plugged in via USB. I’ve set up Carbon Copy Cloner to clone the archive drive from the Synology to the attached hard drive, and I have Backblaze set up to back it all up with 1 year data retention for extra redundancy.
Thanks to @nlippman for the suggestion on this.
Some notes:
Synology doesn’t seem to be super accurate about disk space usage. When I set up the Time Machine folders and set hard limits on how large those folders could be, it more or less ignored those limits (e.g. 6TB became a 6.4TB limit, and 2TB is closer to a 2.3TB limit or something). It also reports that my archive space is using about 100GB less than Carbon Copy Cloner sees, which is weird. None of this is a big deal; I assume BTRFS just counts differently than HFS+ and APFS (decimal versus binary units, perhaps), or something.
Before committing to this setup, I tested both the Synology and the Mac Mini for speed. I didn’t want to commit to two computers if only one (the Mac Mini with a ton of attached storage) would do. My testing is not entirely scientific, and I’m not sure it’s repeatable for you, but it’s based on how my machines will be used in the real world, so it works for me.
To test the speed, I grabbed about 1.5TB of files that vary in size (some small, some large, etc.). I migrated them from a hard drive attached to my Mac to the Synology over wifi (again, real-world usage, not ideal testing conditions). It took about 10 hours to move it all over, and Activity Monitor reported about 30MB/s transfer speed on the network. This is so slow.
Turns out the Mac Mini is much slower. In the same test, rather than 10 hours, the Finder estimated it would take anywhere from 7-10 days. Activity Monitor reported 1-7MB/s transfer speed, usually hovering around 3MB/s. This was enough to convince me the Synology, while obviously much slower than DAS, is going to work better for my needs.
Summary: the Synology is 10x faster than the Mac Mini over wifi.
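For anyone sanity-checking these figures, transfer time is just size divided by throughput. This assumes the Activity Monitor readings are megabytes per second (MB/s), not megabits; real transfers run longer than the ideal because speed fluctuates.

```python
# Naive transfer-time estimate: bytes / throughput, in hours.
# Assumes sustained MB/s (megabytes, not megabits) with no overhead.
def transfer_hours(size_tb, mb_per_s):
    return size_tb * 10**12 / (mb_per_s * 10**6) / 3600

print(round(transfer_hours(1.5, 30), 1))  # ~13.9 h at a sustained 30MB/s
print(round(transfer_hours(1.5, 3), 1))   # ~138.9 h (nearly 6 days) at 3MB/s
```

The 3MB/s figure works out to almost 6 days of continuous transfer, which is roughly in line with the 7-10 day Finder estimate once slower stretches are factored in.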
Obviously, this changes when things are plugged in directly, but my assumption would be that, in that case, the Synology would still get more favourable results.
Currently, the Mac Mini and the Synology are both hardwired via Ethernet directly to the router. The Mac Mini is cloning the Synology’s backup to its attached hard drives, and the download speed Activity Monitor is reporting is about 80MB/s (averaging it out). Low of 75, high of 100. Over Ethernet, things are clearly much faster.
We don’t have ethernet anywhere in the house (this house was built in 1972), but I have been clearing the way to make it possible to get a hardwired solution for the Studio Display in my office. (I’d love to add it elsewhere, but we don’t have easy ceiling access between floors, so it doesn’t look great.) Looking forward to faster speeds then.
Sorry folks, some other quick thoughts:
- I quickly remembered why I didn’t like using a Mac Mini exclusively for backups the last time I tried it. Using Disk Utility with attached drives on a remote Mac is such a pain. Disk Utility can’t do anything because the SMB process is “using the drive,” so it can’t unmount it or anything like that.
- macOS is very good at privacy, but also willy-nilly locks down folders on remote drives. I was told I didn’t have permission to get into some, but not all, of the folders on the hard drive on the Mac Mini. All I did was unplug my existing archive backup and plug it into the Mac Mini, and I know the permissions were all set correctly. And macOS’s suggestion, to use the Get Info panel and uncheck “Locked”, was incorrect: “Locked” was already unchecked. So there’s just too much fiddle-faddling to make it useful for me as my only backup solution.
- As soon as I saw how slow the Mac Mini was at over-the-network data transfer, I remembered encountering the same problem last time I tried this. Lesson learned!
Thank you again everybody for your help with this.
For those snooping around at a later date, the app AutoMounter has made my Mac Mini usable once again as a network storage device. I have a 10GbE network to my Mac Studio, so speed is great. That one little app has helped so much in keeping the connection alive. Before I had so many dropouts, it was not worth the hassle.
I’ll also say, I have an unRAID system that I built. The Mac Mini backs up to that, and I run Plex on it, plus a couple of other Docker containers. I do not have that computer exposed to the live internet, so I’m not as concerned with keeping it updated for security. With that said, that unRAID box has been super consistent. It’s always there, always ready to be written to, and never needs to be rebooted unless I’m doing large-scale changes. It’s the most consistent machine I’ve ever run.
Love this tip! AutoMounter is also a great way, if you’re using laptops, to auto-mount a NAS once you are on wifi or connect to Ethernet.
+1 AutoMounter.
I use it on my laptop for this exact reason - to keep shares from my Synology mounted.
I will be installing it on my new Mac Mini which clones my Synology shares and sends them to BackBlaze.