Re: [zfs-macos] (slightly OT) Recommendations for JBOD

I need to create a new build because my old server (an old Dell running dual Xeon E5-2630s) is too loud and has experienced some overheating issues. My wife requested I put something in that is small and quiet. I want to make sure I can transcode 5-6 shows at a time in real time.

I mostly watch anime in H.264 at 1080p. I was considering buying a QNAP TVS-871-i7-16G, figuring that would give me the compute needed for the transcoding I want. However, that costs over $2k without any disks! I am looking at 8TB WD Red Pros.

I will start with 4 disks and move up towards 8 over time. In my quest to stay small, quiet, and not break the bank, I am now considering a FreeBSD solution. The problem is I am unsure how I should spec it and just how much CPU is 'necessary'. This is roughly the build I am looking at right now (about $1200):

- U-NAS NSC-800 Server Chassis (has a hot-swap backplane built in)
- LSI 9260-8i 6Gb/s PCI-Express 2.0 w/ 512MB onboard memory (I have one laying around so I figured I would use it)
- 1U Power Supply (400W)
- ASRock X99E-ITX/ac LGA 2011-v3 Intel X99 SATA 6Gb/s USB 3.1 USB 3.0 Mini ITX Intel Motherboard
- Intel Core i7-5820K Haswell-E 6-Core 3.3 GHz LGA 2011-v3 140W
- 16-32 GB DDR4

So the question is: should I consider different options on the build? Am I overbuilding it?

Can I save cost anywhere? Would I be better served with the QNAP? Do I need a dedicated HBA instead of the controller I already have? Will FreeBSD give disk alerts via email?

I’m not familiar with QNAP’s products so I cannot speak to that, but I am familiar with FreeNAS. Looking at your proc: the general recommendation is a PassMark score of about 2000 per simultaneous 1080p transcode, so your 5-6 streams would want roughly 10000-12000. That processor has a PassMark of over 12000, so you should be good there.

On the ECC question: if you want to use FreeNAS, then you’ll be using ZFS. ZFS has integrity guarantees that are not provided by other file systems, but these guarantees only hold when you use ECC memory. Your processor is not ECC capable, so if you want the integrity ZFS promises, you may want to consider switching to an ECC platform. Without ECC, ZFS really isn’t any worse than a file system without ZFS’s guarantees. You can get ECC support in some Core i3 processors, but the Xeon E3/E5 processors are often better fits here. Be sure to check out what’s available there.

FreeNAS recommends a minimum of 16GB of RAM. Plex and its transcoder are not very memory intensive, so you likely won’t need more for that. Increasing the memory will increase the caching that ZFS performs (the ARC), so general performance will improve with more memory; I personally have 32GB in my FreeNAS box. FreeNAS likes to have direct access to the drives, so you’ll want your controller in a passthrough, JBOD, or other such mode. I use IBM M1015 HBAs, which use an LSI chipset. I flashed mine with the IT firmware, which disables all raid functionality and just presents the raw disks to the OS.

You may want to consider doing similar, assuming IT firmware is available for that card. If you do use hardware raid under FreeNAS, be aware that it is really not the way FreeNAS is intended to be used.
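If IT firmware does exist for it, the reflash on LSI SAS2008-based cards (like the M1015) goes roughly like the sketch below. This is an outline from memory: the firmware image names vary by card, and I am honestly not sure the 9260’s SAS2108 RAID-on-chip accepts IT firmware at all, so check the FreeNAS forum crossflashing guides for your exact card before trying it:

  sas2flash -listall          # confirm the card and current firmware are visible
  sas2flash -o -e 6           # erase the existing flash (-o enables advanced ops; risky step)
  sas2flash -o -f 2118it.bin  # write the IT-mode firmware image (this name is the 9211-8i's)
  sas2flash -o -b mptsas2.rom # optional boot ROM; only needed if you boot from the card

Afterwards, verify the OS sees the raw disks and that SMART data comes through (da0 is a placeholder for your device):

  smartctl -a /dev/da0        # should return full SMART attributes if passthrough works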

FreeNAS does support running scrubs (data integrity tests) as well as SMART tests on the drives. It will email you the results if you so configure. I highly recommend doing both of these.
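Both get scheduled through the FreeNAS web GUI (along with the email alerts), but under the hood they amount to commands like these ("tank" and da0 are placeholders for your pool and disk):

  zpool scrub tank              # start a scrub of pool "tank"
  zpool status tank             # shows scrub progress and any checksum errors found
  smartctl -t long /dev/da0     # kick off a long SMART self-test on one drive
  smartctl -l selftest /dev/da0 # check the self-test results afterwards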

I’ve already caught one failing drive through these tests before it actually died. Lastly, I wanted to mention the expansion of disks. You mentioned starting with 4 disks and moving towards 8. ZFS has a concept of vdevs: a vdev is a collection of one or more disks in a mirror, raidz1 (like raid5), raidz2 (like raid6), or raidz3 configuration. A pool is the total disk storage available to a file system and is made up of one or more vdevs.
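In command terms, a minimal sketch ("tank" and the da* device names are placeholders):

  zpool create tank raidz1 da0 da1 da2 da3   # a pool with one 4-disk raidz1 vdev
  zpool add tank raidz1 da4 da5 da6 da7      # expand later by adding a second raidz1 vdev
  zpool status tank                          # lists both vdevs; writes now stripe across them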

With ZFS, you can add vdevs to a pool and upgrade the size of disks within a vdev, but you cannot remove a vdev, you cannot change a vdev’s type, and you cannot change the number of disks in a vdev. So, with the above mentioned 4 disks expanding to 8, what kind of expansion were you envisioning there?

Gbooker02, thanks for the fabulous feedback. Thank you for the passmark feedback; I had been looking for a metric like that for a while and was not finding it.


Since the mobo I am looking at supports Xeons of the matching socket type, I think I will jump over to a Xeon proc and use ECC memory. Thanks for the tip. I am used to hardware raid. I have only used ZFS a few times, but never had to support it myself. I am used to being able to expand out a hardware RAID by adding disks and adjusting the striping across those disks live on my enterprise equipment.

Are you not able to expand a zpool by adding more disks to it in ZFS? You mentioned only running the controller in JBOD mode or, I guess, using it like an HBA. I am guessing it will not take advantage of the 512MB onboard cache I have right now if I use it like an HBA.

At this point I started looking into just building a NAS from the ground up.

Said: Do you think I should replace the 9260 RAID controller with a cheaper HBA like the $106 LSI 9211-8i 6Gb/s 8 Port HBA?

FreeNAS runs better when it handles the RAID functionality itself. It also runs faster with more RAM.

Also, there are some Dell HBAs that are cheaper and also work great. Check out the FreeNAS forums for the ones that work. I use an IBM M1015 (which is essentially the same card as a 9211-8i). Have you decided on a ZFS level?

With 8TB drives you would need at least RAIDZ2, but then I would still be worried about disk failure taking out my data. If one disk failed you would only have one disk’s worth of parity left. The likelihood of another disk failing whilst you’re resilvering is rather high (and, with an 8TB drive, resilvering could take days), seeing as you probably bought all four drives at the same time. I guess it depends on your backup strategy and whether you could just replace the media files.

Said: I am used to hardware raid. I have only used ZFS a few times, but never had to support it myself.

I am used to being able to expand out a hardware RAID by adding disks and adjusting the striping across those disks live in my enterprise equipment. Are you not able to expand a zpool by adding more disks to it in ZFS?

This sounds like you are talking about changing a 4 disk raid5 to an 8 disk raid5. ZFS does not allow this kind of reconfiguration.

The most analogous thing it does allow is changing a 4 disk raid5 into 2x 4 disk raid5s striped together, i.e. a raid50.

Said: You mentioned only running the controller in JBOD mode or, I guess, using it like an HBA. I am guessing it will not take advantage of the 512MB onboard cache I have right now if I use it like an HBA.

The onboard cache on these controllers is there to mitigate the raid5 write hole (google that for more info).

ZFS, or more specifically raidz, is fully copy on write, so it does not have this issue. It is also completely transactional from the file system level all the way down to storage, including through the “raid” layer. When it issues a write, the write is issued to all disks as one transaction. If a power outage or other failure occurs, it checks the transaction’s integrity across all disks, including the parity disks, and repairs if it is able or rolls back if not. Because of this transactional nature and the fact it is always copy on write (only ever writes to free space, never updates in place), the need for an onboard cache is eliminated.

Said: At this point I started looking into just building a NAS from the ground up.

This is what I am thinking for a physical build, please let me know if you suggest any adjustments for the purposes of a FreeNAS Plex server: LSI 9260-8i 6Gb/s PCI-Express 2.0 w/ 512MB onboard memory (because I have one - can replace if it is a waste here)

As long as you can configure it in a JBOD-only mode and a write goes through to the disk (not just to the cache where it gets marked as completed), this will work. It’s a bit overkill since you are not using its raid functionality, but if the choice is between using this and getting a new card while leaving this one unused, you are likely better off using it (assuming you can configure it as I specified, but I have no doubt that you can).

Getting back to what is likely going to be the deciding factor for you: ZFS disk expansion. Disk expansion in ZFS does require some advance planning and often doesn’t lend itself to smaller upgrade increments. So, if you start off with a 4 disk raidz1 (like raid5), you are always going to have that 4 disk raidz1.

You can upgrade individual disks to bigger disks to expand its size, but you cannot change it to a 5 disk or 8 disk (or any other width) raidz1. Additionally, you cannot remove that 4 disk raidz1 from the pool without destroying the pool (and the data on it).
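For what it’s worth, the upgrade-in-place path is done one disk at a time, something like this ("tank" and the device names are placeholders):

  zpool set autoexpand=on tank   # let the pool claim the extra space once the swaps are done
  zpool replace tank da0 da8     # swap an old disk for a new, larger one
  zpool status tank              # wait for the resilver to finish before the next swap
  # repeat for each disk in the vdev; capacity only grows after ALL disks are replaced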

So migrating off of such a vdev would require new storage capable of holding the data you wish to preserve; copy the data over, and then you can destroy the original, freeing those disks up for other use. Myself, I use 6 disk wide raidz2 vdevs. I currently have 2, and with my current usage, when expansion time comes I’m much more likely to replace the 2TB disks in one vdev with larger ones rather than add a third vdev. I would also suggest that you consider dual redundancy (raidz2 or raid6) as Valdhor suggested.

The one time I had a 2TB drive fail, it took nearly a day for the replacement process. If I had only single drive redundancy to begin with and lost another drive in that time of high drive stress, I would have lost all my data.

Said: As long as you can configure it in a JBOD-only mode and a write goes through to the disk (not just to the cache where it gets marked as completed), this will work.

This is true with a caveat. It has been shown by users on the FreeNAS forums that SMART data does not get properly passed through with IR firmware.

Unless the card in question supports IT mode, I’d get a true HBA to be safe. I would plan your storage needs and build your system accordingly. Plan your vdev/pool arrangement based on your needs now and in the future, and if you have to, wait a bit and buy all the disks before you build.

I love my FreeNAS, but if I had it to do over again I would have started with larger disks. Starting with 8TB disks is good, but your pool layout and future expansion need to be taken into consideration before you start.
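To put rough numbers on the planning: usable space in a raidz vdev is approximately (number of disks - parity disks) x disk size. So an 8x8TB raidz2 gives (8 - 2) x 8TB = 48TB raw, which is about 43.7TiB as the OS will report it, and since ZFS performance drops off as a pool fills, the usual rule of thumb is to stay under roughly 80% full; call it ~35TiB of comfortable working space.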

I think a lot of it depends on how much time you want to spend building/supporting the device versus using it. With a NAS like the QNAP it is more of an ‘appliance’ than all the care/feeding that goes into the “DIY” type usage. So you have to determine how much your time is worth, both initially and in terms of ongoing support. Since you were looking at the QNAP 871, there is a new model they just released that I think you may also want to consider.

Support for up to 8 Drives in the chassis, 4k support, plenty of CPU for transcoding, support for 10GbE expansion later, as well as m.2 SSD cache/autotiering. Nice all around unit.

Said: This is true with a caveat. It has been shown by users on the FreeNAS forums that SMART data does not get properly passed through with IR firmware. Unless the card in question supports IT mode, I’d get a true HBA to be safe.

Good point; you do want the SMART data passed through to the OS so it can run SMART tests and detect imminent failures before they become problematic. I expect there are some IR firmwares that pass SMART data in JBOD mode, but these are likely the minority.

Said: I think a lot of it depends on how much time you want to spend building/supporting the device versus using it. With a NAS like the QNAP it is more of an ‘appliance’ than all the care/feeding that goes into the “DIY” type usage. So you have to determine how much your time is worth, both initially and in terms of ongoing support.


Another aspect to consider is how much your data is worth. Hardware raid appliances don’t often lose data, but I’ve seen it happen more than once (from reputable brands). ZFS has data integrity guarantees that are unparalleled by any other system, but these guarantees do impose limitations. For example, changing a RAID geometry (expanding a 4 disk wide raid5 to 8 disks wide) is an inherently dangerous operation.

You can also argue that the raid5 write hole is dangerous. This is why hardware raid controllers have battery backed caches, but if the battery dies (or goes bad) before the dirty data can be committed to disk, the result is inconsistent writes, which can mean data loss. ZFS avoids all of these dangers, and that avoidance is the source of its limitations.

Said: I think a lot of it depends on how much time you want to spend building/supporting the device versus using it. With a NAS like the QNAP it is more of an ‘appliance’ than all the care/feeding that goes into the “DIY” type usage.

So you have to determine how much your time is worth, both initially and in terms of ongoing support.

Have you actually used FreeNAS? Once you’ve installed it, it’s a web-GUI managed appliance just like QNAP or Synology. I really can’t think of any amount of additional “care” and “feeding” I’ve had to do to it as a result of it being a DIY NAS. In fact, other than the unnecessary updates through major versions I’ve done “just because”, I could’ve pretty much left it as initially set up and it’d still be running fine (if a HDD fails, it’ll email me, but that hasn’t happened yet). If one does need support for some reason, the FreeNAS forums are an aggressively helpful community who go to great lengths to solve your issues/questions, including iXSystems employees who patrol the forum more than anyone else doling out free advice. QNAPs are great devices for ready-to-go out of the box setups, but you can definitely go cheaper via DIY, get better specs out of it, and get better 24/7 uptime reliability.

Here is my first crack at building a NAS and so far this thing shows no signs of slowing down. ATM I am dumping 1.5TB of media files to it and it’s not even breaking a sweat! I really appreciate your responses to this. Per some of the recommendations above, I decided to do a full 8x8TB RAIDZ2. I will also be buying an HBA to be safe. I did end up running into a problem: the U-NAS 800 does not have enough physical space for an actively cooled CPU.

This means my mobo and CPU don’t fit in the case. I am now looking at the NODE 304 Black (as suggested by ) as well as the ITX-S8 Black Mini-ITX and the SilverStone DS380 Do any of you have experience with these cases? Any suggestions? As for the U-NAS 800, I think I will just set it up as a lower power NAS that I can use for backup.

As a result, I will likely split my disks between two chassis now and just be happy I have a backup (now doing 4x4TB raidz2), and if/when I end up expanding it won’t be a problem because I will have a backup. Any suggestions on a mobo/cpu that will fit in it?

I noticed you were looking at 8x8TB raidz2 for the primary pool and 4x4TB raidz2 for the backup. That means your primary pool will have 48TB of effective storage but your backup will only have 8TB.

Is this what you want? I did no redundancy on my backups, but I have 2 of them (I swap the on-site and off-site backups periodically). This enabled me to use different size disks; they are a mixture of 2TB and 6TB disks.

I have a Norco 4224, and if you use the stock fans, it’ll be loud. Not quite the jet engine takeoff you get from some 1U boxes with those tiny high speed fans, but likely louder than you want.

I bought the 120mm fan wall for my Norco and put in 3x120mm fans there with 2x80mm fans for exhaust out the back. I used Noctua fans all around (including the CPU cooler). The machine is fairly quiet, such that the hard drive seeks are much louder than the fans. If you listen for it, you can still hear it, but it is not really noticeable. It is quieter than the Antec 300 case with normal Antec fans that I had for my previous server.
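On the backup side, by the way: keeping a second box in sync is straightforward with ZFS replication (FreeNAS has a GUI replication task that wraps this). A minimal sketch, where "tank", "backup", "media", and the snapshot names are all placeholders:

  zfs snapshot tank/media@2016-09                        # snapshot the dataset you want to replicate
  zfs send tank/media@2016-09 | zfs recv backup/media    # full copy to the backup pool
  # later, send only what changed since the previous snapshot:
  zfs snapshot tank/media@2016-10
  zfs send -i tank/media@2016-09 tank/media@2016-10 | zfs recv backup/media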


Said: I noticed you were looking at 8x8TB raidz2 for the primary pool and 4x4TB raidz2 for the backup. That means your primary pool will have 48TB of effective storage but your backup will only have 8TB.

That was a typo. I meant that I will take four of the 8TB drives and use them for backup and keep the original 4 on the main system. So both should be 4x8TB. This is mainly just because I ended up with unusable hardware for the main system, so I figured I would use some of the disks in a second system for backup.

Said: I used Noctua fans all around (including the CPU cooler).


The machine is fairly quiet, such that the hard drive seeks are much louder than the fans. If you listen for it, you can still hear it, but it is not really noticeable.

Would this be the Noctua fan you are referring to? Any knowledge on the noise of these compared to the SilenX fans?

Said: my build for my parents: FreeNAS 9.10.1

- NORCO ITX-S8 Black Mini-ITX Form Computer Storage Case (holds 8 SATA drives, hot swap)
- ASRock C236 WSI Mini ITX Server Motherboard LGA 1151 Intel C236 (8 SATA ports on it, supports ECC and Xeon procs; NOTE: this motherboard does NOT support registered ECC RAM, use unbuffered ECC RAM only or it won’t boot!)
- NORCO C-SFF8087-4S Discrete to SFF-8087 (reverse breakout) cable x2 (cables required to hook the drive backplane to the motherboard)

I decided to buy the NORCO ITX-S8 Black Mini-ITX Form Computer Storage Case. Thanks for the suggestion!