Design Decisions

I said my next post would likely come after I installed the next round of parts, but there are a few decisions I want to note before that happens.

Deciding on a storage configuration for the server has required far more thought than I initially expected. Originally I planned to use a single RAID card to create a RAID-1 array of two SSDs and a RAID-6 array of four NAS-class hard drives. The more I considered this, the less I liked it.

The first thing that bothered me was the idea of using the RAID-6 array for storage. To make that storage available on the network, I'd have to run a VM with some server OS acting as a file server, and that doesn't appeal to me. I imagine it being a bear to configure the various protocols and never quite working as well as I'd want it to. I bounced these thoughts off a friend, and they agreed there was probably a better approach.

At this point I believe the best approach for my lab will be to virtualize FreeNAS and use it to manage the storage. That means flashing the RAID card to IT-mode firmware so it acts as a plain HBA, then using VT-d to let vSphere pass the card directly through to the FreeNAS VM. Plenty of people around the net tell folks to stay away from virtualizing FreeNAS, but it can be done well if you pay attention to the details. (See this and this.) Details are one thing I'm good at.
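For reference, here's a rough sketch of what the passthrough side of that might look like from the ESXi shell. The vendor string and PCI address below are placeholders, not my actual hardware, and the `pcipassthru` CLI namespace only exists on recent ESXi releases; on older builds the toggle lives in the vSphere client.

```shell
# Find the PCI address of the HBA (the "lsi" grep is a placeholder --
# match whatever vendor string your card actually reports).
esxcli hardware pci list | grep -B 2 -A 8 -i lsi

# The passthrough toggle itself lives in the vSphere client
# (Host -> Configure -> Hardware -> PCI Devices -> Toggle passthrough).
# On recent ESXi releases it can also be flipped from the CLI:
#   esxcli hardware pci pcipassthru set -d 0000:01:00.0 -e true
# (0000:01:00.0 is a placeholder address -- use the one reported above.)
# A host reboot is required before the device can be attached to the VM.
```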

This presents another problem, however. If I'm using the RAID card as an HBA for FreeNAS and the four hard drives I plan to use for bulk storage, I can't also attach two SSDs to it as a RAID-1 array. I actually think that's okay, though. RAID-1 would mean continued uptime should one of the SSDs fail, but this is a homelab, not a large production environment. I can handle a little downtime. I had also considered adding a second RAID card just to support the planned RAID-1 SSD array, but the TS140 doesn't have a second x16 or x8 PCIe slot.

Instead I’ve decided to use my two 2.5″ drive slots (which I created in the TS140 by using this bracket in the empty floppy bay) to host the SSD the VMs will run from as well as a 4TB 2.5″ spinning drive that will store VM backup files. I think this is a workable solution in my situation. If I’ve got a couple of months of good backups on that drive when the SSD fails, then I’ll only be down for 2-3 days until I can get a new SSD in place. And if I copy the most recent backups to an off-site location over the network, I should be safe from the spinning drive failing too.

This has the added bonus of letting me get up and running in a development mode faster than I had originally planned. I already have the SSD for VMs installed. (Hooray for Christmas gifts!) The RAM, backup drive, four-port Intel gigabit NIC, and a UPS are all on their way to me as I type this. Once those arrive I should be able to boot the TS140 for the first time. I also recently found out that I have five vSphere Enterprise licenses through my employer, so I’ll be able to start configuring VMs right away. I’ll consider this Phase 1.

That means the project will now have second and third phases. Phase 2 will involve getting the network gear in place to support my networking goals for the project: ditching my Google Fiber Box for a virtualized pfSense instance and configuring six VLANs, four of which will get their own WiFi SSIDs. Phase 3 will be getting the bulk storage drives and HBA installed and the virtual FreeNAS instance up and running.
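On the vSphere side, the VLAN work mostly amounts to creating tagged port groups for pfSense to route between. A sketch of what one of those might look like from the ESXi shell; the port group name and VLAN ID are placeholders for whichever of the six VLANs is being set up:

```shell
# Create a port group on the standard vSwitch for one VLAN
# ("VLAN20-IoT" and VLAN ID 20 are hypothetical examples).
esxcli network vswitch standard portgroup add \
    --portgroup-name="VLAN20-IoT" --vswitch-name=vSwitch0

# Tag the port group so traffic on it carries 802.1Q VLAN ID 20.
esxcli network vswitch standard portgroup set \
    --portgroup-name="VLAN20-IoT" --vlan-id=20
```

Repeat per VLAN, attach a pfSense vNIC (or a trunked interface) to the port groups, and the WiFi SSIDs map onto the same VLAN IDs at the access point.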