Homebrew SAN

Everybody and their aunt has a NAS at home, but what about something with a bit more pizzazz? How about if I build a system out of standard, off-the-shelf, ‘surplus sale’ gear and spin it into a really neat storage appliance?

The Gear

At Fanshawe College, the ‘Asset Sale’ is a proud tradition. IT students line up around the block to get good deals on retired and scratch-and-dent electronics. I’d compare it to the auction sales that happen when an old farmer retires; it’s not just an opportunity to buy some used gear, it’s a community event. In 2017 I managed to walk away with a retired lab workstation, a Dell Precision T3500. With 12 GiB of memory and a couple of crevices to squeeze hard drives into, I was off to the races.

The storage machine was built atop the FreeNAS platform to leverage the ZFS file system and the robust replication and backup features it provides. With memory to spare, I originally planned on using the deduplication feature, though testing showed minimal gains on my particular dataset.
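If you want to run the same test on your own data, ZFS can simulate deduplication on an existing pool without actually enabling it. A minimal sketch (the pool name `tank` is a placeholder):

```shell
# Simulate dedup on an existing pool without turning it on.
# zdb walks the pool's blocks, builds a simulated dedup table,
# and prints an estimated dedup ratio at the end of the histogram.
zdb -S tank
```

If the reported ratio is close to 1.00x, as it was for me, dedup isn’t worth the memory it would consume.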

The Network

A key piece of a storage system is its networking capability. I opted to… gasp… buy new! As it turns out, the MikroTik CSS326 series switch is hard to beat for home networking. Not only is it cheap, low-power, and fairly reliable; it’s dead silent. Drawing only 19 W at full load, it was a question of buying back my sanity (after running a Dell PowerConnect 5448 with three 9000 RPM fans). The TCO story is great as well: over the course of its lifetime it will easily save me the $150 it cost in energy alone. Not bad!
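The payback claim is easy to sanity-check with back-of-envelope math. The old switch’s draw and the electricity rate below are my assumptions, not measured figures:

```python
# Back-of-envelope TCO check. OLD_WATTS and the electricity rate
# are assumptions; NEW_WATTS is the CSS326's stated full-load draw.
OLD_WATTS = 70           # assumed draw of the Dell PowerConnect 5448
NEW_WATTS = 19           # CSS326 at full load
RATE_CAD_PER_KWH = 0.13  # assumed residential rate

hours_per_year = 24 * 365
kwh_saved = (OLD_WATTS - NEW_WATTS) * hours_per_year / 1000
savings_per_year = kwh_saved * RATE_CAD_PER_KWH
years_to_recoup = 150 / savings_per_year
print(f"~${savings_per_year:.0f} CAD/year saved; "
      f"pays for itself in ~{years_to_recoup:.1f} years")
```

Under those assumptions the switch pays for itself in well under its expected service life, even before counting the fan noise.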

Sticking it together

On the other end of the fire hose is the virtual host. In my case, it’s an HP ML150 G6 server that was trash-picked by my colleague ‘Dave’ and wrestled into the back of my Jetta back in 2016. After about $40 in upgrades (a second quad-core CPU, a RAM upgrade, and a dual-port network interface), it was ready to go! Topologically, the system looks like this:

+-----+                     +------+                        +----+
| SAN |e0 -- MGMT ------ e8 |Switch| e7 -- Dot1Q Trunk -- e0| HV |
|     |e1 -- VLAN 91 -- e21 |CSS326| e22 -- VLAN 91 ----- e1| 01 |
|     |e2 -- VLAN 92 -- e23 |      | e24 -- VLAN 92 ----- e2|    |
+-----+                     +------+                        +----+
                             /    \
             <-- WAN -- e1 -+      +- e9-20 -- Internal Network -->

iSCSI traffic travels between the SAN and the hypervisor over VLANs 91 and 92. Each path gets a dedicated port on a dedicated VLAN, partly for performance and reliability, but also so that someday, when more hardware falls into my hands, I can add a second server to the cluster and gain failover and HA capabilities. Neat!
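Since the storage ports are untagged access ports at the switch, the SAN side just needs one address per NIC. A sketch of what that looks like on the FreeBSD base under FreeNAS (NIC names and subnets are my placeholders; in practice FreeNAS manages this through its web UI, not the shell):

```shell
# One NIC per storage path, untagged at the switch.
# igb1/igb2 and the 10.0.9x.0/24 subnets are assumed names.
ifconfig igb1 inet 10.0.91.10/24 up   # VLAN 91 path
ifconfig igb2 inet 10.0.92.10/24 up   # VLAN 92 path
```

Keeping each path in its own subnet matters: it guarantees the initiator actually uses both links instead of routing everything out one interface.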

The important part of this configuration is setting up iSCSI to use MPIO, or Multipath I/O. Rather than pushing all traffic over a single network interface and VLAN, the server load-balances across both links. In theory, this doubles my throughput to the SAN; in reality, it depends heavily on the workload. Since the SAN uses mirrored vdevs (essentially RAID 10), some streaming operations will exceed gigabit speed. Workloads that cache well in ARC (ZFS’s in-memory cache) also perform well, in particular bursts of async writes.
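The post doesn’t pin down which OS runs on HV01, so as a hedged illustration, here is roughly what wiring up two iSCSI paths looks like from a Linux initiator with open-iscsi (interface names and the portal address are placeholders; ESXi and other hypervisors do the equivalent through their own UIs):

```shell
# Bind one iSCSI interface to each physical NIC (names assumed).
iscsiadm -m iface -I path91 --op=new
iscsiadm -m iface -I path91 --op=update -n iface.net_ifacename -v eth1
iscsiadm -m iface -I path92 --op=new
iscsiadm -m iface -I path92 --op=update -n iface.net_ifacename -v eth2

# Discover the target through both interfaces, then log in to
# every discovered portal, creating one session per path.
iscsiadm -m discovery -t sendtargets -p 10.0.91.10 -I path91 -I path92
iscsiadm -m node --login

# dm-multipath then presents the sessions as one load-balanced disk.
multipath -ll
```

With both sessions up, the multipath layer round-robins I/O across the VLAN 91 and VLAN 92 paths, which is where the theoretical doubling comes from.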


I’m quite satisfied with the system I’ve set up! Considering all the gear combined cost around $300 CAD, I’ve built a very capable system on pocket change.