
Iomega IX4 v Openfiler Performance Testing

Running my own home-based lab, I had struggled to work out which storage solution was going to be the best for me. I had multiple choices for the type of storage I could use, as I own the following storage enabled\capable hardware: a Buffalo TeraStation Pro 2, an Iomega IX4-200d 2TB and an HP MicroServer running Openfiler 2.3.

Over the last couple of weeks I have been carrying out various tests to see which device I would use as my NAS\SAN solution and which device would end up being the location for my Veeam backups.

All three devices run software RAID, although I am about to try to fit an IBM M1015 SAS\SATA controller into my HP MicroServer (with the Advanced Key to allow RAID 5 and 50), so both the Iomega and the HP were similar where RAID types were concerned. The TeraStation is an already operational device with existing data on it and could only be tested using NFS; it has never really been in contention where SAN\NAS devices for ESXi are concerned.

What I wasn’t sure about was whether I would be better off using RAID 0, 5 or 10 (obviously I am aware of the resilience issues with RAID 0, but I do have to consider the performance it offers, as I want to run a small VMware View lab here as well). On top of the RAID type there was also the decision of whether to go down the iSCSI or the NFS route.
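
As a rough reminder of what each option trades away, here is a quick sketch (my own illustration, not part of the original testing) comparing usable capacity and failure tolerance across a four-disk set of 500GB drives like the IX4's:

    # Rough usable-capacity / redundancy comparison for a 4-disk set.
    # Illustrative only; the disk size and count mirror the IX4 described below.

    def raid_summary(level, disks, size_gb):
        """Return (usable GB, disk failures survived) for common RAID levels."""
        if level == "RAID 0":        # striping only, no redundancy
            return disks * size_gb, 0
        if level == "RAID 5":        # one disk's worth of parity
            return (disks - 1) * size_gb, 1
        if level == "RAID 10":       # mirrored pairs, then striped
            return (disks // 2) * size_gb, 1   # guaranteed worst case: one per pair
        raise ValueError(f"unknown level: {level}")

    for level in ("RAID 0", "RAID 5", "RAID 10"):
        usable, tolerance = raid_summary(level, disks=4, size_gb=500)
        print(f"{level:7s}: {usable:4d} GB usable, survives {tolerance} disk failure(s)")

RAID 0 gives the full capacity and the best raw performance but loses everything on a single disk failure, which is exactly the trade-off weighed in the rest of this post.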

Having read a number of informative blog and forum posts I knew that to satisfy my own thirst for knowledge I was going to have to perform my own lab testing.

Lab Setup

OS TYPE: Windows XP SP3 VM on ESXi 4.1 using a 40GB thick provisioned disk
CPU Count \ RAM: 1 vCPU, 512MB RAM
ESXi HOST: Lenovo TS200, 16GB RAM, 1x X3440 @ 2.5GHz (a single ESXi 4.1 host with a single running Iometer VM was used during testing).

STORAGE TYPE

Iomega IX4-200d 2TB NAS, 4 x 500GB, JBOD – iSCSI, JBOD – NFS, RAID 10 – iSCSI, RAID 10 – NFS, RAID 5 – iSCSI and finally RAID 5 – NFS ** Software RAID only **

Buffalo TeraStation Pro 2, 4 x 1500GB, RAID 5 – NFS (this is an existing storage device with existing data on it, so I could only test with NFS and the existing RAID set; the device isn’t iSCSI enabled).

HP MicroServer, 2GB RAM, 4 x 1500GB + the original server’s 1.6TB disk for the Openfiler OS install, RAID 5 – iSCSI, RAID 5 – NFS, RAID 10 – iSCSI, RAID 10 – NFS, RAID 0 – iSCSI and finally RAID 0 – NFS.

Storage Hardware: Software-based iSCSI and NFS.

Networking: NetGear TS724T 24-port 1Gb Ethernet switch
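
One thing worth bearing in mind when reading the “Max Throughput” numbers later on is that every configuration sits behind a single 1Gb link, so the sequential tests are ultimately bounded by the wire rather than by the disks. A quick back-of-the-envelope sketch of that ceiling (my own illustration; the 10% protocol overhead figure is an assumption, not a measurement):

    # Rough ceiling for a single 1Gb Ethernet link.
    # The overhead allowance for TCP/IP plus iSCSI/NFS framing is an assumption.

    LINK_BITS_PER_SEC = 1_000_000_000        # 1Gb Ethernet
    PROTOCOL_OVERHEAD = 0.10                 # rough allowance, not measured

    raw_mb_per_sec = LINK_BITS_PER_SEC / 8 / 1_000_000            # ~125 MB/s on the wire
    usable_mb_per_sec = raw_mb_per_sec * (1 - PROTOCOL_OVERHEAD)  # ~112 MB/s in practice

    print(f"Raw link rate:     {raw_mb_per_sec:.0f} MB/s")
    print(f"Realistic ceiling: {usable_mb_per_sec:.0f} MB/s")

Anything much above that figure in a sequential test would suggest caching rather than raw disk speed.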

Iometer Test Script

To allow for consistent results throughout the testing, the following test criteria were followed:

1. One Windows XP SP3 VM with Iometer installed was used to monitor performance across the three platforms.

2. I utilised the Iometer script that can be found via the VMTN Storage Performance thread here; the test script itself was downloaded from here.

The Iometer script tests the following:-

TEST NAME: Max Throughput-100%Read

size,% of size,% reads,% random,delay,burst,align,reply

32768,100,100,0,0,1,0,0

TEST NAME: RealLife-60%Rand-65%Read

size,% of size,% reads,% random,delay,burst,align,reply

8192,100,65,60,0,1,0,0

TEST NAME: Max Throughput-50%Read

size,% of size,% reads,% random,delay,burst,align,reply

32768,100,50,0,0,1,0,0

TEST NAME: Random-8k-70%Read

size,% of size,% reads,% random,delay,burst,align,reply

8192,100,70,100,0,1,0,0

Two runs for each configuration were performed to consolidate results.
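
A straightforward way to consolidate two runs is to average the headline figures; below is a minimal sketch of that step, assuming the numbers have first been copied out into a simple CSV per configuration. The column names and the example file name are illustrative rather than Iometer’s native results format:

    # Minimal sketch: average the headline metrics across two Iometer runs.
    # Assumes the figures were copied into a simple CSV per configuration with
    # columns: test_name,run,iops,mb_per_sec,avg_latency_ms (illustrative layout).
    import csv
    from collections import defaultdict

    def consolidate(csv_path):
        totals = defaultdict(lambda: {"iops": 0.0, "mbps": 0.0, "lat": 0.0, "runs": 0})
        with open(csv_path, newline="") as fh:
            for row in csv.DictReader(fh):
                t = totals[row["test_name"]]
                t["iops"] += float(row["iops"])
                t["mbps"] += float(row["mb_per_sec"])
                t["lat"] += float(row["avg_latency_ms"])
                t["runs"] += 1
        for name, t in totals.items():
            runs = t["runs"]
            print(f"{name}: {t['iops']/runs:.0f} IOPS, "
                  f"{t['mbps']/runs:.1f} MB/s, {t['lat']/runs:.2f} ms avg latency")

    # Example (hypothetical file name): consolidate("ix4_raid5_iscsi.csv")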

Lab Results

After a long week or so (not only did I have to test each device twice, I also had to move the VM between devices, which took up time) I have come up with the following results.

Iomega IX4-200D Results

Openfiler 2.3 Results

TeraStation Pro II Results

Conclusions

Having looked at the results, the overall position is clear: the Iomega IX4-200D is now going to be my Veeam backup destination, whilst my HP MicroServer is going to be my centralised storage host for ESXi. I now have to decide whether to go for the RAID 0 or RAID 10 iSCSI approach, as they offer the best performance; at this stage I am tempted to go for the RAID 10 approach because the disks in the server aren’t new. Over the next few months I will see how reliable the solution is and take it from there.

One thing I can add, however, is that over the next couple of days I will be attempting to fit my M1015 RAID controller in there and seeing how that performs; once fitted I will redo the Openfiler tests and post an update.

Simon

7 Comments

  1. That’s an incredibly comprehensive set of results.
    I would be interested to see what happens to the Random 8KB IOPS once you put the dedicated RAID controller in the system.
    Many thanks for the write up.

  2. It seems that Openfiler has done pretty well!! I would have thought that the Iomega would be faster, with hardware RAID (I think)…

    Excellent tests.

    • These tests were carried out on a mis-aligned disk using software RAID only. I am in the process of retesting with an aligned disk and a Lenovo M1015 adapter. What I can tell you, however, is that there isn’t a great deal of difference. As far as I am aware the IX4 actually utilises software RAID rather than hardware RAID.

      I am also testing Open-E DSS v6 Lite as well as the current betas of OF and FreeNAS (support for the M1015 is missing from both current flavours of OF and FreeNAS).

      Check back in a couple of days (or via my twitter feed) for some new results.

    • The Iomega uses software RAID. The discs are partitioned as the OS boots off the HDDs. The 2nd partitions are built into a soft RAID array.

      • Thanks Ivan, I was aware of that fact (having upgraded the disks in one of my units) but it is something I neglected to mention in the article. I will however update that now.
