A couple of weeks ago we unboxed the QNAP TS-869 Pro to test-drive in our lab. As we concluded in that article, we quite liked the appliance: it has solid build quality, the management software is really easy to use and, with a bit of knowledge and all the right tools at hand, you're up and running in about an hour. That is only half the story, though, as you also want to know how it performs once you've connected the appliance to your virtual environment.

First of all, the setup we used to test. Our mini-lab consists of an HP ML310 server, equipped with an Intel Xeon 3065 CPU, 8 GB of RAM and a dual-port Broadcom Gigabit Ethernet NIC. Both ports are linked to an HP ProCurve 1800-24 Gigabit switch, and from this switch two links run to the two ports on the TS-869 Pro. The settings on both sides make sure the VMs see a 2 Gbit pipe to the storage.

On the virtual side of things, we tested with both a Windows 7 and a Windows Server 2008 R2 VM. Technically there should not be a difference between the two, as they use the same kernel. Both VMs have 2 GB of RAM. To make testing interesting, we configured our VMware server to use NFS, but we also connected a 250 GB iSCSI volume inside the VM using Microsoft's own iSCSI initiator. We will test the performance of both connections.
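What HD-Tune measures on these volumes can be approximated with a small random-read microbenchmark: time a series of block-aligned reads spread across a file and derive the average access time and implied IOPS. The sketch below is our own hypothetical helper, not HD-Tune itself, and unlike HD-Tune it does not bypass the guest OS cache, so treat its numbers as rough indications only.

```python
import os
import random
import time

def random_read_benchmark(path, block_size=512, samples=200):
    """Rough stand-in for HD-Tune's access-time test: time random
    block-aligned reads across a file and report the average latency
    in milliseconds plus the implied IOPS."""
    size = os.path.getsize(path)
    blocks = max(size // block_size, 1)
    latencies = []
    with open(path, "rb", buffering=0) as f:
        for _ in range(samples):
            # Pick a random block-aligned offset inside the file.
            offset = random.randrange(blocks) * block_size
            start = time.perf_counter()
            f.seek(offset)
            f.read(block_size)
            latencies.append(time.perf_counter() - start)
    avg_ms = sum(latencies) / len(latencies) * 1000
    return avg_ms, 1000 / avg_ms  # average access time (ms), implied IOPS
```

Pointing this at a large file on the NFS root disk versus the iSCSI volume gives a quick relative comparison of the two paths.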

Before we could install the VMs, we needed to copy the ISO images from the installation disks to the NAS so we could use them in vSphere more efficiently. While we were copying the ISOs, expectations rose quickly: SMB is not the most efficient protocol to transfer files with, yet the transfer from our 1 Gbit-connected laptop maxed out at about 90 megabytes per second. We think that is quite impressive.

On to the testing, then. Both VMs run on our VMware lab server with 2 GB of RAM each, from a VMDK on an NFS volume. To test, we ran HD-Tune in both VMs: 3 benchmarks on the NFS volume (the root disk) and the same 3 tests on the iSCSI-connected volume. So, first of all, the results on NFS from the Windows 7 machine. Here are the statistics that HD-Tune produced:

[HD-Tune screenshots: Windows 7 VM, NFS volume]
So, as you can see, random reads from the volume show an access time of 6.8 ms at a transfer rate of 62.7 MB/s. The IOPS are certainly not bad considering we are only using two SATA disks in this test. Some caching is certainly involved in producing values like that, but that is no problem in everyday use. When copying data blocks, speed goes up to about 100 MB/s, which we also saw while copying the ISOs to the file share.
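The relationship between the measured access time and IOPS is simple arithmetic: if every random access takes 6.8 ms on average, the volume can complete roughly 1000 / 6.8 ≈ 147 of them per second. A minimal sketch:

```python
def iops_from_access_time(access_time_ms):
    """One random access every access_time_ms milliseconds implies
    1000 / access_time_ms operations per second."""
    return 1000 / access_time_ms

print(round(iops_from_access_time(6.8)))  # ~147 IOPS at the 6.8 ms measured above
```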

Now, here are the same tests on iSCSI:

[HD-Tune screenshots: Windows 7 VM, iSCSI volume]
These tests show different values. The average throughput is lower, but so are the access times, and the throughput seems to be more consistent, although the dips we see on NFS could also be a switching problem.

Next, we performed the same tests in a Windows Server 2008 R2 VM. We didn't expect to see a big difference in values, as the Windows 7 kernel is technically identical to the Windows Server 2008 R2 kernel. Here are the results for the NFS volume:

[HD-Tune screenshots: Windows Server 2008 R2 VM, NFS volume]
So, on NFS the results are not spectacularly different. The throughput is a bit higher and the access times are a bit lower; overall you could say that the server edition is a bit more efficient than the workstation edition when it comes to storage. On to the iSCSI statistics:

[HD-Tune screenshots: Windows Server 2008 R2 VM, iSCSI volume]
Now, these values show the exact opposite: here Windows Server seems to be a bit less efficient, as access times are higher and throughput is lower. This could be a server-versus-workstation difference or an inefficiency in the iSCSI initiator software. Overall, however, these values are still very nice, considering we only have about 160-200 real-world IOPS to spend.
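That 160-200 IOPS budget lines up with what two rotating SATA disks can physically deliver: each random I/O costs an average seek plus, on average, half a rotation. A quick estimate, assuming a typical 7200 rpm drive with an 8.5 ms average seek (assumed figures, not measured on this unit):

```python
def spindle_iops(avg_seek_ms, rpm):
    """Rule-of-thumb IOPS for one rotating disk: one random I/O costs
    an average seek plus half a rotation (average rotational latency)."""
    rotational_latency_ms = 60000 / rpm / 2  # half a revolution, in ms
    return 1000 / (avg_seek_ms + rotational_latency_ms)

# Typical 7200 rpm SATA disk with an assumed 8.5 ms average seek.
per_disk = spindle_iops(8.5, 7200)
print(round(per_disk), round(2 * per_disk))  # about 79 IOPS per disk, ~158 for the pair
```

Caching on the NAS explains why the measured figures can land at or above the top of that raw-spindle range.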

Another point about our testing: we test with one VM at a time. That is not a real-life scenario, as you would never have just one virtual machine running in your lab or business. So we also checked the resources consumed on the NAS while we were running our tests: how much CPU and RAM one VM under test uses tells us how much headroom is left for others. The results are below:

[Screenshots: QNAP resource monitor during the tests]
These values are taken from the QNAP's internal monitoring program. Our take: with one VM pushing things to the limit, at least two-thirds of the RAM is left, and there is more than enough CPU capacity to quadruple the load.

So, to conclude: if you max out the disks in the unit to provide IOPS (there are 8 slots available), you should be able to run 16 to 20 medium-load VMs from this NAS without any problems. If the load is lighter, you are looking at even more VMs. That will certainly be enough for most small or medium-sized businesses or labs to run their virtual platform on.
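The 16-to-20-VM estimate follows from scaling the two-disk IOPS figure up to all 8 bays and dividing by a per-VM budget. The numbers below are our assumptions for the sketch, not measurements: roughly 80-100 IOPS per SATA spindle and about 40 IOPS for one medium-load VM.

```python
def vm_capacity(disks, iops_per_disk, iops_per_vm):
    """Back-of-the-envelope sizing: total spindle IOPS divided by the
    IOPS budget of one medium-load VM."""
    return disks * iops_per_disk // iops_per_vm

# Assumed figures: 80-100 IOPS per SATA spindle, ~40 IOPS per medium-load VM.
print(vm_capacity(8, 80, 40), vm_capacity(8, 100, 40))  # 16 and 20 VMs
```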