Review: Synology Diskstation DS1513+ with VMware – Part 2
In part 1 we finished the hardware installation of the Synology and the setup of the DSM software. In this part we will hook the Synology up to the VMware vSphere 5.1 lab environment. The lab consists of a laptop with 32GB of RAM and two quad-core Intel E5 CPUs, running VMware Workstation 8, which in turn runs two virtual VMware vSphere 5.1 ESXi servers and one vCenter Server.
Several other supporting VMs are also present, such as Windows Server 2012, Windows Server 2008 R2, and some clients running Windows XP, 7 and 8.
The Synology DS1513+ can run as an iSCSI target and/or export NFS shares; both can be connected and added to your virtual environment. The Synology is connected through two 1 Gbps links to an Apple AirPort Extreme with 802.11ac support. The laptop connects wirelessly, and I use a MacBook Air 2013 for support. For the link aggregation test we will use a Cisco WS-3750G switch, because the AirPort Extreme does not support link aggregation as far as I could discover. When connecting a Synology to a VMware environment, always use at least 1 Gbps network speed. On the Synology DS1513+ you can combine the four LAN ports into a 4x 1 Gbps channel. Let's set up the Synology for iSCSI now.
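As a rough sketch of what the switch side of that link aggregation could look like, a Cisco IOS LACP EtherChannel bundling four ports for the Synology might be configured as below. The interface numbers and channel-group ID are assumptions for illustration, not taken from the actual lab config; on the Synology side the matching bond is created in DSM under Control Panel > Network > Create > Bond, using IEEE 802.3ad dynamic link aggregation.

```
! Hypothetical LACP EtherChannel on the Cisco switch
! (port range and channel-group number are example values)
interface range GigabitEthernet1/0/1 - 4
 channel-group 1 mode active
!
interface Port-channel1
 switchport mode access
```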
iSCSI Setup on the Synology DS1513+
With the Synology you have two options for iSCSI: block-level or file-level. Block-level operates closest to the RAID layer and therefore offers greater performance than file-level iSCSI. Using block-level iSCSI will create a target whose capacity is equivalent to the size of the RAID volume.
For greater flexibility, use file-level iSCSI, where the RAID volume's storage can be shared between regular file sharing duties and virtual storage space. Since we are going to use the Synology with VMware ESXi, which supports targets greater than 2TB, we will choose block-level iSCSI.
Start the Storage Manager in the DSM web interface on the Synology and choose iSCSI Target and the button Create.
Insert a name and note the iSCSI Qualified Name (IQN); you will need it later when setting up the ESXi server. Create a new iSCSI LUN and choose a block-level iSCSI LUN if, like me, you are going to use it for your virtual environment. An IQN is structured like: iqn.yyyy-mm.domain:device.ID
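To illustrate the IQN structure, a Synology target name typically looks like the example below. The target label here is a made-up example, not the actual value from this setup:

```shell
# Hypothetical IQN following the iqn.yyyy-mm.domain:device.ID pattern:
# registration year-month, reversed domain, then a device/target identifier.
IQN="iqn.2000-01.com.synology:ds1513.Target-1"
echo "$IQN"
```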
You will need to create a Disk Group, or select one if you already created it. Select all disks you want in the disk group and choose the correct redundant RAID level, so you have at least one disk of redundancy against disk failure. Allocate volume capacity to the Disk Group; remember you can place multiple LUNs on it, so select all the capacity you have in the Disk Group. A summary will be shown for the new iSCSI Target we are creating.
The iSCSI Target will be created and will appear with the status Ready when it is correctly configured.
On the iSCSI Target tab, click the Edit button to allow multiple sessions from one or more iSCSI initiators to the Synology. This is possible because VMFS is a cluster-aware file system that handles locking.
Through the iSCSI LUN tab we created a 1TB LUN on the Disk Group, so that a VMFS5 datastore can be created on it from ESXi.
That finishes the Synology part; let's continue by connecting the ESXi servers as iSCSI initiators to the iSCSI target we just created on the Synology.
iSCSI Setup on VMware vSphere 5.1
The lab environment consists of two ESXi 5.1 servers and a vCenter Server for management and, of course, all high availability options like HA, DRS, vMotion and FT. To use iSCSI in VMware you will need to create a VMkernel port that handles iSCSI traffic.
Select the ESXi server you want to configure in vCenter Server, go to the Configuration tab on the ESXi server and select Networking. On the right you will see a button called Add Networking…; press it. A wizard will open and ask whether you want to create a network connection for Virtual Machines or add a VMkernel port, which handles traffic for e.g. iSCSI.
We will create a new standard vSphere vSwitch and add the physical vmnic1 to carry the iSCSI traffic. Give the port group on the vSwitch a network label you will recognize easily. You are all set; click Finish to create the vSwitch with a VMkernel port that handles iSCSI traffic over vmnic1. A vSwitch1 is now created.
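The same steps can also be done from the ESXi shell with esxcli instead of the vSphere Client wizard. This is only a sketch: the port group label and the VMkernel IP/netmask below are assumptions, so substitute your own values.

```shell
# Create a new standard vSwitch and attach the physical uplink vmnic1
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic1

# Add a port group for iSCSI traffic (label "iSCSI" is an example)
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=iSCSI

# Create the VMkernel interface on that port group and give it a static IP
# (10.0.1.21/24 is a placeholder address on the same subnet as the Synology)
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI
esxcli network ip interface ipv4 set --interface-name=vmk1 --type=static --ipv4=10.0.1.21 --netmask=255.255.255.0
```

These commands require a shell on the ESXi host itself (SSH or the local console), so run them there rather than from a management workstation.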
Choose the iSCSI Software Adapter vmhba33 and press the Properties button on the right in the middle of the screen. On the Dynamic Discovery tab, press Add and configure the IP address of the Synology, in our case 10.0.1.13, on port 3260.
If dynamic discovery doesn't work you can add a static discovery, but you will need the IQN you wrote down earlier when setting up the Synology iSCSI Target. After you are finished with the settings, do a rescan of the host bus adapter to find the new storage connected to the ESXi servers.
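For reference, the discovery and rescan steps can be sketched from the ESXi shell as well. The adapter name and IP mirror the values used in this walkthrough; the IQN in the static example is a placeholder for the one noted on the Synology.

```shell
# Dynamic discovery (send targets) pointing at the Synology
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=10.0.1.13:3260

# Or static discovery if send targets does not work
# (replace the IQN with the one from your Synology target):
# esxcli iscsi adapter discovery statictarget add --adapter=vmhba33 --address=10.0.1.13:3260 --name=iqn.2000-01.com.synology:ds1513.Target-1

# Rescan the HBA so the new storage shows up
esxcli storage core adapter rescan --adapter=vmhba33
```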
Multipath
If you have two or more network interfaces on your Synology, then your Synology supports multipath on the iSCSI Target, so you can build and deploy fail-over and load balancing solutions. Especially in combination with VMware vSphere and vCenter Server you can build a great redundant and high performing solution.
When you select the Configuration tab and Storage Adapters, you can highlight the Synology and press the right mouse button to access the options menu, then select Manage Paths. We changed the settings of vSwitch0 and its VMkernel port to also handle iSCSI traffic; together with vSwitch1 we now have multiple paths to the Synology DS1513+. You will see that one NIC is handling I/O and one isn't. Select them both and choose Round Robin (VMware) to make them both active.
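Switching the path selection policy to Round Robin can also be sketched from the ESXi shell. The device identifier below is a placeholder; list the devices first and pick the naa.* identifier of the Synology LUN.

```shell
# List NMP devices to find the naa.* identifier of the Synology LUN
esxcli storage nmp device list

# Set the path selection policy for that LUN to Round Robin
# (naa.xxxxxxxxxxxxxxxx is a placeholder for the real identifier)
esxcli storage nmp device set --device=naa.xxxxxxxxxxxxxxxx --psp=VMW_PSP_RR
```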
All paths are now active and servicing iSCSI traffic in a load balanced fashion. In part 3 of this review we will create some datastores on iSCSI and connect the ESXi servers to the Synology DS1513+ over NFS. While running some tests in the background, we were moving data to the Synology while it was also streaming a Full HD 1080p movie to a Samsung TV, all without spikes or dropped packets. The MacBook Air had a throughput of 800 Mbps over Wi-Fi to the Synology, while the lab environment was pushing and pulling VMDKs from several datastores. But more about that in the next parts of the review.