VMware View sizing & best practices
On November 4th we published an article on Virtual Infrastructure best practices, and the response was overwhelming. Over the last month we received a lot of questions about best practices for VDI/VMware View. When I then read a comment from VMware evangelist Richard Garsthagen that blog coverage of VMware View was minimal, I thought: well, let's extend our View articles/knowledge base.
So, VMware View best practices. First of all, check the article on Virtual Infrastructure best practices to get a good understanding of the underlying virtual infrastructure challenges.
So here is my list of best practices, which I gathered from VMware KB articles, instructor-led VMware View design training and the VMware community:
- CPU sizing;
- Memory sizing;
- Storage sizing;
- Network sizing.
If you have additions or new insights please reply.
CPU sizing
Depending on your application workload you can deploy 6 to 9 virtual desktops per CPU core. On a dual quad-core ESX host this means we can deploy 48 to 72 virtual desktops. With VMware vSphere, more efficient hyper-threading and the new CPUs with hardware-assisted memory virtualization (MMU/RVI/EPT) like the Intel Nehalem CPUs, we can, on average, deploy 50% more virtual desktops. This means that, depending on the workload, you can deploy between 72 and 108 virtual desktops. I like to keep sizing figures on the safe side and go with 72 virtual desktops per dual quad-core ESX host.
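The CPU sizing above can be expressed as a small back-of-envelope calculation. This is just a sketch; the desktops-per-core figures and the ~1.5x vSphere factor are the rule-of-thumb values from this article, not hard limits:

```python
import math

def desktops_per_host(cores, desktops_per_core, vsphere_boost=1.5):
    """Estimate virtual desktops per ESX host.

    cores: physical CPU cores in the host
    desktops_per_core: 6 to 9 depending on application workload
    vsphere_boost: ~1.5 on vSphere with Nehalem-class CPUs (EPT/RVI)
    """
    return math.floor(cores * desktops_per_core * vsphere_boost)

# Dual quad-core host (8 cores):
print(desktops_per_host(8, 6))  # conservative workload: 72
print(desktops_per_host(8, 9))  # light workload: 108
```

The conservative figure of 72 desktops per host is the one used throughout the rest of this article.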
Memory sizing
Again, depending on your application workload, a Windows XP virtual desktop needs 700MB to 1GB of RAM. For a Windows 7 desktop we can easily double those figures. With VMware ESX's Transparent Page Sharing (TPS), roughly 60% of the memory of a Windows XP desktop is actually used and 40% is shared. With Windows 7 the shared portion drops to 20% because it encrypts parts of its memory, resulting in 80% memory usage. Besides that, an ESX host needs 4GB of memory for its own needs.
With the virtual desktops we calculated in the CPU sizing, this results in roughly 48GB of host memory for 72 Windows XP virtual desktops with 1GB each (43.2GB after TPS plus 4GB for the host). For 72 Windows 7 desktops with 2GB each, this boils down to roughly 120GB of host memory (115.2GB after TPS plus 4GB for the host).
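The memory calculation above can be sketched the same way. The TPS usage fractions (0.6 for Windows XP, 0.8 for Windows 7) and the 4GB host overhead are the rule-of-thumb figures from this article:

```python
def host_memory_gb(desktops, ram_per_desktop_gb, tps_usage, host_overhead_gb=4):
    """Host memory estimate after Transparent Page Sharing (TPS).

    tps_usage: fraction of guest RAM actually consumed after sharing
               (~0.6 for Windows XP, ~0.8 for Windows 7 per this article)
    """
    return desktops * ram_per_desktop_gb * tps_usage + host_overhead_gb

print(host_memory_gb(72, 1, 0.6))  # ~47.2 -> round up to 48GB (Windows XP)
print(host_memory_gb(72, 2, 0.8))  # ~119.2 -> round up to 120GB (Windows 7)
```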
Storage sizing
Sizing your storage for VMware View is all about performance and has little to do with storage capacity. Storage is sized by the number of I/O operations per second (IOPS) needed. Per virtual desktop you need 5 to 20 IOPS, depending on your application workload and operating system, in a 20/80 read/write ratio.
Again, for the 72 virtual desktops calculated earlier, we need 360 IOPS for task workers running, for instance, Windows XP and 1,440 IOPS for power users running Windows 7.
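As a sanity check, the IOPS totals are a straight multiplication using the per-desktop figures from this article:

```python
def required_iops(desktops, iops_per_desktop):
    """Total front-end storage IOPS for a pool of virtual desktops
    (this article assumes a 20/80 read/write ratio)."""
    return desktops * iops_per_desktop

print(required_iops(72, 5))   # 360  - task workers (e.g. Windows XP)
print(required_iops(72, 20))  # 1440 - power users (e.g. Windows 7)
```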
Now that we have the number of IOPS and the read/write ratio, it's time to determine the RAID level and LUN sizing. As I said earlier, storage sizing is all about performance, not capacity, and with the high number of IOPS needed you will probably have plenty of capacity left to store all virtual machines and data. I always pick the RAID level that gives the best of both worlds: high performance with a comfortable margin over the calculated values, and not too much overhead, so there's enough storage capacity left.
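When weighing RAID levels against each other, the usual back-of-envelope approach is to apply each level's write penalty to the write portion of the workload. The penalty values below are the commonly cited ones (RAID 10 = 2, RAID 5 = 4), not figures from this article, so treat this as an illustration only:

```python
def backend_iops(frontend_iops, read_ratio, write_penalty):
    """Back-end disk IOPS once the RAID write penalty is applied."""
    reads = frontend_iops * read_ratio
    writes = frontend_iops * (1 - read_ratio)
    return reads + writes * write_penalty

# 1440 front-end IOPS at this article's 20/80 read/write ratio:
print(backend_iops(1440, 0.2, 2))  # RAID 10: 2592.0 back-end IOPS
print(backend_iops(1440, 0.2, 4))  # RAID 5:  4896.0 back-end IOPS
```

The write-heavy 20/80 ratio is exactly why the RAID choice matters so much here: RAID 5 nearly doubles the back-end IOPS requirement compared to RAID 10.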
LUN sizing depends on the deployment method used: full virtual machines or linked clones. With full virtual machines you can place between 25 and 32 virtual desktops on a single LUN. With linked clones the maximum is 64 virtual desktops per LUN.
Storage capacity with linked clones is difficult to size because it depends on many factors, like client anti-virus (updates), persistent or non-persistent desktops, the application distribution method, etc. The numbers we see in real-life scenarios with our customers are that when they use non-persistent desktops which are deleted after use, applications are deployed using application virtualization, and all software in the template is up to date (so no updates or patches), the deltas are between 10 and 25% of the base image.
Assuming a disk image of 10GB for a Windows XP virtual desktop and 20GB for a Windows 7 one, and the above scenario with a User Data Disk (UDD), which keeps user data out of the virtual machine storage space, you would need:
- 3 LUNs of 300GB for 72 Windows XP full virtual desktops with 20% free space;
- 2 LUNs of 108-240GB for 72 Windows XP linked-clone virtual desktops with 20% free space;
- 3 LUNs of 600GB for 72 Windows 7 full virtual desktops with 20% free space;
- 2 LUNs of 220-470GB for 72 Windows 7 linked-clone virtual desktops with 20% free space.
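For the full-clone cases, the LUN counts and sizes above follow directly from the per-LUN desktop limits and the 20% free-space rule. A sketch of that calculation (the linked-clone cases need extra inputs, like delta size and replica placement, so they are not covered here):

```python
import math

def lun_layout(desktops, gb_per_desktop, max_per_lun, free_space=0.2):
    """Number of LUNs and size per LUN, keeping `free_space` headroom."""
    luns = math.ceil(desktops / max_per_lun)
    total_gb = desktops * gb_per_desktop / (1 - free_space)
    return luns, math.ceil(total_gb / luns)

# 72 full Windows XP desktops (10GB each), conservative 25 per LUN:
print(lun_layout(72, 10, 25))  # (3, 300)
# 72 full Windows 7 desktops (20GB each):
print(lun_layout(72, 20, 25))  # (3, 600)
```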
And now an area I'm not so familiar with: you can use linked clones to reduce the amount of storage capacity needed, but there are SANs that have this feature built in, namely data deduplication.
So my question to you storage experts out there: are linked clones a 'poor man's' data deduplication? Are the two interchangeable, or do VMware View linked clones offer us more?
My limited view for now says: let VMware do what it does best and let the storage vendors do what they do best. So VMware View creates full virtual machines and the storage array deduplicates the data. That's the road VMware chose with their new storage APIs, and that's the future in my opinion. Your opinion, please!
Network sizing
Network sizing has always been a bit more difficult, and with VMware View 4 and PCoIP it has not become any easier. When sizing a View 3 environment I always used the following rule of thumb: 50kbps for a task worker, 100kbps for a knowledge worker and 150kbps for a power user. These numbers probably still stand for VMware View 4 using RDP, but with PCoIP they are quite different. From what I've heard from VMware engineers, PCoIP starts at 128kbps per session and goes up fast when using all the nice graphical features like Flash, HD video, etc. With PCoIP, latency may not exceed 250ms.
So with RDP we would need between 3.6 and 10.8Mbps; with PCoIP it starts at a minimum of 9.2Mbps and the maximum is dictated by the line capacity. In my opinion there will be very few 100% PCoIP implementations. PCoIP will be used for the power users who need the graphical muscle. The rest can manage perfectly well and get great performance with an RDP connection.
For this network capacity, 2 gigabit network interfaces would be sufficient for both the RDP and the limited-PCoIP scenario. With a total capacity of 2Gbps, each of the 72 virtual desktops has roughly 27Mbps to fill up.
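The bandwidth figures above follow from multiplying the per-session rules of thumb by the desktop count:

```python
def uplink_mbps(desktops, kbps_per_desktop):
    """Aggregate bandwidth in Mbps for a pool of virtual desktops."""
    return desktops * kbps_per_desktop / 1000

print(uplink_mbps(72, 50))   # 3.6   - RDP, task workers
print(uplink_mbps(72, 150))  # 10.8  - RDP, power users
print(uplink_mbps(72, 128))  # 9.216 - PCoIP floor of 128kbps per session
```

Note the PCoIP figure is only the starting point; rich graphical content pushes each session well beyond 128kbps.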
View 4 building blocks
In essence we now have VMware View 4 building blocks for Windows XP and Windows 7 virtual desktops.
(* Values mentioned for linked-clone scenarios are based on deltas of minimally 10% and maximally 25% of the base image.)