Best Practices running VMware with NFS
There are several ways to store the virtual machines that run on your VMware cloud backend storage. You can store them locally on internal storage or on direct-attached storage (DAS). Another way is to tie your ESXi servers to a central backend storage network, using block protocols like FC and iSCSI or a file protocol like NFS. The Network File System (NFS) is a solid, mature, highly available, high-performing foundation for virtualization environments.
I have helped several customers over the last couple of years to make sure that their virtualized environments are stable and high performing. When there are (performance) problems, people often tend to focus on a specific part of the infrastructure, like the network or the storage. Always look at the environment as a complete chain, from the user using an application to the location where the data and/or application is stored.
This blog post gives an overview of deployment considerations and best practices for running Network File System (NFS) storage in your VMware environment. I hope it gives some guidance and solves some issues often seen at customers. You too can have a high-performing, stable NFS storage foundation for your VMware virtualized environment!
- Make sure your VMware environment is up to date: run at least Patch 5 if you have ESXi 5.5 and at least Update 1a if you run ESXi 6.0.
- Run with PortFast-enabled network ports on the network switches if you have STP enabled.
- Also check your network settings on the ESXi side, on the switches in between and, of course, on the storage side.
- Do not mix NFS version 3 and NFS version 4.1 on the same volumes/data shares.
- In a mixed VMware environment it is best to run NFS version 3 everywhere.
- Separate the backend storage NFS network from any client traffic.
- Check your current design to make sure that all paths used are redundant, so that high availability to your clients is covered.
- Use only ASCII characters in your naming convention for your NFS network topology to prevent unpredictable failures.
- Always refer to your storage array vendor's best practices for guidelines on running NFS optimally in your environment.
- Check and adjust the default security, because NFS version 3 is insecure by default.
- Configure the advanced settings NFS.MaxVolumes, Net.TcpipHeapSize and Net.TcpipHeapMax.
Why use NFS?
VMware has supported IP-based storage for almost a decade now. NFS storage was introduced as a storage resource that can be shared across a cluster of VMware ESXi hosts, and it was adopted rapidly because of the combination of cost, performance, availability and ease of manageability. VMware also made sure that the capabilities of VMware ESXi on NFS are similar to those of ESXi on block-based storage. A big pro is that you can tie multiple storage platforms and protocols to a VMware environment, if needed.
Compared to iSCSI and FC, NFS is relatively easy to design, configure and manage. When configured correctly it offers strong performance and stability. Deployment of ESXi with IP-based storage is common and widely seen in the field.
What is NFS?
Network File System (NFS) is a distributed file system protocol originally developed by Sun Microsystems in 1984, allowing a user on a client computer to access files over a network much like local storage is accessed. NFS, like many other protocols, builds on the Open Network Computing Remote Procedure Call (ONC RPC) system. The Network File System is an open standard defined in RFCs, allowing anyone to implement the protocol. Source: Wikipedia.
NFS Version used?
An NFS client built into ESXi uses the Network File System (NFS) protocol over TCP/IP to access a designated NFS volume that is located on NFS storage. The ESXi host can mount the volume and use it for its storage needs. Up until version 6.0, VMware only supported NFS version 3. With the release of vSphere 6, VMware now also supports NFS 4.1. To ensure interoperability across all host versions in your environment, it is recommended that you cap the client and server at the highest NFS version that all of them support.
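For illustration, here is a minimal sketch of mounting an NFS 3 export as a datastore with esxcli, assuming a hypothetical server name, export path and datastore name (check your array documentation for the actual export path):

```shell
# Sketch only: nfs01.example.com, /vol/ds01 and ds01 are hypothetical names.

# Mount an NFS 3 export as a datastore on this host
esxcli storage nfs add -H nfs01.example.com -s /vol/ds01 -v ds01

# List the NFS 3 datastores mounted on this host
esxcli storage nfs list
```

The same mount must be repeated (with identical server and volume names) on every host in the cluster that needs access to the datastore.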
NFS Protocols and vSphere Solutions
NFS Version Upgrades
vSphere does not support automatic datastore conversions from NFS version 3 to NFS 4.1. If you want to upgrade your NFS 3 datastore, the following options are available:
- You can create a new NFS 4.1 datastore, and then use Storage vMotion to migrate virtual machines from the old datastore to the new one.
- Use conversion methods provided by your NFS storage server. For more information, contact your storage vendor.
- Unmount the datastore from all hosts using one version and then mount it using the other.
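For the unmount-and-remount route, a hedged sketch of what this could look like with esxcli on ESXi 6.0, using hypothetical server, export and datastore names (evacuate the datastore first, e.g. with Storage vMotion):

```shell
# Sketch only: hypothetical host/share/datastore names. Evacuate the
# datastore (power off or Storage vMotion all VMs) before unmounting.

# Unmount the existing NFS 3 datastore from this host
esxcli storage nfs remove -v ds01

# Remount the same export as NFS 4.1 (ESXi 6.0 and later)
esxcli storage nfs41 add -H nfs01.example.com -s /vol/ds01 -v ds01

# Verify which protocol version each datastore uses
esxcli storage nfs list
esxcli storage nfs41 list
```

Remember that this has to be done on every host in the cluster; a share must never be mounted with both versions at the same time.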
Which version of VMware are you running?
Running ESXi 5.5 below Patch 5? Then you are running on a build number lower than 2718055 and it is time to upgrade now!
We have seen several virtual environments that ran below Patch 5 and showed strange and unpredictable behavior from time to time: datastore disconnects (offline, greyed out) that could lead to an all paths down (APD) event (for 45 minutes) or even permanent device loss (PDL), or management interfaces that froze for several minutes or got stuck completely. After updating those infrastructures to newer builds, you could restart an ESXi server and the datastores were back online and visible almost instantly. Managing storage in vCenter Server also becomes more fluid than before, and a rescan completes about 3 to 4 times faster.
All Paths Down (APD)
There are several critical fixes in the later ESXi 5.5 builds for issues that could interfere with a correct HA storage cluster failover completing within 180 seconds.
PDL and APD and PSA
An ESXi host might show as disconnected in vCenter Server when the hostd process is not responding. This issue might occur when there is a Permanent Device Loss (PDL) and the underlying file system fails to notify pending All Paths Down (APD) events to the Pluggable Storage Architecture (PSA) layer.
Separate the backend storage NFS network from any client traffic. This can be done using VLANs, network segmentation, or dedicated switches. This will allow for maximum performance and prevent unwanted access to your storage environment.
To achieve high availability, the LAN on which the NFS traffic runs needs to be designed with availability, downtime avoidance, isolation, and no single point of failure (SPOF) in mind. Often when HA is configured, STP is also configured in the network topology.
Spanning Tree Protocol
Another cause we found that triggered or added to major downtime in an APD event is that the physical ports on the network switches were running STP but were not configured as edge ports or PortFast ports. This recommendation relates to the use of switch ports when Spanning Tree Protocol (STP) is enabled in an environment. STP ensures that there are no network loops in a bridged network by disabling network links and ensuring that there is only a single active path between any two network nodes. If there are loops, they will have a severe performance impact on your network, with unnecessary forwarding of packets eventually leading to a saturated network.
A switch port can exist in various states under STP while the algorithms determine whether there are any loops in the network. For example, a switch port can be in the blocking, listening or forwarding state. Transitioning between the various states takes some time, and this can impact applications that are network dependent.
Configuring these ports as PortFast (edge) ports means that the switch ports immediately change their forwarding state to active, enabling the ports to send and receive data right away. Refer to your storage array vendor's best practices to determine whether this setting is appropriate for your storage array. In addition to using this setting on the switch ports facing the storage, it is also the recommended setting for the ports facing the ESXi servers in the same environment.
ESXi 5.5 fails to restore NFS mounts automatically after a reboot (KB 2078204)
- Rebooting an ESXi 5.5 host reports the NFS datastore that it uses as disconnected.
- The NFS datastore is grayed out in the vSphere Client.
This issue occurs if the Spanning Tree Protocol setting on the physical network switch port(s) is not set to PortFast. If the ESXi host has network connectivity issues during boot time, the NFS mount process may time out during the Spanning Tree Protocol convergence. This is a known issue with ESXi 5.5/6.0. To work around this issue, set the Spanning Tree Protocol mode to PortFast for all physical switch ports used by ESXi hosts.
What was previously called PortFast is now called an edge port. So if you use Cisco Nexus switches, make sure the ports connected to the ESXi servers and to the storage array are configured correctly.
Edge ports, which are connected to hosts, can be either an access port or a trunk port. The edge port interface immediately transitions to the forwarding state, without moving through the blocking or learning states. (This immediate transition was previously configured as the Cisco-proprietary feature PortFast.) Read more.
PortFast is also used by Arista in an STP setup in their EOS system. Port-specific spanning tree configuration comes from the switch where the port physically resides; this includes spanning-tree portfast. Read more.
The Dell Networking PowerConnect switches can also be adjusted to use PortFast on STP ports. Read more.
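As a rough illustration of the Cisco variants mentioned above, the edge-port setting on the switch side might look like the following (Cisco-style syntax; the interface names are placeholders, so always check your own switch vendor's documentation):

```shell
! Illustration only: Cisco-style syntax with placeholder interface names.
! Classic IOS - access port facing an ESXi host or storage controller:
interface GigabitEthernet1/0/10
 spanning-tree portfast
!
! Nexus (NX-OS) - the same intent, expressed as an edge port:
interface Ethernet1/10
 spanning-tree port type edge
!
! Trunk ports carrying the NFS VLAN need the trunk variant:
! "spanning-tree portfast trunk" (IOS) or
! "spanning-tree port type edge trunk" (NX-OS).
```

Only apply this to ports that connect directly to hosts or storage controllers, never to switch-to-switch links, since edge ports bypass the loop-prevention states.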
VMware NFS Storage Guidelines and Requirements
Make sure that the NFS storage/server exports a particular share as either NFS 3 or NFS 4.1, but does not provide both protocol versions for the same share. This policy needs to be enforced by the storage/server because ESXi does not prevent mounting the same share through different NFS versions. Ensure that the NFS volume is exported using NFS over TCP.
To use NFS 4.1, upgrade your vSphere environment to version 6.x. You cannot mount an NFS 4.1 datastore on hosts that do not support version 4.1. However, NFS 3 and NFS 4.1 datastores can coexist on the same host.
You cannot use different NFS versions to mount the same datastore. NFS 3 and NFS 4.1 clients do not use the same locking protocol. As a result, accessing the same virtual disks from two incompatible clients might result in incorrect behavior and cause data corruption.
A VMkernel port group is required for NFS storage. You can create a new VMkernel port group for IP storage on an already existing virtual switch (vSwitch) or on a new vSwitch when it is configured. The vSwitch can be a vSphere Standard Switch (VSS) or a vSphere Distributed Switch (VDS). NFS 3 and non-Kerberos NFS 4.1 support IPv4 and IPv6.
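A sketch of what creating such a VMkernel port group on a standard vSwitch could look like from the command line, using hypothetical names and addresses (vSwitch1, port group NFS, VLAN 50 and an example IP):

```shell
# Sketch only: vSwitch1, the NFS port group, VLAN 50 and the IP address
# are hypothetical values -- substitute your own design choices.

# Create a dedicated port group for NFS traffic on an existing vSwitch
esxcli network vswitch standard portgroup add -p NFS -v vSwitch1

# Tag the port group with the (example) storage VLAN
esxcli network vswitch standard portgroup set -p NFS --vlan-id 50

# Create a VMkernel interface on that port group and give it an address
esxcli network ip interface add -i vmk2 -p NFS
esxcli network ip interface ipv4 set -i vmk2 -t static \
    -I 192.168.50.11 -N 255.255.255.0
```

Tagging the storage VLAN on the port group keeps the NFS traffic separated from client traffic on shared physical uplinks, in line with the isolation advice above.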
When you mount the same NFS 3 volume on different hosts, make sure that the server and folder names are identical across the hosts. If the names do not match, the hosts see the same NFS version 3 volume as two different datastores. This error might result in a failure of features like vMotion. An example of such a discrepancy is entering NX-NS-01 as the server name on one host and NX-NS-01.VMGURU.COM on the other. When using NFS 4.1 you will not run into this problem.
Always use ASCII characters to name datastores and virtual machines or unpredictable failures might occur. If you use multiple ports for NFS traffic, make sure that you correctly configure your virtual switches and physical switches. For information, see the vSphere Networking documentation.
Link Aggregation Control Protocol (LACP)
Use LACP for port aggregation and redundancy. It provides better overall network performance and link redundancy by combining multiple physical interfaces into a single logical interface. It is debatable whether this improves throughput or performance, because NFS version 3 is still limited to a single connection per datastore, but it does protect against path failures. When you run multiple ESXi hosts you will have several NFS workers, and these get balanced over the links. Many NFS array vendors support this feature at the storage controller port level. Most storage vendors support some form of link aggregation, although not all configurations conform to the generally accepted IEEE 802.3ad standard, so the best option is to check with your storage vendor. One of the features of LACP is its ability to respond to events on the network and detect which ports should be part of the logical interface.
The vSphere implementation of NFS supports NFS version 3 over TCP. Storage traffic is transmitted in an unencrypted format across the LAN. Therefore, it is considered best practice to use NFS storage on trusted networks only and to isolate the traffic on separate physical switches or to leverage a private VLAN. All NAS array vendors agree that it is good practice to isolate NFS traffic for security reasons, which means isolating the NFS traffic on its own separate physical switches or leveraging a dedicated VLAN (IEEE 802.1Q).
Mount and R/W security
Many NFS servers/arrays have some built-in security, which enables them to control the IP addresses that can mount its NFS exports. It is considered a best practice to use this feature to determine which ESXi hosts can mount and have read/write access to the volumes that are being exported. This prevents unapproved hosts from mounting the NFS datastores.
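On a plain Linux NFS server this host-based restriction lives in /etc/exports; storage arrays expose the same concept through their own management interfaces. A minimal sketch, assuming a hypothetical export path and example ESXi VMkernel addresses:

```shell
# /etc/exports -- sketch with hypothetical path and addresses.
# Only the listed ESXi VMkernel IPs may mount this export read/write.
# no_root_squash is needed because ESXi accesses NFS 3 exports as root.
/vol/nfs_ds01  192.168.50.11(rw,sync,no_root_squash) 192.168.50.12(rw,sync,no_root_squash)
```

Listing individual VMkernel addresses instead of a whole subnet is the strictest option; either way, anything outside the list cannot mount the datastore.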
The default setting for the maximum number of mount points/datastores (NFS.MaxVolumes) an ESXi server can concurrently mount is 8. The current version of ESXi 6.0 can support up to 256 if adjusted. If you increase the number of NFS mounts above the default of 8, make sure to increase Net.TcpipHeapSize and Net.TcpipHeapMax as well. If 256 mount points are used on ESXi 5.5, increase Net.TcpipHeapSize to 32 MB and Net.TcpipHeapMax to 512 MB.
To edit advanced configuration options, select the ESXi/ESX host in the Inventory Panel, then navigate to Configuration > Software > Advanced Settings to launch the Settings window.
Set these values:
Under NFS, Select NFS.MaxVolumes: Limits the number of NFS datastores which can be mounted by the vSphere ESXi/ESX host concurrently. The default value is 8, and can be increased to a maximum specific to the version of ESXi/ESX:
- ESXi/ESX 3.x: Set NFS.MaxVolumes to 32
- ESXi/ESX 4.x: Set NFS.MaxVolumes to 64
- ESXi 5.0/5.1/5.5: Set NFS.MaxVolumes to 256
- ESXi 6.0: Set NFS.MaxVolumes to 256
Under Net, Select Net.TcpipHeapSize: The amount of heap memory, measured in megabytes, which is allocated for managing VMkernel TCP/IP network connectivity. When increasing the number of NFS datastores, increase the default amount of heap memory as well:
- ESXi/ESX 3.x: Set Net.TcpipHeapSize to 30
- ESXi/ESX 4.x: Set Net.TcpipHeapSize to 32
- ESXi 5.0/5.1/5.5: Set Net.TcpipHeapSize to 32
- ESXi 6.0: Set Net.TcpipHeapSize to 32
Under Net, Select Net.TcpipHeapMax: The maximum amount of heap memory, measured in megabytes, which can be allocated for managing VMkernel TCP/IP network connectivity. When increasing the number of NFS datastores, increase the maximum amount of heap memory as well, up to the maximum specific to the version of ESXi/ESX host:
- ESXi/ESX 3.x : Set Net.TcpipHeapMax to 120
- ESXi/ESX 4.x: Set Net.TcpipHeapMax to 128
- ESXi 5.0/5.1: Set Net.TcpipHeapMax to 128
- ESXi 5.5: Set Net.TcpipHeapMax to 512
- ESXi 6.0: Set Net.TcpipHeapMax to 1536
These settings enable the maximum number of NFS mounts for vSphere ESXi/ESX. Changing Net.TcpipHeapSize and/or Net.TcpipHeapMax requires a host reboot for the changes to take effect.
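The same settings can also be changed from the command line. A sketch for an ESXi 5.5 host using the values from the lists above (a host reboot is still required for the heap changes to take effect):

```shell
# Sketch for ESXi 5.5 with the values from the lists above.
# Net.TcpipHeapSize/Max changes only take effect after a host reboot.
esxcli system settings advanced set -o /NFS/MaxVolumes -i 256
esxcli system settings advanced set -o /Net/TcpipHeapSize -i 32
esxcli system settings advanced set -o /Net/TcpipHeapMax -i 512

# Verify the configured and default values
esxcli system settings advanced list -o /NFS/MaxVolumes
```

On ESXi 6.0 the same commands apply, with Net.TcpipHeapMax set to 1536 instead of 512.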
Running VMware with NFS-connected storage is a great and easy-to-manage way to store and manage your VMs in a virtual environment. NFS is much easier to set up than, for instance, iSCSI or FC. I think Ethernet networks are a great, fast and cost-efficient way to connect your VMware environment to your storage arrays, especially with Ethernet networks becoming ever faster: 10 Gbps is (almost) standard in the datacenter, 40GbE is being rolled out and 100GbE is around the corner.
What are your experiences with VMware in combination with NFS storage? Did you run into trouble like APD and/or PDL events, non-responsive datastores? Have you updated your environment yet? What storage and network are you using and did you tune it in any way? Please share your experience in the comment section below! Also if you have specific settings for network vendors or storage vendors please share in the comments so I can add them to the list!