Storage optimization with Atlantis
In our quest for native PC performance in a VDI solution we are always looking for better, faster and simpler storage solutions. To improve storage performance, several vendors have introduced different solutions such as controller caching, SSD disks and flash memory. Last week I received an e-mail from a colleague regarding ‘Atlantis Computing Uses Diskless VDI Architecture Made Possible by the Cisco Unified Computing System‘. This got my attention: what is Atlantis and how does this storage optimization work?
The Atlantis storage optimization works at the Windows NTFS protocol layer to offload virtual desktop I/O. When the Microsoft Windows operating system and applications request I/O from storage, Atlantis intercepts the I/O stream and deduplicates it inline before it reaches the storage. Up to 90% of all I/O requests are processed in RAM. This enables the native PC storage performance characteristics which VDI solutions require. VDI solutions typically require significant amounts of storage, in terms of both throughput and capacity. A Windows 7 VDI desktop might use 30 GB of storage per desktop and need a throughput of up to 120 IOPS per desktop during startup and logon periods. A small environment of 100 desktops (full clones) would therefore require 3 TB and 12,000 IOPS. To serve 3 TB of storage I only need 7 SAS disks of 600 GB in a RAID6 configuration, but these will only deliver about 1,000 IOPS, and that's without even taking the RAID penalty and read/write ratios into account. To achieve 12,000 IOPS I need at least 80 15k rpm SAS disks. Even when I use the smallest 146 GB SAS disks I get over 11 TB of space, and because the configuration is IOPS bound, some 8 TB of that space has to remain unused.
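The sizing arithmetic above can be sketched as a small calculation. The per-disk IOPS figure is a rough rule of thumb I'm assuming for a 15k rpm SAS disk; the other numbers come straight from the example:

```python
# VDI storage sizing: capacity-driven vs. IOPS-driven disk counts.
# IOPS_PER_15K_SAS is an assumed rule-of-thumb value (~150 IOPS/disk).
DESKTOPS = 100
GB_PER_DESKTOP = 30            # full clone size
IOPS_PER_DESKTOP = 120         # peak during boot/logon storms
IOPS_PER_15K_SAS = 150         # assumption: one 15k rpm SAS disk

capacity_needed_gb = DESKTOPS * GB_PER_DESKTOP        # 3000 GB = 3 TB
iops_needed = DESKTOPS * IOPS_PER_DESKTOP             # 12,000 IOPS

disks_for_iops = -(-iops_needed // IOPS_PER_15K_SAS)  # ceiling division
raw_capacity_gb = disks_for_iops * 146                # smallest SAS disks

print(f"Capacity needed : {capacity_needed_gb} GB")
print(f"IOPS needed     : {iops_needed}")
print(f"Disks for IOPS  : {disks_for_iops} x 15k rpm SAS")
print(f"Raw capacity    : {raw_capacity_gb} GB, mostly unusable")
```

The point of the sketch: the disk count is dictated by IOPS, not capacity, which is exactly why roughly 8 TB ends up stranded.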
So if you want to improve storage efficiency in a VDI environment, you either have to reduce the number of IOPS needed or improve the throughput of your storage solution. Tuning the Windows operating system to generate fewer IOPS is possible, but will only achieve a minimal reduction. Luckily, over 90% of the IOPS consumed by a Windows desktop service duplicate operating system and application I/O traffic; only a small part is needed to store unique data.
This is the area where Atlantis does its ‘magic’.
Traditional deduplication is performed as a post-process activity, applied to storage volumes after data has been committed to them. While data deduplication significantly reduces the data storage requirements of a virtual desktop environment, it does nothing to reduce the IOPS load on the storage infrastructure: all data must still be written to the storage fabric before deduplication is performed.
Atlantis performs inline deduplication in real time, before any I/O transactions reach the storage fabric. Atlantis claims this eliminates up to 90% of I/O traffic, so the IOPS requirement for the storage infrastructure decreases by a similar percentage.
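To make the difference with post-process deduplication concrete, here is a minimal sketch of the inline technique: each write is hashed before it reaches backing storage, and duplicate blocks only update a reference map, so no physical write I/O is issued for them. This is purely illustrative, not Atlantis code; the class and field names are my own:

```python
# Minimal inline deduplication layer: duplicate blocks never generate
# physical write I/O, only a reference-map update in RAM.
import hashlib

class InlineDedupLayer:
    def __init__(self):
        self.store = {}          # content hash -> block ("physical" storage)
        self.refs = {}           # logical block address -> content hash
        self.logical_writes = 0
        self.physical_writes = 0

    def write(self, lba, block: bytes):
        self.logical_writes += 1
        digest = hashlib.sha256(block).hexdigest()
        if digest not in self.store:
            self.store[digest] = block    # only unique data hits storage
            self.physical_writes += 1
        self.refs[lba] = digest           # duplicates just add a reference

    def read(self, lba) -> bytes:
        return self.store[self.refs[lba]]

# 100 desktops writing the same OS block result in one physical write.
layer = InlineDedupLayer()
os_block = b"\x00" * 4096
for desktop in range(100):
    layer.write((desktop, 0), os_block)
print(layer.logical_writes, layer.physical_writes)   # prints: 100 1
```

Because identical OS and application blocks dominate VDI traffic, eliminating them at write time is what shrinks the IOPS load, not just the capacity footprint.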
The Windows operating system generates disk I/O which it optimizes by design so that data is read from and written to disk sequentially, maximizing overall system performance. In VDI environments there is no direct connection between the desktop operating system and any physical disk; because the disks are shared with other guest operating systems, this operating system exclusivity is no longer valid. The result of combining all these sequential I/O streams is a random I/O pattern that decreases storage and desktop performance and in turn reduces the effectiveness of any available storage cache modules. Because the number of desktops per physical server is much higher than with server virtualization, this effect is amplified.
The Atlantis virtual appliance collects the small-block random I/O and combines it into larger blocks of sequential I/O which are sent to the storage, increasing storage and desktop performance.
Caching is an essential technology for reducing the load on data center storage services. However, conventional caching technologies are often only effective for read transactions. VDI workloads, especially during peak activities such as startup and logon, tend to be up to 80% write-based and thus receive little benefit from read-caching services. In addition, traditional SAN/NAS storage systems use simple block-based caching techniques that lack the NTFS file system awareness needed to efficiently determine what data should be cached and how it should be written to disk. Furthermore, SAN/NAS-based caches are located directly on the storage system, remote from the virtual infrastructure hosting the VDI environment.
The Atlantis virtual appliance uses a virtualized data format they call Flocks (File and Block I/O) that is aware of the Windows NTFS file system, file information and block data storage. This technology enables Atlantis to extract the state of I/O in real time, making it highly efficient at optimizing I/O and intelligently processing I/O traffic locally in memory. As a software virtual appliance, Atlantis can easily be deployed to cache data in memory on the same hypervisor instances that host the virtual desktops. As a result, Atlantis claims it can eliminate up to 90% of I/O at the source, removing the need to send it over a network or write it to disk.
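Why a host-side cache that also absorbs writes matters for an 80% write-heavy workload can be shown with a small sketch. This is a generic write-back cache under my own assumptions (FIFO eviction, arbitrary capacity), not a model of Flocks:

```python
# Host-side write-back cache sketch: writes are absorbed in RAM and only
# dirty blocks ever reach the backend, unlike a read-only cache.
from collections import OrderedDict

class WriteBackCache:
    def __init__(self, backend: dict, capacity_blocks=1024):
        self.backend = backend            # stands in for SAN/NAS storage
        self.cache = OrderedDict()        # lba -> (block, dirty flag)
        self.capacity = capacity_blocks
        self.backend_reads = 0
        self.backend_writes = 0

    def write(self, lba, block: bytes):
        self.cache[lba] = (block, True)   # absorbed in RAM, no backend I/O
        self.cache.move_to_end(lba)
        self._evict()

    def read(self, lba) -> bytes:
        if lba in self.cache:
            return self.cache[lba][0]     # hit: served from local memory
        self.backend_reads += 1           # miss: one backend read
        block = self.backend[lba]
        self.cache[lba] = (block, False)
        self._evict()
        return block

    def _evict(self):
        while len(self.cache) > self.capacity:
            lba, (block, dirty) = self.cache.popitem(last=False)
            if dirty:                     # only dirty data hits the backend
                self.backend[lba] = block
                self.backend_writes += 1

backend = {}
cache = WriteBackCache(backend)
for lba in range(500):                    # 500 writes, all absorbed in RAM
    cache.write(lba, b"data")
print(cache.backend_writes)               # prints: 0, nothing flushed yet
```

A pure read cache would have passed all 500 writes straight through to the storage fabric; absorbing them on the hypervisor is what takes the write-heavy boot and logon storms off the SAN.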
This all sounds very promising but, as we all know, nothing is for free, so what's the catch? One of the things I would like to know is: what is the CPU and memory load caused by the Atlantis virtual appliance? Performing such complex storage operations requires sufficient CPU and memory resources, and this impacts the overall consolidation ratio. If the storage optimization, deduplication and caching use 10-30% of the system resources, you will need additional hardware to host the same number of servers/desktops, and the more resources the appliance needs, the tougher the business case.
Another point of interest is whether Atlantis lives up to the promises made regarding the alleged savings of up to 90%. Of course these are sales numbers, and it's 'up to' 90%. But recently I heard about some Atlantis test results from renowned industry parties which aren't as positive as the sales numbers would have you believe. The first tests do show a huge decrease in read I/O, but they also show an increase in write I/O. Now, I don't want to claim that the myth is busted; further investigation and testing is required. But you should definitely try before you buy.
The founder and driving force behind VMGuru. With over 20 years experience in IT, he now works as a Cloud Management Specialist at VMware Benelux. He worked as technical consultant, pre-sales and solutions architect for several systems integrators.
He’s a long time VMware VCP, VCP Desktop, VCA, VSP and VTSP, vExpert Cloud (2017) and 9 year vExpert (2009 – 2017).