Friday, May 27, 2011

The Tale of Two V's - Virtual Desktop Infrastructure (VDI) & The Virtual Storage Platform (VSP)

Following on from my recent blog about the Virtual Storage Platform (VSP) from Hitachi Data Systems (HDS), I figured it would be a good idea to elaborate on how it will be deployed in a real-life situation.


The client I am currently working with is in the process of deploying Virtual Desktop Infrastructure (VDI). The product is VMware View, and they currently have around 200 of a planned 750 sessions deployed.


Before we get in to the core storage aspects of this project, let’s ask ourselves some key questions:


What are the requirements for getting a VDI project from planning to execution?


VDI holds many merits; however, the infrastructure requirements to run such an environment are substantial. One of the key ideas that gets thrown around when considering VDI is: “Great! We can throw away our current desktops and replace them with thin clients, which are less than half the size of a regular PC, have a lower administration overhead, have no moving parts and, as a result, draw less power”.



Thin client picture reference, courtesy of http://www.hp.com/

This is all well and good: comparing a $1000 desktop to a $400 thin client gives per-seat savings of approximately 60%, on top of centralised administration. However, those savings can soon be absorbed by the back-end infrastructure.

At a "high level", this can include (a rough cost sketch follows the list):

  • Storage - SAN or NAS (We run SAN and are moving to a dynamically pooled, 3-tier solution)
  • Storage Connectivity - Fibre Channel or Ethernet switches (We are moving to 8Gb FC)
  • Network Connectivity (We are moving to 10Gb Ethernet)
  • Server Hosts - Blade or Rack Server (We are using IBM x3850 X5s with Xeon X7560 CPUs (4 sockets with 8 cores each @ 2.26GHz), 256GB RAM & 146GB SAS 2.5” local drives)
  • Licensing - Depends on your licensing agreement with VMware, but this isn’t cheap!
  • Cooling & Power - Dependent on your current environmentals
  • Thin Client Terminals (We have deployed HP)
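
To make the endpoint-versus-back-end trade-off concrete, below is a minimal back-of-envelope sketch in Python. The per-seat prices and the 750-seat count come from the figures above; the back-end outlay is a pure placeholder, not a number from this project.

  # Hedged cost sketch: endpoint savings vs. an assumed back-end outlay.
  SEATS = 750               # planned session count (from above)
  DESKTOP_COST = 1000       # traditional PC, per seat (from above)
  THIN_CLIENT_COST = 400    # thin client, per seat (from above)

  endpoint_saving = SEATS * (DESKTOP_COST - THIN_CLIENT_COST)

  # Placeholder for storage, switching, hosts and licensing spend --
  # NOT a figure from this project; substitute your own quotes.
  backend_outlay = 350000

  net_saving = endpoint_saving - backend_outlay
  print(f"Endpoint saving: ${endpoint_saving:,}")      # $450,000
  print(f"Net after back-end spend: ${net_saving:,}")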

Another thing you cannot afford to discount in a project like this is user acceptance and experience, given that their beloved desktops will be disappearing. The change needs to be seamless and ensure that no functionality or performance is lost. The best thing to do here is deploy a POC (proof of concept) that includes key people from around the organisation, and let acceptance spread organically.


To summarise, you can see the importance of carrying out detailed analysis of cost savings, initial outlay and user experience. Luckily for this particular client, the desktop fleet was approaching 5 years of age and existing infrastructure (server hosts, SAN & storage connectivity) could be leveraged for a POC. A greenfield project is a whole different ball game: SHOW ME THE MONEY!!

How do you build storage for a Virtual Desktop Infrastructure (VDI) project?


When I first approached this, I instinctively contacted the vendor for some ballpark I/O (input/output operations) values. No such luck – the technology was too new.


NB: We started with View 4.0 and have gradually moved to 4.6 as the technology matured and became more widely accepted.


To begin with, a finger was licked and placed in the air…


Then a number of RAID5 (4D+1P) 146GB 15k SAS drives were configured and connected using multi-path 4Gb Fibre Channel links (HDS AMS2500).
We quickly saw the I/O profile for VDI (under View 4.0) and were able to make a call on what was required for the disk build. To cope with the high write load of around 80% writes versus 20% reads, a RAID10 (4D+4D) configuration was deployed. Other key statistics observed were a read cache hit rate of around 50-60% and 10-12 IOPS per Windows XP SP3 VDI session.


(All statistics had around a 25% overhead added to ensure that a future move to Windows 7 remained possible; the sizing sketch below folds this headroom in.)
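
Putting those observed numbers together, the sketch below shows one way to turn the profile into a spindle count. The ~180 IOPS per 15k SAS drive and the RAID10 write penalty of 2 are common rules of thumb, not measured values from this array.

  # Back-of-envelope spindle sizing from the observed VDI profile.
  SESSIONS = 750
  IOPS_PER_SESSION = 12     # observed 10-12 under View 4.0 / XP SP3
  WRITE_RATIO = 0.8         # observed ~80% writes / 20% reads
  READ_CACHE_HIT = 0.55     # observed ~50-60% read cache hit
  OVERHEAD = 1.25           # 25% headroom for a future Windows 7 move
  RAID10_WRITE_PENALTY = 2  # two back-end writes per front-end write
  SPINDLE_IOPS = 180        # rule-of-thumb figure for a 15k SAS drive

  front_end = SESSIONS * IOPS_PER_SESSION * OVERHEAD
  disk_writes = front_end * WRITE_RATIO * RAID10_WRITE_PENALTY
  disk_reads = front_end * (1 - WRITE_RATIO) * (1 - READ_CACHE_HIT)

  back_end = disk_writes + disk_reads
  print(f"Back-end IOPS: {back_end:.0f}")                   # ~19,000
  print(f"Spindles needed: {back_end / SPINDLE_IOPS:.0f}")  # ~106

A spindle count in the hundreds for 750 desktops is exactly what pushed us towards the SSD replicas and tiered pools discussed below.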


NB: To gain an understanding of the underlying VDI disk architecture, see the diagram below for reference (courtesy of http://www.myvirtualcloud.net/):

At this point, after multiplying these figures across 750+ users, we realised that a 'generic' disk architecture wouldn't be enough, and started talking about SSD drives for the replicas and the ability to dynamically pool and tier the storage for the other disks. These two pieces of technology not only reduce the number of spindles required but also allow for huge performance scalability.


NB: See my blog entry here for more information on dynamic provisioning: http://cjrnz.blogspot.com/2011/03/vmware-disk-formats-dynamic.html. Tiering is the use of more than one disk type & RAID type per pool, with pages of data moved between the tiers according to how "hot" the data is. (Page sizes will depend on your storage vendor; a toy illustration follows.)
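
To illustrate the tiering idea only (real arrays such as HDS Dynamic Tiering operate on fixed page sizes and scheduled migration cycles), here is a toy sketch: pages are ranked by recent access counts and the hottest land on the fastest tier. The page names, counts and tier capacities are all invented.

  # Toy tiering placement: hottest pages go to the fastest tier.
  page_access_counts = {"page_a": 950, "page_b": 12,
                        "page_c": 430, "page_d": 3}   # invented values
  # (tier name, capacity in pages); capacities are illustrative only
  tiers = [("SSD", 1), ("15k SAS", 2), ("7.2k SATA", 10**9)]

  ranked = sorted(page_access_counts, key=page_access_counts.get,
                  reverse=True)
  placement = {}
  for name, capacity in tiers:
      for page in ranked[:capacity]:
          placement[page] = name
      ranked = ranked[capacity:]

  print(placement)  # page_a lands on SSD; page_d ends up on SATA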


What about the Standard Operating Environment (SOE)?


You need to ensure your SOE (Gold Master) for VDI users is as refined as possible, with any 'extras' deployed via VMware ThinApp. (The 'thinner' this is, the better your performance will be, not to mention the reduction in administration overheads.)
Once your SOE is under control you can then create a Gold Master in VMware View, which will be used as the source for your replica(s) and finally the linked clones (a rough storage comparison follows).
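
To give a feel for why replicas plus linked clones matter for capacity, here is a rough comparison. Every size and count below is an assumed example value, not a figure from this deployment.

  # Assumed example sizes -- replace with your own measurements.
  SESSIONS = 750
  FULL_CLONE_GB = 20    # assumed size of an independent full desktop
  REPLICA_GB = 20       # read-only replica, same size as the master
  REPLICAS = 8          # assumed replica count across datastores
  DELTA_GB = 2          # assumed average linked-clone delta disk

  full_clones = SESSIONS * FULL_CLONE_GB
  linked = REPLICAS * REPLICA_GB + SESSIONS * DELTA_GB
  print(f"Full clones: {full_clones:,} GB")   # 15,000 GB
  print(f"Linked clones: {linked:,} GB")      # 1,660 GB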


Which Thin Client Should I Use?


Most people think of a thin client as a “dumb terminal” and believe that not much thought or research is required in selecting one. WRONG!
Yes, the processing may have moved to the server, but you still need an easy-to-configure, scalable and manageable desktop unit.


Key items:

  • Central management console for firmware updates
  • Easy to deploy configuration file
  • Easy lockdown of terminal
  • Operating System 
  • VMware View Client Integration
  • Hardware options (multi-screen users, pass-through of devices)
  • Physical Size


What server components are required for VDI?


VDI hosts run the same as hosts in a normal vSphere server environment, so depending on your VDI session workloads you should spec the servers accordingly, and consult the vendor if unsure (a crude consolidation estimate follows below).
On top of all of this is the View Composer software, which is the 'heart' of VDI. This brings together the physical ESX hosts, 'connection broker' servers which direct internal and external session requests, 'security' servers which field VDI requests from outside your organisation (DMZ-bound) and finally the desktop virtual machines.
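
As a sanity check on host counts, here is a crude consolidation estimate built on the x3850 X5 specifications listed earlier (32 cores, 256GB RAM). The per-session RAM footprint and sessions-per-core ratio are assumptions; replace them with your own POC measurements.

  # Crude sessions-per-host estimate -- assumptions, not vendor figures.
  HOST_RAM_GB = 256
  HOST_CORES = 32           # 4 sockets x 8 cores (from the specs above)
  RAM_PER_SESSION_GB = 1.5  # assumed XP SP3 session footprint
  SESSIONS_PER_CORE = 6     # assumed consolidation ratio

  by_ram = HOST_RAM_GB // RAM_PER_SESSION_GB
  by_cpu = HOST_CORES * SESSIONS_PER_CORE
  limit = "RAM" if by_ram < by_cpu else "CPU"
  print(f"Sessions per host: {min(by_ram, by_cpu):.0f} ({limit}-bound)")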


Summary

For this particular customer, VDI has been a great success.
By utilising the existing infrastructure to build a 'true' POC and scoping all new hardware for worst-case scenarios, we were able to architect a high-performing and highly scalable infrastructure model. This translates to a hugely positive user experience, and that in itself is the measure of success for any VDI project!

Key Planning & Consideration Areas:

  • Detailed Hardware Cost Analysis
  • Environmental Benefit Analysis
  • Proof of Concept (POC) Testing
  • Detailed Performance Analysis from the POC
  • Establishing a 'solid' SOE



If you have any questions about the topics I have talked about, please get in contact as I would be more than happy to assist.

Technical Links:

http://www.vmware.com/products/view/features.html
http://en.wikipedia.org/wiki/Storage_virtualization
