Friday, May 27, 2011

The Tale of Two V's - Virtual Desktop Infrastructure (VDI) & The Virtual Storage Platform (VSP)

Following on from my recent blog about the Virtual Storage Platform (VSP) from Hitachi Data Systems (HDS), I figured it would be a good idea to elaborate on how it will be deployed in a real-life situation.


The current client I work for is in the process of deploying Virtual Desktop Infrastructure (VDI). The product is called “VMware View” and they currently have around 200 of a planned 750 sessions deployed. 


Before we get into the core storage aspects of this project, let’s ask ourselves some key questions:


What are the requirements for getting a VDI project from planning to execution?


VDI has many merits; however, the infrastructure requirements to run such an environment are substantial. One of the key ideas that gets thrown around when considering VDI is: “Great! We can throw away our current desktops and replace them with thin clients, which are less than half the size of a regular PC, carry a lower administration overhead, have no moving parts and, as a result, draw less power”.



Thin Client Picture Reference, courtesy of http://www.hp.com/:

This is all well and good: comparing a $1000 desktop to a $400 thin client gives a saving of approximately 60% per seat, on top of centralised administration, but those savings can soon be absorbed by the back-end infrastructure (a rough worked example follows the list below).

At a “high level”, this can include:

  • Storage - SAN or NAS (We run SAN and are moving to a dynamically pooled, three-tier solution)
  • Storage Connectivity - Fibre Channel or Ethernet Switches (We are moving to 8Gbps FC)
  • Network Connectivity (We are moving to 10Gb Ethernet)
  • Server Hosts - Blade or Server (We are using IBM x3850 X5s with 7560 CPUs (4 sockets with 8 cores each @ 2.26GHz), 256GB RAM & 146GB SAS 2.5” local drives)
  • Licensing – Depends on licensing agreement with VMware, but this isn’t cheap!
  • Cooling & Power – Dependent on current environmentals
  • Thin Client Terminals (We have deployed HP)
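
To put rough numbers on the point above, here is a minimal back-of-the-envelope sketch in Python. The desktop and thin client prices come from the comparison earlier; the back-end figure is a purely hypothetical placeholder used only to show how quickly the per-seat savings can be eaten by the items in the list.

```python
# Rough cost comparison sketch. The $1000 desktop and $400 thin client figures
# come from the comparison above; BACKEND_COST is a hypothetical placeholder
# used only to illustrate how back-end spend erodes the per-seat saving.

SEATS = 750
DESKTOP_COST = 1000
THIN_CLIENT_COST = 400
BACKEND_COST = 300_000        # hypothetical: storage, switching, hosts, licensing

per_seat_saving = DESKTOP_COST - THIN_CLIENT_COST          # $600, i.e. ~60%
gross_saving = per_seat_saving * SEATS
net_saving = gross_saving - BACKEND_COST

print(f"Per-seat saving: ${per_seat_saving} ({per_seat_saving / DESKTOP_COST:.0%})")
print(f"Gross saving over {SEATS} seats: ${gross_saving:,}")
print(f"Net of a hypothetical back-end spend: ${net_saving:,}")
```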

Another thing you cannot afford to discount in a project like this is user acceptance and experience, given that their beloved desktops will be disappearing. The change needs to be seamless and ensure that no functionality or performance is lost. The best thing to do here is deploy a POC (proof of concept) that includes key people from around the organisation and let the acceptance spread organically.


In summary, you can see the importance of carrying out detailed analysis of cost savings, initial outlay and user experience. Luckily for this particular client, the desktop fleet was approaching five years of age and existing infrastructure (server hosts, SAN and storage connectivity) could be leveraged for a POC. A greenfield project is a whole different ball game: SHOW ME THE MONEY!!

How do you build storage for a Virtual Desktop Infrastructure (VDI) project?


When I first approached this, I instinctively contacted the vendor for some ballpark I/O (input / output operations) values. No such luck – the technology was too new.


**NB: We started with View 4.0 and have gradually moved to 4.6 as the technology matured and was more widely accepted.


To begin with, a finger was licked and placed in the air…


Then, a number of RAID5 (4D+1P) 146GB 15k SAS drives were configured and connected using multi-path 4Gbps Fibre Channel links (HDS AMS2500).
We quickly saw the I/O profile for VDI (under View 4.0) and were able to make a call on what was required for the disk build. To cope with the high write load of around 80% writes versus 20% reads, a RAID10 (4D+4D) configuration was deployed. Other key statistics observed were a read cache hit rate of around 50-60% and 10-12 IOPS per Windows XP SP3 VDI session.


(All statistics had around a 25% overhead added to ensure that a move to Windows 7 in the future was possible)
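
For anyone wanting to reproduce the arithmetic, here is a minimal sizing sketch in Python using the observed figures above. The per-spindle IOPS number and the RAID write penalties are generic rules of thumb, not measurements from this build.

```python
# Back-of-the-envelope spindle sizing from the observed VDI profile above.
# DRIVE_IOPS and the RAID write penalties are generic rules of thumb,
# not figures from this particular environment.

SESSIONS = 750
IOPS_PER_SESSION = 12          # observed 10-12 IOPS per Windows XP SP3 session
WRITE_RATIO = 0.80             # ~80% writes vs ~20% reads
READ_CACHE_HIT = 0.55          # observed 50-60% read cache hit
HEADROOM = 1.25                # ~25% overhead for a future Windows 7 move
DRIVE_IOPS = 180               # assumed figure for a 15k SAS spindle

RAID_WRITE_PENALTY = {"RAID10 (4D+4D)": 2, "RAID5 (4D+1P)": 4}

front_end = SESSIONS * IOPS_PER_SESSION * HEADROOM
disk_reads = front_end * (1 - WRITE_RATIO) * (1 - READ_CACHE_HIT)
writes = front_end * WRITE_RATIO

for layout, penalty in RAID_WRITE_PENALTY.items():
    back_end = disk_reads + writes * penalty
    spindles = -(-back_end // DRIVE_IOPS)      # ceiling division
    print(f"{layout}: ~{back_end:,.0f} back-end IOPS, ~{spindles:.0f} spindles")
```

The gap between the two RAID layouts is exactly why the write-heavy profile pushed us to RAID10.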


NB: To gain an understanding of the underlying VDI disk architecture, see the diagram below for reference (courtesy of http://www.myvirtualcloud.net/):

At this point, after multiplying these figures across 750+ users, we realised that a 'generic' disk architecture wouldn't be enough and started talking about SSD drives for the replicas, plus the ability to dynamically pool and tier our storage solution for the other disks. These two pieces of technology not only reduce the number of spindles required but also allow for huge performance scalability.


NB: See my blog entry here for more information on the topic of dynamic provisioning: http://cjrnz.blogspot.com/2011/03/vmware-disk-formats-dynamic.html. Tiering is the use of more than one disk type and RAID type per pool, with pages of data then moved between the tiers according to how "hot" that data is. (Page sizes will depend on your storage vendor.)
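
As a concrete illustration of the tiering concept just described, here is a minimal Python sketch that ranks pages by their recent I/O count and fills the fastest tier first. The tier names, capacities and heat values are illustrative only; a real array does this internally, on its own page size and schedule.

```python
# Minimal illustration of heat-based tiering: rank pages by recent I/O count
# and promote the hottest pages into the fastest tier until it is full.
# Tier names, capacities and the fake heat values are illustrative only.

from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    capacity_pages: int

def place_pages(page_heat: dict[int, int], tiers: list[Tier]) -> dict[str, list[int]]:
    """Assign page IDs to tiers, hottest pages to the fastest tier first."""
    ranked = sorted(page_heat, key=page_heat.get, reverse=True)
    layout: dict[str, list[int]] = {t.name: [] for t in tiers}
    cursor = 0
    for tier in tiers:                        # tiers listed fastest to slowest
        take = ranked[cursor:cursor + tier.capacity_pages]
        layout[tier.name].extend(take)
        cursor += len(take)
    return layout

# Example: 10 pages with made-up I/O counts spread across three tiers.
heat = {page_id: (page_id * 37) % 100 for page_id in range(10)}
tiers = [Tier("SSD", 2), Tier("SAS 15k", 4), Tier("SATA", 10)]
print(place_pages(heat, tiers))
```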


What about the Standard Operating Environment (SOE)?


You need to ensure your SOE (Gold Master) for VDI users is as refined as possible, with any 'extras' being deployed via VMware ThinApp. (The 'thinner' this is, the better your performance will be, not to mention the reduction in administration overhead.)
Once your SOE is under control you can then create a Gold Master in VMware View, which will be used as the source for your replica(s) and, finally, the linked clones.
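
To make the replica and linked-clone relationship a little more concrete, here is a quick capacity sketch in Python. All of the sizes and the clones-per-replica figure are hypothetical placeholders rather than numbers from this deployment.

```python
# Quick capacity sketch for the gold master -> replica -> linked clone chain.
# All sizes and ratios below are hypothetical placeholders, not figures
# from this deployment.

GOLD_MASTER_GB = 20           # assumed size of a thin, refined SOE image
CLONES = 750
CLONES_PER_REPLICA = 128      # assumed clones served per replica/datastore group
DELTA_GB_PER_CLONE = 3        # assumed delta-disk growth between refreshes

replicas = -(-CLONES // CLONES_PER_REPLICA)       # ceiling division -> 6
replica_tier_gb = replicas * GOLD_MASTER_GB       # small and read-heavy: suits SSD
clone_tier_gb = CLONES * DELTA_GB_PER_CLONE       # larger and write-heavy

print(f"{replicas} replicas -> ~{replica_tier_gb} GB on the replica tier")
print(f"{CLONES} linked clones -> ~{clone_tier_gb} GB of delta capacity")
```

The shape of the result is the point: the replica tier stays small and read-heavy (a natural fit for SSD), while the linked-clone deltas carry the bulk of the capacity and the write load.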


Which thin client should I use?


Most people think of a thin client as a “dumb terminal” and believe that not much thought or research is required in selecting one. WRONG!
Yes, the processing may have been moved to the server, but you still need an easy-to-configure, scalable and manageable desktop unit.


Key items:

  • Central management console for firmware updates
  • Easy to deploy configuration file
  • Easy lockdown of terminal
  • Operating System 
  • VMware View Client Integration
  • Hardware options (multi-screen users, pass-through of devices)
  • Physical Size


What server components are required for VDI?


VDI hosts run in the same way as hosts in a normal vSphere server environment, so depending on your VDI session workloads you should spec the servers accordingly and consult the vendor if unsure.
On top of all of this sits the View Composer software, which is the 'heart' of VDI. It brings together the physical ESX hosts, the "connection broker" servers which direct internal and external session requests, the "security" servers which field VDI requests from outside your organisation (DMZ bound) and, finally, the desktop virtual machines.
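
As a rough illustration of how the host count falls out of the session workload, here is a minimal sketch in Python using the server specification mentioned earlier. The per-session RAM figure and the vCPU consolidation ratio are assumptions for illustration, not measured values from this environment.

```python
# Rough host-count sketch using the host spec mentioned earlier in this post.
# SESSION_RAM_GB and SESSIONS_PER_CORE are assumptions for illustration only.

import math

SESSIONS = 750
HOST_CORES = 4 * 8            # 4 sockets x 8 cores, per the host spec above
HOST_RAM_GB = 256
SESSION_RAM_GB = 1.5          # assumed average for a Windows XP session
SESSIONS_PER_CORE = 6         # assumed vCPU consolidation ratio
SPARE_HOSTS = 1               # N+1 for failover

by_ram = math.ceil(SESSIONS / (HOST_RAM_GB / SESSION_RAM_GB))
by_cpu = math.ceil(SESSIONS / (HOST_CORES * SESSIONS_PER_CORE))
hosts = max(by_ram, by_cpu) + SPARE_HOSTS
print(f"RAM-bound: {by_ram} hosts, CPU-bound: {by_cpu} hosts -> deploy {hosts}")
```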


Summary

For this particular customer, VDI has been a great success.
By utilising the existing infrastructure to build a 'true' POC and scoping all new hardware for worst-case scenarios, we were able to architect a high-performing and highly scalable infrastructure model. This translates to a hugely positive user experience, and that in itself is the measure of success for any VDI project!

Key Planning & Consideration Areas:

  • Detailed cost analysis around hardware costs
  • Environmental benefit analysis
  • Proof of concept (POC) testing
  • Detailed performance analysis from the POC
  • Establishing a 'solid' SOE



If you have any questions about the topics I have talked about, please get in contact as I would be more than happy to assist.

Technical Links:

http://www.vmware.com/products/view/features.html
http://en.wikipedia.org/wiki/Storage_virtualization

Hitachi Data Systems Virtual Storage Platform (VSP)

With the purchase order now received at the vendor's end, I’ve decided to write an article about the new enterprise VSP storage array from Hitachi Data Systems (HDS).
Over the past few months I have been researching a number of different enterprise arrays in the hope of meeting some stringent criteria:
  • Green (not in colour but in terms of its carbon footprint – this means power draw, cooling and hardware density)
  • Scalability
  • Redundancy
  • Ability to mix with existing arrays (non vendor specific)
  • Support for mainframe connectivity (FICON point to point)
  • Storage Tiering (Tiering of data across multiple storage technologies such as SSD, SAS and SATA so that high I/O is dynamically elevated to the correct tier)
  • Thin Provisioning (Ability to only allocate storage that is actually used and reclaim overheads)
  • VAAI (vStorage APIs for Array Integration, surfaced through vCenter). For those of you who aren’t aware what this means, it involves three different VMware APIs that ultimately improve efficiency:
  1. Hardware Assisted Locking – Allows more than one ESX host to access and write to a LUN/Datastore simultaneously by locking at the block level rather than reserving the whole LUN, assisting in operations such as vMotion, creating new VMs and deploying from template.
  2. Write Same – When writing to a VMFS LUN/Datastore, blocks must first be zeroed out using SCSI write commands to ensure data integrity. By offloading these repetitive writes of identical zero blocks to the array, tasks such as formatting and reallocation are dramatically reduced in terms of I/O – up to 10x.
  3. Full Copy – Typically when VMs are cloned, Storage vMotioned or deployed from template, the data must first be read by the host before being written back to the array. This is no longer the case, as the array can now handle the copy itself, reducing times by more than 50% (a simple worked example follows below).
**If you’d like to read a good article which references actual performance gains from the technology, go here: http://www.yellow-bricks.com/2011/03/24/vaai-sweetness/
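
As a simple worked example of why the Full Copy offload matters, the sketch below estimates the fabric traffic a clone generates when the host has to do the copy itself. The VMDK size and the effective link throughput are illustrative assumptions, not measured figures.

```python
# Worked example for the Full Copy offload: without VAAI the host reads every
# block and writes it back, so the data crosses the fabric twice; with the
# offload the copy happens inside the array. Figures are illustrative only.

VMDK_GB = 40                  # assumed size of the source VMDK
LINK_GBPS = 8                 # 8Gbps Fibre Channel
EFFECTIVE_GB_PER_S = LINK_GBPS / 10   # rough allowance for encoding/protocol overhead

fabric_traffic_gb = VMDK_GB * 2       # read to the host, then write back to the array
seconds_on_fabric = fabric_traffic_gb / EFFECTIVE_GB_PER_S

print(f"Cloning a {VMDK_GB}GB VMDK without Full Copy moves ~{fabric_traffic_gb}GB "
      f"over the fabric (~{seconds_on_fabric:.0f}s at {LINK_GBPS}Gbps); "
      f"with the offload none of that data touches the host.")
```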
 
The criteria above are specific to this customer’s requirements, but I would imagine at least 80% of them are relevant for most large corporates who employ SAN storage and are looking to create a truly dynamic and scalable storage solution.

As mentioned above, I looked at a few different arrays such as the IBM V7000 and the NetApp FAS3210. Now, I know a lot of you will be asking the question… where is the VMAX or VPLEX from EMC? Not this time, unfortunately (I have a background in EMC SAN, so I was disappointed not to have at least seen a demo).

Each of these arrays displayed an impressive feature set; however, at the end of the day it was the VSP from HDS that took the honours. Admittedly this box isn’t the best looking with its green face-plates (image below), but its features have truly put it ahead of the class. Just to name a few of the stand-outs:

  • 2.5” 10k drives for improved array density (roughly equivalent in performance to 3.5” 15k). Up to 256 drives in the initial rack with controllers and 384 in each subsequent rack.
  • VAAI functionality (first to market out of all the vendors, which begs the question… where were you, EMC? I thought you guys owned VMware???) http://www.theregister.co.uk/2011/02/08/hds_vaai_first/
  • 6Gbps SAS switched back-end
  • 8Gbps Fibre Channel connectivity
  • Ability to reverse the role of front-end directors (FEDs) into back-end directors (BEDs)
  • Priceless peace of mind knowing that Hitachi Data Systems use disks from Hitachi Global Storage Technologies. End-to-end manufacturing has its advantages!
  • Scales up to 1TB of flash
**NB: The unit still needs 8 x 32A feeds (2 for each of the 4 PDUs)

With the array only a few weeks away from arriving on the floor, it’s time to begin running fibre and finalising its initial uses. For this particular customer we will be rolling out its features in phases, and the first phase includes:

  • Dynamic Tiering
  • Dynamic Pooling
  • FED to BED virtualization for existing arrays

In my next blog I will talk about the new-found relationship this array will have with Virtual Desktop Infrastructure (VDI): 750 x Windows XP 32-bit desktops running on 6 vSphere 4.1 hosts with, you guessed it… a VSP at the back end!

VSP Image Reference Courtesy of HDS.COM: