Friday, February 26, 2010

The Future of VDI in 2010

There is doubtless going to be a lot of traction in the VDI arena in the coming year, fuelled by Windows 7 and the continued uptake of server virtualization. I see many POCs in progress and organizations asking how VDI can fit their user demographics and business profile, and I have to admit it makes sense to at least put all this on the table and discuss it.

I see Citrix and VMware having the majority play in this arena due to the maturity, functionality and scalability of their product suites.

I do not want to get into the Terminal Server versus VDI debate; all I will say is that I believe they have different use cases and will coexist in most environments, with a shift towards a true managed desktop. I have just watched some interesting interviews with VMware Desktop CTO Scott Davis and Citrix Desktop CTO Harry Labana on their views of the current and future roadmaps for their products.
They both agree that VDI is not the finished article and that there will be interesting developments around client hypervisors this year.

I also agree that desktop virtualization involves a lot of user interaction, as opposed to server virtualization which has minimal interaction, and this creates its own set of problems. I think a truly managed VDI desktop needs a layer-cake approach, separating the OS, applications, user data and profiles, to be truly effective, but this means using third-party products like AppSense and App-V, which brings the CAPEX up considerably.
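To make the layer-cake idea concrete, here is a minimal toy sketch (not any vendor's actual mechanism; all layer names and contents are illustrative) of composing a delivered desktop from independently managed layers, where more user-specific layers take precedence:

```python
# Toy model of the "layer cake" approach: the desktop a user sees is the
# composition of separately managed layers, applied in order of precedence.
# All names here are hypothetical, purely for illustration.

os_layer      = {"kernel": "Win7", "wallpaper": "corporate-blue"}
app_layer     = {"office": "2007", "browser": "IE8"}       # e.g. streamed via App-V
profile_layer = {"wallpaper": "family-photo"}              # e.g. managed by AppSense
user_data     = {"documents": ["budget.xls"]}

def compose_desktop(*layers):
    """Merge layers left to right; later (more user-specific) layers win."""
    desktop = {}
    for layer in layers:
        desktop.update(layer)
    return desktop

desktop = compose_desktop(os_layer, app_layer, profile_layer, user_data)
print(desktop["wallpaper"])   # the profile layer overrides the OS default
```

The point of the separation is that patching the OS means touching only `os_layer`; the application, profile and user-data layers are simply reapplied on top afterwards.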

The main problem I see with Citrix XenDesktop and VMware View is that their disk provisioning technologies (Provisioning Server and View Composer) do not really do what it says on the tin. The main goal of these provisioning technologies is to have a "Gold Image" that saves disk space and streamlines operations around deployment and patching.

Recent advancements in vSphere, with thin provisioning at the virtual machine level, make disk space less of a priority. The main bugbear is that every gold image consists of a master image plus a linked differential file: if you need to update the master image, you lose any information in the differential file, because the link works at block level. This causes issues if you want a persistent image, since user-defined data is kept in the delta file; whenever the master is recomposed you will see the same issues around that data again.
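A crude toy model shows why a block-level delta cannot survive a master update (this is an analogy, not the actual PVS or View Composer on-disk format; all names are hypothetical):

```python
# Toy model of a linked clone: a shared read-only master image plus a
# per-desktop delta that overrides individual blocks (copy-on-write at
# block level). Purely illustrative - not any vendor's real format.

master = {0: "os-v1", 1: "apps-v1", 2: "empty"}   # shared gold image
delta  = {2: "user-data"}                         # blocks this desktop wrote

def read_block(n):
    """A linked clone reads its own delta first, then falls back to the master."""
    return delta.get(n, master[n])

print(read_block(0))   # served from the master
print(read_block(2))   # served from the delta

# Recomposing (patching) the master replaces it wholesale. Because the
# delta only records raw block overrides, there is no way to merge it
# with the new master, so the safe move is to discard it - and any
# user-defined data stored in it is lost.
master = {0: "os-v2", 1: "apps-v2", 2: "empty"}
delta  = {}                                       # delta discarded on recompose
print(read_block(2))   # back to the empty master block
```

This is exactly why a block-level delta fights against persistence: the delta has no notion of files or users, only overridden blocks, so it cannot be reconciled with a changed master.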

Brian Madden has written a great explanation here:

It is far simpler to either take a one-to-one approach to your image or use non-persistent "gold images".

Let's see how things pan out this year...



Copyright 2009 Virtually Anything.