Wednesday, December 29, 2010

Citrix XenDesktop in 2011


I think 2011 will see a tremendous amount of traction in desktop virtualization. I say desktop virtualization and not VDI because I believe that to truly absorb the benefits of a virtualized desktop you need to use it as an enabler for more efficient, centralized desktop management. To achieve these goals you need to move from a monolithic design to a layered approach, whereby the user environment, operating system and applications are all separated into layers. These are then dynamically delivered to the end user on demand and from any device.
Citrix have recently released XenDesktop 5 and also made fundamental changes to their supporting suite of products and how they integrate into application and desktop delivery. Here is how I see things panning out....

Hello XenDesktop 5 - Goodbye XenDesktop 4
XenDesktop 5 - XenDesktop has undergone a complete overhaul: IMA has been removed and replaced with a SQL database, which also means no mixed-mode farms, but greater functionality and scalability. Proper SQL management and business continuity/recovery techniques must be in place, as any failure on the SQL side means a loss of desktop connections. Desktop Director is a smart way for first-line support staff to manage and support any XenDesktop 5 VDI infrastructure.
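
If you want to check which SQL database the controllers depend on, the XenDesktop 5 PowerShell SDK makes this a one-liner. A minimal sketch, assuming the SDK snap-ins are present on a controller (cmdlet names as I recall them from the broker snap-in, so treat this as illustrative):

Add-PSSnapin Citrix.Broker.Admin.V1   # XenDesktop 5 broker SDK snap-in
Get-BrokerSite                        # basic site details
Get-BrokerDBConnection                # the SQL connection string the controller is using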

Hello Machine Creation Services - Goodbye Citrix Provisioning Server
I have always found Provisioning Services clunky and extremely complex. The requirement for physical servers, and the way it scales, has always made me think there has to be a better way to manage images. With MCS quick deploy you simply point to a master image and linked clones are created using an identity disk and a difference disk, with the Active Directory configuration handled by the AD identity service. Citrix suggest you use MCS for smaller deployments and Provisioning Server for larger deployments, as it scales better and uses different write-cache mechanisms. I would suggest a different approach here, for two reasons.

1. The use of HSPs (hybrid storage pools) and proprietary storage software that augments the current SAS/SATA storage stack and utilizes SSDs for smart disk caching, thus providing the IOPS and performance VDI environments demand. Products such as Nexenta's ZFS-based NexentaStor, FalconStor's NSS, and disk systems from Dell and NetApp provide different features around SSD. Fusion-io's ioDrive takes a different approach and sits directly on the PCIe bus, so there is no SAS controller overhead; they OEM these to Dell and HP and the IOPS you can pull from these devices are huge. A typical SMB/SME environment that has a requirement for, say, 500 desktops, each requesting an average of 10 IOPS on a 70/30 read/write ratio, would require 28 SAS 15k disks to supply 5,000 IOPS. This could easily mean four disk shelves, and the CAPEX becomes quite expensive. A hybrid SSD solution would require less hardware and is great in terms of providing the read IOPS needed to cover boot storms. Something like NexentaStor also allows the use of SSDs in agnostic disk shelves. Although SSDs are more expensive per disk, they also mean fewer disks and thus reduced CAPEX and carbon footprint.
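
To show where the 28-spindle figure comes from, here is the back-of-an-envelope arithmetic as a quick PowerShell sketch; it assumes roughly 180 IOPS per 15k SAS spindle and ignores any RAID write penalty, which would push the count higher still:

$desktops = 500; $iopsPerDesktop = 10
$spindleIops = 180                              # rough figure for a single 15k SAS disk
$frontEndIops = $desktops * $iopsPerDesktop     # 5,000 IOPS in total
[math]::Ceiling($frontEndIops / $spindleIops)   # ~28 spindles before any RAID write penalty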

2. XenServer IntelliCache is supported in XenServer 5.6 FP1 but not yet with XenDesktop 5. IntelliCache utilizes local SSD storage to create a smart disk read cache and uses NFS to store the gold image. The great thing about IntelliCache is that it dynamically looks at desktop read patterns and caches them locally. If you run 100 Windows 7 desktops you will find that the read patterns are very similar, so the net effect is fast disk access and read data deduplication. IntelliCache requires MCS, so again there is no need for Provisioning Server.
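
As I understand it, the cache is enabled per host from the XenServer console along these lines; the host name and SR UUID are placeholders for your local SSD-backed storage, and this is a sketch rather than the definitive procedure:

xe host-disable host=<hostname>
xe host-enable-local-storage-caching host=<hostname> sr-uuid=<local-SSD-SR-uuid>
xe host-enable host=<hostname>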


Hello NFS - Goodbye iSCSI and Fibre Channel
Citrix recommend that XenDesktop be run from an NFS platform; the rationale is that file-based I/O works better for VDI than block-based I/O, and when you start to look at large numbers of desktops you can run into issues with SCSI locking. I also think this is a smart move, as enterprise storage arrays are often used to house virtual desktops, providing bells and whistles such as synchronous replication, VSS and dual connectivity. These are generally not needed for VDI disk access; the main requirements I see are NFS, smart caching and deduplication. High availability is a topic of discussion for VDI environments, but my view is that if you take a layer-cake approach and implement stateless desktops, you simply need to restore the gold image and use MCS to recreate the desktops in the event of a failure. NetApp and Nexenta provide some simple, cost-effective NFS solutions with these features.
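
For reference, presenting an NFS export to a XenServer pool as a shared storage repository is a single xe command; the filer address, export path and host UUID below are placeholders:

xe sr-create host-uuid=<pool-master-uuid> type=nfs shared=true content-type=user name-label="VDI NFS storage" device-config:server=<filer-address> device-config:serverpath=/vol/vdi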

Hello Access Gateway 5.0 VPX - Goodbye Secure Gateway
Secure remote access is a critical component of any XenDesktop solution, driven today by the consumerization of IT and the varied spectrum of mobile devices. Typically in the past you either used client VPN access or an SSL VPN such as Secure Gateway. Secure Gateway runs on Windows and is quite limited in its functionality. With CAG 5.0 we now have a Linux appliance that includes enterprise features such as HA, SmartAccess functionality and also support for Receiver and two-factor authentication. A XenDesktop and XenApp platform licence is included with the appliance, so this is a smart and cost-effective move in my opinion. Crucially, this model is also available as a XenServer and VMware virtual appliance. I am a great advocate of virtual appliances where possible, as they bring the ease of deployment, scalability and management that sits well in a datacenter infrastructure.

*Citrix have also just released Branch Repeater VPX as a VMware virtual appliance; when you need QoS, data compression and de-duplication, or plan to implement softphones and VoIP, these are a great fit.

Hello Receiver and Delivery Services - Goodbye Dazzle
Dazzle is replaced by a self-service plug-in in Receiver and a nice new iTunes-like interface; this, when combined with Citrix Merchandising Server, provides a modular and manageable delivery mechanism for applications and plug-ins. The Receiver front end has a similar look and feel across all platforms, so as we look at multi-device access we get a similar end-user experience whether the desktop is accessed from a Mac, an iPad or a thin client. I think this makes sense when we start to look at Google Chrome OS, Nirvana Phones and Open Cloud Access, which are all on the horizon.
Wednesday, October 13, 2010

Veeam Backup Best Practice


If you are going to run a physical-to-virtual conversion, make sure you run a defrag before the conversion. The reason is that when the VM is backed up via CBT it will produce large incremental .vrb files if there is file fragmentation. This works on a 1MB block size, so a single 1KB change means a 1MB increment; keeping file layout as linear as possible therefore helps reduce the size of the .vrb file.
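
As a reminder, on a Windows 2003 source machine the defrag can be forced from a command prompt before the conversion (the switches differ slightly on newer Windows versions):

defrag C: -f -v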

Page files on Windows VMs are backed up by default in a full Veeam backup; as the data contained in the page file is inconsistent, this in turn produces large incremental .vrb files. This is especially an issue with something like Exchange, which uses database caching. To solve this, create a separate VMDK, place the Windows page file on that disk, and exclude the disk when the backup job is created.
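
Adding the dedicated page-file disk is trivial from PowerCLI. A minimal sketch with a hypothetical VM name and size; you then move the page file onto the new disk inside Windows and untick that disk in the Veeam job:

$vm = Get-VM "EXCH01"                        # hypothetical VM name
New-HardDisk -VM $vm -CapacityKB (4GB/1KB)   # adds a 4GB VMDK to hold the Windows page file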

Make sure you follow this KB article when creating datastores for virtual machines that will be backed up via Veeam, or you may have problems when the snapshot is created.

    Tuesday, June 15, 2010

    Snapshot Script for vSphere using yUML


    I have found a really cool script for presenting a graphical view of a snapshot chain within vSphere via PowerCLI: http://www.lucd.info/2010/03/31/uml-diagram-your-vm-vdisks-and-snapshots/
    I have amended the script to work in a corporate environment behind a proxy server; you simply need to change the following lines to reflect your target server name and proxy address.

    $vmName = "Servername"

    $diagram = Get-UMLDiagram $vmName
    Get-yUMLDiagram $diagram "C:\test\yUML.jpg" -show -proxy "10.*.*.*:8080"
    Snapshot



    Tuesday, April 06, 2010

    The Changing face of VDI


    I have spoken about this before, but I think there will be a huge amount of traction in the VDI space over the next 18 months. The main drivers are below:
    1. Windows 7 adoption - a catalyst for change and a great product; it makes sense to at the very least evaluate VDI when deploying it.
    2. XenDesktop 4 / VMware View - both these products are now on release 4 and are much more mature.
    3. VECD - This has now been scrapped by Microsoft for customers running Software Assurance.
    4. Unidesk - a product that ticks all the boxes when it comes to a fully managed desktop solution to dovetail with VDI. It's in beta at the moment, but I am testing it as we speak and will come back with a full review.
    5. Apple iPad - I think this will be useful in some select business environments when combined with a Citrix client.
    6. Citrix XenServer / Hyper-V - upcoming memory overcommit support means more desktop density and thus better TCO.
    Let's see how things pan out....
    Friday, March 05, 2010

    VMotion in vSphere


    I have just read something which got me thinking: should you increase the number of concurrent VMotions on a vSphere host from the current limit of two per host? This can easily be done by editing the vpxd.cfg file on your Virtual Center server, as in this post: http://www.boche.net/blog/?p=806. I tend to think the limit is there for a reason, and in my experience I would advise against changing it.
    OK, so what is my thinking behind this? Well, if you examine the actual VMotion process there are some caveats in there that could cause performance issues if there is a lot of I/O (I am thinking database and application servers here).

    Below is the VMotion process in depth, as I understand it:

    1: Command sent to VC to check prerequisites 
    2: Provision new VM on target host
    3: Precopy memory from source to target, with ongoing memory changes logged in a memory bitmap
    4: Quiesce VM on the source host and copy memory bitmap to target host
    5: Start VM on target host
    6: "Demand page" the source VM when applications attempt to read/write modified memory
    7: Note - The new VM comes up before ALL the memory is copied over. We can do this, because once the initial copy has taken place, subsequent changes to the app touch only a fraction of all the memory pages.
    8: "Background page" the source VM until all memory has been successfully copied
    9: Delete VM from source host
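
    For reference, a single VMotion is kicked off from PowerCLI as below; the VM and host names are hypothetical:

    Get-VM "EXCH01" | Move-VM -Destination (Get-VMHost "esx02.mydomain.local")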

    I think the important issue here is point 5: the virtual machine starts on the new vSphere host BEFORE all of its memory has been copied over.

    I have seen issues in a production environment whereby you VMotion, from a host running at high memory utilisation, a guest that has a lot of in-flight I/O, and it becomes very slow and unresponsive. In essence, when the VM is migrated a memory delta is created on the source host and then copied to the target host; if there is a shortage of physical memory on the source host, a disk-based swap file will be used, which is considerably slower, hence the possible performance issues.

    So to sum up: the more concurrent VMotions you run, the more risk you have of problems.
    Friday, February 26, 2010

    The Future of VDI in 2010

    There is doubtless going to be a lot of traction in the VDI arena in the coming year, fuelled by Windows 7 and the continued uptake of server virtualization. I see many POCs in progress and operations teams asking how VDI can fit with their user demographics and business profile, and I have to admit it does make sense to at least put all this on the table and discuss it.

    I see Citrix and VMware having the majority play in this arena due to the maturity, functionality and scalability of their product suites.

    I do not want to get into the discussion of Terminal Server versus VDI; all I will say is that I believe they have different use cases, that they will coexist in most environments, and that there will be a shift towards a true managed desktop. I have just watched some interesting interviews with VMware desktop CTO Scott Davis and Citrix desktop CTO Harry Labana on their views of the current and future road maps for both products.
    They both agree that VDI is not the finished article and that there will be interesting developments around client hypervisors this year.

    I also agree that desktop virtualization involves a lot of user interaction, as opposed to server virtualization which has minimal interaction, and this creates its own set of problems. I think for a truly managed VDI desktop you need to take a layer-cake approach to the OS, applications, user data and profiles to be truly effective, but this means using third-party products like AppSense and App-V, which brings the CAPEX up considerably.

    The main problem I see with Citrix XenDesktop and VMware View is that their disk provisioning technologies (Provisioning Server and View Composer) do not really do what it says on the tin. The main goal of these provisioning technologies is to have a "gold image" to save disk space and to make operations around deployment and patching more streamlined.

    Recent advancements in vSphere with thin provisioning at the virtual machine level make disk space less of a priority. The main bugbear is that every gold image has a master image and a linked differential file: if you need to update the master image you lose any information in the differential file, as this works at block level. This causes issues if you want a persistent image, because user-defined data is kept in the delta file, and if the master is recomposed you will see the same problem around this data again.

    Brian Madden has written a great explanation here:


    It is far simpler either to take a one-to-one approach to your image or to use non-persistent "gold images".

    Let's see how things pan out this year...
    Tuesday, February 09, 2010

    Understanding VSS implementation in VMware Backup and Replication products

    VSS can be a nightmare to fully get to grips with in VMware backups; Scott Lowe wrote a great interpretation on his blog here.

    http://blog.scottlowe.org/2010/02/09/partner-exchange-2010-session-techbc0320/trackback/ 

    I would like to think I have a good understanding of this, so here is my view of VSS and VMware in a simplistic form.

    In this example I will use Veeam with the VMware VSS provider backing up Windows 2003 running Microsoft Exchange. OK, so what is VSS? VSS is simply a framework, introduced by Microsoft from Windows XP/2003 onwards, that coordinates with backup applications to produce a consistent and reliable copy of data. A VSS backup will be application consistent as opposed to crash consistent. A good analogy is to think of an application-consistent backup as a manual shutdown of all services followed by a copy, and a crash-consistent backup as simply pressing the power-off button on your server (good luck). The framework consists of three main components, as below.
    1. Requester - the backup application (Veeam)
    2. Provider - VMware Tools
    3. Writer - the application (Exchange)
    OK so how does this fit together in the above scenario?
    1. Veeam kicks off a backup and sends a message via the Virtual Center SDK to locate the machine and prepare for a snapshot.
    2. Virtual Center locates the machine and sends a message via the VSS provider component in VMware Tools to start the Microsoft Volume Shadow Copy service.
    3. The Microsoft Volume Shadow Copy service will enumerate its VSS writers and ask them to prepare for a copy backup.
    4. The Exchange VSS writer coordinates with the Exchange core components, halts I/O, flushes any transactions in memory and then notifies the VSS provider that all is OK.
    5. Virtual Center will proceed and create a snapshot.
    6. Veeam will now have access to a read-only copy of the VMDK and all writes will be directed to the newly created delta file.
    The VSS writer is a crucial part of this framework, as it deals with making the data consistent. Another analogy is to think of the writer as an airline pilot going through a checklist to confirm the plane is safe to take off: if anything is not OK, the plane does not take off (sorry, but I do like an analogy!).
    So in essence, if the writer cannot hold off the I/O or quiesce the data, the backup will fail.
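
    A quick way to see whether the writers inside the guest are healthy is the standard Windows command below; any writer in a failed state is a likely culprit for a quiesce or snapshot failure:

    vssadmin list writers
    vssadmin list providers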

    As Scott points out in his blog, the VMware Tools VSS provider, rightly or wrongly, has limitations: it can only invoke the VSS copy backup function, and application-level quiescing is only supported on Windows 2003 as we speak. So if you run an application with a VSS writer, like Exchange, AD or SQL, on Windows 2008, you are limited to a simple OS-level quiesce and the backup will only be crash consistent at application level. This is an issue if your backup application or SAN-based replication can only leverage the VMware VSS provider via Virtual Center (most do).

    Some people will argue that they run guest-based backups in conjunction with image-based ones, whereby the guest backup has full VSS functionality and also deals with tasks such as database maintenance. This is sensible, as you can also run something like Eseutil as an option. It should also be noted that if you run something like CCR or Microsoft Data Protection Manager, which use log shipping, a full VSS backup that truncates logs will cause issues as the copies will fall out of sync.
    So it's very much six of one and half a dozen of the other, and it is something you should give a lot of thought to, because with the new vStorage APIs, Changed Block Tracking and the greatly improved backup speed and functionality around vSphere, there will be a lot of focus on moving towards image-based backups.

    The good news is that if you use a backup or replication application that has a proprietary agent that can be installed within the VM and has some synergy with Virtual Center, you can leverage the full VSS functionality at different levels. This covers Windows 2008 application-level quiescing and you will also be able to perform tasks such as log truncation. Good examples of this are Veeam, FalconStor and Backup Exec 2010, and from a SAN replication perspective NetApp and the upcoming HP LeftHand SAN/iQ 8.5.

    So to summarise, I think it is prudent to look into any solution fully on an ongoing basis and trial products on a POC basis if you can... you will sleep better at night!

    Friday, February 05, 2010

    VMware View 4.0 SSL web access

    VMware View 4.0 is VMware's flagship VDI product. I like it, but I think it has a long way to go before it matches the functionality of Citrix XenDesktop; it has the feel of a collection of products thrown together quickly (I'm thinking ThinApp, PCoIP, the Propero broker).

    For example, one of the main drivers for adopting VDI is the mobility and functionality of secure web access and improved transport and display protocols. In the case of Citrix XenDesktop this would be HDX/ICA and Secure Gateway, and for VMware View, the Security Server and PCoIP.

    How do they differ? Well, if you want to use PCoIP via an HTTPS (SSL) web front end over the internet with VMware View you have a problem: it's not supported. If you wish to use HTTPS/SSL you will need to use RDP.
    With XenDesktop you simply create a Secure Gateway and you can leverage the full features of HDX/ICA via an SSL VPN.

    So the only option for PCoIP with VMware View 4.0 is a client VPN, whereby you have direct access.... see below.

    http://communities.vmware.com/thread/243763

    It appears that PCoIP uses UDP and is not supported via the View web portal per se.

    Come on VMware sort it out!
    Monday, February 01, 2010

    How to use SnapVMX to display detailed VMware snapshot information

    SnapVMX
    Wednesday, January 13, 2010

    Why Veeam is a great solution for vSphere backups

    Backups are always a cause for concern for any VMware admin. I have spent many hours evaluating various products, and here's why Veeam is the best solution for image-based backups.
    Everyone is familiar with physical guest-based backups, whereby something like Backup Exec is used and file- and application-level agents leverage Microsoft VSS and perform database maintenance plans that deal with flushing transaction logs etc.
    The business continuity plan provided by a guest-based backup has two main drivers:
    • RPO (recovery point objective) - the maximum acceptable amount of data loss; this will typically be 24 hours given that most backup windows are nightly.
    • RTO (recovery time objective) - how long it takes to get the data back to a restored state; this will usually mean cataloguing tapes and a configuration process, for example restoring to dissimilar hardware.
    This could mean hours, sometimes days, of downtime.
    In my opinion one of the greatest benefits of vSphere is the functionality based around business continuity.
    vSphere has a new set of storage APIs and a new feature called CBT (Changed Block Tracking), which greatly enhance the speed and functionality of backups. In short, CBT maintains change identifiers (think of them as USNs, update sequence numbers) for the VMDK disks that backup applications can hook into to quickly find out which blocks have changed since the last backup.
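
    CBT can also be switched on per VM if your backup tool does not do it for you. A minimal PowerCLI sketch, assuming a hardware version 7 VM with a hypothetical name; a power cycle or a snapshot create/delete is typically needed before it takes effect:

    $vmView = Get-VM "EXCH01" | Get-View                     # hypothetical VM name
    $spec = New-Object VMware.Vim.VirtualMachineConfigSpec
    $spec.ChangeTrackingEnabled = $true                      # enable Changed Block Tracking
    $vmView.ReconfigVM($spec)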

    So why Veeam??
    •  Easy to set up and configure within half an hour
    •  Support for CBT
    •  The only true support for VSS application consistent backups in Windows 2008
    •  The ability to flush Exchange logs via VSS backups
    •  Nice enterprise interface via web front-end
    •  Great support and ongoing development
    •  Replication included as well as backup at great price point for SMB
    •  Runtime proprietary VSS agent
    •  Great Deduplication and Compression rates
    So to summarise, if you want a super-functional vSphere image-based backup and replication tool at a great price point, choose Veeam.
    Wednesday, January 06, 2010

    vSphere upgrade of virtual NIC causes Windows server to hang on boot-up



    Has anyone seen an issue whereby an upgrade of VMware virtual hardware from version 4 to 7, or a VMware Tools update that includes a new NIC, causes the rebooted server to hang on a Windows OS?

    Let's take a closer look at what happens... When the virtual hardware is upgraded, the Windows plug & play service detects the new devices (the new NIC) and looks for a driver. This in turn kicks off the VMware Tools upgrader service, which handles the install and upgrade process: it removes the current NIC, saves the IP configuration and applies it to the new NIC when VMware Tools installs the drivers.

    The problem is that Windows will sometimes hold onto the old NIC as a ghost device, and when Windows reboots, core services that bind to the TCP/IP stack on the card (i.e. DNS, IIS, AD) will often try to attach themselves to the ghost NIC, hence the hung state.

    OK, what's the solution? You will need to show the hidden NIC and then delete it.

    Open a command prompt and run: set devmgr_show_nonpresent_devices=1, then launch Device Manager (devmgmt.msc) from that same prompt.


    This will show the hidden device in Windows device manager, which you can delete

    The above approach works fine in Windows 2003. For Windows 2008 systems, highlight 'Server Manager (%SERVERNAME%)' in the left-side tree-view pane, click 'Change System Properties' in the right-hand pane, switch to the Advanced tab and click 'Environment Variables'. Create a new system variable by clicking the New button; the variable name should be 'devmgr_show_nonpresent_devices' and the value should be '1', as pictured below.




    Reopen devmgmt.msc, click View > Show Hidden Devices and remove the ghost NIC.

    Good luck!
     