It is often said that success at server virtualization does not mean you can succeed at desktop virtualization. As End User Computing specialists who use desktop virtualization as the underlying technology, we come across at least one enterprise a week that thinks adopting desktop virtualization is just a matter of buying a VDI license and getting an SI to do the job. We meet another couple of enterprises who are mystified as to why their POC failed to scale, and, worst of all, cases where, having failed to make it work, the IT team has written off the technology as one “that does not suit us”.

At a practical level, what organizations don’t realize about static vs. dynamic workloads is that servers are concentrated within the datacentre and IT controls them completely. Desktops, on the other hand, are distributed and non-standardized. Let’s not forget that as you move users from a personal machine into the perimeter of the datacentre, you leave them with a dumb terminal and effectively less control over their computing environment. Hence, unless you have factored in the overall change management process for users, you will encounter problems.

It is vital to know why you are virtualizing and which solution best suits your requirements. Technology adopted merely to simplify the IT function or reduce cost is rarely as powerful as technology that directly gives the business increased productivity, and this is where the end-user becomes critical. Virtualization for the sake of being cooler only proves that IT is not aligned with business.

A fundamental transformation is needed in VDI management. Organizations need to realize that they are no longer managing desktops but a cluster of high-performing servers and storage systems, and this requires a skilled team.

In our experience, it really doesn’t matter whether you’re trying to manage multiple roll-outs, as long as you’ve chosen the right partner to help you implement them. In fact, we had the opportunity to implement exactly such a concurrent roll-out at Ratnakar Bank and found that we were able to minimize the upheaval and maximize the end result in terms of increased productivity.

Scaling is critical when planning your VDI deployment. VDI planning involves the right compute, storage, memory, network and application integrations for the right application performance. The sizing for a batch of 150–200 users versus 1,000–2,000 users will be dramatically different. Server enclosures, storage models and so on vary with the scale of the requirement, and a clearly thought-through modular approach has to be adopted. This is also one of the most critical junctures at which IT needs to align with business: mapping business growth projections in order to plan future ramp-ups.
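
To make the "modular sizing" point concrete, here is a minimal back-of-the-envelope sketch. All ratios (users per core, RAM per desktop, IOPS per user, host specs) are illustrative assumptions, not benchmarks; replace them with figures measured in your own environment.

```python
# Illustrative VDI sizing sketch. Every ratio below is a placeholder
# assumption; substitute numbers measured in your own POC.

import math

def size_vdi(users, users_per_core=6, ram_per_desktop_gb=4,
             iops_per_user=15, host_cores=32, host_ram_gb=512):
    """Return a rough host count and a storage IOPS target for a user batch."""
    hosts_by_cpu = math.ceil(users / (users_per_core * host_cores))
    hosts_by_ram = math.ceil(users * ram_per_desktop_gb / host_ram_gb)
    hosts = max(hosts_by_cpu, hosts_by_ram) + 1      # +1 host for HA/failover
    storage_iops = users * iops_per_user * 2          # 2x headroom for boot/login storms
    return {"hosts": hosts, "storage_iops": storage_iops}

if __name__ == "__main__":
    for batch in (200, 2000):
        print(batch, "users ->", size_vdi(batch))
```

Even this crude arithmetic shows why a 200-user batch and a 2,000-user batch land on very different enclosure and storage choices, and why sizing has to be revisited module by module as the business ramps up.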

It’s not just the hardware that you’re testing VDI on; it’s your actual application and user environment across diverse business scenarios, so that you know for sure what the exact sizing (users per core, IOPS for storage, etc.) should be. Many industry benchmarks won’t fit your actual environment. A detailed analysis of the current workload, capacity changes over the weeks, and peak volume hours, days and months should feed into this planning.
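
As a sketch of what "analyzing the actual workload" can look like, the snippet below derives peak concurrency and a high-percentile IOPS figure per hour of day from samples collected during a POC. The CSV layout (timestamp, concurrent_users, iops) is a hypothetical example, not a standard format.

```python
# Sketch: derive sizing inputs from measured workload samples rather than
# industry benchmarks. The CSV columns are a hypothetical example of data
# collected during a POC.

import csv
from collections import defaultdict

def load_samples(path):
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            yield row["timestamp"], int(row["concurrent_users"]), int(row["iops"])

def peak_profile(samples):
    """Summarize peak concurrency and 95th-percentile IOPS per hour of day."""
    by_hour = defaultdict(list)
    peak_users = 0
    for ts, users, iops in samples:
        hour = ts[11:13]                 # "YYYY-MM-DD HH:MM" -> "HH"
        by_hour[hour].append(iops)
        peak_users = max(peak_users, users)
    p95 = {}
    for hour, values in by_hour.items():
        values.sort()
        p95[hour] = values[int(0.95 * (len(values) - 1))]
    return peak_users, p95

# Usage: peak_users, p95 = peak_profile(load_samples("poc_workload.csv"))
# Size storage for the busiest hour's 95th-percentile IOPS, not the average.
```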

All things considered, what we can say for sure, based on our experience of implementing VDI for various corporates over the last decade, is that it requires a partner with a specific and proven track record. Before selecting a partner, check that record, and remember that having done server virtualization does not count as a sufficient track record. We believe the skills required to manage VDI involve domain experience of virtualization technology and products. Many VDI implementations fail because of a gross underestimation of the skills needed in this domain. It’s probably not a bad idea to bring in an expert.

In our line of business, all too often we come across IT infrastructure heads who say they are currently looking at virtualizing their applications and may look at virtualizing desktops later (if needed). I find this odd because it is not an either-or decision.

Let’s first understand what application virtualization is and why organizations invest in it. It is a solution that insulates running programs from the underlying desktop. The idea is to eliminate the configuration problems that conventional desktops are plagued with and that demand heavy support resources. In essence, the interactions between the OS and a given program are virtualized. This is particularly useful when end-users need a specific set of applications to perform predictable tasks with minimal variance or access controls. A good example would be the branch of a bank that typically handles a set type and number of transactions. This, however, still leaves a need for personal computing capacity on the desktop.

Compare this with cloud infrastructure and desktop virtualization, which (put crudely) takes the entire desktop and puts it in the datacenter, with the desktop becoming nothing but a thin client. This enables a broader level of access to complex business applications, even on the move, and is hence better suited for power users and management.

Clearly, therefore, these two technologies serve very different purposes, and application virtualization cannot become a proxy for desktop virtualization or vice versa. If it’s the budget that’s causing you to take a phased approach, then remember: application virtualization may be cheaper, but it is also restricted in the capabilities and visibility that it provides to power users. Deploying it across the organization will mean that VDI inevitably becomes necessary when a more sophisticated level of interaction with applications is required.

Consequently, the thinking around virtualization needs to change. Enterprises should be looking to use the power of cloud infrastructure to maximize end-user satisfaction, productivity and revenue generation. Hence, if the end goal of your cloud infrastructure strategy is to maximize productivity, increase IT efficiency and responsiveness, and improve enterprise security, application virtualization is one step in a cloud adoption journey that eventually has to proceed to user session and desktop virtualization in order to realize your organizational goals.

When planning an investment in Virtual Desktop Infrastructure (VDI), organizations should consider whether they need it, the cost-benefit analysis, which parts of their infrastructure to virtualize, and so on. Another interesting aspect to consider is how the journey of migrating desktops to the cloud impacts the application delivery model. It therefore merits taking this conversation one step further to discuss what you should consider from an application delivery standpoint when moving to VDI. Let’s discuss the 7 building blocks of an efficient application delivery model.

1. Identify the value chain: While this seems like a no-brainer, it is important to identify the enterprise value chain right from the data center to the end-point, where the end-user consumes the application.

What this means: Diverse application needs across different categories of users (mobile users, transaction users, quasi-mobile users, etc.) demand that you identify all the key components in the critical path for each user segment.

2. Manageability of applications: Identifying the value chain needs to be closely followed by manageability at the component level. While data centers and networks are managed as part of keeping-the-lights-on operations, end-points have almost always been monitored and managed only to the extent of ensuring device uptime. For all practical purposes, they have been considered an output mechanism that doesn’t impact application performance. Desktop virtualization helps here; it allows you to push the envelope towards end-to-end manageability of application performance.

What this means: VDI brings manageability to each component of the application delivery value chain; however, manageability paradigms change.

3. Monitoring: Monitoring here refers to a proactive approach to ensuring optimum application performance for end-users. At present, organizations monitor performance at the component level without really monitoring the service level that those components cumulatively contribute to.

What this means: Comprehensive monitoring ensures that your systems are geared to identify issues proactively, before they hit users, so that application delivery is optimized at the end-user level.
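
A minimal sketch of what "proactive, service-level" monitoring means in practice: time an end-to-end transaction the way a user experiences it and raise a warning before the agreed service level is breached. The URL and both thresholds are hypothetical placeholders, not anything specific to a particular monitoring product.

```python
# Minimal service-level monitoring sketch: instead of only checking that each
# component is "up", measure an end-to-end transaction and warn before the
# service level the business cares about is breached.

import time
import urllib.request

WARN_SECONDS = 2.0    # proactive alert: approaching the agreed service level
BREACH_SECONDS = 4.0  # the service level the business actually cares about

def check_transaction(url="https://intranet.example.com/app/login"):
    start = time.monotonic()
    try:
        urllib.request.urlopen(url, timeout=BREACH_SECONDS * 2).close()
        elapsed = time.monotonic() - start
    except Exception as exc:
        return ("BREACH", f"transaction failed: {exc}")
    if elapsed >= BREACH_SECONDS:
        return ("BREACH", f"{elapsed:.1f}s")
    if elapsed >= WARN_SECONDS:
        return ("WARN", f"{elapsed:.1f}s, investigate before users notice")
    return ("OK", f"{elapsed:.1f}s")

if __name__ == "__main__":
    print(check_transaction())
```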

4. Skills realignment: Typically, enterprise architecture and its components work in silos, where every component has a specialist managing and monitoring it. What this often leads to is reporting of optimal component-level performance without accounting for the cumulative effect of that performance on service levels. So while specialists are necessary, from a VDI standpoint it becomes crucial to have a robust incident-diagnostics team to complement them.

What this means: While end-users never really had a custodian or specialist in charge of their experience and uptime, the incident-diagnostics team becomes the umbrella function that covers it.

5. Centralization: This is among the most elementary building blocks of VDI from an application delivery standpoint: it ensures that your support skills are centralized, given that the intelligence has moved to a central location and end-points are dumb terminals.

What this means: For application delivery, this is the ability to manage, monitor and support applications, and their uptime and downtime, centrally.

6. Security: In a traditional application delivery framework, isolated security measures are created for data centers (a perimeter security approach for data at rest), networks (encryption for data in transit) and end-points (role-based restrictions and controls for data access). VDI provides a lot of inherent security benefits; one of the key advantages is that data doesn’t travel outside the data center. Security controls therefore need to be realigned by creating different zones within the data center.

What this means: For application delivery, this is the ability to design a cohesive security framework while saving costs.
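
To illustrate what "zones within the data center" might look like, here is a toy model in which allowed flows between zones are made explicit. The zone names and rules are assumptions for the sake of the example; in a real deployment the policy would live in firewalls or SDN configuration, not application code.

```python
# Illustrative model of data-center zoning for VDI. Zone names and allowed
# flows are assumptions; real zoning is enforced in firewall/SDN policy.

ALLOWED_FLOWS = {
    ("access_zone", "desktop_zone"): ["display-protocol"],  # thin clients reach virtual desktops only
    ("desktop_zone", "app_zone"): ["https"],                  # desktops reach published apps
    ("app_zone", "data_zone"): ["sql"],                       # apps reach databases; desktops never do
}

def is_allowed(src_zone, dst_zone, service):
    return service in ALLOWED_FLOWS.get((src_zone, dst_zone), [])

# Example checks: a virtual desktop may not talk to the data zone directly.
assert is_allowed("desktop_zone", "app_zone", "https")
assert not is_allowed("desktop_zone", "data_zone", "sql")
```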

7. Network Architecture: Designing network architecture has always been tricky with traditional desktop setups, as it requires planning for peak traffic while maintaining control over the peak-to-average ratio. VDI reduces the peak-to-average ratio drastically and lets you optimize the network architecture around deterministic bandwidth requirements. VDI does require careful QoS planning within your networks, as it replaces a lot of asynchronous network traffic with interactive traffic.

What this means: This means a far more efficient and cost-effective network that intelligently prioritizes traffic based on its needs and is far less dependent on traffic loads.
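
A back-of-the-envelope comparison of a branch link before and after VDI makes the peak-to-average point concrete. The per-user bandwidth figures below are illustrative assumptions, not vendor benchmarks.

```python
# Back-of-the-envelope branch bandwidth comparison, fat clients vs. VDI.
# Per-user figures are illustrative assumptions.

def link_plan(users, avg_kbps_per_user, peak_kbps_per_user):
    avg = users * avg_kbps_per_user
    peak = users * peak_kbps_per_user
    return {"avg_kbps": avg, "peak_kbps": peak, "peak_to_avg": round(peak / avg, 1)}

# Traditional desktops: bursty file transfers, patches, mail sync.
print("fat clients:", link_plan(50, avg_kbps_per_user=60, peak_kbps_per_user=600))

# VDI: a steadier, interactive display-protocol stream per session.
print("vdi clients:", link_plan(50, avg_kbps_per_user=120, peak_kbps_per_user=180))

# The link is sized for peak, so the lower peak-to-average ratio with VDI
# means a smaller, more deterministic circuit, provided QoS protects the
# latency-sensitive display traffic.
```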

If you’ve got these 7 fundamental pillars addressed, your Virtual Desktop Infrastructure implementation approach can be considered evolved, and application delivery will be optimized to provide the best end-user experience.

Converged Infrastructure (CI) is all the rage and quite a topic of discussion. Whether one calls it “unified computing”, “fabric-based computing” or “dynamic infrastructure”, there’s no escaping the literature from the hardware community. Simply put, Converged Infrastructure bundles servers, storage, networking devices and infrastructure-management software in one unit.

At this point it is worth asking: why the need for converged infrastructure? The answer lies in the wide gap between legacy IT stacks and the needs of virtual workloads. Because siloed physical storage and network assets lack the optimization to support virtual servers, IT administrators tend to over-provision resources. As more and more workloads get virtualized and the data associated with those workloads grows, the IT environment cannot keep pace. Installing more hardware only adds complexity and cost, and doesn’t address the real problem. Consider storage, for example. The local, one-to-one dependency between a physical server and its storage shifts to a many-to-one relationship between virtual machines and a single storage device. The hypervisor multiplexes the I/O streams of multiple workloads into random I/O streams that must compete for resources, which increases the IOPS required to service the virtual workloads. To address the performance issue, IT administrators may add more disk spindles; however, this also adds capacity that isn’t needed. The over-provisioning leads to a higher cost per gigabyte of storage allocated to every virtual machine. The same is true of other parameters like capacity and mobility.
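
Here is a worked version of that spindle arithmetic. The IOPS per disk, disk capacity and cost figures are illustrative assumptions; the point is the shape of the calculation, not the specific numbers.

```python
# Worked version of the over-provisioning arithmetic above.
# IOPS per disk, disk capacity and cost are illustrative assumptions.

import math

def spindles_for_iops(required_iops, iops_per_disk=150, disk_tb=1.2, disk_cost=400):
    disks = math.ceil(required_iops / iops_per_disk)
    capacity_tb = disks * disk_tb
    cost_per_gb = (disks * disk_cost) / (capacity_tb * 1024)
    return disks, capacity_tb, round(cost_per_gb, 2)

# 500 virtual desktops at ~20 randomized IOPS each:
disks, tb, cost_gb = spindles_for_iops(500 * 20)
print(f"{disks} spindles -> {tb:.0f} TB provisioned, ${cost_gb}/GB raw")
# If those desktops only need ~30 TB, the extra spindles bought purely for
# IOPS are what drive up the cost per gigabyte actually used.
```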

Here is precisely where converged infrastructure helps. Converged Infrastructure allows you to design, build, and maintain segments of the virtualization stack, while supporting an on-demand growth model.

The first wave of CI providers were vendors with independent legacy server, storage and networking components. They delivered Converged Infrastructure consisting of pre-racked and pre-cabled solutions based on a reference architecture. The next wave of converged infrastructure aggregates compute, storage and SAN functionality into modular appliances based on commodity x86 hardware that scales out by adding appliance “nodes”. Centralized management and policy configuration at the virtual-machine level contribute to lower operational costs. A single-vendor converged infrastructure solution also streamlines deployment and lowers acquisition costs; however, even this approach does not fully address performance and capacity issues. It still uses inefficient methods of writing and reading data to and from storage, and it does not consider the challenges of capturing, storing, transferring and recovering data copies for backup and disaster recovery.

The third wave of CI, HyperConverged Infrastructure (HCI), holds the most promise. It addresses the performance, capacity, mobility and management issues prevalent in previous waves of converged infrastructure. It achieves VM-centricity by tracking which data belongs to which virtual machine, enabling VM mobility. By eliminating redundant read and write operations, it achieves performance efficiency. It achieves capacity efficiency by reducing the “footprint” of data on production and backup storage through de-duplication, compression and optimization of data at inception. HCI promises to dramatically reduce total cost of ownership by eliminating siloed technology, enabling rapid application deployment, reducing labor-intensive activities, preventing over-purchasing and over-provisioning, and maximizing the infrastructure investment. The data efficiency introduced by de-duplication, compression and optimization also improves performance.
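
To see why de-duplication shrinks the data footprint so dramatically for desktop workloads, here is a toy illustration: fingerprint fixed-size blocks and store each unique block only once. This is a teaching sketch of the idea on synthetic data, not how any particular HCI product implements it.

```python
# Toy illustration of block-level de-duplication: fingerprint fixed-size
# blocks and store each unique block only once.

import hashlib

BLOCK_SIZE = 4096  # 4 KB blocks, a common unit for dedup

def dedup_ratio(data: bytes) -> float:
    blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
    unique = {hashlib.sha256(b).hexdigest() for b in blocks}
    return len(blocks) / len(unique)

# 100 "VM images" that are 99% identical template plus a little unique data:
template = b"\xff" * BLOCK_SIZE * 99
images = b"".join(template + bytes([i]) * BLOCK_SIZE for i in range(100))
print(f"dedup ratio: {dedup_ratio(images):.0f}x")   # ~99x for this highly redundant synthetic data
```

Virtual desktops cloned from a handful of golden images are exactly this kind of highly redundant data, which is why dedup and compression at inception pay off so well in VDI.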

It helps build a private cloud-computing environment that delivers the capabilities of large cloud service providers within your own IT environment. Because it combines data center components into an appliance form factor and adds centralized management and the ability to scale, it eliminates the compatibility issues typical of a legacy, siloed environment. Furthermore, it streamlines purchasing, deployment and use. Emulating the architecture, processes and practices of the top cloud service providers starts with converged infrastructure; it can reduce complexity, cycle time and costs, allowing the economics of on-premise private cloud services to work in the IT organization’s favor.

We must conclude by saying that although Anunta has created private cloud environments for thousands of desktops for clients in financial services, manufacturing and business processing, we have not opted for “off-the-shelf” CI products; we have preferred to customize based on the users. Our experience (and we speak here as an IT provider to Indian companies in the Indian market) is also that adoption of CI products is low, for the following reasons:

  • The cost of the solution is at least 30–40% higher than a traditional infrastructure approach.
  • CI currently lacks flexibility when it comes to scaling up compute alone or storage alone, which forces clients to buy the total solution.
  • CI looks attractive for greenfield implementations at small and medium companies that want to start the VDI journey in sectors like training, healthcare, manufacturing and microfinance; for existing infrastructure, however, re-usability of the storage or network can become a point of concern.

Finally, we firmly believe that CI will be the way to the new-age data center, and OEMs will start aligning themselves to match client expectations on price and flexibility.

Desktop virtualization has been making strong inroads into today’s workforce. Success stories abound on how firms are benefiting from the move to desktop virtualization. From tellers at banks to customer service executives at BPOs to mobile business executives, it has made a difference to IT operations and provided tangible benefits. However, there have rarely been examples of desktop virtualization for power users. Power users are typically coders, testers, graphic artists, designers, scientists requiring complex calculation outputs, research analysts and so on: basically, anyone requiring intense compute and storage power along with a highly customizable system.

The reason isn’t hard to find. The need for high processing power and storage, combined with the non-standard, highly individualistic and customized nature of each user’s system, brings its own implementation challenges. Add to that users’ expectation of performance akin to, if not exceeding, the normal desktop environment they are used to.

In our experience, however, there is no reason why power users have to be deprived of the benefits that desktop virtualization offers. The challenge lies in delivering high compute and storage without negating the business case. An added challenge is delivering IT simplicity and efficiency without compromising on the customization needed.

We undertook exactly such an exercise for a global managed analytics provider. The firm provides services in market research, retail and brand pricing. Its users were made up primarily of three major types:

  1. Analysts, who did client research and generated reports out of reams of data.
  2. Programmers, who created custom-built applications suited to each client engagement.
  3. Testers, who tested the applications developed on different platforms and use-case scenarios.

Employees use the firm’s proprietary big data platform as well as over 130 applications, including IBM SPSS, IBM Dimension 6, Flowspeed, FTP and several different industry platforms, to deliver results. In addition, several applications had customized Outlook plug-ins. The firm’s technology infrastructure can best be described as non-standard and highly customized, requiring high compute and storage, yet also demanding accelerated provisioning, flexibility in ramp-ups and ramp-downs, and rapid roll-outs and updates.

In such a scenario, while it is normal for VDI implementers to focus on uniformity and standardization, a one-size-fits-all approach to VDI implementation just doesn’t apply. On the other hand, addressing individualistic customization needs can lead to over-provisioning of resources and a VM sprawl that becomes difficult for the administrator to manage efficiently. Although VMs are easily created, they have the same licensing, support, security and compliance issues that physical machines do, and this can defeat the gains of virtualization. The skill lies in walking the fine line between customization and standardization. We are glad to report that the company is currently considering expanding its virtualized set-up.

Virtualization has several benefits, the more significant being reduced TCO, improved application delivery and a better end-user experience. The key to reaping these benefits lies not only in recognizing the type of virtualization best suited to your business requirements, but also in choosing the right technology partner, capable of guiding you through and beyond your infrastructure overhaul.

We came across an interesting blog post that discusses performance management in the cloud and the toss-up between public and private clouds. You can read it here:

Why Performance Management is Easier in Public than On-Premise Clouds — Performance is one of the major concerns in the cloud. But the question should not really be whether or not the cloud performs, but whether the Application in question can and does perform in the cloud.

The main problem here is that application performance is either not managed at all or managed incorrectly and therefore this question often remains unanswered. Now granted, performance management in cloud environments is harder than in physical ones, but it can be argued that it is easier in public clouds than in on-premise clouds or even a large virtualized environment.

How do I come to that conclusion? Before answering that, let’s look at the unique challenges that virtualization in general, and clouds in particular, pose to the realm of APM.

We believe performance management will become easier in private clouds than in public ones. This is mainly because the different groups that manage infrastructure in public clouds can also be siloed, and this can result in a number of performance problems for end-users. So whether public or private, it is critical that all the dependent factors are woven together and proactively monitored.

I believe the basis of performance management has to be end-user experience management. Unfortunately, most approaches to monitoring are inward-focused; they don’t really look at what effect breaches of various system thresholds have on end-users. Admittedly, it’s not easy to put in place a system that consistently measures end-user performance, but it’s also not that complex if attempted through proper process charts. We have repeatedly seen customers with non-integrated performance management, which ends up as a reactive system because what is being monitored has no relation to what is being delivered to end-users.

My recommendation, be it private or public cloud, is to start your performance management from the end-user and move up the ladder to the data center. Connect all the points, identify dependencies, define relative thresholds (relating them to the impact they will have on the end-user) and create an agent-less system to monitor end-user experience. This has proven effective for us in both private and public cloud application usage. It can be a lot easier in a private cloud, where a single integrated system can connect them all; in a public cloud we may be restricted by the different methods different vendors employ to measure performance.
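
As a minimal sketch of this end-user-first approach, the probe below measures the user-facing transaction first and then walks up the dependency chain towards the data center to localize where degradation originates. The hostnames, ports and impact thresholds are placeholder assumptions.

```python
# Sketch of end-user-first performance management: probe the user-facing
# transaction, then each dependency behind it, with "relative" thresholds
# expressed as the point at which users start to feel the delay.
# Hostnames, ports and thresholds are placeholder assumptions.

import socket
import time
import urllib.request

def timed(action):
    start = time.monotonic()
    action()
    return time.monotonic() - start

def user_transaction():
    urllib.request.urlopen("https://app.example.com/portal", timeout=10).close()

def tcp_check(host, port):
    socket.create_connection((host, port), timeout=5).close()

# Dependency chain ordered from the end-user inwards.
CHAIN = [
    ("end-user transaction", user_transaction,                          3.0),
    ("app server",           lambda: tcp_check("app.example.com", 443), 0.5),
    ("database",             lambda: tcp_check("db.example.com", 5432), 0.2),
]

if __name__ == "__main__":
    for name, probe, threshold in CHAIN:
        try:
            elapsed = timed(probe)
            status = "user-impacting" if elapsed > threshold else "ok"
            print(f"{name:22} {elapsed:.2f}s ({status})")
        except Exception as exc:
            print(f"{name:22} FAIL ({exc})")
```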

With mobility being the new employee mantra, IT teams are struggling to keep in step with the challenges that a mobile workforce brings. According to Gartner, by 2017 nearly 38% of organizations will embrace BYOD and stop providing devices to their employees. Our conversations with Indian CTOs tell us that mobility is a top concern while BYOD is still some way off, though it is being embraced in niche areas such as agency workforces in insurance or the sales force in FMCG. Overall, though, desktop-based systems aren’t disappearing anytime soon, and IT heads still have huge inventories of PCs that they regularly need to refresh to maintain productivity levels. So many CTOs may find themselves wondering: is it better to refresh my desktops, or should I look at virtual desktop solutions and consider thin clients? The answer, as always, is: it depends! We discuss three of the most common cases below.

PC REFRESH @ END OF LIFE

Most organisations follow a 5-year hardware refresh cycle, but in India it is not uncommon to come across enterprises that will stretch that to 7 years or beyond! Essentially, it’s a case of “if it ain’t broke, don’t fix it”. In such cases, can IT expect to establish a case for virtualized desktops rather than invest in new PCs? On the face of it, you don’t have to be a genius to say no. But these are precisely the cases where business is driving IT to go beyond a simple replacement. So can a business case be built? How do you compare a ₹35,000 PC price to the initial investment required to bring in the IT efficiency that virtual desktop solutions deliver? The point is that you need to compare apples to apples. Even if a PC costs you just ₹35,000, what does support cost? How much power does the PC consume? How much does a data breach cost? And so on. We have found that if IT can think in terms of business savings rather than IT savings, a business case can easily be built for replacing 500 desktops with virtualized desktops.
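
Here is a worked, per-seat version of that "apples to apples" comparison over a 5-year horizon. Every figure is an illustrative assumption (in rupees), not a quote, a benchmark or Anunta pricing; the point is the method, not the numbers.

```python
# Illustrative per-seat TCO comparison over 5 years. All figures are
# placeholder assumptions; substitute your own quotes and tariffs.

YEARS = 5

def tco_pc(price=35_000, support_per_year=4_000, power_watts=180,
           tariff_per_kwh=8, hours_per_year=2_400):
    energy = power_watts / 1000 * hours_per_year * tariff_per_kwh * YEARS
    return price + support_per_year * YEARS + energy

def tco_vdi(thin_client=12_000, dc_share_per_seat=20_000,
            support_per_year=1_500, power_watts=25,
            tariff_per_kwh=8, hours_per_year=2_400):
    energy = power_watts / 1000 * hours_per_year * tariff_per_kwh * YEARS
    return thin_client + dc_share_per_seat + support_per_year * YEARS + energy

if __name__ == "__main__":
    print(f"PC per seat : Rs {tco_pc():,.0f}")
    print(f"VDI per seat: Rs {tco_vdi():,.0f}")
    # Compare full business cost per seat, not purchase price, and weigh
    # breach/downtime risk on top of these line items.
```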

END OF SUPPORT TO XP

With support for Windows XP coming to an end, enterprises are saddled with multiple systems spread across the company that are now vulnerable to data loss and security breaches. Lack of support for XP may also mean issues with software compatibility, which can lead to user dissatisfaction and productivity loss. In such cases CTOs are faced with the question of whether to move to Windows 7 or to use this as a trigger to transform delivery. As always, in many cases it will be driven by hard numbers. Migrating to Windows 7 means investing in Windows 7 licenses and, frequently, hardware upgrades. Assuming you are ready to spend on both, the residual life of the existing desktops is worth considering. We have found that the additional investment in licenses and hardware upgrades, on an already sweated asset, makes little sense, especially when all that investment can come to naught if the PC itself starts to wear out. The same amount, directed towards a virtual desktop solution, creates an opportunity to benefit from IT efficiencies while postponing the need to change the PC. As and when the PC wears out, it can be replaced with a lower-cost thin client, with the benefits of virtualization gained from the start.

NEW GROWTH OR BUSINESS EXPANSION

Many visionary CTOs have used business expansion as a trigger to transform. The business case here is not unlike that in the PC refresh case, except that you need to compare your organisation’s refresh cycle and its attendant financials to those of a virtualized environment. Beyond the economics, this is the “perfect” case in which to test a new and better technology. It also gives you a clean slate as far as end-users are concerned, so governance and culture around the use of desktops and applications can be laid down with no baggage and no comparisons to unlimited storage and downloads. With the right partner, IT heads are quickly able to demonstrate the many advantages that virtual desktop solutions can deliver and, in our experience, they never look back.

ECONOMIC CASE versus BUSINESS CASE versus BUSINESS VALUE

Every CTO has to find a way to balance being a pragmatist and a visionary. Most CTOs understand that newer technologies such as desktop virtualization have benefits, but convincing non-technical managements and boards means justifying the investment with a “business case”. Unfortunately, in most cases the term is used to convey only the economic case, while business value is ignored. Perhaps that is the key to getting support for your desktop virtualization initiative: emphasize the business value while showing you have done enough hard work on the economic case.
