According to Brian Madden, VDI is not the silver bullet many expect it to be. The two major misconceptions he highlights are:

  • With desktop virtualization one can avoid managing Windows desktops
  • With desktop virtualization, you virtualize the apps and virtualize the user environment, and then there’s nothing left to manage

Brian further explains how desktop virtualization is inextricably linked to Windows 7.

A lot has been said about the challenges and myths surrounding VDI, and conclusions are being drawn on that basis. While these discussions kindle constructive thought, they also scare away new users by detailing one complexity after another. Here’s our take on them:

First of all, the organization should be ready for a real transformation if VDI is to be adopted. If the intention is to manage everything the way it is managed today, then most of the challenges discussed on blogs and online forums will hold true. The fundamental change is that VDI moves control from the end-point to the datacenter.

Traditionally, a lot of discipline has been applied to datacenter management because most of the control lies with the IT team. A few years ago, several blogs claimed that the virtual server concept would fail and never take off. Questions were raised about shared hardware, driver issues, memory allocation, storage, and so on. Today, nobody questions server virtualization capabilities; almost every organization has attempted it or is using it at scale. Comparing the speed of adoption of server virtualization with that of desktop virtualization is also misleading: desktops are tightly integrated with end-users. More than technology, it is a matter of perception, and organizations should be ready to embrace the change.

When we adopted this solution early on, we faced questions about the cost effectiveness of VDI (which was not seen as optimal), its ease of management, and so on, but we realized we were comparing VDI to the bottom-most layer of the desktop rather than looking at it as a broader solution that can deliver much more than the desktop it replaces. On compliance and security, many desktop IT teams struggle with tough compliance requirements, facing audit after audit that forces them to streamline the end-point solution, protect critical data on desktops, and maintain complicated policies and scripts. The way out is usually stop-gap fixes or enterprise-wide deployment of complex applications that end up addressing only about 5-10% of the issues they were supposed to take care of.

The effort and investment needed for these are not attributed to desktop costs; they become part of the information security budget. Isn’t it logical to say that the current desktop is not capable of protecting itself and hence we need to look for solutions? If so, why are these costs not attributed to the desktop? By contrast, migrating to VDI delivers roughly 70-80% of compliance without any additional application or technology. Are we consciously crediting VDI for this? Great desktop management tools and solutions do exist today, but even then the need to manage each end-point persists. Accurate patching, standardization of hardware and software configurations, and application rollout are not easy tasks for desktop engineers. VDI brings down this complexity, masks hardware variation, and provides a completely standardized application layer. Patching is still needed in VDI, but using the right templates reduces the volume of patching considerably.

VDI management is about managing one desktop image rather than 500 individual desktops. If enough time is spent on design and planning, VDI can be far simpler to manage than physical desktops. At times IT teams are challenged about a so-called obsession with VDI and accused of trying to make it work in whatever form. Should they force-fit it? The answer is ‘No’, because the audience they face is end-users, and they are smart enough to know what works best for them. The concept behind VDI is not new; the logic of sharing a common infrastructure platform has been around for many years. The evolution of technologies such as client-server architecture, terminal services, and application virtualization has been driving the single-point agenda of how effectively applications can be delivered to end-users.

We should continue to look at solutions that deliver applications to end-users using various methods and tools. Also, VDI shouldn’t be judged merely as a desktop replacement but against the complete chain of things that contribute to end-user experience management (EUEM).

End-user performance management is critical to making VDI a successful initiative. The end-user is looking for maximum efficiency and is not concerned about HOW that is achieved or WHAT technology is used, just as a mobile phone user does not care whether the phone uses GSM or CDMA technology as long as it serves its intended purpose.

Frequently, business heads and teams resist VDI simply because the familiar box next to them has been taken away. We saw a lot of resistance when we rolled out VDI a couple of years ago, but we found a way to prove and measure its performance. Eventually, we made these performance metrics available for all to see, so that new users who challenge VDI have reliable data to refer to.

The approach we have adopted is a combination of technology and processes. Our monitoring architecture started from end-user application metrics and moved up the layers to the actual VDI in the data center (contrary to the traditional approach of just looking at performance counters). With this approach, we were able to easily relate application performance at the end-user level to the dependent parameters of the central infrastructure. We created business views that brought all the dependent infrastructure together, but we still faced the challenge of simulating actual end-user experience.

We then developed application simulators that schedule application access at set intervals during the hour and record performance numbers equivalent to typical use cases and user keystrokes. These were linked to the various system thresholds (network, WAN, SAN IO, virtual platform) and rolled up into final VDI session performance tracking. Any deviation from a threshold highlights the possible causes, which are monitored 24/7 by the NOC team. With this, we have been able to consistently achieve user satisfaction, start delivering application performance guarantees to our customers, and free business heads and end-users of their VDI-related fears in the process.
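To make the idea concrete, here is a minimal Python sketch of what such an application simulator might look like. The transaction names, thresholds and timings are hypothetical, and the probe body is a placeholder; our actual simulators replay real application use cases and push results into the NOC's monitoring platform.

```python
"""Minimal sketch of a synthetic end-user transaction monitor.

All transaction names, thresholds and timings are hypothetical; a real
simulator would replay actual application use cases and push results
to the NOC's monitoring platform.
"""
import random
import time

# Acceptable latency (seconds) per simulated transaction.
THRESHOLDS = {
    "crm_login": 5.0,
    "report_fetch": 8.0,
    "session_keystroke": 0.3,
}


def probe(transaction: str) -> float:
    """Run one simulated end-user transaction and return its latency.

    The body is a placeholder; a real probe would drive the client
    (launch the app, replay keystrokes) and time the round trip.
    """
    start = time.monotonic()
    time.sleep(random.uniform(0.05, 0.2))  # stands in for real work
    return time.monotonic() - start


def run_cycle() -> list[str]:
    """Probe every transaction once and report threshold breaches."""
    alerts = []
    for name, limit in THRESHOLDS.items():
        latency = probe(name)
        if latency > limit:
            alerts.append(f"{name}: {latency:.2f}s exceeds {limit:.2f}s")
    return alerts


if __name__ == "__main__":
    # A production scheduler would repeat this cycle every few minutes,
    # around the clock; one pass is enough for illustration.
    for alert in run_cycle() or ["all simulated transactions within thresholds"]:
        print(alert)
```

The point is simply that performance is sampled from the end-user's side on a schedule and judged against thresholds that are meaningful to the user, rather than waiting for infrastructure counters to trip.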

Visit www.anuntatech.com to know more about our latest End-User Computing offerings.

FAQs

How is VDI performance measured?

VDI performance is measured against the following end-user experience metrics (a minimal measurement sketch follows the list).

  • Logon duration: Users expect to access their desktop immediately after they enter the password.
  • App load time: Users are looking for a shorter load time for their apps.
  • App response time: When end-users are working within an application, they don’t want to stop and wait for the application to catch up.
  • Session response time: It is a measure of how well the OS responds to the user input.
  • Graphics quality and responsiveness: Users expect to have the same graphical experience that they would have on a physical desktop.
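As an illustration only, the sketch below shows how most of these metrics reduce to timings taken around the events they describe. The simulated steps are hypothetical placeholders; real VDI platforms expose equivalent figures through their own monitoring tooling.

```python
"""Sketch: capturing end-user experience metrics as simple timings.

The simulated steps below are hypothetical placeholders; real VDI
platforms expose equivalent figures through their monitoring tooling.
"""
import time
from contextlib import contextmanager

metrics: dict[str, float] = {}


@contextmanager
def timed(metric_name: str):
    """Record the wall-clock duration of the enclosed block."""
    start = time.monotonic()
    try:
        yield
    finally:
        metrics[metric_name] = time.monotonic() - start


def simulate_session() -> None:
    with timed("logon_duration_s"):
        time.sleep(0.10)   # placeholder: authenticate and load the profile
    with timed("app_load_time_s"):
        time.sleep(0.05)   # placeholder: launch a published application
    with timed("app_response_time_s"):
        time.sleep(0.01)   # placeholder: one in-app transaction
    with timed("session_response_time_s"):
        time.sleep(0.005)  # placeholder: OS response to a keystroke


if __name__ == "__main__":
    simulate_session()
    for name, value in metrics.items():
        print(f"{name}: {value * 1000:.0f} ms")
```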

What is VDI used for?

VDI is particularly well suited to the following use cases.

  • Many companies implement VDI as it makes it easy to deploy virtual desktops for their remote workers from a centralized location.
  • VDI is ideal for enterprises that follow a BYOD model and allow employees to work on their own devices. Because processing happens on a centralized server, VDI can be implemented across a wide range of devices while ensuring adherence to security policies. Data is stored on the server, so the risk of data loss from the end-point is greatly reduced.
  • For task or shift work in organizations such as call centers, non-persistent VDI can be employed: a large number of employees can use a generic desktop with software that lets them perform limited, repetitive tasks.

What is VDI as a service?

When VDI is offered as a service, a third-party service provider manages the virtual infrastructure for you. The VDI user experience is offered to end-users along with all the applications necessary for work as a cloud service. The service provider also assumes the responsibility of managing the desktop infrastructure for the end-user thereby ensuring faster software updates, migrations, user provisioning, and ensuring better data security and disaster planning for businesses. Consequently, organizations can ease up their administrative operations and minimize IT-related overheads.

What is VDI, and how does it work?

VDI or Virtual Desktop Infrastructure is a virtualization technology in which virtual machines are used to deliver and manage virtual desktops. VDI separates the OS, applications, and data from the hardware and provides a convenient and affordable desktop solution over a network. The desktop environments are hosted on a centralized server and deployed to the end-user devices on request.

VDI uses a hypervisor running on physical hosts to create virtual machines. These virtual machines host the virtual desktops that users access remotely from their devices.

A connection broker is necessary for any VDI deployment. It acts as a single point of operation for managing all the hosted resources and gives end-users login access to their allocated systems. Virtual machines, applications, or physical workstations are made available to users based on their identity and the location of the client device.

VDI can be persistent or non-persistent. With persistent VDI, users access the same desktop every time they log in, and their changes are preserved across sessions. Non-persistent VDI, on the other hand, connects users to generic desktops; it is used where a customized desktop is not necessary and the nature of the work is limited and repetitive.

It is common knowledge that cloud infrastructure adoption by businesses in India is seeing an encouraging trend. According to a recent study by EMC Corporation and Zinnov Management Consulting, the cloud computing market in India is expected to reach $4.5 billion by 2015, a little more than ten times the existing $400 million market. Also, the same study states that the cloud industry in India alone is expected to create 1 lakh additional jobs by 2015.

Additionally, with the rollout of the UID programme, the technology has also found its way into the government sector. This would lead one to think that cloud infrastructure has entered the mainstream and is increasingly being preferred over the legacy IT model. Yet industries such as BFSI, which face strong regulatory and compliance requirements, have understandable reservations about cloud infrastructure.

According to recent research conducted by ValueNotes on behalf of Anunta into application performance management in the Indian BFSI sector, 76% of respondents still run a physical delivery architecture. Among them, insurance and financial institutions are open to considering cloud infrastructure in future, but banks appear hesitant. The study also found that only 12% of respondents had some applications on the cloud, while core applications were still maintained on physical architecture. Why? Resistance to change, security concerns, and a lack of reliable vendors were among the reasons cited for not moving to cloud infrastructure.

But the benefits of cloud infrastructure for the BFSI sector far outweigh the concerns. Banks and financial institutions have for decades made use of service bureaus or outsourced core banking platforms. The ever-increasing range of cloud computing options gives them an opportunity to reduce their internal technology footprint and gain access to technology built and operated by third-party experts. Many investment banks and buy-side firms, such as hedge funds, already have private grid infrastructure for functions like Monte Carlo simulation and risk analysis hosted in third-party data centres; yet, more often than not, they need to add capacity at short notice at critical points. Cloud can prove more than handy at such moments.

Apart from the near halving of costs, applications like customer relationship management (CRM) and risk management can be brought to market relatively quickly. Banks can focus on their core business instead of worrying about infrastructure scalability, not to mention disaster recovery. I could go on about further advantages such as rapid provisioning and scaling of services, along with the chance to go green and contribute to the environment. But you get the point.

It’s not that the financial services industry is completely averse to cloud computing and its charms. In the survey, 43% of respondents plan to make this move within the next five years. Some of the premier private banks in our country have already adopted both private and public cloud, although most of them have only hosted peripheral applications on the cloud. But that in itself is hopeful.

According to the data collected from various industries, IT/ITeS is the top contributor to the total cloud infrastructure market in India with 19 per cent, followed by Telecom at 18 per cent, BFSI at 15 per cent, manufacturing at 14 per cent and government at 12 per cent. So BFSI has done well. But this post suggests that the data can be bettered.

Anunta tasked a research agency with studying the state of application performance management and monitoring in the Indian BFSI sector, covering banks, AMCs, insurance companies and brokerages. While some of the findings were what we expected, the study also threw up interesting contrasts between what these companies say and the ground realities. As the first in a series of posts on this survey, we’ll delve into some of BFSI’s top technology joys and sorrows.

WHAT THEY SAY: Given the rapid evolution and adoption of new technologies and applications (cloud computing, e-payments, mobile payments, and so on), respondents felt that challenges would only increase. This is a big pain point for 70% of the CTOs/IT heads we spoke to.

BUT: It’s not something the sector can hide from. Take mobile payments, for example: according to RBI figures, the volume and value of funds transferred through the National Electronic Funds Transfer (NEFT) system has been doubling almost every year. In 2010-11, the volume of funds transferred through NEFT doubled to 13.23 crore transactions, and the value too doubled to Rs 9,39,149 crore.

Our take: All of the technologies cited above are ones that we believe will drive the future of technology infrastructures. Platforms like e-payments and mobile payments will add complexity to an application architecture that is already relatively under-managed. What this will mean is a higher level of integration and new ways of monitoring enterprise and application performance, all while keeping strict control on capex and costs.

WHAT THEY SAY: 76% of the respondents said they use automated tools to measure application performance.

BUT: 53% admit that there is no consensus between IT and end-users on how application performance should be measured.

Our take: Measuring application performance is not enough. It needs to be measured from an end-user perspective by translating technical SLAs into end-user SLAs and then enforcing these across the organization. However, a dissonance between the IT organization and end-users on what constitutes a good measurement metric impedes the process.

WHAT THEY SAY: CTOs understand the importance of end-user monitoring, and 83% of respondents measure performance from the end-user side.

BUT: The metrics employed capture end-user experience only broadly and do not provide any detailed assessment of performance. They reflect a reactive approach to monitoring and are used mainly for incident reporting. Parameters include response time, application downtime, number of problem tickets and, in a large number of cases, just end-user feedback.

Our take: While 83% seems like a healthy number, our experience in this sector tells us it is based on device-level SLAs. There is an urgent need for a proactive approach to monitoring end-user experience and issues. End-user SLAs, combined with technologies such as virtualization, allow potential SLA breaches and issues to be identified before they occur.

WHAT THEY SAY: Loss in employee productivity is measured in terms of transaction volumes, the number of people affected, and the number of hours lost to incidents that cause a dip in application performance. Productivity losses become significant when an issue remains unresolved for 30 minutes to an hour, or when the network is unavailable for an entire day; in those cases, productivity losses can go up to 30%.

BUT: Almost 56% of respondents agreed that they do not measure the business impact of lower application performance.

Our take: This is one of the biggest bugbears from an application performance management and measurement standpoint. IT departments are often focused on keeping TCO low and ensuring systems run adequately (not perfectly), and in the process they lose sight of the fact that IT needs to align with the business. A drop in IT or application uptime can mean significant revenue loss and brand erosion brought about by dissatisfied employees and customers. We addressed this issue in one of our earlier blogs, but we expect it to remain an issue for a few more years.
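As a purely hypothetical illustration of how the business impact mentioned above might be quantified (the user count and hours lost are assumptions, not figures from the survey):

```python
# Hypothetical estimate of productivity loss from one performance incident.
# All inputs are assumptions for illustration, not figures from the survey.
affected_users = 200       # users hit by the degradation
hours_lost_per_user = 2.4  # effective working hours lost per affected user
working_hours_per_day = 8

loss_pct = 100 * (affected_users * hours_lost_per_user) / (
    affected_users * working_hours_per_day
)
print(f"Estimated productivity loss for the day: {loss_pct:.0f}%")  # -> 30%
```

Even a rough calculation like this, fed with real incident data, turns "the system was slow" into a business number that can be weighed against the cost of fixing it.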

It is often said that success at server virtualization does not mean you will succeed at desktop virtualization. As an end-user computing specialist that uses desktop virtualization as the underlying technology, we come across at least one enterprise a week that thinks adopting desktop virtualization is just a matter of buying a VDI license and getting an SI to do the job, another couple that are mystified why their POC failed to scale, and, worst of all, cases where, having failed to make it work, the IT team has written off the technology as one “that does not suit us”.

At a practical level, what organizations miss when comparing static and dynamic workloads is that servers are concentrated within the datacentre and IT controls them completely, whereas desktops are distributed and non-standardized. Let’s not forget that as you move users from a personal machine into the perimeter of the datacentre, you leave them with a dumb terminal and effectively less control over their computing environment. Unless you factor in an overall change management process for users, you will encounter problems.

It is absolutely important to know why you are virtualizing and which solution best suits your requirements. Technology for the sake of simplifying the IT function, or even reducing cost, is rarely as powerful as directly giving the business increased productivity, and this is where the end-user becomes critical. Virtualizing for the sake of appearing cutting-edge only proves that IT is not aligned with the business.

A fundamental transformation is needed in VDI management. Organizations need to realize that it is no longer desktop management but the management of a cluster of high-performing servers and storage systems, and this requires a skilled team.

In our experience, it really doesn’t matter whether you’re managing multiple roll-outs, as long as you’ve chosen the right partner to help you implement them. In fact, we had the opportunity to implement exactly such a concurrent roll-out at Ratnakar Bank and found that we were able to minimize the upheaval and maximize the end result in terms of increased productivity.

Scaling is critical when planning your VDI deployment. VDI planning involves getting the compute, storage, memory, network and application integrations right for the required application performance. The sizing for a batch of 150-200 users versus 1,000-2,000 users is dramatically different; server enclosures, storage models and so on vary with the scale of requirements, and a clearly thought-through modular approach has to be adopted. This is also one of the most critical junctures at which IT needs to align with the business, mapping onto business growth projections to plan future ramp-ups.

It’s not just the hardware that you’re testing VDI on; it’s your actual application and user environment, across diverse business scenarios, so that you know for sure what the exact sizing is (including users per core, IOPS for storage, and so on). Many industry benchmarks won’t fit the actual environment. A detailed analysis of the current workload, capacity changes over the weeks, and peak-volume hours, days and months should feed into this planning. A rough illustration follows.
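The sketch below is a back-of-the-envelope sizing model only. Every per-user figure in it is an illustrative assumption, not a benchmark, and must be replaced with numbers measured from your own workload analysis before any hardware is ordered.

```python
"""Rough, modular VDI sizing estimate.

Every per-user figure is an illustrative assumption; replace them with
numbers measured from your own workload analysis before sizing hardware.
"""


def size_vdi(concurrent_users: int,
             users_per_core: int = 6,        # assumed, workload dependent
             ram_per_user_gb: float = 4.0,   # assumed
             iops_per_user: int = 25,        # assumed steady-state IOPS
             boot_storm_factor: float = 3.0  # assumed logon-storm multiplier
             ) -> dict:
    cores = -(-concurrent_users // users_per_core)  # ceiling division
    return {
        "cpu_cores": cores,
        "ram_gb": concurrent_users * ram_per_user_gb,
        "steady_iops": concurrent_users * iops_per_user,
        "peak_iops": int(concurrent_users * iops_per_user * boot_storm_factor),
    }


if __name__ == "__main__":
    # Show how dramatically the numbers diverge between a 200-user batch
    # and a 2,000-user batch under the same assumptions.
    for users in (200, 2000):
        print(f"{users} concurrent users -> {size_vdi(users)}")
```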

All things considered, what we can say for sure, based on our experience of implementing VDI for various corporates over the last decade, is that implementing VDI requires a partner with a specific and proven track record. Before selecting a partner, check that, and remember that having done server virtualization does not count as a sufficient track record. The skills required to manage VDI call for domain experience with virtualization technology and products, and many VDI implementations fail because of a gross underestimation of the skills needed. It’s probably not a bad idea to bring in an expert.

In our line of business, we all too often come across IT infrastructure heads who say they are currently looking at virtualizing their applications and may look at virtualizing desktops later (if needed). I find this odd, because it is not an either-or decision.

Let’s first understand what application virtualization is and why organizations invest in it. It is a solution that insulates running programs from the underlying desktop. The idea is to eliminate the configuration problems that conventional desktops are plagued with and that require high support resources. In essence, the interactions between the OS and a given program are virtualized. This is particularly useful when the end-users need a specific set of applications to perform predictable tasks with minimal variance or access controls. A good example would be at the branch of a bank that typically handles a set type and number of average transactions. This however, still leaves a need for personal computing capacity on the desktop.

Compare this with cloud infrastructure and desktop virtualization, which (put crudely) takes the entire desktop and puts it in the datacenter, with the desktop becoming nothing but a thin client. This enables a broader level of access to complex business applications even on the move, and is hence better suited to power users and management.

Clearly, therefore, these two technologies serve very different purposes, and application virtualization cannot become a proxy for desktop virtualization or vice versa. If budget is what is driving the phased approach, remember that application virtualization may be cheaper, but it is also restricted in the capabilities and visibility it offers power users. Deploying it across the organization means that VDI will inevitably become necessary once a more sophisticated level of interaction with applications is required.

Consequently, the thinking around virtualization needs to change. Enterprises should be looking to use cloud infrastructure power to maximise end-user satisfaction, productivity and revenue generation. Hence, if the end-goal of your cloud infrastructure strategy is to maximise productivity, increase IT efficiency and responsiveness and increase enterprise security, application virtualisation is a step in your cloud adoption journey which eventually has to proceed to user session and desktop virtualization in order to realise your organizational goals.

When planning an investment in Virtual Desktop Infrastructure (VDI), organizations should consider whether they need it, the cost-benefit analysis, which parts of their infrastructure to virtualize, and so on. Another interesting aspect is how the journey of migrating desktops to the cloud impacts the application delivery model. It therefore merits taking the conversation one step further to discuss what you should consider from an application delivery standpoint when moving to VDI. Let’s look at the 7 building blocks of an efficient application delivery model.

1. Identify the value chain: While this seems like a no-brainer, it is important to identify the enterprise value chain right from the data center to the end-point, where the end-user consumes the application.

What this means: Diverse application needs across user categories (mobile users, transaction users, quasi-mobile users, and so on) demand identifying all the key components in the critical path for each user segment.

2. Manageability of applications: Identifying the value-chain needs to be closely followed by manageability at the component level. While data centers and networks are managed as part of keeping-the-lights-on operations, end-points have almost always been monitored and managed only to the extent of provisioning device uptime. For all practical purposes, they have been considered the output mechanism that doesn’t impact application performance. Desktop virtualization is helping here; it allows you to push the envelope for end-to-end manageability of application performance.

What this means: VDI brings manageability to each component of the application delivery value chain; however, manageability paradigms change.

3. Monitoring: Monitoring here refers to a proactive approach to ensuring optimum application performance for end-users. At present, organizations monitor performance at the component level without really monitoring the service level that those components cumulatively contribute towards.

What this means: Comprehensive monitoring ensures that your systems proactively identify issues before they hit users, so application delivery is optimized at the end-user level. A minimal sketch follows.
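The sketch below shows one way the shift from component-level to service-level monitoring could look. The component names, metrics and thresholds are hypothetical; the point is that health is judged by the end-user outcome, with component warnings investigated before users notice anything.

```python
"""Sketch: rolling component checks up into a service-level view.

Component names, metrics and thresholds are hypothetical.
"""
# Latest measurements for the components on one application's critical path,
# expressed as (current value, threshold).
component_state = {
    "wan_latency_ms": (70, 80),
    "storage_latency_ms": (12, 20),
    "hypervisor_cpu_ready_pct": (3.5, 5.0),
    "end_user_app_response_ms": (900, 2000),  # what the user actually feels
}

# Components drifting towards their thresholds: investigate proactively.
warnings = {
    name: value
    for name, (value, limit) in component_state.items()
    if value > 0.8 * limit
}

# The service itself is judged by the end-user metric, not by any one component.
value, limit = component_state["end_user_app_response_ms"]
print("end-user service healthy:", value <= limit)
print("early warnings to investigate:", warnings or "none")
```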

4. Skills realignment: Typically enterprise architecture and its components work in silos where every component has a specialist managing and monitoring it. What this often leads to is reporting of optimal component level performance without actually accounting for the cumulative effect of their performance on service levels. So while specialists are necessary, it becomes crucial from a VDI standpoint to have a robust incident diagnostic team to complement it.

What this means: While end-user experience never really had a custodian or specialist in charge of its optimal functioning and uptime, the incident diagnostic team becomes the umbrella that covers it.

5. Centralization: This is among the most elementary building blocks of VDI from an application delivery standpoint. One that ensures that your support skills are centralized given that the intelligence has moved to a central location and end-points are dumb terminals.

What this means: For application delivery, this is the ability to manage, monitor and support applications and their uptime centrally.

6. Security: In a traditional application delivery framework, isolated security measures are created for data centers (a perimeter security approach for data at rest), networks (encryption for data in transit) and end-points (role-based restrictions and controls for data access). VDI provides a lot of inherent security benefits; one key advantage is that data never leaves the data center. Security controls therefore need to be realigned by creating different zones within the data center.

What this means: For application delivery, this is the ability to design a cohesive security framework while saving costs.

7. Network Architecture: Designing network architecture has always been tricky with traditional desktop setups, as it requires planning for peak traffic while keeping control over the peak-to-average ratio. VDI reduces the peak-to-average ratio drastically and lets you optimize the network architecture around deterministic bandwidth requirements. VDI does require careful QoS planning within your networks, as it replaces a lot of asynchronous network traffic with interactive traffic.

What this means: A far more efficient and cost-effective network that intelligently prioritizes traffic based on its needs and is far less dependent on traffic loads. The hypothetical comparison below illustrates the shift.
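The following comparison is hypothetical; the per-user bandwidth figures are illustrative assumptions, not benchmarks. It simply shows how VDI's steady, interactive display-protocol traffic flattens the peak-to-average ratio compared with the bursty traffic of traditional desktops.

```python
# Hypothetical comparison of network load before and after VDI for 500 users.
# Per-user bandwidth figures are illustrative assumptions, not benchmarks.
users = 500

profiles = {
    # bursty file copies, patches, profile syncs
    "traditional desktops": {"avg_kbps": 40, "peak_kbps": 400},
    # steady, interactive display-protocol traffic
    "VDI": {"avg_kbps": 120, "peak_kbps": 180},
}

for label, p in profiles.items():
    avg_mbps = users * p["avg_kbps"] / 1000
    peak_mbps = users * p["peak_kbps"] / 1000
    ratio = p["peak_kbps"] / p["avg_kbps"]
    print(f"{label}: avg {avg_mbps:.0f} Mbps, peak {peak_mbps:.0f} Mbps, "
          f"peak-to-average ratio {ratio:.1f}")
```

A lower peak-to-average ratio means the WAN can be sized closer to the average, which is where the cost saving comes from.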

If you’ve addressed these 7 fundamental pillars, your Virtual Desktop Infrastructure implementation approach can be considered evolved and will ensure application delivery is optimized to provide the best end-user experience.

Converged Infrastructure (CI) is all the rage and quite a topic of discussion. Whether one calls it “unified computing”, “fabric-based computing” or “dynamic infrastructure”, there is no escaping the literature from the hardware community. Simply put, converged infrastructure bundles servers, storage, networking devices and infrastructure management software into one unit.

At this point it might be good to ask: why the need for converged infrastructure? The answer lies in the wide gap between legacy IT stacks and the needs of virtual workloads. Because siloed physical storage and network assets lack the optimization to support virtual servers, IT administrators tend to over-provision resources. As more and more workloads get virtualized and the data associated with them grows, the IT environment cannot keep pace. Installing more hardware only adds complexity and cost, and doesn’t address the real problem.

Consider storage, for example. The local dependency between a physical server and its storage shifts to a many-to-one relationship between virtual machines and a single storage device. The hypervisor multiplexes the I/O streams of multiple workloads into random I/O streams that must compete for resources, which increases the IOPS required to service the virtual workloads. To address performance issues, IT administrators may add more disk spindles, but this also adds capacity, and the over-provisioning leads to a higher cost per gigabyte of storage allocated to every virtual machine. The same is true of other parameters such as capacity and mobility. A hypothetical example of this trap is sketched below.
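Here is a worked example of that storage trap. The per-disk figures are rough assumptions used only to show the arithmetic, not vendor specifications.

```python
# Hypothetical "add spindles for IOPS, pay for capacity" arithmetic.
# Per-disk figures are rough assumptions for illustration only.
required_iops = 12000        # random IOPS demanded by the virtual workloads
iops_per_disk = 150          # assumed random IOPS per spindle
capacity_per_disk_gb = 900
capacity_needed_gb = 20000   # what the workloads actually need to store

disks_for_iops = -(-required_iops // iops_per_disk)         # ceiling: 80 disks
capacity_bought_gb = disks_for_iops * capacity_per_disk_gb  # 72,000 GB
excess_pct = 100 * (capacity_bought_gb - capacity_needed_gb) / capacity_needed_gb

print(f"disks needed purely for IOPS: {disks_for_iops}")
print(f"capacity purchased: {capacity_bought_gb} GB "
      f"({excess_pct:.0f}% more than required)")
```

Under these assumptions, meeting the IOPS requirement alone buys more than three times the capacity the workloads actually need, which is exactly the cost-per-gigabyte inflation described above.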

Here is precisely where converged infrastructure helps. Converged Infrastructure allows you to design, build, and maintain segments of the virtualization stack, while supporting an on-demand growth model.

The first wave of CI providers were vendors with independent legacy server, storage and networking components. They delivered converged infrastructure as pre-racked, pre-cabled solutions based on a reference architecture. The next wave aggregated compute, storage and SAN functionality into modular appliances based on commodity x86 hardware that scale out by adding appliance “nodes”. Centralized management and policy configuration at the virtual machine level contribute to lower operational costs, and the single-vendor approach streamlines deployment and lowers acquisition costs. However, even this wave did not fully address performance and capacity issues: it still uses inefficient methods of writing and reading data to and from storage, and it does not consider the challenges of capturing, storing, transferring and recovering data copies for backup and disaster recovery.

The third wave of CI, HyperConverged Infrastructure (HCI), holds the most promise. It addresses the performance, capacity, mobility and management issues prevalent in previous waves of converged infrastructure. It achieves VM-centricity by tracking which data belongs to which virtual machine, enabling VM mobility. By eliminating redundant read and write operations, HCI achieves performance efficiency; it achieves capacity efficiency by reducing the footprint of data on production and backup storage through de-duplication, compression and optimization of data at inception. HCI promises to dramatically reduce total cost of ownership by eliminating siloed technology, enabling rapid application deployment, reducing labor-intensive activities, preventing over-purchasing and over-provisioning, and maximizing the infrastructure investment. The data efficiency introduced by de-duplication, compression and optimization also improves performance.

HCI helps build a private cloud environment that delivers the capabilities of large cloud service providers within an organization’s own IT environment. Because it combines data center components into an appliance form factor and adds centralized management and the ability to scale, it eliminates the compatibility issues typical of a legacy, siloed environment. It also streamlines purchasing, deployment and use. Emulating the architecture, processes and practices of the top cloud service providers starts with converged infrastructure; it can reduce complexity, cycle time and costs, allowing the economics of on-premise private cloud services to work in the IT organization’s favor.

We must conclude by saying that although Anunta has created private cloud environments comprising thousands of desktops for clients in financial services, manufacturing and business processing, we have not opted for off-the-shelf CI products; we have preferred to customize based on the users. Our experience, speaking as an IT provider to Indian companies in the Indian market, is that adoption of CI products is low for the following reasons:

  • The cost of the solution is at least 30-40% higher than the traditional infrastructure approach
  • CI currently lacks flexibility when it comes to scaling up compute alone or storage alone, which forces clients to buy the total solution
  • CI looks attractive for greenfield implementations at small and medium companies wanting to start the VDI journey in sectors like training, healthcare, manufacturing and microfinance, but where infrastructure already exists, re-usability of the existing storage or network can become a point of concern

Finally, we firmly believe that CI will be the way of adopting new-age data centers, and OEMs will start aligning themselves to match client expectations on price and flexibility.

Desktop virtualization has been making strong inroads into today’s workforce. Success stories abound about how firms benefit from the move: from tellers at banks to customer service executives at BPOs to mobile business executives, it has changed IT operations and delivered tangible benefits. However, there have rarely been examples of desktop virtualization for power users. Power users are typically coders, testers, graphic artists, designers, scientists requiring complex calculation outputs, research analysts and so on: anyone requiring intense compute and storage power along with a highly customizable system.

The reason isn’t hard to find. The need for high processing power and storage, and the non-standard, highly individualistic and customized nature of each user’s system, bring their own implementation challenges. Add to that users’ expectation of performance akin to, if not exceeding, the normal desktop environment they are used to.

However, in our experience there is no reason why power users should be deprived of the benefits that desktop virtualization offers. The challenge lies in delivering high compute and storage without negating the business case, and in delivering IT simplicity and efficiency without compromising on the customization needed.

We undertook such an exercise for a global managed analytics provider. The firm provides services in market research, retail and brand pricing. Its users fell primarily into three major types:

  1. Analysts, who did client research and generated reports out of reams of data.
  2. Programmers, who created custom-built applications suited to each client engagement.
  3. Testers, who tested the applications developed across different platforms and use-case scenarios.

Employees use a proprietary big data platform as well as more than 130 applications, including IBM SPSS, IBM Dimension 6, Flowspeed, FTP and several industry platforms, to deliver results. In addition, several applications had customized Outlook plug-ins. The technology infrastructure can best be described as requiring high compute and storage and being non-standard and highly customized, yet also requiring accelerated provisioning, flexibility for ramp-ups and ramp-downs, and rapid roll-outs and updates.

In such a scenario, while it is normal for VDI implementers to focus on uniformity and standardization, a one-size-fits-all approach to VDI implementation just doesn’t apply. On the other hand, addressing individualistic customization needs can lead to over-provisioning of resources and a VM sprawl that becomes difficult for administrators to manage efficiently. Although VMs are easily created, they carry the same licensing, support, security and compliance issues as physical machines, which can defeat the gains of virtualization. The skill lies in walking the fine line between customization and standardization. We are glad to report that the company is currently considering expanding its virtualized set-up.

Virtualization has several benefits, the more significant being reduced TCO, improved application delivery and a better end-user experience. The key to reaping these benefits lies not only in identifying the type of virtualization best suited to your business requirements, but also in choosing the right technology partner, capable of guiding you through and beyond your infrastructure overhaul.

We came across an interesting blog post that discusses performance management on the cloud and the toss-up between public and private clouds. You can read it here:

Why Performance Management is Easier in Public than On-Premise Clouds — Performance is one of the major concerns in the cloud. But the question should not really be whether or not the cloud performs, but whether the Application in question can and does perform in the cloud.

The main problem here is that application performance is either not managed at all or managed incorrectly and therefore this question often remains unanswered. Now granted, performance management in cloud environments is harder than in physical ones, but it can be argued that it is easier in public clouds than in on-premise clouds or even a large virtualized environment.

How do I come to that conclusion? Before answering that, let’s look at the unique challenges that virtualization in general, and clouds in particular, pose to the realm of APM.

We believe performance management will become easier in private clouds rather than public ones, mainly because the different groups who manage infrastructure in public clouds can also be siloed, and this can result in a number of performance problems for end-users. Whether public or private, it is critical that all the dependent factors are woven together and proactively monitored.

I believe the basis of performance management has to be end-user experience management. Unfortunately, most approaches to monitoring are inward-focused and don’t really look at the effect that breaches of various system thresholds have on end-users. Admittedly, it’s not easy to put in place a system that consistently measures end-user performance, but it’s also not that complex if attempted with proper process charts. We have repeatedly seen customers with non-integrated performance management, which ends up as a reactive system because what’s being monitored has no relation to what’s being delivered to end-users.

My recommendation would be that, be it private or public cloud, you start your performance management from the end-user and move up the ladder to the data center. Connect all the points, identify dependencies, define relative thresholds (relate them to the impact they will have on the end-user) and create an agent-less system to monitor end-user experience. This has proven effective for us in both private and public cloud application usage. It can become a lot easier in a private cloud, where a single integrated system can connect everything; in a public cloud we might be restricted by the different methods different vendors employ to measure performance. A minimal sketch of this dependency-first approach follows.
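The sketch below illustrates the dependency-first idea. The dependency map, component names and thresholds are all hypothetical; the point is that a threshold breach anywhere on an application's path is reported in terms of the end-user experience it threatens, not as an isolated device alarm.

```python
"""Sketch: agent-less, dependency-first end-user experience monitoring.

The dependency map, component names and thresholds are hypothetical.
"""
# Which infrastructure components each end-user application depends on.
dependencies = {
    "core_banking_ui": ["wan_link", "vdi_cluster", "san_array"],
    "email": ["wan_link", "mail_farm"],
}

# Current state of each component as (measured value, relative threshold).
component_state = {
    "wan_link":    (85, 80),    # latency in ms: breached
    "vdi_cluster": (3.0, 5.0),  # CPU-ready %
    "san_array":   (15, 20),    # latency in ms
    "mail_farm":   (200, 500),  # response in ms
}

# Report breaches in terms of the end-user applications they put at risk.
for app, deps in dependencies.items():
    at_risk = [d for d in deps if component_state[d][0] > component_state[d][1]]
    if at_risk:
        print(f"{app}: end-user experience at risk via {', '.join(at_risk)}")
    else:
        print(f"{app}: healthy")
```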
