It has been well-documented that monitoring end-user experience is critical. We touched upon this in our last blog. After all, the delivery architecture is only successful if the end user can complete their tasks smoothly. But as with every result, the means to achieve it matter too. In our fact-finding study of the Indian BFSI sector, we found that most respondents did not measure end-user performance comprehensively. One of the key factors they highlighted was the extra cost involved.

Measuring the end-user experience requires the IT team to deploy specialized end-user monitoring tools. This incurs extra cost in terms of the tool purchase, the additional resources brought on board for monitoring and, consequently, their training.

This is compounded by the issue of geographical spread as BFSI organizations widen their networks from urban to semi-urban and rural areas. When applications are deployed to these regions, performance measurements vary widely because different internet service providers offer different service standards. Monitoring distributed applications encompasses a large and changing set of users, applications, types of measurements and platforms, adding to the cost element.

Now add to this the interesting views put forward in Gartner’s Magic Quadrant for Application Performance Monitoring (APM) for this year. The report states that applications have become far more difficult to monitor owing to architectures, in general, becoming more modular, redundant, distributed and dynamic. This, in turn, means application code changes more frequently. The resultant web of complexities renders traditional system-monitoring tools practically useless. You can’t help but sympathize in a situation like this.

But I have to point out here that tools aren’t the only answer. Measuring application performance from an end-user perspective has a lot more to do with the way applications are delivered: the system architecture and how it can be optimized to deliver applications and measure their performance. This is where technologies like VDI come in handy. Not just because VDI creates a standardized operating environment and delivery architecture, but also because it enables an organization to put in place enforceable SLAs that can be easily translated into end-user SLAs. In essence, a technology layer is optimized to deliver end-user intelligence by a process layer that defines the softer aspects.

In any organization, business is driven at two levels, i.e. the strategic (within boardrooms) and the execution (by the end user). That said, our recent study of APM in the Indian BFSI sector tells us that business strategies devised in corporate offices often fail to reach the end users and, given that they are not involved in the process, expectations and perceptions often differ. Between the two, however, are a number of departments that are expected to help end users achieve those business goals. IT is one such department and, in today’s connected environment, probably infinitely more important than HR or operations. It is at this juncture that I’d like to draw attention to an interesting question recently asked by a consultant on LinkedIn: why is it so hard to get IT departments to be more engaged in the execution of corporate strategic goals? Our Chief Operating Officer, Sivakumar Ramamurthy, answered this rather aptly. He suggested that IT departments should:

  • Look at how applications are performing at the end-user level as compared to the broad enterprise network level
  • Translate business goals into actionable end-user metrics that make it easy to spot when an end-user is having trouble – this again draws on the fact that there isn’t always a clear direction from business to begin with

In essence, his views capture the state of IT departments across industries today. There is a desire to be considered a business partner but an understandable inability to THINK business. I don’t think one can place blame here: when IT first made its debut in everyday functioning, its aim had been simply to automate; it later moved on to the higher goal of increasing productivity while lowering costs. It is beyond this second stage that most IT departments have not been able to move. This indicates that while business leaders believe IT is a boardroom subject, it still tends to be marginalized as a support function focused on saving costs.

With that kind of direction, it is not surprising that IT departments haven’t been able to scale up the way business expects them to. As a result, they’ve come to believe that their technical knowledge protects them, even as things like the consumerization of IT have become a reality. Much like doctors believed they were the final word until the internet opened up medical information to patients.

So this explains the business-IT disconnect, but what about the IT-end-user disconnect? Our survey reveals that 53% of respondents thought IT and end users were always at odds with each other. What this says is that while end users demand 100% uptime, IT is unable to deliver on it: it neither looks at how applications perform at the end-user level nor translates business goals into actionable end-user metrics.
What you’re left with is three silos, i.e. business, IT and end users, that don’t really speak with each other, understand each other or look at each other as co-dependents, and therefore do not align. So while business is looking to improve EBITDA margins, IT thinks it can contribute by reducing the TCO of its IT investments. What it doesn’t consider is that if it can increase application uptime and reduce its cost of application delivery, the end user may actually become more capable of delivering on the business goals, and IT can go from being an enabler to a revenue generator.

According to Brian Madden, VDI is not the silver bullet folks expect it to be. The two major misconceptions he highlights are:

  • With desktop virtualization, one can avoid managing Windows desktops
  • With desktop virtualization, you virtualize the apps and virtualize the user environment, and then there’s nothing left to manage

Brian further explains how desktop virtualization is inextricably linked to Windows 7.

A lot has been said about the challenges and myths of VDI, and conclusions are being drawn on that basis. While these discussions spark constructive thinking, they also scare away new users by detailing one complexity after another. Here’s our take on them:

First of all, the organization should be ready for a real transformation if VDI is to be adopted. If the intention is to manage everything as it’s being managed currently, then most of the challenges being talked about on blogs and online forums will hold true. The fundamental change is that VDI moves control from the end-point to the datacenter.

Traditionally, a lot of discipline has been applied to datacenter management because most of the control lies with the IT team. A few years ago, several blogs spoke about how the virtual server concept would fail and never take off. Questions were raised about hardware being shared, driver issues, memory allocation, storage, etc. Today, nobody questions server virtualization capabilities; almost every organization has attempted it or is using it on a large scale. Also, comparing the speed of adoption of server virtualization to that of desktop virtualization is incorrect: desktops are tightly integrated with end users. More than technology, it’s a perception play, and organizations should be ready to embrace it.

When we adopted this solution earlier, we faced questions about the cost-effectiveness of VDI (which was not seen as optimal), ease of management, etc. We realized we were comparing VDI to the bottommost layer of the desktop instead of looking at it as a broader solution that could deliver much more than the desktop of the day. Speaking of compliance and security, many desktop IT teams are struggling to meet tough compliance requirements, facing audit after audit that forces them to streamline the end-point solution, protect critical data on desktops, and maintain complicated policies and scripts. The way out has been stop-gap solutions, or deploying complex enterprise-wide applications that end up addressing only about 5-10% of the issues they were supposed to take care of.

The effort and investment needed for these are not attributed to desktop costs; rather, they all become part of the information security budget. Isn’t it logical to say that the current desktop is not capable of protecting itself and hence we need to look for solutions? If yes, then why are these costs not attributed to desktop costs? By contrast, migrating to VDI brings about 70-80% of compliance without the intervention of any additional application or technology. Are we consciously crediting VDI for this? Great desktop management tools and solutions do exist today. But even then, the need to manage each end-point persists. Accurate patching, standardization of hardware/software configurations and application rollout are not easy tasks for desktop engineers. VDI brings down this complexity, masks the hardware variation and provides a completely standardized application layer. While patching is still needed in VDI, using the right templates reduces the volume of patching.

VDI management is about managing 1 desktop vis-à-vis 500 desktops. If enough time is spent on design and planning, the manageability of VDI can be a lot simpler than that of actual desktops. At times, IT teams are challenged about their so-called obsession with “VDI” and asked whether they are trying to make it work in whatever form. The answer is ‘No’, because the audience you’re going to face is end users, and they are smart enough to know what works best for them. The concept of VDI is not new; the logic of sharing a common infrastructure platform has been around for many years. The evolution of technologies like client-server architecture, terminal services, application virtualization, etc., has been driving the single-point agenda of how effectively one can deliver applications to end users.

We should continue to look at solutions that deliver applications to end users using various methods and tools. Also, VDI shouldn’t be seen as merely replacing a desktop, but as the complete chain of things that contribute to end-user experience management (EUEM).

End-user performance management is critical to making VDI a successful initiative. The user is looking for maximum efficiency and is not concerned about HOW that is achieved or WHAT technology is used, just as a mobile phone user does not care whether their phone uses GSM or CDMA technology as long as it serves its intended purpose.

Frequently, business heads and teams resist VDI simply because the familiar box near them has been taken away. We saw a lot of resistance when we rolled out VDI a couple of years ago, but we found a way to prove and measure its performance. Eventually, we made these performance metrics available for all to see, so that new users who challenge VDI have reliable data to refer to.

The approach we have adopted is a combination of technology and processes. Our monitoring architecture started from the end-user application metrics and moved up the layers to the actual VDI in the data center (contrary to the traditional approach of just looking at performance counters). With this approach, we were able to easily relate application performance at the end-user level to the dependent parameters of the central infrastructure. We created business views that brought all the dependent infrastructure together, but still faced the challenge of simulating the actual end-user experience.

We then developed application simulators that schedule application access at set intervals within the hour and feed back the performance numbers (equivalent to typical use-case scenarios and keystrokes of the users). This was interlinked with the various system thresholds, such as network, WAN, SAN IO and the virtual platform, ending with the final VDI session performance tracking. Any deviation from a threshold highlights the possible causes, which are monitored 24/7 by the NOC team. With this, we have been able to consistently achieve user satisfaction, start delivering application performance guarantees to our customers and, in the process, free business heads and end users of their VDI-related fears.
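A minimal sketch of what such a simulator loop might look like, assuming a Python implementation; the timings, thresholds and function names here are illustrative stand-ins, not our actual tooling:

```python
import time
import random  # stands in for the variability of a real application transaction

# Illustrative thresholds (seconds); actual values would come from end-user SLAs.
THRESHOLDS = {"logon_s": 30.0, "app_response_s": 2.0}

def simulate_transaction() -> dict:
    """Replay a typical use case (logon plus an in-app action) and time each step.
    The work is faked with sleep(); a real simulator would drive the application."""
    t0 = time.monotonic()
    time.sleep(random.uniform(0.01, 0.05))   # placeholder for the logon sequence
    logon = time.monotonic() - t0
    t1 = time.monotonic()
    time.sleep(random.uniform(0.001, 0.01))  # placeholder for user keystrokes
    response = time.monotonic() - t1
    return {"logon_s": logon, "app_response_s": response}

def check(metrics: dict) -> list:
    """Return the metrics that breached their threshold, for the NOC to triage."""
    return [name for name, value in metrics.items() if value > THRESHOLDS[name]]

breaches = check(simulate_transaction())
if breaches:
    print("Alert NOC:", breaches)  # a deviation points at a dependent-layer cause
else:
    print("Within SLA")
```

A real simulator would drive actual application keystrokes and report into the monitoring stack; the measure-compare-alert structure, however, stays the same.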

Visit to know more about our latest End-User Computing offerings.


How is VDI performance measured?

VDI performance is measured using the following end-user experience metrics:

  • Logon duration: Users expect to access their desktop immediately after they enter the password.
  • App load time: Users are looking for a shorter load time for their apps.
  • App response time: When end-users are working within an application, they don’t want to stop and wait for the application to catch up.
  • Session response time: A measure of how well the OS responds to user input.
  • Graphics quality and responsiveness: Users expect to have the same graphical experience that they would have on a physical desktop.
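To illustrate how metrics like these can feed SLA reporting, the sketch below checks the 95th percentile of each measured metric against a target; all sample values and targets are invented for illustration:

```python
import statistics

# Hypothetical per-session samples (seconds); real numbers come from monitoring.
samples = {
    "logon_duration": [22, 24, 25, 28, 31],
    "app_load_time": [2.8, 2.9, 3.0, 3.1, 3.4],
    "app_response_time": [0.4, 0.4, 0.5, 0.5, 0.6],
}

# Illustrative SLA targets applied to the 95th percentile of each metric.
sla = {"logon_duration": 30, "app_load_time": 4.0, "app_response_time": 1.0}

def p95(values):
    """95th percentile via the 'inclusive' method (20 quantiles, take the 19th)."""
    return statistics.quantiles(values, n=20, method="inclusive")[18]

report = {m: {"p95": round(p95(v), 2), "within_sla": p95(v) <= sla[m]}
          for m, v in samples.items()}
for metric, result in report.items():
    print(metric, result)
```

Reporting on a high percentile rather than the average matters: an average logon time can look healthy while the slowest 5% of users are well outside the SLA.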

What is VDI used for?

VDI finds unique applications in the following use cases.

  • Many companies implement VDI as it makes it easy to deploy virtual desktops for their remote workers from a centralized location.
  • VDI is ideal for enterprises that embrace the BYOD concept and allow employees to work on their own devices. As processing is done on a centralized server, VDI can support a wide range of devices while ensuring adherence to security policies. The data is stored on the server, so the risk of data loss from the device is minimized.
  • In the case of task or shift work in organizations like call centers, non-persistent VDI can be employed. A large number of employees can use a generic desktop with software that allows them to perform limited and repetitive tasks.

What is VDI as a service?

When VDI is offered as a service, a third-party service provider manages the virtual infrastructure for you. The VDI user experience, along with all the applications necessary for work, is offered to end users as a cloud service. The service provider also assumes responsibility for managing the desktop infrastructure, ensuring faster software updates, migrations and user provisioning, as well as better data security and disaster planning for businesses. Consequently, organizations can ease their administrative operations and minimize IT-related overheads.

What is VDI, and how does it work?

VDI, or Virtual Desktop Infrastructure, is a virtualization technology in which virtual machines are used to deliver and manage virtual desktops. VDI separates the OS, applications and data from the hardware and provides a convenient and affordable desktop solution over a network. The desktop environments are hosted on a centralized server and deployed to end-user devices on request.

VDI uses a hypervisor running on physical hosts to create virtual machines, which in turn host the virtual desktops that users access remotely from their devices.

A connection broker is necessary for any VDI environment. This program acts as a single point of operation for managing all the hosted resources and offers end users login access to their allocated systems. Virtual machines, applications or physical workstations are made available to users based on their identity and the location of the client device.

VDI can be persistent or non-persistent. With persistent VDI, users access the same desktop every time they log in, and their changes are preserved across sessions. Non-persistent VDI, on the other hand, lets users connect to generic desktops whose state is discarded at logoff. It is used in firms where a customized desktop is not necessary and the nature of work is limited and repetitive.
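The assignment logic a connection broker applies for persistent versus non-persistent pools can be sketched as a toy model; the class and desktop names below are invented for illustration:

```python
# Toy connection broker illustrating persistent vs non-persistent assignment.

class Broker:
    def __init__(self, pool):
        self.pool = list(pool)    # generic desktops available for checkout
        self.persistent_map = {}  # user -> dedicated desktop

    def connect(self, user, persistent=False):
        if persistent:
            # Persistent VDI: the same desktop on every login, changes preserved.
            if user not in self.persistent_map:
                self.persistent_map[user] = self.pool.pop()
            return self.persistent_map[user]
        # Non-persistent VDI: any generic desktop; state is discarded at logoff.
        return self.pool[0] if self.pool else None

broker = Broker(["vd-01", "vd-02", "vd-03"])
print(broker.connect("alice", persistent=True))  # gets a dedicated desktop
print(broker.connect("alice", persistent=True))  # same desktop on next login
print(broker.connect("bob"))                     # any generic desktop will do
```

A production broker also factors in identity, entitlements and client-device location, as described above, before granting the session.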

It is common knowledge that cloud infrastructure adoption by businesses in India is seeing an encouraging trend. According to a recent study by EMC Corporation and Zinnov Management Consulting, the cloud computing market in India is expected to reach $4.5 billion by 2015, a little more than ten times the existing $400 million market. Also, the same study states that the cloud industry in India alone is expected to create 1 lakh additional jobs by 2015.

Additionally, with the rollout of the UID programme, the technology has also found its way into the government sector. This would lead one to think that cloud infrastructure has entered the mainstream and is increasingly preferred over the legacy IT model. Yet industries such as BFSI, which have strong regulatory and compliance requirements, have understandable reservations about cloud infrastructure.

Per recent research by ValueNotes on behalf of Anunta into application performance management in the Indian BFSI sector, 76% of the respondents surveyed still have a physical delivery architecture. Of them, insurance and financial institutions are open to considering cloud infrastructure in future, but banks appear hesitant. The study further found that only 12% of respondents had some applications on the cloud, while core applications were still maintained on physical architecture. Why so? Resistance to change, security concerns and a lack of reliable vendors were some of the reasons cited for not moving to cloud infrastructure.

But the benefits of cloud infrastructure, in this case for the BFSI sector, far outweigh the concerns. Banks and financial institutions have, for many decades, made use of service bureaus or outsourced core banking platforms. The ever-increasing range of cloud computing options gives them an opportunity to reduce their internal technology footprint and gain access to technology built and operated by third-party experts. Many investment banks and buy-side firms, such as hedge fund houses, also have private grid infrastructure for functions like Monte Carlo simulation and risk analysis hosted in a third-party data centre. Yet, more often than not, they are required to add capacity in a jiffy at critical points. Cloud can prove more than handy at such moments.

Apart from the near-halving of costs, applications like customer relationship management (CRM) and risk management can be brought to market relatively quickly. Banks can focus on their core business instead of concerning themselves with infrastructure scalability, not to mention disaster recovery. I could go on about more advantages in the form of rapid provisioning and scaling of services, alongside the chance to go green and contribute to the environment. But you get the point.

It’s not that the financial services industry is completely averse to cloud computing and its charms. In the survey, 43% of respondents plan to make this move in the next 5 years. Some of the premier private banks in our country have already adopted both private and public cloud, although most of these banks have only hosted peripheral applications on the cloud. But that in itself is hopeful.

According to the data collected from various industries, IT/ITeS is the top contributor to the total cloud infrastructure market in India with 19 per cent, followed by telecom at 18 per cent, BFSI at 15 per cent, manufacturing at 14 per cent and government at 12 per cent. So BFSI has done well. But this post suggests the numbers can be bettered.

Anunta tasked a research agency to study the state of application performance management and monitoring in the Indian BFSI sector, including banks, AMCs, insurance companies and brokerages. While some of the findings were what we expected, the study also threw up interesting contrasts between what these companies are saying and what the ground realities really are. As the first in a series of posts on this survey, we’ll delve into some of BFSI’s top technology joys and sorrows.

WHAT THEY SAY: The survey found that, given the rapid evolution and adoption of new technologies and applications such as cloud computing, e-payments and mobile payments, respondents felt that challenges would only increase. This is a big pain point for 70% of the CTOs/IT heads we spoke to.

BUT: It’s not something the sector can hide from. Take mobile payments, for example: according to RBI figures, the volume and value of funds transferred through the national electronic funds transfer (NEFT) system have been doubling almost every year. In 2010-11, the volume of funds transferred through NEFT doubled to 13.23 crore, and the value of transactions too doubled to Rs 9,39,149 crore.

Our take: All of the technologies cited above are ones that we believe will drive the future of technology infrastructures. Platforms like e-payments and mobile payments will add complexity to an application architecture that is already relatively under-managed. This will mean a higher level of integration and new ways of monitoring enterprise and application performance, all while keeping strict control on capex and costs.

WHAT THEY SAY: 76% of the respondents said they use automated tools to measure application performance.

BUT: 53% admit that there is no consensus between IT and end users on how performance should be measured.

Our take: Measuring application performance is not enough. It needs to be measured from an end-user perspective by translating technical SLAs into end-user SLAs and then enforcing these across the organization. However, a dissonance between the IT organization and end users on what constitutes a good measurement metric impedes the process.

WHAT THEY SAY: CTOs understand the importance of end-user monitoring, and 83% of respondents measure performance from the end-user side.

BUT: The metrics employed capture end-user experience broadly but do not provide any detailed assessment of performance. They reflect a reactive approach to monitoring and are used mainly for incident reporting. Parameters include response time, application downtime, number of problem tickets and, in a large number of cases, just end-user feedback.

Our take: While 83% seems like a healthy number, based on our experience in this sector we also know that it is based on device-level SLAs. There is an urgent need to become proactive in monitoring end-user experience and issues. End-user SLAs, when combined with technologies such as virtualization, allow SLA breaches and issues to be identified before they occur.

WHAT THEY SAY: It was observed that loss in employee productivity was measured in terms of the volumes, people or hours lost due to incidents that cause a dip in application performance. Productivity losses are significant if an issue remains unresolved for 30 minutes to an hour, or when the network is unavailable for an entire day. In those cases, productivity losses could go up to 30%.

BUT: Almost 56% of respondents admitted that they do not measure the business impact of lower application performance.

Our take: This is one of the biggest bugbears from an application performance management and measurement standpoint. IT departments are often focused on keeping their TCO low and ensuring systems run adequately (not perfectly), and they have lost sight of the fact that IT needs to align with business. A drop in IT/application uptime can mean significant revenue loss and brand erosion brought about by dissatisfied employees and customers. We addressed this issue in one of our earlier blogs, but we expect it to remain an issue for a few more years.
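As a back-of-the-envelope illustration of why measuring business impact matters, the numbers below translate an hour-long incident into lost hours and revenue. Everything except the 30% productivity-loss figure cited in the survey is an assumed value:

```python
# Back-of-the-envelope business impact of an application performance incident.
# All inputs except productivity_hit are illustrative assumptions.

affected_users = 200          # people who depend on the application (assumed)
outage_hours = 1.0            # time to resolution (assumed)
productivity_hit = 0.30       # up to 30% loss for issues unresolved ~1 hour
revenue_per_user_hour = 500   # assumed revenue contribution per productive hour

lost_hours = affected_users * outage_hours * productivity_hit
revenue_impact = lost_hours * revenue_per_user_hour
print(f"Lost productive hours: {lost_hours:.0f}")
print(f"Estimated revenue impact: {revenue_impact:,.0f}")
```

Even this crude model makes the point: an incident that IT logs as "one ticket, resolved in an hour" can represent dozens of lost productive hours for the business.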

It is often said that success at server virtualization does not mean you can succeed at desktop virtualization. As an End User Computing specialist that uses desktop virtualization as the underlying technology, we come across at least one enterprise a week that thinks adopting desktop virtualization is just a matter of buying a VDI license and getting an SI to do the job. Another couple of enterprises each week are mystified as to why their POC failed to scale. The worst are the cases where, having failed to make it work, the IT team has written off the technology as one “that does not suit us”.

At a practical level, what organizations don’t realize when it comes to static vs dynamic workloads is that servers are concentrated within the datacentre and IT controls them completely. Desktops, on the other hand, are distributed and non-standardized. Let’s not forget that as you move users from a personal machine into the perimeter of the datacentre, you leave them with a dumb terminal and effectively less control over their computing environment. Hence, unless you have factored in the overall change management process for users, you will encounter problems.

It is absolutely important to know why you’re virtualizing and which solution is best suited to your requirements. Technology for the sake of simplifying the IT function, or even reducing cost, is rarely as powerful as directly giving the business increased productivity, and this is where the end user becomes critical. Virtualization for the sake of being cool only goes to prove that IT is not aligned with business.

A fundamental transformation is needed in VDI management. Organizations need to realize that it’s not desktop management anymore but a cluster of high-performing servers and storage systems, and this requires a skilled team to manage.

In our experience, it really doesn’t matter whether you’re trying to manage multiple roll-outs, as long as you’ve chosen the right partner to help you implement them. In fact, we’ve had the opportunity to implement exactly such a concurrent roll-out at Ratnakar Bank and found that we were able to minimize the upheaval and maximize the end result in terms of increased productivity.

Scaling is critical when planning your VDI deployment. VDI planning involves the right compute, storage, memory, network and application integrations for the right application performance. The sizing for a batch of 150-200 users vs 1000-2000 users will be dramatically different. Server enclosures, storage models, etc., vary for different scales of requirement, and a clearly thought-through modular approach has to be adopted. This is also by far one of the most critical junctures at which IT needs to align with business, in being able to map onto business growth projections to plan future ramp-ups.

It’s not just the hardware that you’re testing VDI on; it’s your actual application and user environment, with diverse possible business scenarios, so you know for sure what the exact sizing (including users per core, IOPS for storage, etc.) is. Many industry benchmarks won’t fit the actual environment. A detailed analysis of the current workload, capacity changes over the weeks, and peak hours, days and months should feed into this planning.
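A rough sketch of the kind of sizing arithmetic involved is below; the density and IOPS ratios are placeholder assumptions that must come from load-testing your actual application and user environment, not from industry benchmarks:

```python
import math

# Rough VDI sizing sketch. Every ratio here is an assumed placeholder;
# real values must be derived from a pilot on the actual workload.

users = 1500
users_per_core = 6        # session density observed in a hypothetical pilot
iops_per_user = 20        # assumed steady-state storage IOPS per session
cores_per_server = 32
peak_factor = 1.5         # headroom for boot storms and peak hours/days

cores_needed = math.ceil(users / users_per_core * peak_factor)
servers = math.ceil(cores_needed / cores_per_server)
storage_iops = math.ceil(users * iops_per_user * peak_factor)

print(f"Cores: {cores_needed}, servers: {servers}, storage IOPS: {storage_iops}")
```

Note how sensitive the output is to the assumed ratios: halving the users-per-core density doubles the server count, which is exactly why sizing from someone else’s benchmark is risky.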

All things considered, what we can say for sure, based on our experience of implementing VDI for various corporates over the last decade, is that implementing VDI requires a partner with a specific and proven track record. Before selecting a partner, check that, and remember: having done server virtualization does not count as sufficient track record. We believe the skills required to manage VDI involve domain experience in virtualization technologies and products. Many VDI implementations fail because of a gross underestimation of the skills needed in this domain. It’s probably not a bad idea to bring in an expert.

In our line of business, all too often we come across IT infrastructure heads who say they are currently looking at virtualizing their applications and may look at virtualizing desktops later (if needed). I find this odd because it is not an either/or decision.

Let’s first understand what application virtualization is and why organizations invest in it. It is a solution that insulates running programs from the underlying desktop. The idea is to eliminate the configuration problems that conventional desktops are plagued with and that require heavy support resources. In essence, the interactions between the OS and a given program are virtualized. This is particularly useful when end users need a specific set of applications to perform predictable tasks with minimal variance or access controls. A good example would be the branch of a bank, which typically handles a set type and number of average transactions. This, however, still leaves a need for personal computing capacity on the desktop.

Compare this with cloud infrastructure and desktop virtualization, which (put crudely) takes the entire desktop and puts it in the datacenter, with the desktop becoming nothing but a thin client. This enables a broader level of access to complex business applications even on the move, and is hence better suited for power users and management.

Clearly, therefore, these two technologies serve very different purposes, and application virtualization cannot become a proxy for desktop virtualization or vice versa. If it’s the budget that’s causing you to take this phased approach, then remember: application virtualization may be cheaper, but it is also restricted in the capabilities and visibility it provides to power users. Deploying it across the organization will mean that VDI inevitably becomes necessary when a more sophisticated level of interaction with applications is required.

Consequently, the thinking around virtualization needs to change. Enterprises should be looking to use the power of cloud infrastructure to maximise end-user satisfaction, productivity and revenue generation. Hence, if the end goal of your cloud infrastructure strategy is to maximise productivity, increase IT efficiency and responsiveness, and increase enterprise security, application virtualisation is one step in a cloud adoption journey that eventually has to proceed to user session and desktop virtualization in order to realise your organizational goals.

When planning your investment in Virtual Desktop Infrastructure (VDI), organizations should consider whether they need it, the cost-benefit analysis, what parts of their infrastructure to virtualize, and so on. Another interesting aspect to consider is how the journey of migrating desktops to the cloud impacts the application delivery model. As a result, I think it merits taking this conversation one step further to discuss what you should consider from an application delivery standpoint when moving to VDI. Let’s continue our discussion of the building blocks of an efficient application delivery model.

1. Identify the value chain: While this seems like a no-brainer, it is important to identify the enterprise value chain right from the data center to the end-point, where the end-user consumes the application.

What this means: With diverse application needs for different categories of users, i.e. mobile users, transaction users, quasi-mobile users, etc., this demands identifying all the key components in the critical path for each user segment.

2. Manageability of applications: Identifying the value chain needs to be closely followed by manageability at the component level. While data centers and networks are managed as part of keeping-the-lights-on operations, end-points have almost always been monitored and managed only to the extent of provisioning device uptime. For all practical purposes, they have been considered an output mechanism that doesn’t impact application performance. Desktop virtualization helps here; it allows you to push the envelope for end-to-end manageability of application performance.

What this means: VDI brings manageability to each component of the application delivery value chain; however, manageability paradigms change.

3. Monitoring: Monitoring here refers to a proactive approach to ensuring optimum application performance for end users. At present, organizations monitor performance at the component level without really monitoring the service level that the components cumulatively contribute towards.

What this means: This comprehensive monitoring ensures that your systems are geared to proactively identify issues before they hit users. So application delivery is optimized at the end-user level.
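The gap between component-level checks and a cumulative service-level check can be sketched in a few lines of Python. The latency budgets and SLA figure below are entirely hypothetical, chosen only to illustrate the point:

```python
# Hypothetical per-component latency budgets (ms) along the delivery chain.
# The end-user SLA is judged on the cumulative figure, not on each component.
COMPONENT_BUDGETS_MS = {"datacenter": 50, "network": 120, "endpoint": 80}
SERVICE_SLA_MS = 200  # illustrative end-to-end response-time target

def evaluate_service_level(measurements_ms):
    """Return per-component status, total latency and the service-level verdict."""
    component_status = {
        name: measurements_ms[name] <= budget
        for name, budget in COMPONENT_BUDGETS_MS.items()
    }
    total = sum(measurements_ms[name] for name in COMPONENT_BUDGETS_MS)
    return component_status, total, total <= SERVICE_SLA_MS

# Every component is inside its own budget, yet the cumulative latency
# breaches the end-user SLA -- exactly what component-only monitoring misses.
status, total, sla_ok = evaluate_service_level(
    {"datacenter": 45, "network": 110, "endpoint": 75}
)
```

Each component passes its own check while the service level fails, which is why monitoring has to be anchored at the end-user level.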

4. Skills realignment: Typically, enterprise architecture and its components work in silos, where every component has a specialist managing and monitoring it. This often leads to reporting of optimal component-level performance without accounting for the cumulative effect of that performance on service levels. So while specialists are necessary, from a VDI standpoint it becomes crucial to have a robust incident diagnostic team to complement them.

What this means: While end-user experience never really had a custodian or specialist in charge of its optimal functioning and uptime, the incident diagnostic team becomes the umbrella organization that covers it.

5. Centralization: This is among the most elementary building blocks of VDI from an application delivery standpoint. It ensures that your support skills are centralized, given that the intelligence has moved to a central location and end-points are dumb terminals.

What this means: For application delivery, centralization brings the ability to manage, monitor and support applications and their uptime/downtime from one place.

6. Security: In a traditional application delivery framework, isolated security measures are created for data centers (a perimeter security approach for data at rest), networks (encryption for data in transit) and end-points (role-based restrictions and controls for data access). VDI provides a lot of inherent security benefits; a key advantage is that data never leaves the data center. Security controls therefore need to be realigned by creating different zones within the data center.

What this means: For application delivery, this brings the ability to design a cohesive security framework while saving costs.
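As an illustration of zoning, a data-center security policy can be modeled as a default-deny allow-list of flows between zones. The zone names and permitted flows below are purely hypothetical:

```python
# Hypothetical zones inside the data center once VDI keeps all data in-house.
# Only the listed (source, destination) flows are permitted; everything else
# is denied by default.
ALLOWED_FLOWS = {
    ("endpoint_access", "desktop_pool"),   # display protocol from thin clients
    ("desktop_pool", "application"),       # virtual desktops to app servers
    ("application", "database"),           # app tier to data tier
}

def flow_permitted(src_zone: str, dst_zone: str) -> bool:
    """Default-deny check against the zone allow-list."""
    return (src_zone, dst_zone) in ALLOWED_FLOWS

# An end-point can reach its virtual desktop, but never the database directly.
```

Because only the display protocol crosses from end-points into the data center, the rest of the framework can be tightened zone by zone rather than at the perimeter alone.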

7. Network Architecture: Designing network architecture has always been tricky with traditional desktop setups, as it requires planning for peak traffic while keeping control over the peak-to-average ratio. VDI reduces the peak-to-average ratio drastically and optimizes network architecture with deterministic bandwidth requirements. VDI does require careful QoS planning within your networks, as it replaces a lot of asynchronous network traffic with interactive traffic.

What this means: A far more efficient and cost-effective network that intelligently prioritizes traffic based on its needs and is far less dependent on traffic loads.
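The peak-to-average effect can be sketched with made-up bandwidth samples; the two traffic profiles below are illustrative, not measurements from any real link:

```python
# Hourly bandwidth samples (Mbps) on one branch link -- illustrative numbers.
traditional = [5, 8, 60, 12, 55, 9, 7, 50]       # bursty file and app traffic
vdi_display = [18, 20, 22, 19, 21, 20, 18, 22]   # steady display protocol

def peak_to_average(samples):
    """Links are sized for the peak, so a high ratio means wasted capacity."""
    return max(samples) / (sum(samples) / len(samples))

# The traditional profile forces you to provision roughly 2.3x the average
# load, while the VDI profile stays near 1.1x: deterministic bandwidth.
```

The lower and more predictable the ratio, the closer provisioned capacity sits to actual demand, which is where the cost saving comes from.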

If you’ve addressed these 7 fundamental pillars, your Virtual Desktop Infrastructure implementation can be considered evolved, and application delivery will be optimized to provide the best end-user experience.

Converged Infrastructure (CI) is all the rage and quite a topic of discussion. Whether one calls it “unified computing”, “fabric-based computing” or “dynamic infrastructure”, there’s no escaping the literature from the hardware community. Simply put, Converged Infrastructure bundles servers, storage, networking devices and infrastructure-management software into one unit.

At this point it is worth asking: why the need for converged infrastructure? The answer lies in the wide gap between legacy IT stacks and the needs of virtual workloads. Because siloed physical storage and network assets lack the optimization to support virtual servers, IT administrators tend to over-provision resources. As more workloads get virtualized and the data associated with them grows, the IT environment cannot keep pace. Installing more hardware only adds complexity and cost, and doesn’t address the real problem. Consider storage, for example. The one-to-one dependency between a physical server and its storage shifts to a many-to-one relationship between virtual machines and a single storage device. When the hypervisor multiplexes the I/O streams of multiple workloads, it creates random I/O streams that must compete for resources, increasing the IOPS required to service the virtual workloads. To address performance issues, IT administrators may add more disk spindles, but this also adds capacity that isn’t needed. The over-provisioning leads to a higher cost per gigabyte of storage allocated to every virtual machine. The same dynamic applies to other parameters like capacity and mobility.
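The spindle arithmetic behind that over-provisioning is easy to work through. All figures below are hypothetical, not taken from any real array or vendor datasheet:

```python
import math

required_iops = 9000        # random IOPS after hypervisor multiplexing (assumed)
disk_iops = 180             # assumed random IOPS per spindle
disk_capacity_gb = 900      # assumed capacity per spindle
capacity_needed_gb = 10000  # what the workloads actually need

spindles_for_iops = math.ceil(required_iops / disk_iops)                  # 50
spindles_for_capacity = math.ceil(capacity_needed_gb / disk_capacity_gb)  # 12
spindles = max(spindles_for_iops, spindles_for_capacity)

provisioned_gb = spindles * disk_capacity_gb
overprovision_factor = provisioned_gb / capacity_needed_gb
# Sizing for IOPS rather than capacity buys 4.5x the storage actually
# needed, which is what drives up the cost per allocated gigabyte.
```

With these numbers, performance (50 spindles) rather than capacity (12 spindles) dictates the purchase, leaving 4.5x the required capacity on the floor.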

Here is precisely where converged infrastructure helps. Converged Infrastructure allows you to design, build, and maintain segments of the virtualization stack, while supporting an on-demand growth model.

The first wave of CI providers were vendors with independent legacy server, storage and networking components. They delivered Converged Infrastructure as pre-racked and pre-cabled solutions based on a reference architecture. The next wave aggregates compute, storage and SAN functionality into modular appliances based on commodity x86 hardware that scale out by adding appliance “nodes.” Centralized management and policy configuration at the virtual-machine level contribute to lower operational costs. This single-vendor converged infrastructure also streamlines deployment and lowers acquisition costs. However, even this approach does not fully address performance and capacity issues: it still uses inefficient methods of writing and reading data to and from storage, and it does not consider the challenges of capturing, storing, transferring and recovering data copies for backup and disaster recovery.

The third wave of CI, HyperConverged Infrastructure (HCI), holds the most promise. It addresses the performance, capacity, mobility and management issues prevalent in previous waves of converged infrastructure. It achieves VM-centricity by tracking which data belongs to which virtual machine, enabling VM mobility. By eliminating redundant read and write operations, HCI achieves performance efficiency; it achieves capacity efficiency by reducing the footprint of data on production and backup storage via de-duplication, compression and optimization of data at inception. HCI promises to dramatically reduce total cost of ownership by eliminating siloed technology, enabling rapid application deployment, reducing labor-intensive activities, preventing over-purchasing and over-provisioning, and maximizing the infrastructure investment. The data efficiency introduced by de-duplication, compression and optimization also improves performance.
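A toy sketch of de-duplication and compression at inception, using block hashing with SHA-256 and zlib, shows why the footprint collapses for VDI-like data. This illustrates the general idea only, not any vendor's implementation:

```python
import hashlib
import zlib

def dedupe_and_compress(data: bytes, block_size: int = 4096):
    """Store each unique block once, compressed, plus a recipe to rebuild."""
    store = {}    # block hash -> compressed block, written only once
    recipe = []   # ordered hashes describing the original stream
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in store:
            store[digest] = zlib.compress(block)
        recipe.append(digest)
    stored_bytes = sum(len(b) for b in store.values())
    return store, recipe, stored_bytes

# VDI-like data: the same OS-image blocks repeated across many desktops.
data = (b"A" * 4096) * 100 + (b"B" * 4096) * 100   # 800 KiB logical
store, recipe, stored_bytes = dedupe_and_compress(data)
# Only two unique blocks are physically stored, so the footprint collapses.
```

Because hundreds of virtual desktops share most of their OS and application blocks, de-duplicating at write time also avoids the redundant I/O that earlier CI waves left in place.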

HCI helps build a private cloud environment that delivers the capabilities of large cloud service providers within your own IT environment. Because it combines data center components into an appliance form factor and adds centralized management and the ability to scale, it eliminates the compatibility issues typical of a legacy, siloed environment. It also streamlines purchasing, deployment and use. Emulating the architecture, processes and practices of the top cloud service providers starts with converged infrastructure; it can reduce complexity, cycle time and costs, allowing on-premise private cloud economics to work in the IT organization’s favor.

We must conclude by saying that although Anunta has created private cloud environments for thousands of desktops for clients in financial services, manufacturing and business processing, we have not opted for “off-the-shelf” CI products, preferring to customize based on the users. In our experience, speaking here as an IT provider to Indian companies in the Indian market, adoption of CI products is low for the following reasons:

  • The cost of the solution is at least 30-40% higher than a traditional infrastructure approach.
  • CI currently lacks flexibility when it comes to scaling up compute alone or storage alone, which forces clients to buy the total solution.
  • CI looks attractive in greenfield implementations for small and medium companies that want to start the VDI journey in sectors like training, healthcare, manufacturing and microfinance, but with existing infrastructure, re-usability of storage or network can become a point of concern.

Finally, we firmly believe that CI will be the way of adopting new-age data centers, and OEMs will start aligning themselves to match client expectations on price and flexibility.
