Traditional desktops are no longer a feasible option for corporate enterprises: they are expensive, difficult to manage, and lack effective cyber-security measures. They also go through hardware refresh cycles every few years, need regular patch upgrades and software updates, and demand highly skilled in-house IT staff to handle their complexities and end-user-specific operational challenges. All of this drives IT costs up considerably for most enterprises. On top of that, the modern workforce’s demand for anytime, anywhere, any-device computing at the enterprise level escalates the challenge further.

To address these challenges and drive workforce productivity, Anunta brings you an effective ready-to-use desktop solution, i.e. Managed Desktop-as-a-Service (DaaS), hosted on the powerful Azure Cloud.

Anunta’s fully managed DaaS solution on the Microsoft Azure Cloud platform offers unmatched flexibility in managing your desktop environment. With a complete focus on end-user computing, enhanced cyber-security, and lower costs, this efficient cloud solution is a best fit for enterprises running a traditional computing infrastructure.

Anunta’s Managed DaaS-on-Azure provides complete end-to-end management of your computing ecosystem, covering build, configure, manage, and store functions to facilitate a smooth migration and ensure high availability. Because it follows a pay-as-you-go business model, the service is well equipped to meet the seasonal peak loads and scalability requirements of enterprises of all types.

Unlike most standard DaaS solutions, which provide only software and hardware components with basic support, the Managed DaaS cloud solution takes complete ownership of provisioning, configuring, integrating, and managing virtual desktops, ensuring a steady state of operations on the robust Azure Cloud platform and delivering better business value.

Combining Design & Consulting Services, Onboarding & Implementation processes, and End-to-End Support, Anunta’s Managed DaaS-on-Azure delivers a streamlined end-user experience and significantly improves workforce productivity.

Onboarding & Implementation processes include:

  • Azure Subscription readiness
  • Active Directory Integration
  • Golden Image creation & Profile creation
  • User Provisioning
  • Installing Custom applications
  • Advanced features such as UEM and VIDM integration
  • Peripheral Integration
  • Project Management

End-to-End Support includes:

  • Highly-skilled Help Desk
  • Smart Monitoring & Management

Available in both Shared and Dedicated options, the solution offers a storage capacity of 10 GB on Windows Server 2016, with RDS CAL as the OS and access license. The solution is suitable for task workers; virtual desktops can be accessed over the internet and are available six days a week, from 8 am to 8 pm.

The minimum number of users for the Shared Option is 10 while that of the Dedicated Option is 200. Microsoft O365 suite is available as an add-on depending on customer requests or requirements.

Benefits offered by Anunta’s Managed DaaS-on-Azure solution include:

  • Built on highly scalable & powerful Azure platform with Simplified & Centralized management
  • Pay-as-you-go model with Zero upfront cost
  • Lower TCO due to no Maintenance overhead
  • Reduced requirement for highly-skilled IT workforce & manpower resources
  • End-to-End Implementation & Integration with available resources
  • No compromise on Security, Performance, & Compliance
  • Single Point of Accountability
  • Shared & Dedicated Flexibility with reasonable pricing
  • Windows Virtual Desktop (WVD) Upgrade facility (including multi-user WVD solely on Azure)
  • Compatibility with MS Office 365 suite on a single license
  • Cost-effective Back-up & Disaster Recovery

To learn more about Anunta’s Managed DaaS-on-Azure Cloud solution, visit: 

For sales inquiries, feel free to connect with us at: sales@anuntatech.com

The workplace has evolved. Employees now prefer anytime, anywhere, any-device access to enterprise data and applications. With changing end-user demands and a dynamic business environment, it is time enterprises took a fresh look at their end-user computing (EUC) strategy to drive workforce productivity and enhance end-user experience.

The growing number of diverse computing devices with multiple operating systems, continuous application upgrades, the transition from Windows 7 to 10, the availability of digital collaboration platforms, and the need to ensure seamless application availability for end-users have made IT teams’ jobs more challenging and complex. Hardware refreshes further complicate the enterprise computing environment with large Capex outlays.

Desktop virtualization technologies, delivered as VDI (virtual desktop infrastructure) or DaaS (Desktop as a Service), promise seamless mobility and secure availability of data and applications, addressing key challenges in delivering a seamless end-user experience. Enterprises with in-house or colocated data centers often choose the VDI route because it offers full control over hardware, software, and data, minimizing the threat of data loss and device theft. However, VDI deployments require upfront Capex and specialized skills to deploy and manage virtual desktops, making them an extremely challenging proposition for IT teams.

With large scale adoption of Cloud technology, enterprises now have the flexibility to implement Cloud-hosted desktops delivered as DaaS. DaaS converts Capex spend into Opex and negates the cost of owning specialized technical skills, desktop maintenance and infrastructure depreciation.

Gartner’s Market Guide for DaaS, 2016 estimates that by 2019, 50 percent of new VDI users will be deployed on DaaS platforms.

Let us outline 5 reasons why enterprises will benefit by moving virtual desktops to the Cloud.

Lower Total Cost of Ownership (TCO) – Available as a ‘pay-per-use’ model, Cloud converts upfront Capex (the investment required to set up infrastructure and software licences) into Opex. The effort, time and investment in managing, securing and upgrading physical desktops are also minimized, bringing down the TCO. Fewer onsite support staff, fewer support calls and tickets, a reduced cost of application delivery and overall better utilization of resources reduce the TCO further.
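
To make the Capex-to-Opex arithmetic concrete, here is a minimal Python sketch of the comparison; every figure, function name and assumption in it is purely illustrative and does not reflect actual Anunta or Azure pricing.

```python
# A minimal, purely illustrative Capex-vs-Opex comparison. All figures and
# helper names below are hypothetical assumptions, not Anunta or Azure pricing.

def on_prem_tco(desktops: int, years: int,
                hardware_per_desktop: float = 700.0,     # upfront Capex per seat
                refresh_years: int = 4,                   # hardware refresh cycle
                support_per_desktop_year: float = 250.0) -> float:
    """Rough on-premises TCO: hardware refreshes plus ongoing support."""
    refreshes = -(-years // refresh_years)                # ceiling division
    return desktops * (hardware_per_desktop * refreshes
                       + support_per_desktop_year * years)

def daas_tco(desktops: int, years: int,
             monthly_fee_per_desktop: float = 30.0) -> float:
    """Rough DaaS TCO: a pure pay-per-use subscription with no upfront Capex."""
    return desktops * monthly_fee_per_desktop * 12 * years

if __name__ == "__main__":
    seats, horizon = 500, 5
    print(f"On-prem, {horizon} yr, {seats} seats: {on_prem_tco(seats, horizon):,.0f}")
    print(f"DaaS,    {horizon} yr, {seats} seats: {daas_tco(seats, horizon):,.0f}")
```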

Anytime, Anywhere, Any Device Computing – Cloud promises the availability of applications and data to end-users anywhere, anytime and across any device. This addresses the increased demand for an on-the-go workplace and simplifies operations for remote workers, making applications available even at low bandwidth in remote locations. In addition to the flexibility of accessing data and applications at will, today’s workforce also wants to use devices of their choice (BYOD). Cloud-based DaaS helps implement uniform security protocols across end-user devices, thereby ensuring fully secure and compliant access to corporate data and applications.

Security – Centralized desktop management enables IT teams to make available secure and policy-based access to data and applications. This means enterprises can provide selective access to data and application, and curb end-users from downloading multiple application versions or unnecessary third-party applications, thereby ensuring compliance.

Scalability and Flexibility – Cloud offers the much-needed agility and scalability to ramp workloads up and down as and when needed. This simplifies the tasks of setting up backend infrastructure and implementing and managing virtualized environments. Many DaaS-on-Cloud providers also take complete ownership of managing and securing the cloud environment, freeing enterprises from the burden of maintaining skilled technical resources in-house.

Business Continuity and Disaster Recovery – DaaS mitigates the challenges in ensuring business continuity and disaster recovery for virtualized desktops on Cloud. In case of any disaster, Cloud-hosted desktops not only ensure all data and applications are safe and secure, but also allow for instant infrastructure duplication at any location of choice, thereby maintaining business continuity.

As enterprises adopt Cloud-based desktops, they need to choose a partner that enables seamless end-user computing (EUC) transformation. As a recognized specialist in Cloud and desktop virtualization technologies, Anunta manages over 80,000 end-points for more than 120,000 users globally with 99.98% application availability. Anunta works with global enterprises to design, implement and manage EUC environments that deliver an unmatched end-user experience. Our focus on end-user experience management and operational flexibility makes us the partner of choice for implementing and managing DaaS on Cloud for global enterprises.

Learn more about Anunta’s EUC Transformation Services at: https://www.anuntatech.com

The hype around cloud computing has probably been among the most persistent and long enduring ‘next-killer-thing’ kind of conversations for a while now. Try as you might, you can’t get away without having a ‘cloud strategy’ in place. But like one CTO pointed out on LinkedIn, “Yes and we have had one in place for years. It is only marketeers who have suddenly ‘discovered’ the cloud”. And rightly so, marketers can’t be left to discover technologies or create spin around what will drive adoption. Technology adoption drivers need to be grounded in a sound business case and logic.

It is for this very reason that I find it baffling as to what CIOs/ CTOs will typically cite as drivers for cloud adoption. The focus is almost always on lowering the TCO (typical marketing spiel) which we at Anunta have spoken about before and believe is flawed in today’s context. Let’s look at some of the commonly mentioned cloud drivers.

All of these drivers are certainly reasons to look at the cloud, but I also think they miss the point to some extent. Cloud is fundamentally an application delivery model, as were the MSP and ASP models of old. So, as mentioned in our earlier posts, end-user experience is the imperative and application delivery is the means to that end; consequently, cloud is a means to optimal application delivery. In fact, this holds true for physical on-premises infrastructure as well: if application delivery is considered critical and central to the enterprise infrastructure, then every other component of the infrastructure needs to be aligned to facilitate it.

Once IT has this basic premise covered, the conventional drivers of cloud adoption begin to hold true: costs become variable (pay as you go), scalability is instantaneous, and the consumerization of IT becomes truly viable.

While our last blog discussed the issues that the Indian BFSI sector faces in application performance management, the root of the problem really lies in a flawed approach to measuring it. Most organizations measure it at a device level and are therefore satisfied with 97-99% uptime at the server level even if uptime is much lower at the end-user level. The best analogy: the charts say the patient is healthy, but the patient is dying!

Let’s take a deep dive into some of the primary measurement methods/ metrics and the circumstances under which they are put to use.

What: Application performance from an end-user perspective is measured only for critical business applications

Anunta Take: While this sounds okay, when one drills deeper into the study, several related issues come to light:

a. Is application performance really being measured from an end-user perspective? If so, why do 53% of respondents see no consensus between IT and end-users? And why is the assessment broad rather than detailed? What this means is that the process is rather unstructured and its ability to provide any real insight is limited; as a result, it is almost redundant. When it comes to mission-critical applications the need becomes even more intense, since they are responsible for driving revenues and ensuring productivity. Customer-facing applications such as core banking or internet portals are no-brainers, but what about the applications that provide insight into a bank’s risk exposures, or the anti-money-laundering applications that have much wider implications for the success of the business? The end-users in this case are often the CFO or the CEO, and while they may be a smaller and less frequent user base, their work is essential to the health of a bank’s business at a different scale.

b. The metrics are not monitored regularly: Measuring end-user satisfaction is not something that can be done in fits and starts. An initial assessment needs to be acted upon with the relevant fixes being put in place and then reassessed at regular intervals to gauge progress. The BFSI sector has the added level of complexity that ensures that not just internal end-users but an external customer’s user experience needs to be measured as well. As we’ve often stated before, one dissatisfied customer can repel more customers than 10 happy ones can attract.

c. Most of the metrics are around end-user feedback, which signifies a reactive approach towards monitoring: I find this revelation about as amusing as it is startling. While end-user feedback is a great mechanism, it often goes unattended, especially by IT departments. This is essentially because, when testing these applications, they have ascertained optimal functioning even while the end-user continues to suffer at the hands of badly performing applications. What this also means is that IT SLAs often mean nothing from an end-user standpoint and need business logic applied to them that ultimately relates to the end-user experience. For example, a link latency SLA from the ISP should be translated into an application behavior SLA at the end-user terminal. This calls for a study of how application performance changes when latency fluctuates and how that creates business impact.
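
As an illustration of what translating a link-latency SLA into an end-user application SLA might look like, here is a small, hypothetical Python sketch; the latency model, constants and function names are assumptions made for this example, not an actual Anunta tool or methodology.

```python
# A small, hypothetical sketch of translating a network-level SLA into an
# end-user application SLA. The latency model and figures are illustrative
# assumptions, not measured values or an actual monitoring methodology.

BASELINE_RESPONSE_S = 1.2        # assumed app response time on a healthy link
ROUND_TRIPS_PER_TRANSACTION = 4  # assumed round trips per user transaction

def expected_app_response(link_latency_ms: float) -> float:
    """Estimate end-user response time from link latency (very rough model)."""
    return BASELINE_RESPONSE_S + ROUND_TRIPS_PER_TRANSACTION * (link_latency_ms / 1000.0)

def end_user_sla_breached(link_latency_ms: float, end_user_sla_s: float = 3.0) -> bool:
    """Does an ISP-level latency reading already imply an end-user SLA breach?"""
    return expected_app_response(link_latency_ms) > end_user_sla_s

if __name__ == "__main__":
    for latency_ms in (40, 120, 600):     # sample ISP latency readings
        status = "BREACH" if end_user_sla_breached(latency_ms) else "ok"
        print(f"{latency_ms} ms -> {expected_app_response(latency_ms):.2f} s ({status})")
```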

What: The metrics around user experience are gathered for incident reporting and problem solving rather than performance improvement. Moreover, these metrics are not linked to business metrics

Anunta Take: This reflects two things: 1) a reactive approach where more often than not, the damage is already done and 2) it does nothing to improve performance and consequently generate revenues.

What organizations and their CIOs often fail to recognize is that while 99% application uptime looks great on paper, that 1% of downtime or brownouts can be expensive. Take, for instance, one minute of downtime across an organization of 5,000 customer-facing end-users. If even 10% of those end-users were working on lead generation or customer acquisition tasks at the time, that is 500 possible lost customers.
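
The same arithmetic can be written as a tiny Python sketch, so the business-impact calculation can be rerun with other numbers; the function names and the per-lead value are hypothetical assumptions added only for illustration.

```python
# A tiny reproduction of the arithmetic above, so the business impact of an
# outage can be recomputed with other numbers. The function names and the
# per-lead value are hypothetical assumptions added only for illustration.

def leads_at_risk(end_users: int, share_on_acquisition: float) -> int:
    """End-users doing lead-generation work when the outage hits."""
    return int(end_users * share_on_acquisition)

def downtime_exposure(end_users: int, share_on_acquisition: float,
                      value_per_lead: float) -> float:
    """Potential revenue exposed by a single outage window."""
    return leads_at_risk(end_users, share_on_acquisition) * value_per_lead

if __name__ == "__main__":
    # The example from the text: 5,000 customer-facing users, 10% on acquisition.
    print(leads_at_risk(5000, 0.10))              # -> 500 possible lost customers
    print(downtime_exposure(5000, 0.10, 200.0))   # assumed value of 200 per lead
```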

These issues are often exacerbated by the various reasons our respondents offered for not measuring application performance at the end-user level. We’ll delve into these reasons in our next post.

It has been well-documented that monitoring end-user experience is critical; we touched upon this in our last blog. After all, the delivery architecture has done its job only if the end-user can complete his or her task smoothly. But as with every result, the means to achieve it matter too. In our fact-finding study of the Indian BFSI sector, we found that most respondents did not measure end-user performance comprehensively. One of the key factors they highlighted was the extra cost involved.

Measuring the end-user experience requires the IT team to deploy specialized end user monitoring tools. This incurs extra cost in terms of the tool purchase, bringing on board additional resources for monitoring and therefore, their training.

This is compounded by the issue of geographical spread as BFSI organizations widen their network from urban to semi-urban and rural areas. When applications are deployed to these regions, there is a lot of variance in the performance measures. This is because different internet service providers offer different service standards. Monitoring distributed applications encompasses a large and changing set of users, applications, types of measurements, and platforms, adding to the cost element.

Now add to this the interesting views put forward in Gartner’s Magic Quadrant for Application Performance Monitoring (APM) for this year. The report states that applications have become far more difficult to monitor because architectures, in general, have become more modular, redundant, distributed and dynamic, which in turn means the application code changes more frequently. The resultant web of complexities renders traditional system monitoring tools practically useless. You can’t help but sympathize in a situation like this.

But I have to point out here that tools aren’t the only answer. Measuring application performance from an end-user perspective has a lot more to do with the way applications are delivered: the system architecture and how it can be optimized both to deliver applications and to measure their performance. This is where technologies like VDI come in handy, not just because VDI creates a standardized operating environment and delivery architecture, but also because it enables an organization to put in place enforceable SLAs that can easily be translated into end-user SLAs. In essence, a technology layer is optimized to deliver end-user intelligence, supported by a process layer that defines the softer aspects.

In any organization, business is driven at two levels: the strategic (within boardrooms) and the execution (by the end-user). That said, our recent study of APM in the Indian BFSI sector tells us that business strategies devised in corporate offices often fail to reach the end-user, and because end-users are not involved in the process, expectations and perceptions often differ. Between the two sit a number of departments that are expected to help end-users achieve those business goals. IT is one such department, and in today’s connected environment it is probably far more important than HR or operations. It is at this juncture that I’d like to draw attention to an interesting question recently asked by a consultant on LinkedIn: why is it so hard to get IT departments to be more engaged in the execution of corporate strategic goals? Our Chief Operating Officer, Sivakumar Ramamurthy, answered this rather aptly, suggesting that IT needs to:

  • Look at how applications are performing at the end-user level as compared to the broad enterprise network level
  • Translate business goals into actionable end-user metrics that make it easy to spot when an end-user is having trouble – this again draws on the fact that there isn’t always a clear direction from business to begin with

In essence, his views capture the state of IT departments across industries today. There is a desire to be considered a business partner but an understandable inability to THINK business. I don’t think one can place blame here: when IT first made its debut in everyday functioning, its aim had been simply to automate, and it later moved on to the higher goal of increasing productivity while lowering costs. It is this second stage beyond which most IT departments have not been able to move. This indicates that while business leaders believe IT is a boardroom subject, it still tends to be marginalized as a support function focused on saving costs.

With that kind of direction, it is not surprising that IT departments haven’t been able to scale up the way business expects it to. As a result, they’ve come to believe that their technical knowledge protects them even as things like the consumerization of IT have become a reality. Much like doctors had believed they were the final word until the internet opened up medical information to patients.

So this explains the business-IT disconnect, but what about the IT-end-user disconnect? Our survey reveals that 53% of respondents thought IT and end-users were always at odds with each other. What this says is that while end-users demand 100% uptime, IT is unable to deliver on it, given its inability to do the two things outlined above.

What you’re left with is three silos, i.e. business, IT and end-users, that don’t really speak with each other, understand each other, or look at each other as co-dependents, and therefore do not align with each other. So while business is looking to raise EBITDA margins, IT thinks it can do this by reducing the TCO of its IT investments, all the while not really considering that if it can increase application uptime and reduce its cost of application delivery, the end-user may actually become more capable of delivering on the business goals, and IT can go from being an enabler to a revenue generator.

According to Brian Madden, VDI is not the silver bullet folks expect it to be. The two major misconceptions he highlights are:

  • With desktop virtualization one can avoid managing windows desktops
  • With desktop virtualization, you virtualize the apps and virtualize the user environment, and then there’s nothing left to manage

Brian further explains how desktop virtualization is inextricably linked to Windows 7.

A lot has been said about the challenges and myths of VDI, and conclusions are being drawn on that basis. While these discussions kindle constructive thought, they also scare away new users by detailing one complexity after another. Here’s our take on them:

First of all, the organization should be ready for a real transformation if VDI is to be adopted. If the intention is to manage everything the way it is being managed today, then most of the challenges discussed on blogs and online forums will hold true. The fundamental change is that VDI moves control from the end-point to the datacenter.

Traditionally, a lot of discipline has gone into datacenter management because most of the control lies with the IT team. A few years ago, several blogs claimed that the virtual server concept would fail and never take off; questions were raised about shared hardware, driver issues, memory allocation, storage, and so on. Today, nobody questions server virtualization capabilities; almost every organization has attempted it or is using it on a large scale. Also, comparing the speed of adoption of server virtualization with that of desktop virtualization is incorrect: desktops are tightly integrated with end-users. More than technology, it’s a perception play, and organizations should be ready to embrace it.

When we adopted this solution ourselves, we faced questions about the cost effectiveness of VDI (which was not seen as optimal), ease of management, and so on, but realized that we were comparing VDI to the bottom-most layer of the desktop rather than looking at it as a broader solution that could deliver much more than the existing desktop. Speaking of compliance and security, many desktop IT teams are struggling to manage tough compliance requirements, facing audit after audit that forces them to streamline the end-point solution, protect critical data on desktops, and maintain complicated policies and scripts. The way out is usually stop-gap solutions, or enterprise-wide deployments of complex applications that end up addressing only about 5-10% of the issues they were supposed to take care of.

The effort and investment needed for these are not attributed to desktop costs; rather, they all become part of the information security budget. Isn’t it logical to say that the current desktop is not capable of protecting itself and hence we need to look for solutions? If so, why are these costs not attributed to desktop costs? By contrast, migrating to VDI delivers about 70-80% of compliance without the intervention of any additional application or technology. Are we consciously crediting VDI for this? Great desktop management tools and solutions do exist today, but even then, the need to manage each end-point persists. Accurate patching, standardized hardware/software configurations, and application rollouts are not easy tasks for desktop engineers. VDI brings down this complexity, masks the hardware variation, and provides an application layer that is completely standardized. While patching is still needed in VDI, using the right templates does reduce the volume of patching.

VDI management is about managing one desktop versus 500 desktops. If enough time is spent on design and planning, VDI can be a lot simpler to manage than physical desktops. At times, IT teams are challenged about a so-called obsession with VDI and accused of trying to make it work in whatever form. The answer is ‘no’, because the audience you face is end-users, and they are smart enough to know what works best for them. The concept of VDI is not new; the logic of sharing a common infrastructure platform has been around for many years. The evolution of technologies such as client-server architecture, terminal services, and application virtualization has been driving the single-point agenda of how effectively one can deliver applications to end-users.

We should continue to look at solutions that deliver applications to end-users using various methods and tools. Also, VDI shouldn’t be seen merely as a desktop replacement but as part of the complete chain that contributes to end-user experience management (EUEM).

End-user performance management is critical to making VDI a successful initiative. From an end-user standpoint, the user is looking for maximum efficiency and is not concerned about HOW that is achieved or WHAT technology is used, just as a mobile phone user does not care whether the phone uses GSM or CDMA as long as it serves its purpose.

Frequently, business heads and teams resist VDI based on the fact that the familiar box near them has been taken away. We saw a lot of resistance when we rolled out VDI a couple of years ago, but we found a solution to prove and measure its performance. Eventually, we made these performance metrics available for all to see so that new users who challenge VDI have reliable data to refer to.

The approach we have adopted is a combination of technology and processes. Our monitoring architecture started from the end-user application metrics and moved up the layers to the actual VDI in the data center (contrary to the traditional approach of just looking at performance counters). With this approach, we were able to easily relate application performance at the end-user level to the dependent parameters of the central infrastructure. We created business views that brought all the dependent infrastructure together, but still faced the challenge of simulating the actual end-user experience.

We then developed application simulators that schedule application access at set points in the hour and feed back performance numbers (equivalent to typical use-case scenarios and user keystrokes). These were in turn interlinked with the various system thresholds, such as network, WAN, SAN I/O and the virtual platform, culminating in end-to-end VDI session performance tracking. Any deviation from a threshold highlights the possible causes, which are monitored 24/7 by the NOC team. With this, we have been able to consistently achieve user satisfaction, start delivering application performance guarantees to our customers, and free business heads and end-users of their VDI-related fears in the process.
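
For readers who want a feel for what such a simulator involves, here is a highly simplified, hypothetical Python sketch of a single probe: it runs one scripted transaction, records the response time, and flags a threshold deviation for the NOC. The names, thresholds and placeholder transaction are illustrative assumptions, not Anunta’s actual tooling.

```python
# A highly simplified, hypothetical sketch of one probe from an application
# simulator: run a scripted transaction, record the response time, and flag
# any threshold deviation for the NOC. Names, thresholds and the placeholder
# transaction are illustrative assumptions only.

import time
from dataclasses import dataclass

@dataclass
class ProbeResult:
    app: str
    response_s: float
    breached: bool

def run_probe(app: str, transaction, threshold_s: float) -> ProbeResult:
    """Execute one scripted user transaction and compare it to its threshold."""
    start = time.monotonic()
    transaction()                        # e.g. scripted login plus screen load
    elapsed = time.monotonic() - start
    return ProbeResult(app, elapsed, elapsed > threshold_s)

def simulate_core_banking() -> None:
    """Placeholder for a scripted keystroke sequence against the application."""
    time.sleep(0.2)                      # stands in for the real transaction

if __name__ == "__main__":
    result = run_probe("core-banking", simulate_core_banking, threshold_s=3.0)
    if result.breached:
        print(f"ALERT to NOC: {result.app} took {result.response_s:.1f}s")
    else:
        print(f"{result.app} OK ({result.response_s:.2f}s)")
```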

Visit www.anuntatech.com to know more about our latest End-User Computing offerings.

FAQs

How is VDI performance measured?

VDI performance is measured using the following end-user experience metrics; a short monitoring sketch follows the list.

  • Logon duration: Users expect to access their desktop immediately after they enter the password.
  • App load time: Users are looking for a shorter load time for their apps.
  • App response time: When end-users are working within an application, they don’t want to stop and wait for the application to catch up.
  • Session response time: It is a measure of how well the OS responds to the user input.
  • Graphics quality and responsiveness: Users expect to have the same graphical experience that they would have on a physical desktop.
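
To show how these metrics could be captured and checked against targets in practice, here is a minimal Python sketch; the field names mirror the list above, while the target values and structure are illustrative assumptions rather than a defined standard or product schema.

```python
# A minimal sketch showing how the end-user experience metrics listed above
# could be captured and checked against targets. The field names mirror the
# list; the target values are illustrative assumptions, not a standard.

from dataclasses import dataclass

@dataclass
class VdiSessionMetrics:
    logon_duration_s: float          # password entry to usable desktop
    app_load_time_s: float           # time for a key application to open
    app_response_time_ms: float      # in-app response to user input
    session_response_time_ms: float  # OS responsiveness to user input
    graphics_fps: float              # proxy for graphics quality/responsiveness

# Assumed targets, purely for illustration.
TARGETS = VdiSessionMetrics(30.0, 5.0, 500.0, 200.0, 24.0)

def breaches(sample: VdiSessionMetrics, targets: VdiSessionMetrics = TARGETS) -> list:
    """Return the metrics where this session falls short of its target."""
    out = []
    if sample.logon_duration_s > targets.logon_duration_s:
        out.append("logon duration")
    if sample.app_load_time_s > targets.app_load_time_s:
        out.append("app load time")
    if sample.app_response_time_ms > targets.app_response_time_ms:
        out.append("app response time")
    if sample.session_response_time_ms > targets.session_response_time_ms:
        out.append("session response time")
    if sample.graphics_fps < targets.graphics_fps:
        out.append("graphics responsiveness")
    return out

if __name__ == "__main__":
    print(breaches(VdiSessionMetrics(42.0, 4.0, 350.0, 180.0, 30.0)))
    # -> ['logon duration']
```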

What is VDI used for?

VDI finds unique applications for the following use cases.

  • Many companies implement VDI as it makes it easy to deploy virtual desktops for their remote workers from a centralized location.
  • VDI can be ideally used in enterprises that follow the BYOD concept and allow their employees to work on their own devices. As processing is done on a centralized server, VDI can be implemented for a wide range of devices while ensuring adherence to security policies. Because the data is stored on the server rather than on the device, the risk of data loss is greatly reduced.
  • In the case of task or shift work in organizations such as call centers, non-persistent VDI can be employed. A large number of employees can use a generic desktop with software that allows them to perform limited and repetitive tasks.

What is VDI as a service?

When VDI is offered as a service, a third-party service provider manages the virtual infrastructure for you. The VDI user experience is delivered to end-users as a cloud service, along with all the applications necessary for work. The service provider also assumes responsibility for managing the desktop infrastructure, enabling faster software updates, migrations, and user provisioning, and ensuring better data security and disaster planning for businesses. Consequently, organizations can simplify their administrative operations and minimize IT-related overheads.

What is VDI, and how does it work?

VDI or Virtual Desktop Infrastructure is a virtualization technology in which virtual machines are used to deliver and manage virtual desktops. VDI separates the OS, applications, and data from the hardware and provides a convenient and affordable desktop solution over a network. The desktop environments are hosted on a centralized server and deployed to the end-user devices on request.

VDI uses a hypervisor running on physical hosts to create virtual machines, which in turn host the virtual desktops that users access remotely from their devices.

A connection broker is necessary for any virtual desktop environment. The program acts as a single point of operation for managing all the hosted resources and gives end-users login access to their allocated systems. Virtual machines, applications, or physical workstations are made available to users based on their identity and the location of the client device.

VDI can be persistent or non-persistent. With persistent VDI, users access the same desktop every time they log in, and their changes are preserved even after the connection is reset. Non-persistent VDI, on the other hand, lets users connect to generic desktops; it is used in firms where a customized desktop is not necessary and the nature of the work is limited and repetitive.
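
To make the connection-broker behavior and the persistent/non-persistent distinction concrete, here is a toy Python sketch; the classes and the assignment policy are illustrative assumptions, not how any particular broker product works.

```python
# A toy, hypothetical sketch of connection-broker logic for the persistent vs
# non-persistent split described above. The classes and assignment policy are
# illustrative assumptions, not a real broker product's behavior.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Desktop:
    desktop_id: str
    persistent: bool
    assigned_to: Optional[str] = None

@dataclass
class ConnectionBroker:
    pool: List[Desktop] = field(default_factory=list)

    def connect(self, user: str, wants_persistent: bool) -> Desktop:
        if wants_persistent:
            # Persistent: return the user's own desktop, creating it once.
            for d in self.pool:
                if d.persistent and d.assigned_to == user:
                    return d
            d = Desktop(f"persist-{user}", persistent=True, assigned_to=user)
            self.pool.append(d)
            return d
        # Non-persistent: hand out any free generic desktop; its state is
        # discarded when the session ends.
        for d in self.pool:
            if not d.persistent and d.assigned_to is None:
                d.assigned_to = user
                return d
        raise RuntimeError("no generic desktops free")

    def disconnect(self, desktop: Desktop) -> None:
        if not desktop.persistent:
            desktop.assigned_to = None   # reset the desktop for the next user

if __name__ == "__main__":
    broker = ConnectionBroker([Desktop("generic-01", persistent=False)])
    print(broker.connect("alice", wants_persistent=True).desktop_id)   # persist-alice
    print(broker.connect("bob", wants_persistent=False).desktop_id)    # generic-01
```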

It is common knowledge that cloud infrastructure adoption by businesses in India is seeing an encouraging trend. According to a recent study by EMC Corporation and Zinnov Management Consulting, the cloud computing market in India is expected to reach $4.5 billion by 2015, a little more than ten times the existing $400 million market. Also, the same study states that the cloud industry in India alone is expected to create 1 lakh additional jobs by 2015.

Additionally, with the rollout of the UID programme, the technology has also found its way into the government sector. This would lead one to think that cloud infrastructure has entered the mainstream and is increasingly preferred over the legacy IT model. Yet industries such as BFSI, which have strong regulatory and compliance requirements, have understandable reservations about cloud infrastructure.

According to recent research conducted by ValueNotes on behalf of Anunta to study application performance management in the Indian BFSI sector, 76% of the respondents surveyed still have a physical delivery architecture. Of these, insurance and financial institutions are open to considering cloud infrastructure in the future, but banks appear to be hesitant. The study further found that only 12% of respondents had some applications on the cloud, while core applications were still maintained on physical architecture. Why so? Resistance to change, security concerns, and a lack of reliable vendors were among the reasons cited for not moving to cloud infrastructure.

But the benefits of cloud infrastructure, in this case for the BFSI sector, far outweigh the concerns. Banks and financial institutions have, for many decades, made use of service bureaus or outsourced core banking platforms. The ever-increasing range of cloud computing options provides an opportunity for them to reduce their internal technology footprint and gain access to technology built and operated by third-party experts. Many investment banks and buy-side firms such as hedge fund houses already have private grid infrastructure for functions such as Monte Carlo simulation and risk analysis hosted in a third-party data centre, yet more often than not they need to add capacity in a jiffy at critical points. Cloud can prove more than handy at such moments.

Apart from the near halving of costs, applications like customer relationship management (CRM) and risk management can be brought to market relatively quickly. Banks can focus on their core business instead of concerning themselves with infrastructure scalability, not to mention disaster recovery issues. I could go on about further advantages in the form of rapid provisioning and scaling of services, alongside the chance to go green and contribute to the environment. But you get the point.

It’s not that the financial services industry is completely averse to cloud computing and its charms. In the survey, 43% said they plan to take this decision within the next five years. Some of the premier private banks in our country have already adopted both private and public cloud, although most of these banks have only hosted peripheral applications on the cloud. But that in itself is hopeful.

According to the data collected from various industries, IT/ITeS is the top contributor to the total cloud infrastructure market in India with 19 per cent, followed by Telecom at 18 per cent, BFSI at 15 per cent, manufacturing at 14 per cent and government at 12 per cent. So BFSI has done well. But this post suggests that the data can be bettered.

Anunta tasked a research agency with studying the state of application performance management and monitoring in the Indian BFSI sector, covering banks, AMCs, insurance companies and brokerages. While some of the findings were what we expected, the study also threw up some interesting contrasts between what these companies say and what the ground realities really are. As the first in a series of posts on this survey, we’ll delve into some of BFSI’s top technology joys and sorrows.

WHAT THEY SAY: The survey found that, given the rapid evolution and adoption of new technologies and applications (for example, cloud computing, e-payments and mobile payments), respondents felt that challenges would only increase. This is a big pain point for 70% of the CTOs/IT heads we spoke to.

BUT: It’s not something the sector can hide from. Take mobile payments, for example: according to RBI figures, the volume and value of funds transferred through national electronic funds transfer (NEFT) have been doubling almost every year. In 2010-11, the volume of funds transferred through NEFT doubled to 13.23 crore transactions, and the value of transactions also doubled, to Rs 9,39,149 crore.

Our take: All of the technologies cited above are ones that we believe will drive the future of technology infrastructures. Platforms like e-payments and mobile payments will add complexity to an application architecture that is already relatively under-managed. This will mean a higher level of integration and new ways of monitoring enterprise and application performance, all while keeping a strict control on Capex and costs.

WHAT THEY SAY: 76% of the respondents said they use automated tools to measure application performance.

BUT: 53% admit that there is no consensus between IT and end-users on measurement.

Our take: Measuring application performance is not enough. It needs to be measured from an end-user perspective by translating technical SLAs into end-user SLAs and then enforcing these across the organization. However, dissonance between the IT organization and end-users on what constitutes a good measurement metric impedes the process.

WHAT THEY SAY: CTOs understand the importance of end-user monitoring, and 83% of the respondents measure performance from the end-user side.

BUT: The metrics employed capture end-user experience broadly but do not provide any detailed assessment of performance. They reflect a reactive approach towards monitoring and are used mainly for incident reporting. Parameters include response time, application downtime, the number of problem tickets and, in a large number of cases, just end-user feedback.

Our take: While 83% seems like a healthy number, based on our experience in this sector we know it rests on device-level SLAs. There is an urgent need for a proactive approach to monitoring end-user experience and issues. End-user SLAs, when combined with technologies such as virtualization, allow SLA defaults and issues to be identified before they occur.

WHAT THEY SAY: It was observed that the loss in employee productivity was measured in terms of transaction volumes, the number of people, or the number of hours lost due to incidents that cause a dip in application performance. Productivity losses are significant if an issue remains unresolved for 30 minutes to an hour, or when the network is unavailable for an entire day; in those cases, productivity losses could go up to 30%.

BUT: Almost 56% of respondents agreed that they do not measure the business impact of lower application performance.

Our take: This is one of the biggest bugbears from an application performance management and measurement standpoint. IT departments are often focused on keeping their TCO low and ensuring systems are running adequately (not perfectly), and have lost sight of the fact that IT needs to align with business. A drop in IT or application uptime can mean significant revenue loss and brand erosion, brought about by dissatisfied employees and customers. We addressed this issue in one of our earlier blogs, but we expect it to remain an issue for a few more years.
