Banking on technology, Part 2: End-users could do with some more attention

While our last blog discussed the issues that the Indian BFSI sector faces in application performance management, the root of the problem really lies in a flawed approach to measuring it. Most organizations measure performance at the device level and are therefore satisfied with 97-99% uptime at the server, even when uptime is far lower at the end-user level. The best analogy to this: every individual vital sign reads fine, yet the patient is dying!

Let’s take a deep dive into some of the primary measurement methods and metrics, and the circumstances under which they are put to use.

What: Application performance from the end-user perspective is measured only for critical business applications

Anunta Take: While this sounds okay, when one drills deeper into the study, several related issues come to light:

a. Is application performance really being measured from an end-user perspective? If it were, why do 53% of respondents see no consensus between IT and end-users? And why is the assessment broad rather than detailed? In practice this means the process is rather unstructured, its ability to provide any real insight is limited, and as a result it is almost redundant. For mission-critical applications the need is even more acute, since these are the applications responsible for driving revenues and ensuring productivity. Customer-facing applications such as core banking or internet portals are no-brainers, but what about the ones that provide insight into a bank’s risk exposures, or the anti-money laundering applications whose implications for the success of the business are much wider? The end-users in this case are often the CFO or the CEO, and while that may be a smaller and less frequent end-user base, its smooth functioning is essential to the health of the bank’s business on an altogether different scale.

b. The metrics are not monitored regularly: Measuring end-user satisfaction is not something that can be done in fits and starts. An initial assessment needs to be acted upon, the relevant fixes put in place, and the results reassessed at regular intervals to gauge progress. The BFSI sector carries an added layer of complexity: it is not just internal end-users whose experience must be measured, but external customers’ as well. As we’ve often stated before, one dissatisfied customer can repel more customers than ten happy ones can attract.

c. Most of the metrics are built around end-user feedback, which signals a reactive approach to monitoring: I find this revelation about as amusing as it is startling. While end-user feedback is a great mechanism, it often goes unattended, especially by IT departments. This is essentially because, having tested these applications, IT has ascertained optimal functioning even while the end-user continues to suffer at the hands of badly performing applications. What this also means is that IT SLAs often mean nothing from an end-user standpoint; they need business logic applied to them that ultimately relates to the end-user experience. For example, a link-latency SLA from the ISP should be translated into an application-behavior SLA at the end-user terminal. This calls for studying how application performance changes as latency fluctuates and how that creates business impact, as the sketch below illustrates.
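To make the translation concrete, here is a minimal, hypothetical sketch in Python. The round-trip count per transaction, the server processing time and the SLA thresholds are illustrative assumptions, not figures from the study or from any particular bank’s environment.

```python
# Hypothetical sketch only: round-trip count, server processing time and
# SLA thresholds are illustrative assumptions, not measured values.

def estimated_response_time(link_latency_ms: float,
                            round_trips_per_transaction: int = 12,
                            server_processing_ms: float = 400.0) -> float:
    """Estimate end-user transaction time for a given link latency."""
    return server_processing_ms + round_trips_per_transaction * link_latency_ms


def meets_end_user_sla(link_latency_ms: float,
                       sla_threshold_ms: float = 2000.0) -> bool:
    """Check whether the estimated transaction time stays within the end-user SLA."""
    return estimated_response_time(link_latency_ms) <= sla_threshold_ms


if __name__ == "__main__":
    # Show how a link that still looks acceptable on its own latency SLA
    # can push a chatty transaction past the end-user threshold.
    for latency in (50, 100, 150):
        print(f"link latency {latency:>3} ms -> "
              f"~{estimated_response_time(latency):.0f} ms per transaction, "
              f"end-user SLA met: {meets_end_user_sla(latency)}")
```

In this toy model, a link running at 150 ms latency may still be within the ISP’s contract, yet it pushes a chatty transaction past the 2-second end-user threshold, which is precisely why a link SLA by itself says little about end-user experience.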

What: The metrics around user experience are gathered for incident reporting and problem solving rather than performance improvement. Moreover, these metrics are not linked to business metrics

Anunta Take: This reflects two things: 1) a reactive approach in which, more often than not, the damage is already done, and 2) a missed opportunity to improve performance and, consequently, generate revenue.

What organizations and their CIOs often fail to recognize is that while 99% application uptime looks great on paper, the remaining 1% of downtime or brownouts can be expensive. Take, for instance, one minute of downtime across an organization with 5,000 customer-facing end-users. If even 10% of those end-users were working on lead-generation or customer-acquisition tasks at the time, that is 500 possible lost customers. The back-of-the-envelope arithmetic is sketched below.
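As a purely illustrative calculation (the headcount, the lead-generation share and the one-prospect-per-user assumption are examples, not survey data):

```python
# Illustrative back-of-the-envelope calculation from the paragraph above.
# Headcount, lead-generation share and "one prospect per user per minute"
# are assumptions for the example, not survey data.

total_end_users = 5000        # customer-facing end-users in the organization
lead_gen_share = 0.10         # share working on lead generation / acquisition
downtime_minutes = 1          # duration of the outage or brownout

affected_users = int(total_end_users * lead_gen_share)

# Assume each affected user was engaging roughly one prospect per minute of downtime.
possible_lost_customers = affected_users * downtime_minutes

print(f"{affected_users} affected users over {downtime_minutes} minute(s) "
      f"-> up to {possible_lost_customers} possible lost customers")
```

Scale the same arithmetic to longer brownouts or larger teams and the cost of that "last 1%" grows quickly.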

These issues are often exacerbated by the various reasons our respondents offered for not measuring application performance at the end-user level. We’ll delve into these reasons in our next post.

AUTHOR

Anunta

Anunta is an industry-recognized Managed Desktop as a Service provider focused on Enterprise DaaS (Anunta Desktop360), Packaged DaaS, and Digital Workspace technology. We have successfully migrated 600,000+ remote desktop users to the cloud for enhanced workforce productivity and superior end-user experience.
