Cloud Pak for Business Automation


Measuring the unmeasurable - software install experience

By SUNDARI VORUGANTI posted Mon January 04, 2021 12:42 PM

  

When assessing the quality of a release, we use software metrics to determine whether it is ready for general availability (GA). But what are software metrics?

A software metric is a measure of a software characteristic that can be quantified. As you can imagine, there are innumerable software characteristics that can be measured, and some of them are objective – number of defects, performance numbers, lines of code, and many more. But we think that some subjective metrics are also important to measure. These relate to the emotional response of customers using our products – such as the NPS score, or the perceived quality of the product.

When looking at the quality of a product, the install experience is the starting block. Install is the first step a customer takes in their journey with us. In this age of the iPhone, customers expect an easy install: simply go to an app store, tap an app, and it downloads and installs. Granted, enterprise software is more complicated and isn't one-click, but it still should not require a small army to install the products.

Current state

While testing install experiences, the most commonly collected metrics are the following:

| Metric | What it is |
| --- | --- |
| Version | Version of the software being installed |
| Number of steps | Number of steps required to install the product |
| Time to install prerequisites | How long it takes to install the prerequisites |
| Time to install the product | How long it takes to install the product |
| Footprint of the product | Memory limits and requests, and CPU limits and requests (see the sketch after the table) |
| Errors encountered | Defects opened |
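The original metrics do not prescribe how to collect the footprint, but as one possible way to gather it on a Kubernetes-based install, the declared requests and limits can be read from the running pods. This is only a sketch; the namespace name below is a placeholder.

```bash
# Placeholder namespace; replace with the namespace the product was installed into.
NAMESPACE=my-product-namespace

# List the CPU/memory requests and limits declared by each pod's containers.
kubectl -n "$NAMESPACE" get pods \
  -o custom-columns='POD:.metadata.name,CPU_REQ:.spec.containers[*].resources.requests.cpu,MEM_REQ:.spec.containers[*].resources.requests.memory,CPU_LIM:.spec.containers[*].resources.limits.cpu,MEM_LIM:.spec.containers[*].resources.limits.memory'

# Actual usage (requires the metrics API / metrics-server to be available in the cluster).
kubectl top pods -n "$NAMESPACE"
```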

 

But these do not take into account the customer's effort to install the software. After all, installing the software is only a means to an end; what the customer really wants is to get value from the product they bought. Encountering a lot of errors gives the perception that the product is buggy and hard to install. This results in escalations and the negative impression that we have not tested our product enough and are using the customer as a test bed.

Proposal

We propose the following subset of metrics to help measure customer effort. Each metric below is described and followed by its rating output.

Quality of documentation

This metric represents how consumable and easy to follow the install documentation is overall.

 

Output: Rating 1 to 5, where 1 is extremely poor, and 5 is perfect

Granularity of process

This metric describes how granular the install process is; that is, are there many individual manual steps that need to be executed, or a small number of coarse-grained automations? For example, at one extreme a product could be installed by executing a single shell script that automates the entire install (a sketch follows below). At the other extreme, a large number of individual commands would have to be executed.

 

Output: 1 to 5, where 1 is very fine grained and 5 is very coarse grained.
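Purely as an illustration of the coarse-grained end of the scale, and not the product's actual install procedure, a single wrapper script might look like the following; the namespace and manifest names are placeholders.

```bash
#!/usr/bin/env bash
# install.sh - hypothetical single-script install wrapper; every name here
# (namespace, manifest files) is a placeholder, not a real product artifact.
set -euo pipefail

NAMESPACE=my-product-namespace

# Create the target namespace idempotently.
kubectl create namespace "$NAMESPACE" --dry-run=client -o yaml | kubectl apply -f -

# Deploy the operator, then create the Custom Resource that drives the install.
kubectl apply -n "$NAMESPACE" -f operator.yaml
kubectl apply -n "$NAMESPACE" -f product-cr.yaml

echo "Install triggered; monitor the Custom Resource status for completion."
```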

UI driven versus command line driven

The basic assumption here is that a customer starts out by installing via the UI, where the focus is on ease of use. In a real production setting, however, customers will over time want to drive installs through automated toolchains. Given the expectation that operators drive the installs, this implies that an install can be done essentially by creating Custom Resources (CRs).

 

That means the product is expected to be installable via the UI as well as by creating custom resources (kubectl), without the need to flip back and forth between the two during the install (a sketch of the CR-based path follows below).

 

Output: 1 to 5, where 1 means either the UI or the command-line install path is missing, and 5 means both options exist and lead to consistent outcomes.
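To make the command-line path concrete, here is a minimal sketch of an operator-driven install performed purely by creating a Custom Resource. The API group, kind, and spec fields are illustrative placeholders, not the product's actual CR schema.

```bash
# Hypothetical CR; apiVersion, kind, and spec fields are placeholders for illustration only.
kubectl apply -f - <<EOF
apiVersion: example.ibm.com/v1
kind: ProductDeployment
metadata:
  name: my-install
  namespace: my-product-namespace
spec:
  license:
    accept: true
  storageClassName: my-storage-class
EOF
```

In an operator-based install, the UI path would typically end up creating an equivalent CR under the covers, which is what keeps the two options consistent.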

Ability to claim success

The deployment and orchestration of resources typically happens asynchronously: the install client triggers the creation of resources in the cluster and then returns control to the user. The resources are then created roughly in parallel and wait for one another as needed until the product has started successfully.

 

Product installs should indicate if and when the install has completed successfully. For example, a post-install script could be provided that checks whether all required resources are in a good state (a sketch follows below). Or the documentation could point out which resource (state) represents success of the entire install. Or there could be support for an explicit notification of success or failure.

 

Output: 1 to 5, where 1 means little or no indication of success is given, and 5 means it is very easy to determine whether the install has completed successfully.
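As one possible shape of such a success check, assuming the install is driven by a CR that exposes a Ready-style status condition, the resource and namespace names below are placeholders.

```bash
# Wait up to 30 minutes for the hypothetical CR to report a Ready condition.
kubectl -n my-product-namespace wait productdeployment/my-install \
  --for=condition=Ready --timeout=30m

# Cross-check that every deployment in the namespace has all replicas available.
kubectl -n my-product-namespace get deployments \
  -o custom-columns='NAME:.metadata.name,READY:.status.readyReplicas,DESIRED:.spec.replicas'
```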

Support for troubleshooting in case of issues

An important indicator of the consumability of an install process is the ability to identify errors, correct them, and continue toward a successful install. While the details of how this can be achieved differ greatly between Paks, and while it is difficult to predict which problems will occur, there are ways to make failures less painful for the user.

 

For example, there could be a pre-req script that flags missing prerequisites before the actual install process even starts (e.g., a missing storage class, insufficient capacity, or lack of access to required image registries); a sketch of such a script follows below. There can also be a focus on documenting how a failed install can be redone once the underlying problem has been corrected (e.g., delete the CR and create it again).

 

There should also be a troubleshooting section in the documentation that points out common errors and how they can be corrected.

 

Output: 1 to 5, where 1 means very poor or no support in case of errors, and 5 means that pre- and post-install scripts exist, along with explicit documentation about how to handle problems that occur during the install.
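A minimal sketch of such a pre-req script, covering only the checks named above (storage class present, registry reachable); the storage class and registry names are placeholders.

```bash
#!/usr/bin/env bash
# check-prereqs.sh - hypothetical pre-install checks; all names are placeholders.
set -euo pipefail

STORAGE_CLASS=my-storage-class
REGISTRY=my-registry.example.com

# 1. The required storage class must exist in the cluster.
if ! kubectl get storageclass "$STORAGE_CLASS" >/dev/null 2>&1; then
  echo "ERROR: storage class '$STORAGE_CLASS' not found" >&2
  exit 1
fi

# 2. The image registry should be reachable (an HTTP 401 still proves reachability,
#    but curl -f treats it as a failure, so only report this as a warning).
if ! curl -fsS "https://$REGISTRY/v2/" >/dev/null 2>&1; then
  echo "WARNING: could not reach image registry '$REGISTRY' anonymously; credentials may be required" >&2
fi

echo "Prerequisite checks completed."
```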

 

This is how we are trying to quantify “soft” metrics – metrics that are a little more subjective, but that impact the product's reputation, customer experience, and adoption. What do you think? Are there any metrics you use that we could benefit from?
