
Advancing Performance Testing Excellence: The Performance Testing Maturity Model

By Parijat Sharma (IBM MaaS360) and Tushar Mehta (IBM Security Verify)

Introduction

A maturity model provides insights for refining processes and practices within an organization. The industry today employs various models, such as the Capability Maturity Model (CMM) from the SEI, SOA maturity models, and the Project Management Maturity Model (PMM).

This article introduces the Performance Testing Maturity Model (PTMM). This model emerged from various discussions about performance testing challenges that teams encounter during the Software Development Life Cycle (SDLC). Challenges vary based on factors such as architecture, product maturity, customer base, team size, and more.

The PTMM empowers product teams to understand and anticipate performance testing challenges in a broader context.

In contrast to industry-wide maturity models such as the SEI CMM, the PTMM focuses exclusively on challenges, issues, and processes related to performance testing.

Some of the benefits of PTMM are:

·      Assessment: PTMM provides a framework to assess an organization's or a team’s current performance testing capabilities and practices. Organizations can benchmark themselves against performance testing best practices and identify areas for improvement.

·      Roadmap: PTMM helps teams understand the steps required to enhance their testing processes systematically by providing a roadmap to progress from lower to higher maturity levels.

·      Continuous Improvement: Organizations can gradually evolve their performance testing practices by following the incremental stages outlined in the model.

·      Risk Reduction: PTMM encourages robust testing processes and better identification and mitigation of defects, reducing the chances of critical issues emerging in the later stages of development or in production.

·      Strategic Decision-Making: PTMM offers insights into an organization's testing strengths and weaknesses.

·      Collaboration: Effective communication is facilitated by standardized practices and shared understanding of testing expectations.

·      Customer Satisfaction: Enhanced testing maturity directly contributes to better product quality.

PTMM Illustration

Maturity Levels

The PTMM presented here consists of four maturity levels, plus an initial level (Level 0) at which no performance testing is performed:

Level 0: Initial

Level 1: Ad hoc

Level 2: Defined

Level 3: Managed and Measured

Level 4: Optimized

Each of these levels is characterized by the following attributes:

Level 0: Initial

At this stage, the organization or testing team is dealing with a new product that targets a relatively small customer base and is likely equipped with essential features aligned with a Minimum Viable Product (MVP) approach. During this phase, the primary testing focus revolves around functional testing aimed at ensuring the core functionality remains operational. Any issues that arise are promptly addressed through quick fixes, sometimes even within the production environment. The scope of testing primarily encompasses unit tests and functional testing suites.

Level 0 is characterized by the following attributes:

·      No process areas are defined.

·      This initial level lacks the necessary resources, tools, and dedicated staff.

·      Performance checks are not conducted prior to software delivery.

·      Performance issues are identified and addressed within the production environment.

Level 1: Ad hoc

At this level, as the product and development processes evolve to a more mature stage, the establishment of testing and staging environments becomes integral. During this phase, the need for performance testing becomes apparent. Common issues are often linked to customer-reported problems, as well as instances where applications and servers struggle to manage the workflow load, resulting in frequent restarts. When an issue arises, a quick ad hoc performance test is conducted to replicate the problem. Subsequently, the testing is reiterated once the necessary fix is implemented. This iterative process frequently requires multiple cycles of testing, simulating, and fixing. These ad hoc performance tests eventually lay the foundation for the initial tests in a performance testing regression suite, as the organization advances in maturity.

Level 1 is characterized by the following attributes:

·      Performance tests are conducted in the staging environment.

·      The goal is to identify potential bottlenecks.

·      Performance evaluation techniques are ad hoc.

·      The performance test does not encompass all functionalities.

Level 2: Defined

At this stage, the product has witnessed enhancements in both features and its customer base. Essential tools such as basic application performance monitoring (e.g., Grafana, Graphite, AWS CloudWatch), logging frameworks (e.g., ELK), and other tools facilitating monitoring of application servers, middleware, and databases are available to the team. A dedicated team of performance testers administers a comprehensive performance regression test suite at various stages of the software development life cycle (SDLC).
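
To make the tooling at this level concrete, the following is a minimal sketch of how a scripted, repeatable run of a performance suite might be triggered. It assumes Apache JMeter is installed and on the PATH; the test-plan and results paths are hypothetical placeholders, not part of the model.

```python
# Minimal sketch: trigger a JMeter test plan in non-GUI mode from a script.
# Assumes JMeter is installed and on the PATH; the file paths are hypothetical.
import subprocess
from pathlib import Path

TEST_PLAN = Path("tests/checkout_flow.jmx")    # hypothetical test plan
RESULTS = Path("results/checkout_flow.jtl")    # JMeter writes per-request samples here


def run_suite() -> None:
    RESULTS.parent.mkdir(parents=True, exist_ok=True)
    completed = subprocess.run(
        [
            "jmeter",
            "-n",                  # non-GUI mode
            "-t", str(TEST_PLAN),  # test plan to execute
            "-l", str(RESULTS),    # sample log for later analysis
        ],
        check=False,
    )
    if completed.returncode != 0:
        raise RuntimeError("JMeter run failed; inspect jmeter.log for details")


if __name__ == "__main__":
    run_suite()
```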

Level 2 is characterized by the following attributes:

·      Performance testing is integrated into the software development life cycle, but the testing is administered by a stand-alone performance testing team after development.

·      Tools are specified for executing tests and measuring performance.

·      Monitoring tools are deployed and utilized.

·      Each functionality undergoes performance evaluation prior to release.

Level 3: Managed and Measured

At this juncture, the process of defining Non-Functional Requirements (NFRs) is firmly established, with NFRs being outlined during the design phase of each feature. The application code incorporates hooks, logging mechanisms, and monitoring tools to facilitate the identification of workflow performance issues across various processing stages. Metrics like Service Level Agreements (SLAs), Service Level Objectives (SLOs), and Service Level Indicators (SLIs) are integrated through the creation of comprehensive dashboards within the monitoring and logging systems. Often, a fully-featured application monitoring system (e.g., Instana, New Relic, Dynatrace) is either already in place or undergoing implementation. This stage signifies the transition from the practice of 'performance testing' to the realm of 'performance engineering'.
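
As an illustration of the SLI/SLO relationship described above, here is a minimal sketch that computes a latency SLI from raw response times and checks it against a hypothetical SLO (95% of requests under 300 ms). The threshold, target ratio, and sample values are illustrative assumptions.

```python
# Minimal sketch: compute a latency SLI and compare it against a hypothetical SLO.
# The 300 ms threshold, the 95% target, and the sample data are illustrative only.
from typing import Sequence

SLO_THRESHOLD_MS = 300.0   # hypothetical latency target per request
SLO_TARGET_RATIO = 0.95    # hypothetical fraction of requests that must meet it


def latency_sli(response_times_ms: Sequence[float]) -> float:
    """Return the fraction of requests that met the latency threshold."""
    good = sum(1 for t in response_times_ms if t <= SLO_THRESHOLD_MS)
    return good / len(response_times_ms)


if __name__ == "__main__":
    samples = [120.0, 180.0, 250.0, 320.0, 210.0, 290.0, 410.0, 150.0]  # illustrative
    sli = latency_sli(samples)
    print(f"SLI = {sli:.2%}, SLO met: {sli >= SLO_TARGET_RATIO}")
```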


Level 3 is characterized by the following attributes:

·      Performance requirements are clearly defined before the start of the design phase.

·      All phases of SDLC take performance requirements/goals into consideration.

·      Performance goals are established for each stage and subsequently measured.

·      Each team is responsible for its own performance quality checks and testing before moving to the next step of the CI/CD pipeline. The individual teams are responsible for meeting the NFRs.

·      The performance testing team is responsible for maintaining the overall performance testing suite and executing it in the staging environment. This involves generating production-level load by simulating all application workflows together. Through this process, the team tests and monitors the system's responses and resource utilization.

·      Performance reports are readily available to the entire organization at all times through a common reporting system.

·      Performance regression suites are executed and validated for each release.

Level 4: Optimized

At the top level of the PTMM, a well-defined framework manages the expenditure of resources dedicated to performance testing and performance engineering efforts, often tracked through systems like Jira. Performance testing is seamlessly integrated into Continuous Integration/Continuous Deployment (CI/CD) processes, with automated execution and alerting systems in operation to promptly identify and address performance concerns. Abundant resources and knowledge bases are accessible, and a comprehensive understanding of performance processes is widespread across the entire organization, extending well beyond the performance testing team.
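
As one way of realizing the automated CI/CD integration described above, here is a minimal sketch of a pipeline gate that compares the 90th-percentile response time of the current run against a stored baseline and fails the step when the regression exceeds a tolerance. The file names, file formats, and the 10% tolerance are assumptions for illustration.

```python
# Minimal sketch of a CI/CD performance gate: compare the current run's p90 response
# time with a stored baseline and fail the pipeline step on regression.
# File names, file formats, and the 10% tolerance are illustrative assumptions.
import json
import statistics
import sys

BASELINE_FILE = "perf_baseline.json"   # hypothetical, e.g. {"p90_ms": 250.0}
CURRENT_FILE = "perf_current.json"     # hypothetical, e.g. {"response_times_ms": [...]}
TOLERANCE = 0.10                       # allow the p90 to be up to 10% above baseline


def p90(values):
    """90th percentile of a list of response times."""
    return statistics.quantiles(values, n=10)[-1]


def main() -> int:
    with open(BASELINE_FILE) as f:
        baseline_p90 = json.load(f)["p90_ms"]
    with open(CURRENT_FILE) as f:
        current_p90 = p90(json.load(f)["response_times_ms"])

    limit = baseline_p90 * (1 + TOLERANCE)
    print(f"baseline p90={baseline_p90:.1f} ms, current p90={current_p90:.1f} ms, limit={limit:.1f} ms")
    return 0 if current_p90 <= limit else 1  # non-zero exit fails the pipeline step


if __name__ == "__main__":
    sys.exit(main())
```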

Level 4 is characterized by the following attributes:

·      The effectiveness and expenditure of performance testing are tracked.

·      Performance processes are fine-tuned and continuously enhanced.

·      Performance best practices are shared across the organization.

·      Best practices are adopted to improve current processes.

·      New tools and techniques are enhanced or adopted based on trends in software development.

Goals and Implementation

Derived from the characteristics that organizations experience at each level of the PTMM, the strategic objectives and actionable tasks at each level can be outlined as follows:

Implementing Level 1: Identifying Performance Bottlenecks

Optimal application performance requires addressing potential bottlenecks that hinder seamless functionality. These include issues like memory leaks, high CPU usage, slow database queries, and missing indexes. Additionally, long query execution times and frequent Garbage Collections disrupt workflows. Inadequate resource allocation and misconfigured server policies worsen these bottlenecks. A well-defined performance strategy tackles these issues, fostering a responsive application environment.

·      Create a performance testing team dedicated to such activities.

·      Consider using tools such as Apache JMeter, LoadRunner, NeoLoad, LoadNinja, WebLOAD, etc. Use profiling and performance analysis tools like Eclipse Memory Analyzer, JConsole, and jvisualvm for code performance analysis (including GC behavior).

·      Use OS monitoring tools such as top, htop, vmstat, iostat, sar, nmon, iftop, netstat, etc. to monitor the application server and database host servers (a sampling sketch follows this list).

·      Tools like ASH, AWR, and Statspack reports can be used for DB performance analysis (these examples are for Oracle RDBMS).

·      Consider a separate performance testing environment with a reasonable level of infrastructure capacity, scaled in proportion to the production environment.
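
As a complement to the OS-level tools listed above, the sketch below samples host CPU, memory, and disk I/O while a test runs. It assumes the third-party psutil package is installed; the sampling interval and duration are arbitrary choices.

```python
# Minimal sketch: sample host CPU, memory, and cumulative disk I/O during a test run.
# Assumes the third-party psutil package is installed (pip install psutil);
# the 5-second interval and 60-sample duration are arbitrary choices.
import csv
import time

import psutil


def sample_host(outfile: str = "host_metrics.csv", samples: int = 60, interval: int = 5) -> None:
    psutil.cpu_percent(interval=None)  # prime the CPU counter; the first reading is meaningless
    with open(outfile, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp", "cpu_pct", "mem_pct", "disk_read_bytes", "disk_write_bytes"])
        for _ in range(samples):
            time.sleep(interval)
            disk = psutil.disk_io_counters()
            writer.writerow([
                time.time(),
                psutil.cpu_percent(interval=None),   # CPU usage since the previous call
                psutil.virtual_memory().percent,     # memory utilization
                disk.read_bytes,                     # cumulative disk reads
                disk.write_bytes,                    # cumulative disk writes
            ])


if __name__ == "__main__":
    sample_host()
```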

Implementing Level 2: Establishing Processes and Tools

At this stage, it's crucial to establish a well-structured performance testing practice. Implementing these steps will help achieve this:

·      Define a suite of performance tests

·      Define how often the suite will be run: after each build, prior to release, or at set intervals such as daily or weekly.

·      Publish the results in a repository for comparative analysis of future test runs.

·      Include the necessary KPIs in test results, such as CPU usage, memory usage, TPM for key requests, response times, and error rates.

·      Since performance tests make a large number of requests to workflows or APIs, include the median and 90th-percentile values of these metrics along with the averages (see the sketch following this list).

·      On the tooling front, build an in-depth understanding of the performance testing tool's capabilities and use it to craft a comprehensive performance testing framework.

·      This framework should define:

o   A standardized folder structure for test artifacts

o   A uniform structure for reporting test outcomes

o   A consistent configuration mechanism for specifying parameters like concurrent threads, iterations, durations, and pertinent data for tests

o   Automated triggering mechanisms aimed at minimizing manual effort and facilitating the seamless execution of an expanding array of performance test cases

o   A source code management tool (e.g., GitHub) for keeping and maintaining tests in a common repository
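
The percentile reporting mentioned in the list above can be illustrated with a short sketch. It assumes the results file is a CSV with a header row and an elapsed column holding per-request response times in milliseconds, as recent JMeter versions write to a .jtl sample log by default; the file name is hypothetical.

```python
# Minimal sketch: report average, median, and 90th-percentile response times from a
# results CSV. Assumes a header row and an "elapsed" column in milliseconds, as a
# JMeter .jtl sample log contains by default; the file name is hypothetical.
import csv
import statistics


def summarize(results_file: str = "results/checkout_flow.jtl") -> dict:
    with open(results_file, newline="") as f:
        elapsed = [float(row["elapsed"]) for row in csv.DictReader(f)]
    return {
        "samples": len(elapsed),
        "avg_ms": statistics.mean(elapsed),
        "median_ms": statistics.median(elapsed),
        "p90_ms": statistics.quantiles(elapsed, n=10)[-1],  # 90th percentile
    }


if __name__ == "__main__":
    for name, value in summarize().items():
        print(f"{name}: {value}")
```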

Implementing Level 3: Managing and Measuring Performance Test Objectives

At this level, the organization should implement the proverbial "shift-left" paradigm of QA. The non-functional requirements (NFRs) should be defined at design time by analyzing the feature and its intended usage.

·      Anticipate the peak workload expected by the solution.

·      Quantify the load expected on the constituent components, such as the number of API calls, expected response sizes, the number of messages on the message queues, and the data size or number of records in the database tables. The challenge here is that it is difficult, and at times impossible, to arrive at good estimates of these numbers; in such situations, provision adequate resource capacity (a simple estimation sketch follows this list).

·      Anticipate the normal usage patterns, in addition to knowing the peak loads.

·      Consider who the largest customers using the workflows will be when designing the NFRs as well as the test strategy.

·      Templatize the structure of a performance test strategy into clear areas –

o   Test case definition

o   Test volumes

o   Test data definition and creation

o   Result analysis

o   Conclusion.
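
As a simple illustration of the load quantification discussed above, the sketch below works backwards from an assumed peak user count and per-user activity to a target request rate for an API. Every figure is a hypothetical placeholder to be replaced with the product's own estimates.

```python
# Minimal sketch: rough peak-load estimation for an API, derived from assumed user
# numbers and per-user activity. Every figure here is a hypothetical placeholder.
PEAK_CONCURRENT_USERS = 5_000     # assumed peak concurrency
ACTIONS_PER_USER_PER_MINUTE = 4   # assumed rate of user actions, including think time
API_CALLS_PER_ACTION = 3          # assumed fan-out from one user action to API calls
SAFETY_FACTOR = 1.5               # headroom for bursts and estimation error


def estimated_peak_rps() -> float:
    actions_per_second = PEAK_CONCURRENT_USERS * ACTIONS_PER_USER_PER_MINUTE / 60
    return actions_per_second * API_CALLS_PER_ACTION * SAFETY_FACTOR


if __name__ == "__main__":
    print(f"Provision and test for roughly {estimated_peak_rps():.0f} API requests/second")
```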


Implementing Level 4: Optimizing the Performance Testing Process

At this level, the following activities can be taken up to further streamline the performance testing and engineering process:

·      For each development item, implement the NFR definition process through the organization's ticket management system (such as Jira) to ensure that:

o   NFRs are defined and documented at design time

o   A test strategy is defined. This includes the performance tests to be run, the test data required and how it will be created, what will be monitored, and the criteria for marking a test as pass or fail (a template sketch follows this list).

·      Foster collaboration across all organizational levels by coordinating performance practices, observations, SLIs, and SLOs through standard communication channels, including regular meetings, discussions, presentations, and frequent information sharing.
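
One way to make the NFR definition process above repeatable is to attach a structured record to each development ticket. The sketch below models such a record as a small dataclass; the field names and example values are illustrative assumptions, not a prescribed schema.

```python
# Minimal sketch: a structured NFR / test-strategy record that could accompany a
# development ticket (for example as Jira custom fields or a linked document).
# Field names and example values are illustrative only.
from dataclasses import dataclass, field
from typing import List


@dataclass
class PerformanceNFR:
    feature: str
    peak_requests_per_second: float
    p90_response_time_ms: float
    max_error_rate_pct: float
    test_data_needed: str
    metrics_to_monitor: List[str] = field(default_factory=list)

    def pass_criteria(self) -> str:
        """Human-readable pass/fail criterion derived from the NFR."""
        return (f"p90 <= {self.p90_response_time_ms} ms and "
                f"error rate <= {self.max_error_rate_pct}% at "
                f"{self.peak_requests_per_second} requests/second")


if __name__ == "__main__":
    nfr = PerformanceNFR(
        feature="Bulk record import",            # illustrative feature name
        peak_requests_per_second=200,
        p90_response_time_ms=400,
        max_error_rate_pct=0.5,
        test_data_needed="10,000 pre-created records",
        metrics_to_monitor=["CPU", "heap usage", "DB connections"],
    )
    print(nfr.pass_criteria())
```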

Conclusion

The Performance Testing Maturity Model (PTMM) is designed to enhance performance testing practices within organizations. It defines an initial level and four maturity levels: Initial, Ad hoc, Defined, Managed and Measured, and Optimized. Each level corresponds to specific attributes and practices. The PTMM aims to help teams assess their performance testing capabilities, create roadmaps for improvement, and optimize testing processes. It offers benefits such as risk reduction, efficiency improvement, and better customer satisfaction. The model's implementation involves identifying performance bottlenecks, establishing processes and tools, managing and measuring performance objectives, and optimizing the testing process through coordination and integration.

In Level 0 (Initial), organizations lack defined processes and resources, focusing on basic functional testing. Level 1 (Ad hoc) introduces ad hoc performance tests to identify bottlenecks, while Level 2 (Defined) integrates performance testing into the SDLC with monitoring tools and comprehensive regression tests. Level 3 (Managed and Measured) focuses on well-defined non-functional requirements, incorporating performance evaluation into all SDLC phases. Finally, Level 4 (Optimized) achieves seamless performance testing integration through resource tracking, process enhancement, and widespread adoption of best practices.

References

https://resources.sei.cmu.edu/library/asset-view.cfm?assetid=11955

https://en.wikipedia.org/wiki/Testing_Maturity_Model

https://www.tmmi.org/tmmi-model/

https://www.tmap.net/building-blocks/test-process-improvement-tpi
