Why Do Percentile Values End in 9 or 99 in the Page Performance Report of DevOps Test Performance?

By Ashish Bhaiswar posted 8 hours ago

  
Have you ever noticed how percentile values in the page performance report of DevOps Test Performance (DTP) often end in 9 or 99 - like 3,199 ms or 649 ms? At first glance, these numbers might seem suspiciously non-random, but there is a well-thought-out technical explanation behind them.
Let’s explore what’s going on behind the scenes, and why this design choice is crucial for performance testing at scale.

What Are Percentiles and Why Do They Matter?
Percentiles help us understand the distribution of response times. For example, the 90th percentile tells us that 90% of all responses were faster than this value. It’s a powerful way to spot outliers and performance bottlenecks.
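
As a quick illustration, here is a small, hypothetical Python example of a nearest-rank percentile calculation. It is not DTP code; it only shows what "the 90th percentile" means for a handful of samples.

```python
# Illustrative only: a tiny nearest-rank percentile on a hand-made sample.
# This is not DTP's code; it just shows what "the 90th percentile" means.
import math

response_times_ms = [120, 135, 150, 180, 200, 240, 300, 450, 800, 1200]

def nearest_rank_percentile(values, p):
    """Return the value at rank ceil(p * N / 100) in the sorted sample."""
    ordered = sorted(values)
    rank = math.ceil(p * len(ordered) / 100)
    return ordered[rank - 1]

print(nearest_rank_percentile(response_times_ms, 90))
# 800 -> 90% of the responses were 800 ms or faster
```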

Why Do These Values Often End in 9 or 99?
The key reason: Percentile values are based on reduced-precision data, and DTP intentionally rounds up to stay true to the definition of percentiles.

Step-by-Step Explanation
1. Aggregate Statistics:
   DTP doesn’t store every single page response time. That would consume huge amounts of memory and CPU, especially during large-scale load tests. Instead, it aggregates statistics (like average, max, etc.) over defined intervals.
 
2. Precision Reduction:
   To make percentile computation efficient, response times are rounded to two significant digits.
  • 3,184 ms → 3,100 ms
  • 624 ms → 620 ms
  • 75 ms remains 75 ms
3. Sorted and Picked:
   These reduced-precision values are sorted and used to calculate percentiles like P90, P95, and P99.
 
4. Reporting 'Rounded Up' Values:
   When displaying results, DTP replaces the dropped digits with 9s so the reported value reflects the upper boundary of the original data range (see the sketch after this list).
  • 3,100 → 3,199
  • 620 → 629
  • 75 → remains 75

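Putting the four steps together, here is a minimal Python sketch that mimics the behavior described above, assuming the "two significant digits" reduction and the 9-filled upper boundary. The helper names are illustrative, not DTP's internal implementation.

```python
# A minimal sketch of the steps above. reduce_precision and report_upper_bound
# are illustrative helpers, not DTP's internal implementation.
import math

def reduce_precision(ms):
    """Round a response time down to two significant digits (e.g. 3184 -> 3100)."""
    if ms < 100:
        return ms  # values below 100 already have at most two significant digits
    bucket = 10 ** (int(math.log10(ms)) - 1)  # width of one two-significant-digit bucket
    return (ms // bucket) * bucket

def report_upper_bound(reduced_ms):
    """Report the top of the bucket, which is why displayed values end in 9s."""
    if reduced_ms < 100:
        return reduced_ms
    bucket = 10 ** (int(math.log10(reduced_ms)) - 1)
    return reduced_ms + bucket - 1

for raw in (3184, 624, 75):
    reduced = reduce_precision(raw)
    print(f"{raw} -> {reduced} -> reported as {report_upper_bound(reduced)}")
# 3184 -> 3100 -> reported as 3199
# 624  -> 620  -> reported as 629
# 75   -> 75   -> reported as 75
```
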
Real Examples from Test Reports
Screenshot 1: Percentile Values Ending in .199s
The following image shows a test report where Page Response Time - Percentile 90 consistently ends with .199s, such as 5.199s, 3.199s, etc.

Screenshot 2: Larger Load Test, Same Pattern
Even under a 2,256-user load, the report continues to show P90-P99 values ending in .799s, .899s, etc., validating the same logic.

Is This Approach Accurate?
Yes - for most practical purposes.
In a controlled test with 1,000 response samples (ranging from 1.6 to 8 seconds), the results were:
  • 85th and 95th percentiles accurate within 0.5%
  • 90th percentile off by only 1.6%
The design offers a smart balance between accuracy and performance efficiency.
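
If you want a feel for this kind of error yourself, the sketch below runs a similar comparison. The sample distribution (uniform between 1.6 s and 8 s) and the helper functions are assumptions made for illustration, so the exact error figures will differ from the ones quoted above.

```python
# A rough way to reproduce this kind of comparison. The sample distribution
# (uniform between 1.6 s and 8 s) is an assumption, so the exact error figures
# will differ from the ones quoted above.
import math
import random

def nearest_rank_percentile(values, p):
    ordered = sorted(values)
    return ordered[math.ceil(p * len(ordered) / 100) - 1]

def reduce_precision(ms):
    """Keep only two significant digits, as described in the steps above."""
    if ms < 100:
        return ms
    bucket = 10 ** (int(math.log10(ms)) - 1)
    return (ms // bucket) * bucket

random.seed(1)
samples_ms = [random.uniform(1600, 8000) for _ in range(1000)]
reduced_ms = [reduce_precision(v) for v in samples_ms]

for p in (85, 90, 95):
    exact = nearest_rank_percentile(samples_ms, p)
    approx = nearest_rank_percentile(reduced_ms, p)
    print(f"P{p}: exact {exact:.0f} ms, reduced {approx:.0f} ms, "
          f"error {abs(approx - exact) / exact:.2%}")
```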

Why Not Calculate the Exact Value?
Storing every single response time to calculate exact percentiles would be:
  • CPU-intensive
  • Memory-heavy
  • Unscalable for enterprise-level tests
Instead, DTP uses a performance-friendly trade-off - minimally reducing precision while keeping the insights actionable.
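
For illustration, one way such a trade-off can be implemented is a bucketed histogram: count responses per reduced-precision bucket instead of keeping every raw sample. This is a sketch of the principle only, not DTP's actual data structure.

```python
# One way such a trade-off can be implemented: count responses per
# reduced-precision bucket instead of keeping every raw sample.
# This illustrates the principle only; it is not DTP's actual data structure.
import math
from collections import Counter

def bucket_of(ms):
    """Map a response time to its two-significant-digit bucket."""
    if ms < 100:
        return ms
    size = 10 ** (int(math.log10(ms)) - 1)
    return (ms // size) * size

histogram = Counter()
for ms in (3184, 3120, 624, 655, 75, 3184):  # a stream of observed response times
    histogram[bucket_of(ms)] += 1

# Memory grows with the number of distinct buckets (a few hundred at most),
# not with the number of samples, which can reach millions under load.
print(histogram)  # Counter({3100: 3, 620: 1, 650: 1, 75: 1})
```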

Final Thoughts
So next time you see values like 3.199s or 649 ms, you’ll know it’s not an issue with the calculations; it’s a deliberate, intelligent design choice that:
  • Keeps percentile logic accurate
  • Reduces memory/CPU overhead
  • Supports high-scale performance testing

Pro Tip
If you’re presenting results to stakeholders or tuning performance, it helps to explain this logic so the audience understands that the rounding isn’t an error but a technical necessity.





