Step-by-Step Explanation
1. Aggregate Statistics:
DTP doesn’t store every single page response time. That would consume huge amounts of memory and CPU, especially during large-scale load tests. Instead, it aggregates statistics (like average, max, etc.) over defined intervals.
2. Precision Reduction:
To make percentile computation efficient, each response time is rounded down (truncated) to two significant digits.
- 3,184 ms → 3,100 ms
- 624 ms → 620 ms
- 75 ms → 75 ms (already two significant digits)
3. Sorted and Picked:
These reduced-precision values are sorted and used to calculate percentiles like P90, P95, and P99.
4. Reporting 'Rounded Up' Values:
When displaying results, DTP fills the truncated digits with 9s to reflect the upper boundary of the original value's range (see the sketch after this list).
- 3,100 → 3,199
- 620 → 629
- 75 → 75 (nothing was truncated)
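To make the mechanics concrete, here is a minimal Python sketch of the reduce-sort-pick-report flow described above. The function names and the nearest-rank percentile selection are illustrative assumptions, not DTP's actual implementation.

```python
import math

def truncate_to_two_sig_digits(ms: int) -> int:
    """Keep only the first two significant digits (3184 -> 3100, 624 -> 620)."""
    digits = len(str(ms))
    if digits <= 2:
        return ms                      # 75 stays 75
    scale = 10 ** (digits - 2)         # magnitude of the dropped digits
    return (ms // scale) * scale

def fill_with_nines(ms: int) -> int:
    """Report the upper boundary of the truncated bucket (3100 -> 3199, 620 -> 629)."""
    digits = len(str(ms))
    if digits <= 2:
        return ms
    scale = 10 ** (digits - 2)
    return ms + scale - 1

def reported_percentile(samples, p):
    """Nearest-rank percentile over the reduced-precision values."""
    reduced = sorted(truncate_to_two_sig_digits(s) for s in samples)
    rank = max(0, math.ceil(p / 100 * len(reduced)) - 1)
    return fill_with_nines(reduced[rank])

# The examples from the list above:
print(truncate_to_two_sig_digits(3184))   # 3100
print(truncate_to_two_sig_digits(624))    # 620
print(fill_with_nines(3100))              # 3199
print(fill_with_nines(620))               # 629
```

This is why reported values so often end in 99, 199, or 299: the trailing 9s simply mark the top of each precision bucket.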
Real Examples from Test Reports
Screenshot 1: Percentile Values Ending in .199s
The following image shows a test report where Page Response Time - Percentile 90 consistently ends with .199s, such as 5.199s, 3.199s, etc.
Screenshot 2: Larger Load Test, Same Pattern
Even under a 2,256-user load, the report continues to show P90-P99 values ending in .799s, .899s, etc., confirming that the same logic applies at scale.
Is This Approach Accurate?
Yes - for most practical purposes.
In a controlled test with 1,000 response samples (ranging from 1.6 to 8 seconds), the results were:
- 85th and 95th percentiles accurate within 0.5%
- 90th percentile off by only 1.6%
The design offers a smart balance between accuracy and performance efficiency.
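If you want to sanity-check the accuracy claim yourself, a rough reproduction could look like the sketch below. It reuses reported_percentile and the helpers from the earlier sketch; the uniform distribution, seed, and sample count are assumptions, so the exact error figures will differ from those quoted above.

```python
import math
import random

def exact_percentile(values, p):
    """Nearest-rank percentile over the raw, unrounded samples."""
    ordered = sorted(values)
    rank = max(0, math.ceil(p / 100 * len(ordered)) - 1)
    return ordered[rank]

random.seed(1)
samples = [random.randint(1600, 8000) for _ in range(1000)]  # 1.6 s to 8 s, in ms

for p in (85, 90, 95):
    exact = exact_percentile(samples, p)
    approx = reported_percentile(samples, p)   # reduced-precision version from above
    error = abs(approx - exact) / exact * 100
    print(f"P{p}: exact={exact} ms, reported={approx} ms, error={error:.1f}%")
```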
Why Not Calculate the Exact Value?
Storing every single response time to calculate exact percentiles would be:
- CPU-intensive
- Memory-heavy
- Unscalable for enterprise-level tests
Instead, DTP uses a performance-friendly trade-off - minimally reducing precision while keeping the insights actionable.
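One way to picture why this scales is a bucket-count aggregate: instead of keeping every sample, keep one counter per two-significant-digit bucket. The class below is an assumed illustration that reuses the helpers from the first sketch; it is not a description of DTP's internal data structures.

```python
import math
from collections import Counter

class PercentileAggregate:
    """Constant-memory percentile tracking over two-significant-digit buckets."""

    def __init__(self):
        self.buckets = Counter()   # truncated value -> number of samples
        self.total = 0

    def add(self, ms: int) -> None:
        self.buckets[truncate_to_two_sig_digits(ms)] += 1
        self.total += 1

    def percentile(self, p: float) -> int:
        """Walk buckets in ascending order until the p-th percentile rank is reached."""
        if self.total == 0:
            raise ValueError("no samples recorded")
        target = max(1, math.ceil(p / 100 * self.total))
        seen = 0
        for value in sorted(self.buckets):
            seen += self.buckets[value]
            if seen >= target:
                return fill_with_nines(value)
```

Because there are only about 90 buckets per order of magnitude, memory stays flat no matter how many requests the load test sends, and a percentile lookup is just a walk over a small sorted list.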
Final Thoughts
So next time you see values like 3.199s or 649 ms, you’ll know it’s not an issue with the calculations; it’s a deliberate, intelligent design choice that:
- Keeps percentile logic accurate
- Reduces memory/CPU overhead
- Supports high-scale performance testing
Pro Tip
If you’re presenting results to stakeholders or tuning performance, it helps to explain this logic up front so the audience understands that the rounding isn’t an error but a technical necessity.